FaceSense: Mind-reading from MIT
By Maria Popova
We have a longstanding fascination with the human face and the wealth of data that it holds. Now, the Affective Computing Group at the MIT Media Lab (another Brain Pickings darling) has developed FaceSense — a software application that detects head and face gestures in real time, analyzes them and deduces information about the person’s emotional disposition and mental state.
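For the technically curious, here is a rough sketch of what such a pipeline might look like; the gesture names, thresholds, and labels are our own illustrative stand-ins, not FaceSense's actual internals or API.

```python
# A minimal, hypothetical sketch of a FaceSense-style pipeline:
# per-frame facial-gesture intensities go in, a coarse mental-state
# label comes out. Nothing here is the MIT project's actual code.

from dataclasses import dataclass, field

@dataclass
class FrameGestures:
    # Gesture intensities in [0, 1], e.g. from a face tracker.
    intensities: dict = field(default_factory=dict)

def infer_state(frame: FrameGestures) -> str:
    """Toy rule-based inference over head/face gesture intensities."""
    get = frame.intensities.get
    if get("head_nod", 0.0) > 0.5 and get("smile", 0.0) > 0.5:
        return "agreeing"
    if get("brow_raise", 0.0) > 0.5:
        return "interested"
    if get("head_shake", 0.0) > 0.5:
        return "disagreeing"
    return "neutral"

if __name__ == "__main__":
    print(infer_state(FrameGestures({"brow_raise": 0.8})))  # "interested"
```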
The principle, of course, is nothing new: back in the late 1970s, legendary psychologist Paul Ekman pioneered FACS, the Facial Action Coding System, which is used to this day by everyone from academic researchers to the CIA to infer cognitive-affective states from the micromuscle contractions of the human face. Though the MIT project doesn’t explicitly disclose it, we bet the data encoding is based, at least to some degree, on FACS.
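To give a flavor of how FACS-style coding works: each visible muscle movement is coded as a numbered action unit (AU), and combinations of AUs are read as candidate affective states. The toy sketch below uses Ekman's published AU meanings, but the labeling rules are our own deliberate oversimplification, not the full scheme.

```python
# Toy illustration of FACS-style coding: a frame's visible muscle
# movements are coded as numbered action units (AUs), and AU
# combinations are read as affective states. AU meanings follow
# Ekman's scheme (AU6 = cheek raiser, AU12 = lip corner puller,
# AU1 = inner brow raiser, AU4 = brow lowerer); the labeling rules
# are deliberately oversimplified for illustration.

def label(observed_aus: set) -> str:
    if {6, 12} <= observed_aus:
        return "enjoyment smile"   # Duchenne smile: AU6 + AU12 together
    if {1, 4} <= observed_aus:
        return "distress"          # brow configuration common in sadness
    return "unclassified"

print(label({6, 12, 25}))  # -> "enjoyment smile"
print(label({4}))          # -> "unclassified"
```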
But what makes FaceSense different and important is that it can extract such cognitive-affective information from pre-recorded video. And in the midst of all the neuromarketing hype (which is, for the most part, just that: hype), it offers an interesting model for gathering consumer psychology insight remotely, a scalable and useful tool for the age of telecommuting and sentiment analysis. What’s more, it helps bypass the notorious unreliability of self-report in product testing.
Its applications can, of course, extend far beyond the marketing industry. An accurate disposition detection model for video can be used in anything from analyzing politicians’ televised appearances to testing news anchors for bias. And, judging by the abundance of all things video at CES this year, FaceSense has firmly planted its feet in a rich and ever-expanding space.
—
Published January 12, 2010
—
https://www.themarginalian.org/2010/01/12/facesense/