Trevor Stone's Journal
Those who can, do. The rest hyperlink.
Intuition, Technology, and Communication at CWA 
7th-Apr-2010 02:56 pm
Key takeaways from Intuition, Technology, and Communication at the 2010 Conference on World Affairs.

Intuition is a human skill for coming to a conclusion given all the varied data we've gathered. Technology provides us with a lot more data so hopefully humans can have better intuitions. For instance, a human with a laptop and a chess program can beat a chess supercomputer that a human alone cannot beat.

Intuitions are often culturally conditioned. American intuition when the phone rings is to answer it. Russian (Soviet?) intuition when the phone rings is not to, because it's probably someone who wants you to fix something.

The most useful recordings for building Roger Ebert's new TTS voice were commentary tracks from DVDs. Future movie commentaries will literally be recycled from his old movie commentaries.

There are TTS systems that let you add markup for inflection, but that makes typing even slower, so you'll fall further behind in the conversation. What if there were Unicode characters that indicated sense and emotion? ;-) is okay at indicating irony and ALL CAPS can do emphasis, but there's a lot more nuance that could be added.
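For concreteness, the markup-for-inflection systems mentioned above look roughly like the W3C's SSML (Speech Synthesis Markup Language), which wraps text in prosody annotations. A minimal illustrative fragment is below; the element names come from the SSML spec, the sentence content is my own, and exact attribute support varies by synthesizer:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis">
  <s>
    That was a <emphasis level="strong">great</emphasis> movie.
    <prosody pitch="+15%" rate="slow">Really.</prosody>
  </s>
</speak>
```

Typing those tags by hand is exactly the overhead complained about here, which is why single-character emotion marks are an appealing shorthand.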

Some of the best writers are lousy face-to-face conversationalists.
Comments 
8th-Apr-2010 03:54 am (UTC)
Perhaps getting intonation into realtime TTS is best handled by learning to input via two chorded keyboards, one in each hand: one for the "words", and another that lets you combine 'emotional primitives'.

Another idea would be inputting words in a way of your choice, and writing an automated detector of facial cues to insert 'appropriate' intonation. In the same way that speech recognition is made easier by training it on a voice, you could train a face-based emotion detector on the contours and features of an individual's face. If the work of Paul Ekman has any objective value at all, it should be quite possible to write something that detects "microexpressions" from live pictures of a face whose 'default state' is already modeled. Once such a thing is built, use the presence/absence (or magnitude, if you're not feeling boolean) of microexpressions to build a statistical model of the moods those facial cues most likely represent.

Seriously, I make it sound so easy you could do this in your 20% time.

Of course, were such a project done, you know that some people would begin to be trained by the failures of the system to avoid some expressions, overdo others, etc. Because as we know, technology is very effective at training humans.
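The statistical model proposed two comments up can be sketched with plain co-occurrence counts: tally which moods historically accompany each detected microexpression, then predict the mood with the highest combined tally. Everything below is hypothetical; the expression labels, the training pairs, and the class name are illustrative, and a real system would feed in detections from a per-person face tracker rather than hand-written strings:

```python
from collections import Counter, defaultdict

class ExpressionMoodModel:
    """Toy co-occurrence model mapping microexpressions to moods."""

    def __init__(self):
        # expression label -> Counter of moods seen alongside it
        self.counts = defaultdict(Counter)

    def observe(self, expression, mood):
        """Record one training pair (detected expression, known mood)."""
        self.counts[expression][mood] += 1

    def most_likely_mood(self, expressions):
        """Sum mood tallies over all detected expressions; return the top mood."""
        tally = Counter()
        for e in expressions:
            tally.update(self.counts.get(e, Counter()))
        return tally.most_common(1)[0][0] if tally else None

model = ExpressionMoodModel()
model.observe("brow_raise", "surprise")
model.observe("brow_raise", "surprise")
model.observe("lip_corner_pull", "amusement")
print(model.most_likely_mood(["brow_raise"]))  # prints "surprise"
```

A real version would replace the raw counts with per-expression probabilities (and some smoothing), but the shape of the idea is the same.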
8th-Apr-2010 04:16 am (UTC)
Unfortunately, folks like Roger Ebert have a limited ability for facial expressions in a way that's linked to the fact that they can't speak.
8th-Apr-2010 08:54 am (UTC) - affect mark-up language
Somewhere during my cog sci course I had the same question and searched for an emotion markup language. I actually found a few preliminary attempts, with five categories of emotion and a +/- value for each, but they were all about classifying existing text. A Google search for emotional mark-up language gives me this: http://en.wikipedia.org/wiki/Emotion_Markup_Language Hmm... interesting...