With immersive technologies on the brink of a breakthrough, what would it take for machines to read and understand our gestures?
Human beings communicate by making sounds and moving body parts — a combination that connects the inside of our minds with the outside world. We show our spatial conceptualisation to each other simply with the wave of a hand. However, we are not able to gesture when we interact with machines. Yet.
To be able to fully interpret us, speech robots need to learn more than the words we use. Body language is fundamental — both for…
“Call me an ambulance, now”
– “From now on, I’ll call you ‘An ambulance’. OK?”
Picture a world where humans can communicate with machines — one where a soothing voice reads you the weather report, orders food and makes shopping lists, without you needing to lift a finger. With the emergence of voice assistants, we’re nearly there. Just one minor detail is holding us back: Siri, Alexa, Google Assistant and other speech robots don’t really understand anything.
And how could we expect them to?
Linguist and writer for Bakken & Bæck – it’s all semantics to me.