A natural human-computer interface

Fascinating article from service design consultancy Fjord on why the human body will be the next computer interface. The key points:

“Why swipe your arm when you can just rub your fingers together. What could be more natural than staring at something to select it, nodding to approve something? This is the world that will be possible when we have hundreds of tiny sensors mapping every movement, outside and within our bodies. For privacy, you’ll be able to use imperceptible movements, or even hidden ones such as flicking your tongue across your teeth.”

“The possible interactions are almost limitless and move us closer and closer to a natural human-computer interface. At this point, the really intriguing thing is that the interface has virtually disappeared; the screens are gone, and the input devices are dispersed around the body.”

And meanwhile, what if digital interfaces became more tactile…

Mapping new senses for health monitoring

The feelSpace is a belt designed to indicate where north is through vibration. The remarkable effect is that, after wearing it for a while, the wearer starts to develop a strong intuitive sense of direction. Similar projects involve restoring a sense of balance to people with damaged inner ears, and giving spatial awareness to the blind by sending electrical impulses to the tongue. The incredible flexibility of the human brain means we can plug in new senses through existing ones such as touch or sight.
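To make the idea concrete, here’s a minimal sketch of the mapping such a belt has to perform – compass heading in, vibrating motor out. This is not feelSpace’s actual firmware; the motor count, indexing and I/O hooks are assumptions for illustration.

```python
import random
import time

NUM_MOTORS = 8  # vibration motors spaced evenly around the belt; motor 0 faces forward

def motor_for_north(heading_degrees: float) -> int:
    """Return the index of the motor currently pointing north.

    heading_degrees is the wearer's compass heading (0 = facing north), so
    relative to the wearer, north lies at (360 - heading) degrees clockwise.
    """
    bearing_to_north = (360.0 - heading_degrees) % 360.0
    sector = 360.0 / NUM_MOTORS
    return int((bearing_to_north + sector / 2.0) // sector) % NUM_MOTORS

# Placeholder I/O: the real belt's sensor and motor interfaces aren't public.
def read_compass_heading() -> float:
    return random.uniform(0.0, 360.0)  # stand-in for a magnetometer reading

def vibrate(motor_index: int) -> None:
    print(f"buzzing motor {motor_index}")

if __name__ == "__main__":
    for _ in range(5):
        vibrate(motor_for_north(read_compass_heading()))
        time.sleep(0.2)
```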

So what else can we plug in? An existing trend in healthcare is making data easier to visualize and therefore more meaningful. We could go one step further and use this kind of sensory augmentation to sense key health indicators directly, gaining a greater awareness of our bodies and the effects of our lifestyle. This is profound.

For example, elevated insulin is one of the primary causes of heart disease, as well as of diabetes, osteoporosis, cancer and other health conditions. Stable blood glucose levels also have a significant effect on mental performance, mood and general well-being. It may seem strange that we aren’t doing more to measure and act on this unless we’re diabetic or our life depends on it. But blood glucose testing hasn’t been cheap, fun or easy. And for continuous feedback, the sensor typically needs to be embedded under the skin where it is exposed to constant blood flow, which brings its own complications.

C8 Medisensors turns this on its head by using light to measure blood glucose accurately and non-invasively. It won’t be long before such a device is small enough to be integrated into our clothing, at which point it will be trivial to gain a realtime awareness of our blood glucose and to build a better understanding of how our lifestyle habits affect our health. If observation is the best behavioral change mechanism, we could learn how to live healthier lives and potentially prevent these diseases before they develop.
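As a rough illustration of what that realtime awareness could look like once readings stream off a wearable sensor, here is a minimal sketch. The sensor interface, the thresholds and the window size are all assumptions for illustration – not C8 Medisensors’ API, and not medical guidance.

```python
from collections import deque
from statistics import mean

LOW, HIGH = 70, 140  # mg/dL band used purely for illustration

class GlucoseMonitor:
    """Keeps a rolling window of readings from a hypothetical wearable sensor."""

    def __init__(self, window: int = 12):
        self.readings = deque(maxlen=window)  # e.g. one reading every five minutes

    def add(self, mg_dl: float) -> str:
        """Record a reading and summarise where the recent average sits."""
        self.readings.append(mg_dl)
        avg = mean(self.readings)
        if avg < LOW:
            return f"low (avg {avg:.0f} mg/dL)"
        if avg > HIGH:
            return f"high (avg {avg:.0f} mg/dL)"
        return f"in range (avg {avg:.0f} mg/dL)"

monitor = GlucoseMonitor()
for reading in (95, 120, 150, 175, 190):  # made-up readings
    print(monitor.add(reading))
```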

Combining Myo, Minuum and Google Glass

A few projects have been surfacing that hint that in the near future we probably won’t be walking around the world, looking down and poking at 4″ touchscreens. Follow the sequence below…

Meet Minuum – a project to linearize the keyboard and, in doing so, allow it to be easily mapped to new inputs.
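The core trick is that a single row of letters can be driven by any one-dimensional signal. Here’s a minimal sketch of that idea – the layout order is approximate, and the real Minuum adds statistical disambiguation of sloppy input on top:

```python
# The alphabet flattened into one row, roughly in the QWERTY-derived order
# Minuum uses; the exact ordering here is approximate.
LINEAR_LAYOUT = "qwertyuiopasdfghjklzxcvbnm"

def char_at(position: float) -> str:
    """Map a 1-D input position in [0, 1] -- a slider, a wrist angle,
    a head tilt -- to a letter on the linearized keyboard."""
    position = min(max(position, 0.0), 1.0)
    index = min(int(position * len(LINEAR_LAYOUT)), len(LINEAR_LAYOUT) - 1)
    return LINEAR_LAYOUT[index]

print(char_at(0.0))   # 'q' -- far left of the row
print(char_at(0.52))  # a letter from the middle of the row
print(char_at(1.0))   # 'm' -- far right
```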

Now meet Myo – an armband that lets you use the electrical activity in your muscles for gesture control. Unlike the Leap, it doesn’t require your hand to be in front of a camera to pick up your gestures, so it works when you’re not behind your desk. Mapping minor gestures to a new type of keyboard, like Minuum above, is a good use case.
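To give a feel for what “electrical activity in your muscles” means as an input signal, here is a deliberately naive sketch that turns raw EMG sample windows into a gesture guess. The electrode names, threshold and two-gesture vocabulary are invented for illustration; Myo’s own recognition is trained on far richer features, and its SDK delivers already-classified gestures.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of EMG samples from one electrode."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def classify(windows, rest_threshold=0.05):
    """Guess a gesture from per-electrode activation levels.

    `windows` maps an electrode name to its latest list of samples. Real systems
    use trained classifiers over many features, not a single threshold.
    """
    levels = {name: rms(samples) for name, samples in windows.items()}
    if all(level < rest_threshold for level in levels.values()):
        return "rest"
    return "flex" if levels["flexor"] > levels["extensor"] else "extend"

sample = {
    "flexor": [0.40, -0.35, 0.50, -0.42],    # made-up samples: strong flexor activity
    "extensor": [0.02, -0.01, 0.03, -0.02],  # extensor side mostly quiet
}
print(classify(sample))  # -> "flex"
```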

And we all know Glass – the wearable head-mounted display from Google X that (relevant to this use case) has been criticized for the potential awkwardness of voice commands in urban or noisy environments.

Stick these three together and you have a new immersive, yet discreet, way of interacting with technology, where you can use gestures to type messages, trigger actions, and interact with the world around you. This type of interaction has shown up before, in the form of an MIT Media Lab project called SixthSense, consisting of a projector and camera that hang around your neck. It projects a display onto everyday objects you encounter, using the camera to recognize hand gestures by tracking color markers on your fingers. But two fundamental things will be different:

  • Instead of requiring a surface to project onto, the display becomes more personal and versatile for everyday applications. Only you can see it, and it’s overlaid in the top corner of your vision – there when you need it.
  • Instead of measuring your hand moving in physical space with a camera, you can measure the intent to move your hand through electromyography. This means gestures can be more subtle and potentially more accurate, since commands are mapped directly to the electrical signals in your muscles – measured closer to the source. (A toy sketch putting the pieces together follows this list.)
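
Here’s that toy sketch: a tiny loop in which recognized gestures steer a cursor along a linearized keyboard and the composed text is pushed to a head-up display. Every name in it – the gesture vocabulary, the HUD call, the control scheme – is an assumption for illustration, not any shipping API.

```python
# Glue sketch: EMG-style gesture events drive a Minuum-style linear keyboard,
# and the composed text is rendered to a Glass-style head-up display.
LAYOUT = "qwertyuiopasdfghjklzxcvbnm"

def show_on_hud(line: str) -> None:
    print(line)  # stand-in for rendering into the corner of your vision

def type_with_gestures(gesture_events):
    cursor, text = 0, ""
    for gesture in gesture_events:
        if gesture == "rotate_left":       # nudge the cursor left along the row
            cursor = max(cursor - 1, 0)
        elif gesture == "rotate_right":    # nudge it right
            cursor = min(cursor + 1, len(LAYOUT) - 1)
        elif gesture == "pinch":           # commit the highlighted letter
            text += LAYOUT[cursor]
        show_on_hud(f"{text}_  [{LAYOUT[cursor]}]")
    return text

# A simulated stream of already-recognized gestures
print(type_with_gestures(["rotate_right", "pinch", "rotate_right", "rotate_right", "pinch"]))
```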

The specifics of this new model for how we carry around and interact with technology will probably vary. Having an actual projection may in some cases be useful, as well as less costly than Google Glass. The Minuum team is even working on hardware of its own – rings, bands and other wearables – which will most likely also use EMG signals and accelerometers.

However, other combinations will probably produce the same general outcome: basic movements mapped to an interface so that the technology is no longer contained within one device but spread across a few seamlessly integrated systems. This is a good example of the trend towards ubiquity (think invisible omnipresence) in computing that we’ll see more of in the future.

The market winner will be the one that combines this tech into a persuasive, complete product, designing an interaction that avoids the “Segway effect” that’s bound to come with any new technology that involves wearing strange headsets or making odd movements in public, no matter how inconspicuous they are.