User Experience, or UX, is exactly what it says on the tin. Much of its advancement has come in the form of better design, easier navigation and a user-orientated approach to implementation. However, with voice and gesture recognition advancing rapidly, perhaps UX will be further revolutionised to a point where users have a completely effortless experience.
Voice and gesture recognition are currently seen as 'cool' or 'nice to have' features in many devices. However, as they advance, they could become central to everyday use, and UX as we know it could change forever…
Many of us have seen Siri's responses on our iPhones, and the industry is taking the right steps towards real conversation with our devices. Siri and S-Voice don't hold real conversations; they simply respond to commands. The future, however, points towards NLU (natural language understanding), which is more human-like, flexible and attuned to what the user actually means.
Artificial intelligence is developing through machine learning, and there is certainly a push for technologies to move in this direction. The competition to get there will be fierce amongst Microsoft, Google and Apple.
With the launch of the Apple Watch last year, as well as screen-less wearables surfacing, voice recognition will become an even more integral part of user experience in the near future. However, the functionality will have to advance further than us telling our phones to run a Google search.
Machine learning and NLU will be crucial for this. The future simply holds endless possibilities; we might be talking to our kitchens and furniture! Our cars will have fully integrated voice recognition. Imagine this combined with driverless cars: the experience would be completely pleasurable, unique and luxurious. When I think about the future of voice recognition, it is easy to get lost in the extremes: the movie 'Her' shows masses of people walking down the street conversing with their devices, video game characters speaking to you and, most strangely, a relationship with a self-learning voice.
Micro-interactions are very important when users build a relationship with an application; these are contained moments within the app, such as swiping and tapping. Users can even build psychological associations with the app through these movements.
Touch screens marked the beginning of this trend; however, it is still a developing aspect of UX. Take the Samsung Galaxy, which knows when you are looking at the screen, and the S4, which detects gestures made above the phone and responds appropriately. This is potentially why there are rumours that Apple is looking at a new iPhone with an 'eyeball recognition locking system'. TVs already offer gesture recognition, as well as voice recognition for requesting channels. This solves problems such as not being able to find the remote when you urgently need to pause a programme.
What is the future? Devices such as the 'Sixth Sense' are good indicators: they use gesture recognition in a creative way. In this example, a tiny projector worn around the neck lets you play with the graphics projected in front of you via sensors on your fingertips. This is exactly what I believe the future holds: wider and more creative uses of technology that already exists.
What does this mean for UX in the future?
Firstly, our perspective of UX could change. Right now, visual interfaces on our devices are a key part of the UX world, with voice and gesture recognition the cool extensions of them. In the future, these features may become central elements of UX. In many cases, physical interfaces could even disappear!
Smartphones are currently increasing in size. However, with the additional functionality and advancements in voice recognition, we could see them shrink again as complex navigation is simplified and the need to tap and swipe the screen lessens; we can simply speak to the phone instead… Besides, Apple has proved with the Apple Watch that smaller interfaces can be designed stylishly and clearly.
Gesture recognition will also make usability more pleasurable in the future. It will solve simple problems, like when you have something on your hands and can't touch the screen, or when you are wearing gloves: just wave over the device and get the desired result. Memorable micro-interactions will be revolutionised as a key part of entertaining and engaging users within apps. Voice recognition can also shorten tasks, reducing multiple steps to a simple request. In a nutshell, it will be slicker, quicker and fun.
Cool visualisations, typography, pictorial elements and layout will always be important in UX. However, as displays advance with technologies like Oculus Rift, the Sixth Sense projector and Google Glass, they must work hand-in-hand with gesture and voice recognition to be effective.