Archive for May, 2012

Future of mobile: Part 3

May 13, 2012

Today I have three GPS devices, four cameras, three video cameras, three movie players, five music players, and the list goes on. All of these are spread across a variety of devices that I use in different places for different purposes.

Drilling down into the detail, what I actually have is a phone, a desktop home PC, a laptop, an iPod, a car stereo, an in-car GPS, a TV with HD/DVD player and a digital SLR. And that’s just me, not including what the family has.

This presents a number of challenges and risks, as well as a lot of cost…

Most of us want as little duplication of cost as possible. Even though cars come with stereos, many people now plug in their MP3 players and use only the car’s speakers. Many people will also use their phone’s GPS rather than the car’s. Newer TVs have wireless access to browsing and social apps. I’m tempted by the hype of tablet computing, but I have to ask myself: why? I already have all the computing options I need.

More devices mean more synchronisation issues for personal settings and personal data. While cloud-based services will resolve many of these issues, it is still early days to move everything into the cloud, as users of Megaupload found.

In 1999 I went to a tech show in Vegas, where I saw a potential solution to the problem from Sony. They were demonstrating the concept of “apps on sticks”: memory sticks (32 MB maximum at the time) with other functions, like GPS, radio and even a camera, built onto the stick. The idea was simple: you’d plug your GPS stick into your phone, laptop, car or any other device, rather than have that function duplicated across devices. The approach would have required a lot of standardisation, and it is clearly a concept that never came to fruition.

More recently, Asus have launched their PadFone, a smartphone that comes with a tablet screen. When you need a bit more screen real estate, you simply slot the phone into the back of the screen and, hey presto, you have a tablet that uses the 3G or wireless connection of your phone. Apart from charging the phone, the tablet screen also integrates with the phone itself, so voice and video calls can be made and received using the tablet screen.

This concept really works for me, and I could see myself buying into a whole family of such products: TV, car stereo, projector. Combine that with my data in the cloud, so that losing the phone is not the end of the world, and you have a great solution. Whether the phone slots in or connects wirelessly, the ability to drive a different screen from my phone works for me as a concept. Maybe the idea could be taken even further, so that the circuitry for each device could be slotted into the phone itself?

As I’ve discussed in my previous blogs, there are many new avenues for phones in shape, size and function. It would be difficult to predict the future with so many possibilities, but one thing is for sure: for gadget geeks like me, the phone is going to be the constant source of innovation we thrive on.


Click, touch, wave and talk: UI of the future

May 10, 2012

First there was the Character User Interface (CUI, pronounced cooo-eey), typified by green letters on a black screen. Then the Graphical User Interface (GUI, pronounced goo-eey) came along with a mouse and icons. Pen interfaces existed in the era of the GUI, but now smartphones and tablets are driving many more interaction approaches using touch interfaces.

Now the GUI itself is going through a rebirth on mobile platforms, with many more new types of user interface control than we have seen in the past; we have gone way beyond simple buttons, drop-down lists and edit fields.

Many devices also support voice-driven operations, and although voice recognition has been around for over two decades, the experience is poor and has more recently been drastically oversold by the likes of Apple. However, this is an area that is likely to improve radically in the coming years.

The Microsoft Kinect gaming platform provides yet another innovation in user interaction: a touchless interface that uses a camera to recognise gestures and movement. Microsoft are already making moves to take this form of interaction into the mainstream outside of gaming (http://www.bbc.co.uk/news/technology-16836031), as are many other suppliers, and we should see phones and TVs supporting these interfaces this year.

However, even some old methods of interaction are being given a new lease of life, such as Sony’s inclusion of a rear touchpad and dual joysticks on the PlayStation Vita.

So, with all these modes of interaction, what does this mean for User Interface Designers? Shouldn’t they really be called User Interaction Designers? How do you decide the best mode of interaction for an application? Should you support multiple modes of interaction? Should you use different widgets for different interactions? Should the user choose their preferred mode of interaction, with the application responding accordingly? Should the mode of interaction be decided by what the device supports? And are there standards for ALL these modes of interaction?

This emerging complexity of user interaction methods will raise many more questions than I’ve listed above. So far I have found little research in this area, but it is a moving target. The other evidence from the mobile world is how rapidly user behaviour changes as people get used to working in different ways.

Initially I would expect most applications to use basic interactions like touch and click, so that the widest possible range of devices can be used. However, those targeting specific devices will be the “early adopters” of the common interaction mode for that device (e.g. 3D gestures on the Xbox Kinect).
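
As a rough illustration of letting the device decide the mode of interaction, here is a minimal TypeScript sketch, assuming a browser-style environment. The mode names and the gestureSensorAvailable flag are my own illustrative assumptions, not a real API.

```typescript
// A minimal sketch of detecting which interaction modes a device
// supports, keeping click as the lowest common denominator.
type InteractionMode = "click" | "touch" | "voice" | "gesture";

function detectModes(): InteractionMode[] {
  const modes: InteractionMode[] = ["click"]; // baseline: works everywhere

  // Touch support is commonly detected via the ontouchstart event.
  if ("ontouchstart" in window) modes.push("touch");

  // Speech recognition, where available, is exposed as SpeechRecognition
  // (or the prefixed webkitSpeechRecognition).
  if ("SpeechRecognition" in window || "webkitSpeechRecognition" in window) {
    modes.push("voice");
  }

  // Gesture cameras have no standard browser API, so this is a
  // hypothetical vendor flag, purely for illustration.
  if ((window as any).gestureSensorAvailable) modes.push("gesture");

  return modes;
}

console.log(`Supported interaction modes: ${detectModes().join(", ")}`);
```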

In the very long term, standards will evolve, and interaction designers and usability experts will collaborate to design compelling new applications that are “multi-interactive”: choosing the most appropriate interaction method for each action, and sometimes supporting multiple interaction methods for a single action.
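
To make the “multi-interactive” idea concrete, here is a small TypeScript sketch of one logical action bound to click, voice and gesture triggers at once. The Action and Trigger shapes are my own assumptions, not an established standard.

```typescript
// One action, several ways to trigger it.
interface Action {
  name: string;
  run: () => void;
}

type Trigger =
  | { kind: "click"; selector: string }
  | { kind: "voice"; phrases: string[] }
  | { kind: "gesture"; gestureName: string };

class MultiInteractiveBinder {
  private bindings: { action: Action; triggers: Trigger[] }[] = [];

  bind(action: Action, triggers: Trigger[]): void {
    this.bindings.push({ action, triggers });
  }

  // Called by the speech subsystem with a recognised phrase; similar
  // entry points would exist for clicks and gestures.
  onVoice(phrase: string): void {
    for (const b of this.bindings) {
      for (const t of b.triggers) {
        if (t.kind === "voice" && t.phrases.includes(phrase.toLowerCase())) {
          b.action.run();
          return;
        }
      }
    }
  }
}

// Usage: "open document" can be clicked, spoken, or gestured.
const binder = new MultiInteractiveBinder();
binder.bind(
  { name: "openDocument", run: () => console.log("opening document") },
  [
    { kind: "click", selector: "#open-button" },
    { kind: "voice", phrases: ["open", "open document"] },
    { kind: "gesture", gestureName: "swipe-right" },
  ]
);
binder.onVoice("Open document"); // logs "opening document"
```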

Multi-interactive interfaces will make users’ lives easier, but are you ready to provide them?

Is Apple Siri-ous about voice?

May 4, 2012

When I first saw the new iPhone ads featuring voice interaction, all I thought was WOW, Apple have done it: they have mastered voice interaction. What appeared to be natural voice interaction is the nirvana many people have been waiting for to replace point-and-click interfaces.

Speech recognition is not new, and certainly both Microsoft and Android had speech-driven interfaces before Apple. However, it was the ability to talk naturally, without breaks and without having to use specific key words, that seemed to set Apple apart from the competition.

Instead, we were all fooled by another of Jobs’ skills, his “reality distortion field”. Indeed, one person went as far as to sue Apple for selling something that did not perform as advertised.

What exactly is the difference? Both Windows and Android recognise specific commands and actions, rather like “talking the menu”, e.g. saying “File, Open, text.doc”. The Apple promise was that you could talk as you would normally speak, so instead of “File, Open, text.doc” you could simply say “get me text.doc”.
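
A toy TypeScript sketch makes the distinction concrete: a rigid command grammar matches only the exact menu path, while a natural-language layer maps free phrasings to the same canonical command via a synonym table. The internals here are my own assumptions for illustration, not how Siri or any real recogniser actually works.

```typescript
const CANONICAL = "file.open";

// Rigid grammar: only the spoken menu path matches.
function matchGrammar(utterance: string): string | null {
  return utterance.toLowerCase().startsWith("file, open") ? CANONICAL : null;
}

// Natural layer: several phrasings map to the same command.
const openSynonyms = ["open", "get me", "bring up", "show me"];

function matchNatural(utterance: string): string | null {
  const lower = utterance.toLowerCase();
  return openSynonyms.some((p) => lower.startsWith(p)) ? CANONICAL : null;
}

console.log(matchGrammar("File, Open, text.doc")); // "file.open"
console.log(matchNatural("get me text.doc"));      // "file.open"
console.log(matchGrammar("get me text.doc"));      // null: grammar too rigid
```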

So why aren’t we there? The challenge is creating a dictionary that can understand the synonyms and colloquialisms people may use in conversational speech, as opposed to the very specific commands used in the menus and buttons of graphical user interfaces.

Whilst this may seem like a daunting task, I believe the first step to solving this puzzle is to reduce the problem. Rather than creating one super-dictionary for absolutely any application, dictionaries should be created for specific types of applications, e.g. word processing or banking. This way, the work of creating synonyms, finding colloquialisms and linking them together becomes more manageable.
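
A minimal TypeScript sketch of what per-domain dictionaries might look like; the domain names, phrases and command identifiers are illustrative assumptions.

```typescript
// One synonym dictionary per application domain, rather than a single
// universal one. Each dictionary maps a phrase to a canonical command.
type SynonymDictionary = Record<string, string>;

const dictionaries: Record<string, SynonymDictionary> = {
  wordProcessing: {
    "get me": "file.open",
    "bring up": "file.open",
    "make it bold": "format.bold",
  },
  banking: {
    "pay": "payment.transfer",
    "send money to": "payment.transfer",
    "what's my balance": "account.balance",
  },
};

function resolve(domain: string, utterance: string): string | null {
  const dict = dictionaries[domain];
  if (!dict) return null;
  const lower = utterance.toLowerCase();
  for (const phrase of Object.keys(dict)) {
    if (lower.includes(phrase)) return dict[phrase];
  }
  return null;
}

console.log(resolve("banking", "What's my balance today?")); // "account.balance"
console.log(resolve("wordProcessing", "Bring up text.doc")); // "file.open"
```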

The next step would be to garner the help of the user community to build the dictionary: as unrecognised words and phrases are identified, the user is asked what alternatives could be used, so that synonyms and colloquialisms are captured.
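
Here is a small TypeScript sketch of that feedback loop, with the prompt and storage mechanics assumed purely for illustration.

```typescript
// When a phrase is not recognised, ask the user which known command they
// meant and record the phrase as a new synonym.
const knownCommands = ["file.open", "file.save", "format.bold"];
const learnedSynonyms: Record<string, string> = {};

function learnSynonym(unrecognisedPhrase: string, chosenCommand: string): void {
  // Only accept commands the application actually knows about.
  if (!knownCommands.includes(chosenCommand)) return;
  learnedSynonyms[unrecognisedPhrase.toLowerCase()] = chosenCommand;
}

// e.g. the UI asks: "I didn't understand 'stash this'. Which command did
// you mean?" and the user picks "file.save".
learnSynonym("stash this", "file.save");
console.log(learnedSynonyms); // { "stash this": "file.save" }
```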

I know this is not a detailed specification, but it is an approach Apple might use to give us what they promised, and what the world is waiting for… speech-driven user interfaces.