Posts Tagged ‘Front end’

Future of mobile: Part 3

May 13, 2012

Today I have 3 GPS devices, 4 cameras, 3 video cameras, 3 movie players, 5 music players, and the list goes on. All of these are spread across a variety of devices that I use in different places for different purposes.

Drilling down into the detail, what I actually have is a phone, a desktop home PC, a laptop, an iPod, a car stereo, an in-car GPS, a TV with an HD/DVD player and a digital SLR. And that's just me, not including what the family has.

This presents a number of challenges and risks, as well as a lot of cost…

Most of us want as little duplication of cost as possible. Even though cars come with stereos, many people now plug in their MP3 players and use only the car's speakers. Many people will also use their phone's GPS rather than the car's. Newer TVs have wireless access to browsing and social apps. I'm tempted by the hype of tablet computing, but I have to ask myself: why? I already have all the computing options I need.

More devices mean more synchronisation issues for personal settings and personal data. While cloud-based services will resolve many of these issues, it is still early days to move everything into the cloud, as users of Megaupload found.

In 1999 I went to a tech show in Vegas, where I saw a potential solution to the problem from Sony. They were demonstrating the concept of "apps on sticks": memory sticks (32 MB maximum at the time) with other devices, like GPS, radio and even a camera, built onto the stick. The idea was simple: you'd plug your GPS stick into your phone, laptop, car or any other device, rather than have that function in multiple devices. This approach would have required a lot of standardisation and is clearly a concept that never came to fruition.

More recently Asus has launched the PadFone, a smartphone that comes with a tablet screen. When you need a bit more screen real estate, you simply slot the phone into the back of the screen and hey presto, you have a tablet that can use the 3G or wireless connection on your phone. Apart from being able to charge the phone, the tablet screen also integrates with the phone itself, so voice and video calls can be made and received using the tablet screen.

This concept really works for me, and I could see myself buying into a family of products: TV, car stereo, projector. This, combined with having my data in the cloud so that losing the phone is not the end of the world, makes for a great solution. Whether the phone slots in or connects wirelessly, the ability to drive a different screen from my phone works for me as a concept. Maybe the idea could be taken even further, so that the circuitry for the device could be slotted into the phone itself?

As I've discussed in my previous blogs, there are many new avenues for phones in shape, size and function. It would be difficult to predict the future with so many possibilities, but one thing is for sure: for gadget geeks like me, the phone is going to be the constant source of innovation we thrive on.


Click, touch, wave and talk: UI of the future

May 10, 2012

First there was the Character User Interface (CUI, pronounced cooo-eey) typified by green letters on a black background screen. Then the Graphical User Interface (GUI, pronounced goo-eey) came along with a mouse and icons. Pen interfaces existed in the era of GUI, but now smartphones and tablets are driving many more interaction approaches using touch interfaces.

Now the GUI itself is going through a re-birth on mobile platforms, with many more new types of user interface controls than we have seen in the past; we have gone way beyond simple buttons, drop-down lists and edit fields.

Many devices also support voice-driven operation, and although voice recognition has been around for over two decades, the experience is poor and has more recently been drastically oversold by the likes of Apple. However, this is an area that is likely to improve radically in the coming years.

The Microsoft Kinect gaming platform provides yet another innovation in user interaction: a touchless interface using a camera to recognise gestures and movement. Microsoft is already making moves to take this form of user interaction into the mainstream outside of gaming (http://www.bbc.co.uk/news/technology-16836031), as are many other suppliers, and we should see phones and TVs supporting these this year.

However even some old methods of interaction are being given a new lease of life such as Sony’s inclusion of a rear touchpad and dual joysticks.

So, with all these modes of interaction, what does this mean for User Interface Designers? Shouldn't they really be called User Interaction Designers? How do you decide what is the best mode of interaction for an application? Should you support multiple modes of interaction? Should you use different widgets for different interactions? Should the user choose their preferred mode of interaction and the application respond accordingly? Should the mode of interaction be decided by what the device supports? Are there standards for ALL these modes of interaction?

This emerging complexity of different user interaction methods will raise many more questions than I've listed above. So far I have found little research in this area, but this is a moving target. The other evidence from the mobile world is the rapid change in user behaviour as users get used to working in different ways.

Initially I would expect most applications to use basic interactions like touch/click so that the widest possible range of devices can be used. However, those targeting specific devices will be the "early adopters" of the common interaction mode for that specific device (e.g. 3D gestures on Xbox Kinect).

In the very long term standards will evolve and interaction designers and usability experts will combine to design compelling new applications that are “multi-interactive”, choosing the most appropriate interaction method for each action and sometimes supporting multiple types of interaction methods for a single action.

Multi-interactive interfaces will make users' lives easier, but are you ready to provide them?

Is Apple Siri-ous about voice?

May 4, 2012

When I first saw the new iPhone ads featuring voice interaction, all I thought was WOW, Apple have done it, they have mastered voice interaction. What appeared to be natural voice interaction is the nirvana many people have been waiting for to replace point-and-click interfaces.

Speech recognition is not new and certainly both Microsoft and Android had speech driven interfaces before Apple. However it was the ability to talk naturally without breaks and without having to use specific key words that seemed to set Apple apart from the competition.

Instead we were all fooled by another Jobs skill, his “reality distortion field”. Indeed one person went as far as to sue Apple for selling something that did not perform as advertised.

What exactly is the difference? Well, both Windows and Android recognise specific commands and actions, rather like "talking the menu", e.g. saying "File, Open, text.doc". The Apple promise was that you could simply talk as you would normally speak: instead of "File, Open, text.doc" you could just say "get me text.doc".

So why aren't we there? The challenge is creating a dictionary that can understand the synonyms and colloquialisms people may use in conversational speech, as opposed to the very specific commands used in menus and buttons in graphical user interfaces.

Whilst this may seem like a daunting task, I believe the first step to solving this puzzle is to reduce the problem. Rather than creating a super dictionary for absolutely any application, dictionaries should be created for specific types of application, e.g. word processing or banking. This way, the work of creating synonyms, finding colloquialisms and linking them is more manageable.

The next step would be to garner the help of the user community to build the dictionary, so that as words are identified, the user is asked what alternatives could be used and the synonyms and colloquialisms are captured.
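To make the idea concrete, here is a minimal sketch of what such a domain-specific dictionary might look like. This is purely illustrative: the structure, the phrases and the `toCommand` helper are all invented for the example, not part of any real speech product.

```javascript
// Illustrative only: a tiny banking-domain dictionary mapping colloquial
// phrases to one canonical command. A real system would grow this from
// user feedback, as suggested above.
var bankingDictionary = {
  'check balance': ['how much do i have', 'what is my balance', 'show my balance'],
  'transfer funds': ['send money', 'move money', 'pay someone']
};

// Resolve a spoken utterance to a canonical command, or null if unknown.
function toCommand(utterance, dictionary) {
  var phrase = utterance.toLowerCase();
  for (var command in dictionary) {
    if (command === phrase || dictionary[command].indexOf(phrase) !== -1) {
      return command;
    }
  }
  return null; // unknown: prompt the user and capture the new phrase
}
```

The point of the sketch is the scoping: with only one domain to cover, the synonym lists stay small enough to curate.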

I know this is not a detailed specification, but this is an approach that Apple might use to give us what they promised, and what the world is waiting for… speech-driven user interfaces.

HTML5 will be key to a MAD world

February 16, 2012

According to researchers, over 5 billion devices connect to the Internet today, and by 2020 over 22 billion devices, including 6 billion phones and 2 billion TVs, will be connected. By 2014, sales of new internet-connected devices excluding PCs will be over 500 million units a year.

We’re moving into a connected world where people expect internet access any time, any place and on anything, and so many of us will have Multiple Access Devices (MAD).

Therefore it still amazes me to find large corporates with a separate Internet strategy and Mobile strategy. I won't name and shame, but you know who you are! What next, a strategy for tablets and a separate one for Internet TVs?

One of the key principles of HTML5 is that it aims to give you the tools to write once, deploy everywhere; that is, to create applications and content that run appropriately on every device. Of course, where it makes sense, an application might need to take advantage of a specific device's capability (e.g. a camera or GPS), but even then conditional behaviour can be developed to provide such differentiation, rather than developing a whole new application for a specific device.
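The conditional-behaviour idea can be sketched in a few lines. This is a simplified illustration, not a complete implementation: the strategy names are invented, and the navigator object is passed in as a parameter purely so the logic can be exercised outside a browser.

```javascript
// A minimal sketch of capability-based behaviour: one application,
// with a branch per device capability instead of a separate app.
function locationStrategy(nav) {
  if (nav && nav.geolocation) {
    return 'use-device-gps'; // e.g. centre a map on the user automatically
  }
  return 'ask-user';         // e.g. fall back to a postcode entry form
}
```

In a real page you would call this with the browser's own `navigator` object and branch the UI accordingly.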

One of the key enablers here is adopting an approach whereby the content/application VIEW (look and feel) is fully controlled by CSS, namely version 3. CSS3 has a number of features that allow you to control layout and look and feel according to screen real estate. The enabling technology is Media Queries, which allow you to create different VIEWS of an application based on screen dimensions. I'll be writing more about this soon.
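As a flavour of what a Media Query looks like, here is a hedged sketch of two VIEWs of the same content. The class name and breakpoint value are illustrative only; real breakpoints should be chosen for your own designs.

```css
/* Default (wide-screen) layout */
.sidebar { float: left; width: 25%; }

/* Narrow screens, e.g. smartphones: stack the sidebar instead */
@media screen and (max-width: 480px) {
  .sidebar { float: none; width: 100%; }
}
```

The same markup is served to every device; only the stylesheet decides which VIEW applies.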

Creating a MAD (Multiple Access Devices) strategy is not all about technology, though. Organisations will have to monitor, and sometimes drive, changes in customer behaviour; look at how much more time young people spend on smartphones than watching TV or using any other device. With each person having multiple access devices, different devices will likely be used for different parts of the customer buying cycle. If a purchase requires research and thought, it will most likely be done on a PC or laptop; for instant updates (news, stock prices, weather, scores and so on), smartphones; and for entertainment, perhaps tablets will prevail.

There are many more challenges to be discussed, and I hope to cover these in more depth in follow-up blogs. For now, organisations small or large need to create a single MAD strategy that encompasses customer/user needs, monitors behavioural changes and trends, and investigates the capabilities of enabling technologies.

The change will be profound as organisations realise the total impact on processes, skills and technologies required to really master Customer Experience in a MAD world, a journey which the visionaries have already started.

HTML5 makes the browser smarter

January 26, 2012

The unsung hero of the web has always been Javascript, without which the standards-based web would be completely static. Javascript enables functionality to be executed in the browser, and has been used to create all sorts of effects otherwise not possible with HTML alone.

In the early days, Javascript implementations weren't entirely standard, requiring developers to write variants for different browsers; this isn't really an issue any more.

For applications, developers either use libraries or develop their own validation routines. This Javascript code adds significantly to the amount of code downloaded.

With HTML5, developers will need to write less Javascript, as the browser provides features to do things for itself rather than relying on extra scripting.

Validation is the main area of improvement. HTML5 provides a number of new validation features such as mandatory checking, type checking, and range and field-length validation. The validation is done within the browser, and developers can decide how to handle errors.
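A short markup sketch shows these checks in action with no Javascript at all. The form action and field names are invented for the example.

```html
<!-- Built-in HTML5 validation: the browser enforces all of these itself -->
<form action="/signup" method="post">
  <!-- required = mandatory checking; maxlength = field-length validation -->
  <input type="text" name="username" required maxlength="20">
  <!-- type checking: the browser validates the email format -->
  <input type="email" name="email" required>
  <!-- range validation on a numeric field -->
  <input type="number" name="age" min="18" max="120">
  <input type="submit" value="Sign up">
</form>
```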

Obviously, validation still has to be repeated on the server for security, to ensure that data hasn't been tampered with in the browser or in transit. This means that validation has to be maintained in two places and kept in sync.

HTML5 also provides a number of new input field types such as tel, email, color and datetime. This empowers the browser, allowing it to display a date picker or a colour chooser, for example. More importantly for mobile applications, it allows the browser to show an appropriate keyboard layout, e.g. a numeric layout for the tel type and an alphabetic keyboard for the email type.

There are also a number of new attributes that previously required Javascript, such as autocomplete, placeholder and pattern, which will prove very useful.
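Putting the new types and attributes together, a sketch might look like the following. All names, placeholder text and the pattern expression are illustrative only (the pattern here is a loose approximation, not a complete postcode rule).

```html
<!-- New input types: many mobile browsers switch the keyboard layout -->
<input type="tel" name="phone" placeholder="e.g. 01234 567890">
<input type="email" name="email" autocomplete="on">
<input type="color" name="theme">
<!-- pattern: a regular expression the browser enforces on submit -->
<input type="text" name="postcode" pattern="[A-Za-z0-9 ]{5,8}">
```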

There will be some organisations that will not want the browser to affect their carefully designed user experience; for these people the answer is simple: just don't use the new features.

For the rest, you will enjoy writing less Javascript for HTML5 browsers, though of course you will still need backwards compatibility for non-HTML5 browsers, which will rely on Javascript.

Using Polyfill to cover up the cracks in HTML5

October 23, 2011

Long gone are the days when Internet Explorer had 95% of the browser market. We have lived in a multi-browser world since the start of the web. Whilst this has its plus points, it also has its downsides, none more so than ensuring backwards compatibility. Using HTML5 today is not simply a case of whether the browser supports it or not, but which aspects of the huge specification it supports and to what extent. A good site for seeing the various levels of support across browser releases, against different areas of the HTML5 specification, is CanIUse.com.

The W3C's answer for developers creating solutions with HTML5 is that the new features of the spec should "gracefully degrade" when used in older browsers. Essentially this means the new markup or API is ignored and doesn't cause the page to crash. Developers should still test and develop for backwards compatibility, which can be an onerous task. However, help is at hand: with libraries like Modernizr you can detect which features of HTML5 the browser supports.
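The essence of such detection is probing the DOM rather than sniffing browser names. The sketch below is hand-rolled in the spirit of Modernizr, not the library's actual API; the document object is passed in as a parameter so the functions can be exercised with a stub outside a browser.

```javascript
// Feature detection sketch: ask the browser what it can do.
function supportsVideo(doc) {
  // Browsers without HTML5 video create an element lacking canPlayType
  return !!doc.createElement('video').canPlayType;
}

function supportsInputType(doc, type) {
  var input = doc.createElement('input');
  input.setAttribute('type', type);
  // Per the spec, unknown input types fall back to "text"
  return input.type === type;
}
```

In a page you would call these with the real `document` and branch to a polyfill when a check fails.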

Once you know that the browser doesn't support an HTML5 feature you have used, you can write or use a third-party "polyfill". In HTML, a polyfill is essentially code that provides alternative behaviour to simulate an HTML5 feature in a browser that does not support it. There are lots of sites providing polyfills for different parts of the HTML5 spec; a pretty good one can be found here, listing libraries covering almost all parts of the specification.
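The general shape of a polyfill is always the same: detect, then patch only when the feature is missing. The example below uses `String.prototype.trim`, which is an ES5 feature rather than HTML5, but the pattern is identical for HTML5 API polyfills.

```javascript
// The polyfill pattern: define the feature only if the browser lacks it,
// so native implementations are never overridden.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    // Strip leading and trailing whitespace
    return this.replace(/^\s+|\s+$/g, '');
  };
}
```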

For me a big concern is that I've not yet been able to find a single provider that gives you polyfills for the whole of HTML5, or even the majority of the specification. This could mean that you have to use several different libraries, which may or may not be compatible with each other. Another big concern is that each polyfill provides a varying level of backwards compatibility, i.e. some will support browsers back to IE 6 and some will not.

With users moving more of their browsing to smartphones and tablets, which typically have the latest browser technology supporting HTML5, backwards compatibility may not be an issue. However, it will be several years before the HTML5 spec is complete, and even then new specs are being created all the time within the W3C. So, far from being a temporary fix, the use of polyfills will become standard practice in web development, unless of course you take the brave stance of saying your application is only supported on HTML5 browsers.

However, this does raise another question: if you can simulate HTML5 behaviour, do you need to use HTML5 at all to create richer applications? The answer is quite possibly not, but using HTML5 where it is available will certainly improve your user experience and make development of your rich internet applications simpler.

HTML5: the proprietary standard

October 16, 2011

The good thing about standards is that they are uniform across different vendor implementations. Well, that is at least the primary goal. So how does a vendor make a standard proprietary?

Well, it's quite easy really: you provide extensions to the standard for features that are not yet implemented in the standard. Vendors wouldn't be that unscrupulous, would they? For example, would they create application servers following standards but add their own extensions to "hook you in"... sorry, I mean to add value beyond what the standards provide ;o)

I’m sure Microsoft’s announcement at Build to allow developers to create Windows 8 Metro applications using HTML5 and Javascript took many Microsoft developers by surprise. What is Microsoft’s game plan with this?

Optimists will cry that it opens Metro development out to the wider base of web developers rather than just the closed Microsoft community. Cynics will argue that it is an evil ploy for Microsoft to play the open card whilst actually hooking you into their proprietary OS. In the cynics' corner, a good example is Microsoft's defiant stance on Direct3D versus the open standard alternative, OpenGL. This has led to Google developing ANGLE, which effectively allows OpenGL calls to be translated into Direct3D ones so that the same programs can run on Microsoft platforms.

Whatever it is, developers aiming for cross-platform conformance will need to stay sharp to ensure that proprietary extensions do not make their applications incompatible across different environments.

Adobe's recent donation of CSS Shaders shows a more charitable approach, whereby extensions are donated back to the standards bodies to make the "value added" features available to every platform. This is largely the way standards evolve, with independent committees validating vendor contributions.

So what is Microsoft's game? It's too early to say whether there is an altruistic angle to their support for HTML5 and JS, but history has shown us that the empire is not afraid to strike back. Look at their collaboration with IBM on OS/2, which ended with Microsoft leaving IBM in the lurch by launching Windows NT. A similar story played out not long after with Sybase and SQL Server.

I may be a cynic, but having been a Windows developer from Windows 1.0 to Windows NT, following a road of promises and U-turns has made me that way when it comes to Microsoft. It's great to see increasing support for HTML5, but I am always a little concerned about the motivations of the Redmond camp. However, perhaps I myself need to be "open" to a different Microsoft, one that is embracing standards even though it may cannibalise its own Silverlight technology.

The BBC does a U-turn on HTML5

October 12, 2011

Rewind to August 13th 2010, when Erik Huggers, Director of BBC Future and Technology, blogged that the BBC was committed to open standards but that "HTML5 is starting to sail off course".

At the time I thought this was a brave statement to make, especially as the late Steve Jobs had already announced in April that year that, despite a billion app downloads, HTML5 negated the need for many proprietary browser plug-ins. It was clear at the time that this was aimed largely at Flash (and possibly Silverlight too).

For me this was a faux pas too far, as Huggers continued his blog with statements about proprietary implementations of HTML5 by Apple and factions of opinion within the W3C and the WHATWG (who initiated the development of HTML5).

Now fast forward almost exactly a year, and Gideon Summerfield, Executive Product Manager for BBC iPlayer, announced the launch of an HTML5 version of iPlayer. Initially this is just aimed at the PS3, but it will roll out to other devices in the future.

If we take a slight diversion and look at the developer conferences for both Microsoft and Adobe in the last four weeks, both made big announcements about tools and support for HTML5. However, committed developers with years of invested skills in Silverlight and Flash were left deflated by the lack of announcements on the future of those technologies.

So have the sleeping giants finally woken up? It seems like it to me.

However, in the case of the BBC, Summerfield's blog states that they will also launch new versions of iPlayer for Flash and AIR. This may be a short-term decision while waiting for wider HTML5 support, but there is little clarity about what they see as the future for iPlayer.

To Huggers' credit, he did foresee the benefits HTML5 could bring to the BBC in reducing development timescales and having a common skill set.

That said, I applaud those who have the courage and conviction to take bold steps forward and put their money where their mouth is. The FT is a shining example, ditching their App Store versions for iDevices and moving completely to HTML5.

There is work to be done on HTML5 and it will evolve for some time yet, but the bandwagon has started to roll, and as a good friend of mine said to me at the start of the .com era, "When you see the bandwagon starting to move, you have a choice: jump on, or stand in the way of a tonne of metal!"

For me it is clear. I'm not standing in the middle of the road; I'm jumping squarely onto the HTML5 bandwagon. The question is, are you?

The web battle: HTML5 vs Silverlight vs Flash

July 7, 2011

My previous posts about "The end of Silverlight" and "The end of Flash" both raised active debate. The general view was that I knew too little about Silverlight and Flash to make such brash claims, and whilst there is some truth in that, it also transpired that general awareness of what HTML5 can do today, and what it promises when complete, is poor. That is the issue my run of posts on HTML5 has really sought to address. Hopefully, for those who haven't had any exposure to HTML5, my posts have been of value.

However, we know that Adobe is already building and supporting HTML5 development through tools like Dreamweaver, and that Microsoft is doing the same with Visual Studio. So, at the very least in the short to medium term, both will have dual strategies.

The longer term is much more difficult to forecast. There is a place for both, especially for rich multimedia applications and gaming, but for business applications only a small minority could possibly require them. In their report "The (not so) Future Web", Gartner agree, saying that "Gartner expects leading RIA vendors to maintain a pace of innovation that keeps them relevant, but for a gradually shrinking percentage of Web applications."

However, one can't ignore that web technology is evolving fast and that new specs are already filling in the gaps for HTML5; for example, work is in progress for TV and gestures, as well as the previously mentioned 3D graphics. We are seeing major new releases of browsers with greater support for HTML5 launched at a faster rate than ever before, coupled with a battle for the fastest Javascript engine. A new release of Javascript promises much better standardisation as well as new features.

The developer forums are now awash with an outcry from loyal Microsoft developers demanding to know the place of Silverlight in Microsoft's grand plans, where once there was no doubt that Silverlight was core to Microsoft. IMHO, I doubt Microsoft will make a U-turn on Silverlight, but I will reiterate that the need for Silverlight in business applications will lessen as HTML5 matures.

Whilst I've been an active follower of and advocate for HTML5, what I see lacking is a roadmap and vision for HTML: a lot more detail about how the semantic web will evolve and what it means to developers in the short and medium term. This is something the vendors seem much better at, and it is no wonder developers buy in to certain technologies over others.

In the end, as always, the real question is not which is the better technology, but which is the appropriate technology for what you need to achieve and for the audience and platforms you are targeting.

http://www.adobe.com/devnet/dreamweaver/articles/dwhtml5pt1.html

http://visualstudiogallery.msdn.microsoft.com/d771cbc8-d60a-40b0-a1d8-f19fc393127d

http://www.w3.org/standards/webofdevices/tv

http://www.w3.org/standards/webofdevices/multimodal

http://www.khronos.org/registry/webgl/specs/1.0/

http://www.ecma-international.org/publications/standards/Ecma-262.htm

HTML5 Audio and Video come as standard

June 26, 2011

Movie and audio features in HTML5 are like many of the features I have discussed previously. They:
•    have a history of controversy, in this case over codec support
•    have specifications too large to do real justice to in these short posts
•    are an exciting, powerful new addition that will transform the web

To date the most popular media player on the web has been Adobe's Flash player, and media playback has arguably been Flash's most popular use. Apple's lack of support for Flash on their devices has created a small crack in Adobe's party, but this crack could open further into a chasm that Flash drops into! However, there have been many other shenanigans in this story, and rather than delve into those murky waters I'm going to again give a brief overview of the capabilities of these new features. The good news is that HTML5 will remove the need for proprietary plug-ins like Flash and QuickTime for playing sound and movies.

audio and video are both media elements in HTML5, and as such share common APIs for their control. In fact you can load video content into an audio element and vice versa; the only difference is that the video element has a display area for content, whereas the audio element does not. Defining an audio element and source file is pretty straightforward:

<audio controls src="my musicfile.mp3">
My audio clip
</audio>

You can actually assign multiple source (src) files. This allows you to provide the audio in multiple formats, so that you can support the widest array of browsers. The browser will go through the list in sequential order and play the first file it can support, so it's important to list them in order of quality, best first, rather than by most popular format.
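The multiple-source pattern described above might look like this (the file names are illustrative, and the text inside the element is the fallback for browsers with no audio support at all):

```html
<!-- The browser plays the first format it supports, in document order -->
<audio controls>
  <source src="clip.ogg" type="audio/ogg">
  <source src="clip.mp3" type="audio/mpeg">
  My audio clip
</audio>
```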

To load a movie you simply replace the audio element with video. Videos can also define multiple sources. You may additionally specify the height and width of the video display area.

Next, to control media you can use the following APIs: load(), play() and pause(); what they do is self-explanatory. canPlayType(type) can be used to check whether a specific format is supported.
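As a sketch of how canPlayType might be used, the helper below picks the first source a media element reports it can play. Note that canPlayType returns "", "maybe" or "probably"; anything non-empty counts as playable here. The element is passed in as a parameter purely for testability; in a page you would use the element returned by document.querySelector or similar.

```javascript
// Pick the first playable source from an ordered list of
// { src, type } entries, or null if none are supported.
function firstPlayable(media, sources) {
  for (var i = 0; i < sources.length; i++) {
    if (media.canPlayType(sources[i].type) !== '') {
      return sources[i].src;
    }
  }
  return null;
}
```

This mirrors what the browser itself does with multiple source elements, but gives you the decision in script, e.g. to log it or to fall back to a Flash player.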

Some read-only attributes can be queried, such as duration, paused, ended and currentSrc, to check the duration of the media, whether it has been paused or has ended, and which src is being played.

You can also set a number of attributes such as autoplay, loop, controls and volume to automatically start the media, play it on repeat, show or hide the media controls and set the volume.

These aren't exhaustive lists of APIs or attributes, as there are many more, but they are some of the features of audio and video people will use most. With video especially there are many more great things you can achieve, like creating timelines and displaying dynamic content at specific points in the video (no doubt this will be used for advertising, amongst other more interesting uses).

Clearly the web will get richer, with full multimedia content without the prerequisite of plug-ins. However, developers should be aware of the various formats supported by specific browsers and aim to provide media in as many formats as possible.

Many sites today do use sound and movies, but I believe that with native support and greater imagination a new world of dynamic rich media sites will change the user experience, in the same way that Ajax transformed static content into the dynamic web. With it we will see new online behaviours, a topic I will cover soon, and whilst some have said the future of TV is online, the web may just give it a new lease of life!

Further reading:
http://dev.w3.org/html5/spec/Overview.html#media-elements