Posts Tagged ‘GUI’

Vertical User Experience Platform

July 5, 2012

Whilst discussing with a customer what a UXP is and who the key players are, I was asked an interesting question: "is there a need for an industry-specific (banking, retail, government…) UXP?"

My immediate reaction was that the technologies in a UXP are generic, horizontal solutions that should be agnostic to the industry they are implemented in. The fact that they are specialised solutions rather than industry-specific ones was, to me, a key advantage. So why would you want a content management solution or collaboration tool that was specific to banking or retail?

The response was interesting: for many smaller companies the complexity of managing their web presence is huge. Even if they buy into a single-vendor approach, for example using Microsoft SharePoint, they still have a huge task to set up the individual components (content management, collaboration, social tools and apps), and this is only made harder by the need to support an increasing array of devices (phone, tablet, TV etc.).

It seems there is a need for an offering that provides an integrated, full UXP that can be set up easily and quickly without the need for an army of developers. Compromises on absolute flexibility are acceptable provided a rich set of templates (or the ability to create custom templates) is provided, such that the templates handle device support automatically. Further, the UXP might offer vertical-specific content feeds out of the box.

As in my previous blog, "The End of Silo Architectures", using a UXP front-end technology to create industry-specific apps is a great idea. Such a solution could not only provide the business functionality (e.g. Internet banking, insurance quotes/claims, stock trading) but also handle the technical issues of cross-device and browser support, security and performance.

So whilst I can understand the requirement and the obvious benefit, the idea of a vertical UXP seems to me like providing a vertical-specific CRM or accounting package. The real answer is that it makes sense to provide vertical apps and use generic content, collaboration and social tools from a UXP. Ideally the generic components are integrated and have easy-to-configure templates.

As I have highlighted before, though, the UXP is complex not just from a technology perspective but also from the perspective of skills, processes and standards. The first step for any organisation must be to create a strategy for UXP: audit what you currently have, document what you need (taking into consideration current trends like social, gamification and mobile) and then decide how you move forward.

Unfortunately this area currently seems ill served by the consultancy companies, so it may just be up to you to roll your own strategy.

Future of mobile: Part 3

May 13, 2012

Today I have 3 GPS devices, 4 cameras, 3 video cameras, 3 movie players, 5 music players, and the list goes on. All of these are in a variety of devices that I use in different places for different purposes.

Drilling down into the detail, what I actually have is a phone, a desktop home PC, a laptop, an iPod, a car stereo, in-car GPS, a TV with HD/DVD player and a digital SLR. And that's just me, not including what the family has.

This presents a number of challenges and risks, as well as a lot of cost…

Most of us want as little duplication of cost as possible. Even though cars come with stereos, many people now plug in their MP3 players, using only the car's speakers. Many people will also use their phone's GPS rather than the car's. Newer TVs have wireless access to browsing and social apps. I'm tempted by the hype of tablet computing, but I have to ask myself: why, when I already have all the computing options I need?

More devices mean more synchronisation issues for personal settings and personal data. While cloud-based services will resolve many of these issues, it is still early days to move everything into the cloud, as users of Megaupload found.

In 1999 I went to a tech show in Vegas, where I saw a potential solution to the problem from Sony. They were demonstrating the concept of "apps on sticks". These were memory sticks (32MB maximum at the time) with other devices, like GPS, radio and even a camera, on the stick. The idea was simple: you'd plug your GPS stick into your phone, laptop, car or any other device, rather than have that function in multiple devices. This approach would have required a lot of standardisation and is clearly a concept that never came to fruition.

More recently Asus have launched their PadFone, a smartphone that comes with a tablet screen. When you need a bit more screen estate, you simply slot your phone into the back of the screen and hey presto, you have a tablet that can use the 3G or wireless connection on your phone. Apart from charging your phone, the tablet screen also integrates with the phone itself, so voice and video calls can be made and received using the tablet screen.

This concept really works for me, and I could see myself buying into a family of such products: TV, car stereo, projector. Combine this with having my data in the cloud, so that losing the phone is not the end of the world, and you have a great solution. Whether the phone slots in or connects wirelessly, the ability to drive a different screen from my phone works for me as a concept. Maybe the idea could be taken even further, so that the circuitry for each device could be slotted into the phone itself?

As I've discussed in my previous blogs, there are many new avenues for phones in shape, size and function. It would be difficult to predict the future with so many possibilities, but one thing is for sure: for gadget geeks like me, the phone is going to be the constant source of innovation we thrive on.

Click, touch, wave and talk: UI of the future

May 10, 2012

First there was the Character User Interface (CUI, pronounced cooo-eey), typified by green letters on a black background. Then the Graphical User Interface (GUI, pronounced goo-eey) came along with a mouse and icons. Pen interfaces existed in the era of the GUI, but now smartphones and tablets are driving many more interaction approaches using touch interfaces.

Now the GUI itself is going through a rebirth on mobile platforms, with many more new types of user interface controls than we have seen in the past; we have gone way beyond simple buttons, drop-down lists and edit fields.

Many devices also support voice-driven operations, and although voice recognition has been around for over two decades, the experience is poor and has more recently been drastically oversold by the likes of Apple. However, this is an area that is likely to improve radically in the coming years.

The Microsoft Kinect gaming platform provides yet another innovation in user interaction: a touchless interface using a camera to recognise gestures and movement. Microsoft are already making moves to take this form of user interaction into the mainstream outside of gaming (http://www.bbc.co.uk/news/technology-16836031), as are many other suppliers, and we should see phones and TVs supporting these this year.

However, even some old methods of interaction are being given a new lease of life, such as Sony's inclusion of a rear touchpad and dual joysticks.

So, with all these modes of interaction, what does this mean for User Interface Designers? Shouldn't they really be called User Interaction Designers? How do you decide the best mode of interaction for an application? Should you support multiple modes of interaction? Should you use different widgets for different interactions? Should the user choose their preferred mode of interaction, with the application responding accordingly? Should the mode of interaction be decided by what the device supports? Are there standards for ALL these modes of interaction?

This emerging complexity of user interaction methods will raise many more questions than I've listed above. So far I have found only a little research in this area, but this is a moving target. The other evidence from the mobile world is the rapid change in user behaviour as users get used to working in different ways.

Initially I would imagine most applications will use basic interactions like touch/click so that the widest possible range of devices can be used. However, those targeting specific devices will be the "early adopters" of the common interaction mode for that device (e.g. 3D gestures on Xbox Kinect).

In the very long term, standards will evolve, and interaction designers and usability experts will combine to design compelling new applications that are "multi-interactive", choosing the most appropriate interaction method for each action and sometimes supporting multiple types of interaction method for a single action.

Multi-interactive interfaces will make users' lives easier, but are you ready to provide them?

Is the growth of Mobile Apps overhyped?

March 22, 2012

There are numerous statistics on the growth of mobile apps in the various stores, and on the number of downloads. Apple claims over 500,000 apps in its store and Google claims over 450,000 (this time last year it had only 150,000). The number of apps, the volume of downloads and the rate of growth are phenomenal.

Is this just a temporary fever or will this growth continue, and if so what will drive it?

I believe this growth has only just started and that there are two key trends that will drive this growth further.

Firstly, development for smartphones will get simpler. VisionMobile's latest survey profiles over a hundred development tools for creating mobile apps. My guess is that this is a very conservative estimate of the actual number of tools out there.

A common goal for many of these providers is to make programming simpler so that more people can code. For some, this goes further, to the extent that tools are being created for children to develop apps at school. So more developers will mean more apps!

Secondly, and this for me is the more exciting aspect: phones will do more, which means that apps will get more innovative.

Today there is a wide variety of apps already, some of which use features of the phone itself like the camera, GPS or microphone. Coming down the line are many more features that will be embedded into phones, for example the ability to detect a user's emotions or to monitor a user's health. Such features will drive yet more applications and innovations, from personal healthcare to fraud detection.

Apart from new features, phones will start to interact with other devices such as your TV. At a simple level, your smartphone can already be used as a remote control for your TV or to join in with live TV quiz shows. Phones are already interacting with cars, and this integration will inevitably go further, so that your engine management system feeds your phone with data that an app can use.

Recent surveys from recruitment agencies highlight the growing demand for mobile developers, and more interestingly the re-skilling of developers to position themselves for this growth.

Exciting times are ahead for developers and entrepreneurs who will show that Angry Birds isn’t the only way to make big money in mobile.

Mobile Apps: When to go native

March 15, 2012

Let me say from the outset that there is no right answer for everybody. The battle between cross-platform solutions and native mobile applications is going to continue for years to come; I know I have blogged about this before, and probably will again.

For many corporate applications, native code offers the marketing group richer customer experiences, the business the chance to innovate solutions using device-specific solutions, and IT some new development tools.

However, if an organisation has to support the widest range of phones possible, the development of native apps becomes cumbersome, since you then need to write apps for each of the major mobile platforms available.

Part of this decision depends on whether you decide to support older phones, i.e. non-smartphones. For non-smartphone support you'll need to build in support for features ranging from SMS text services to basic text browsers.

Typically this applies when operating in developing countries. In developed countries like the UK, the growth of smartphones means that there is now a critical mass of users crowding out lower-featured handsets.

If you decide to target smartphones, then you still have a choice. You can:

  1. Build for each platform, using its own development tools
  2. Use a cross platform mobile development solution, or
  3. Write your app as a browser solution.

So how does an organisation decide which way to go?

I found this useful little questionnaire developed by InfoTech. It takes you through a set of questions about your needs, and then suggests the best way forward between a native solution and a web-based solution.

As a quick guide for reviewing a specific tactical requirement, I thought it was pretty good and asked very pertinent questions. Obviously this is something that an IT department could expand or specialise for its own needs, and so it provides a useful structured approach to making impartial decisions without emotional bias.

Where support for multiple platforms is crucial, a more difficult decision will be whether to use a cross-platform mobile development solution or to go for a pure web (and possibly HTML5) solution.

I’ll discuss this issue in a future blog, but for the time being, check out the questionnaire to start thinking about your mobile approach.

The BBC does a U turn on HTML5

October 12, 2011

Rewind to August 13th 2010, when Erik Huggers, Director of BBC Future Media and Technology, blogged that the BBC were committed to open standards but that "HTML5 is starting to sail off course".

At the time I thought this was a brave statement to make, especially as the late Steve Jobs had already announced in April that year that, despite a billion app downloads, HTML5 negated the need for many proprietary browser plug-ins. It was clear at the time that this was aimed squarely at Flash (and possibly Silverlight too).

For me this was a faux pas too far, as Huggers continued his blog with statements about proprietary implementations of HTML5 by Apple and factions of opinion within the W3C and the WHATWG (who initiated the development of HTML5).

Now fast forward almost exactly a year, and Gideon Summerfield, Executive Product Manager for BBC iPlayer, announced the launch of an HTML5 version of iPlayer. Initially this is just aimed at the PS3, but it will roll out to other devices in the future.

If we take a slight diversion and look at the developer conferences held by both Microsoft and Adobe in the last four weeks, both made big announcements about tools and support for HTML5. However, committed developers with years of invested skills in Silverlight and Flash were left deflated by the lack of announcements on the future of these technologies.

So have the sleeping giants finally woken up? It seems like it to me.

However, in the case of the BBC, Summerfield's blog states that they will also launch new versions of iPlayer for Flash and AIR. This may be a short-term decision while waiting for wider support for HTML5, but there is little clarity about what they see as the future for iPlayer.

To Huggers' credit, he did foresee the benefits that HTML5 could bring to the BBC in reducing development timescales and having a common skill set.

However, I applaud those who have the courage and conviction to take bold steps forward and put their money where their mouth is. The FT is a shining example, ditching their App Store versions for iDevices and moving completely to HTML5.

There is work to be done on HTML5 and it will evolve for some time yet, but the bandwagon has started to roll, and as a good friend of mine said to me at the start of the .com era, "When you see the bandwagon starting to move, you have a choice: jump on, or stand in the way of a tonne of metal!"

For me it is clear. I'm not standing in the middle of the road; I'm jumping squarely onto the HTML5 bandwagon. The question is, are you?

HTML5 Audio and Video comes as standard

June 26, 2011

Movie and audio features in HTML5 are like many of the features I have discussed previously, in that they:
•    have a history of controversy, in this case over codec support
•    have specifications too large to do real justice to in these short posts
•    are an exciting, powerful new addition that will transform the web

To date the most popular media player on the web has been Adobe's Flash player, and playing media has arguably been its most popular use. Apple's lack of support for Flash on their devices has created a small crack in Adobe's dominance, and that crack could open further into a chasm that Flash drops into! There have been many other shenanigans in this story, but rather than delve into those murky waters I'm going to again give a brief overview of the capabilities of these new features. The good news is that HTML5 will remove the need for proprietary plug-ins like Flash and QuickTime for playing sound and movies.

audio and video are both media elements in HTML5, and as such share common APIs for their control. In fact you can load video content into an audio element and vice versa; the only difference is that the video element has a display area for content, whereas the audio element does not. Defining an audio element and its source file is pretty straightforward:

<audio controls src="mymusicfile.mp3">
My audio clip
</audio>

You can actually list multiple sources using the nested source element. This allows you to provide the audio in multiple formats, so that you can support the widest array of browsers. The browser will go through the list in sequential order and play the first file it can support, so it's important to list them in order of best quality first rather than by most popular format.
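
A minimal sketch of this fallback mechanism, with hypothetical file names; the type attribute lets the browser skip formats it can't play:

<audio controls>
  <source src="mymusicfile.ogg" type="audio/ogg">
  <source src="mymusicfile.mp3" type="audio/mpeg">
  My audio clip
</audio>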

To load a movie you simply replace the audio element with video. Videos can also define multiple sources, and you may additionally specify the height and width of the video display area.
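
A hedged example of the equivalent video mark-up, again with hypothetical file names, a fixed display area and the same source fallback list:

<video controls width="640" height="360">
  <source src="mymovie.webm" type="video/webm">
  <source src="mymovie.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>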

Next, to control media you can use the following APIs: load(), play() and pause(); I think what they do is self-explanatory. canPlayType(type) can be used to check whether a specific format is supported.

Some read-only attributes can be queried, such as duration, paused, ended and currentSrc, to check the duration of the media, whether it has been paused or ended, and which source is being played.

You can also set a number of attributes, such as autoplay, loop, controls and volume, to automatically start the media, play it on repeat, show or hide the media controls and set the volume.
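
Pulling these together, a small sketch of scripting a media element; the element id here is hypothetical, and note that duration is only known once the browser has loaded the media's metadata:

var player = document.getElementById('myAudio');

// canPlayType returns "", "maybe" or "probably"
if (player.canPlayType('audio/mpeg')) {
  player.volume = 0.5;   // play at half volume
  player.loop = true;    // repeat when finished
  player.play();         // start playback
}

console.log(player.duration);  // length in seconds (NaN until metadata loads)
console.log(player.paused);    // true if not currently playing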

These aren't exhaustive lists of APIs or attributes, as there are many more, but they are some of the features of audio and video that people will use most. With video especially there are many more great things you can achieve, like creating timelines and displaying dynamic content at specific points in the video (no doubt this will be used for advertising, amongst other more interesting uses).

Clearly the web will get richer, with full multimedia content without the prerequisite of plug-ins. However, developers should be aware of the various formats supported by specific browsers and aim to provide media in as many formats as possible.

Many sites today do use sound and movies, but I believe that with native support and greater imagination a new world of dynamic rich media sites will change the user experience in the same way that Ajax transformed static content into the dynamic web. With it we will see new online behaviours, a topic I will cover soon, and whilst some have said the future of TV is online, the web may just give it a new lease of life!

Further reading:
http://dev.w3.org/html5/spec/Overview.html#media-elements

HTML5: The web just got richer

June 23, 2011

HTML5 graphics features will drive gaming and rich media to the web

My post about HTML5 games is a great way to experience one of the features of HTML5 that will no doubt make a huge difference to browser experiences. It's hard to imagine in our rich multimedia world that, without proprietary techniques or plug-ins, web standards do not support the simple task of drawing a line, box, circle or any other shape! All that is changing in HTML5 with the Canvas API.

<canvas> </canvas>

This is the basic canvas element notation that will set the world wide web alight with animation and richer experiences. It is already transforming the art of the possible in gaming on the web. The original 2D API was created within WebKit by Apple, who then provided it to the standards body with a royalty-free patent licence. In February this year the Khronos Group issued version 1.0 of its WebGL API, providing 3D rendering capabilities within browsers.

So what is a canvas? A canvas is essentially a rectangular area of the screen on which you can add and manipulate graphics. Simple? If only. The basic action of drawing a line requires: creating a canvas, getting the canvas context, starting a path (beginPath()), defining a start point (moveTo(x, y)), moving to the end of the line (lineTo(x, y)), optionally closing the path to say we have finished (closePath()) and finally drawing the line (stroke()). If this sounds complex, let me tell you that achieving even this simple action previously involved some very imaginative code and image manipulation.
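
As a minimal sketch, assuming a page containing <canvas id="myCanvas" width="200" height="100">, the whole sequence looks like this:

var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');  // get the 2D drawing context

ctx.beginPath();       // start a new path
ctx.moveTo(10, 10);    // start point of the line
ctx.lineTo(150, 75);   // end point of the line
ctx.stroke();          // actually draw it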

You can create simple or complex shapes using "paths". Drawing a line following the path only occurs when you call stroke(). If the path creates a shape, it can also be "filled" using fill(). A fill can have colours and styles (e.g. a pattern, gradient or image).
For more curvy shapes you need to look to quadraticCurveTo(), bezierCurveTo(), arcTo() and arc() to create a path that curves.
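
For example, a hedged sketch (reusing the ctx context from above) of a shape with a curved top edge, filled with a gradient:

ctx.beginPath();
ctx.moveTo(50, 50);
ctx.quadraticCurveTo(100, 0, 150, 50);  // curved top edge
ctx.lineTo(150, 120);
ctx.lineTo(50, 120);
ctx.closePath();                        // close the shape

var gradient = ctx.createLinearGradient(0, 50, 0, 120);
gradient.addColorStop(0, 'orange');
gradient.addColorStop(1, 'red');
ctx.fillStyle = gradient;
ctx.fill();    // fill the enclosed path
ctx.stroke();  // and outline it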

Apart from drawing and colouring ("filling"), images can be transformed: scaled (scale()), rotated (rotate()) and skewed (transform()).
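
A brief sketch of how those transforms combine; they apply to whatever is drawn next, so it is good practice to save and restore the context state around them:

ctx.save();                         // remember the current state
ctx.translate(200, 100);            // move the origin
ctx.rotate(Math.PI / 4);            // rotate 45 degrees
ctx.scale(2, 2);                    // double the size
ctx.transform(1, 0, 0.5, 1, 0, 0);  // skew horizontally via the matrix
ctx.fillRect(0, 0, 20, 20);         // drawn moved, rotated, scaled and skewed
ctx.restore();                      // back to the untransformed state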

Apart from shapes, you can of course draw and fill in text on the canvas. There are also APIs to set the attributes of shadows: shadowColor sets the colour of the shadow, shadowOffsetX and shadowOffsetY position the shadow relative to the original shape or text, and shadowBlur allows you to blur the shadow.
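
A quick sketch of text with a soft drop shadow, again reusing the same context:

ctx.shadowColor = 'rgba(0, 0, 0, 0.5)';
ctx.shadowOffsetX = 4;  // shadow 4px to the right
ctx.shadowOffsetY = 4;  // and 4px down
ctx.shadowBlur = 6;     // soften the edges
ctx.font = '24px sans-serif';
ctx.fillText('Hello Canvas', 30, 160);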

There is so much more that can be said about Canvas that I almost didn't bother with this simple overview for fear of not doing it justice. However, in the end I felt it needed a mention. The latest draft of the specification can be found here. Beyond this, WebGL will provide a whole new set of more powerful capabilities; however, it is currently way behind the basic Canvas features in terms of browser support.

Further reading
http://dev.w3.org/html5/2dcontext/
https://www.khronos.org/registry/webgl/specs/1.0/

HTML5 gets a database

June 9, 2011

As a relative latecomer to HTML5, trying to catch up on a spec that spans over 1,000 pages is no mean feat, let alone the fact that the definition of what makes up HTML5 is spread across several specs (see my previous blog on standards spaghetti). If you've been following this series then you'll have worked out that I have a few favourite features that I think will radically change the perception of web applications, and, you guessed it, HTML5's support for database access is another.

The specification's history started as early as 2006, and the SQL-based approach (WebSQL) went as far as implementation in WebKit-based browsers such as Safari and Chrome. From what I can find, Oracle made the original WebSimpleDB proposal in 2009 and the W3C switched to Indexed DB sometime in 2010. Although Mozilla already had their own implementation using SQLite, they too preferred IndexedDB. The current status, as of April 2011, is that the IndexedDB spec is still in draft and, according to www.caniuse.com, early implementations exist in Chrome 11 and Firefox 4. Microsoft have released a prototype on their HTML5 Labs site to show their current support.

Clearly it is not ready for live commercial applications in the short term, but it is certainly something worth keeping your eye on and planning for. When an application requires more than simple key-value pairs or needs large amounts of data, IndexedDB should be your choice over HTML5's Web Storage APIs (localStorage and sessionStorage).

The first important thing about IndexedDB is that it is not a relational database but an object store. Hence there are no tables, rows or columns, and there is no SQL for querying the data. Instead, data is stored as JavaScript objects and navigated using cursors. The database can, however, have indexes defined.

Next, there are two modes of interaction: asynchronous and synchronous APIs. As you would imagine, the synchronous APIs DO block the calling thread (i.e. each call waits for a response before returning control and data). It follows that the asynchronous APIs do NOT block the calling thread; when using them, a callback function is required to respond to the events fired by the database after an instruction has completed.

Both approaches provide APIs for opening, closing and deleting a database. Databases are versioned, and each database can have one or more object stores. There are CRUD APIs for object store access (put, get, add, delete) as well as APIs to create and delete indexes.

Access to the object stores is wrapped in transactions, and a single transaction can cover multiple object stores, as well as multiple actions on one store.
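
To make this concrete, here is a hedged sketch of the asynchronous API in roughly the shape the draft has settled into; the database, store and field names are hypothetical, and early implementations hide the entry point behind vendor prefixes (mozIndexedDB, webkitIndexedDB), so details may still change:

var request = indexedDB.open('contactsDB', 1);

request.onupgradeneeded = function (e) {
  var db = e.target.result;
  // Define an object store keyed on 'id', with an index on 'name'
  var store = db.createObjectStore('contacts', { keyPath: 'id' });
  store.createIndex('by_name', 'name', { unique: false });
};

request.onsuccess = function (e) {
  var db = e.target.result;
  // All reads and writes happen inside a transaction
  var tx = db.transaction('contacts', 'readwrite');
  var store = tx.objectStore('contacts');
  store.put({ id: 1, name: 'Alice', phone: '555-0100' });

  var get = store.get(1);
  get.onsuccess = function () {
    console.log(get.result.name); // 'Alice'
  };

  // Instead of SQL queries, you navigate the store with a cursor
  store.openCursor().onsuccess = function (ev) {
    var cursor = ev.target.result;
    if (cursor) {
      console.log(cursor.value.name); // visit each record in turn
      cursor.continue();
    }
  };
};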

At a very high level, there you have it: IndexedDB is a feature that allows you to manage data in the browser. This will be useful not only for online applications (e.g. a server-based warehouse could export data cubes for local access) but also for offline applications, to hold data until a connection can be established. I'd fully expect a slew of JavaScript frameworks to add value on top of what the standards provide; indeed, persistence.js is one such example.

It's good to see early implementations and prototypes for IndexedDB, and whilst the date for finalising this spec is unclear, I for one will be monitoring its progress closely and waiting with bated breath for its finalisation.

http://www.w3.org/TR/webdatabase/

http://www.w3.org/TR/IndexedDB/

http://hacks.mozilla.org/2010/06/beyond-html5-database-apis-and-the-road-to-indexeddb/

http://trac.webkit.org/export/70913/trunk/LayoutTests/storage/indexeddb/tutorial.html

HTML5 gets cross with domains

June 2, 2011

In my last post I gave an overview of a number of new communication features in HTML5 that to my mind will transform web applications going forward. One of those features was Cross Document Messaging, which is one of the most exciting new features I've found in the specification.

One of the big constraints on web applications has been allowing separate applications to talk to each other. For example, let's say you have a phone book application and a news feed application. If you wanted to show news for a person you had selected in your phone book, you couldn't get the phone book to tell the news application which person you had selected; you would have to enter the person's name in the news application and then search: a series of manual steps, as automation wasn't possible.

Along came portal technology, followed by portal standards (JSR 168, then JSR 286), which enabled applications to do exactly this: share data (of course, portals have many other features and benefits too). Whilst a portal page can consist of applications (portlets) running off different servers, the main constraint on sharing data is that the portlets have to be on the same portal page, i.e. you couldn't have two applications running in separate browser windows or tabs and sharing data or passing messages between them.

Cross Document Messaging overcomes this. Essentially an application can use postMessage to send a message:

my_iFrame.contentWindow.postMessage('Hi there', 'http://www.myapp.com/');

NOTE: you can't send a message from an https application to an http application (or vice versa), as the scheme is part of the origin.

 The targeted application must implement an event listener:

window.addEventListener('message', msgHandler, true);

and obviously create the msgHandler function to process the message.

function msgHandler(e) { /* handling code */ }

The receiving application is told the sender's web address (origin) and can therefore choose to ignore the message if it comes from an unrecognised source. It is best practice for the receiving application to maintain a "white list" of trusted origins and check these before processing messages.

Even with trusted origins, incoming messages arrive as strings and need to be validated, as they could contain script and open the application up to an "injection attack". This is where the string contains script rather than data; if the script is evaluated, it can essentially issue a set of commands to the receiving app's server.
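
Putting those two practices together, a minimal sketch of a message handler (registered as above) that checks the sender against a white list and treats the payload strictly as data; the element id is hypothetical:

var trustedOrigins = ['http://www.myapp.com'];

function msgHandler(e) {
  // Ignore messages from unrecognised sources
  if (trustedOrigins.indexOf(e.origin) === -1) {
    return;
  }
  // Treat e.data as plain data: validate it, never eval() it
  var name = String(e.data);
  document.getElementById('selectedPerson').textContent = name;
}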

The two applications can be in entirely separate browser windows, hence overcoming the constraint of a portlet approach.

The concept of origins is used by other new features, such as XMLHttpRequest. As per my last post, previously this API was only able to talk to its own origin. Now it can talk to other origins, essentially allowing content to be aggregated in the client rather than just at the server, as is the case today.

This is a simple post about a powerful capability, and certainly an area web developers should consider delving deeper into.