Posts Tagged ‘mash-ups’

HTML5: The right time right place for mobile?

February 9, 2012

Most people by now understand that the main challenge in developing mobile applications is creating a solution that runs on as many platforms as possible. The target devices can range from browsers that only support text up to fully fledged smartphones.

Many organisations targeting users in the developed world simplify this challenge by supporting smartphones only. However, even here, creating native applications requires solutions that support Apple’s iOS, Windows, Android and Java (BlackBerry).

There are many mobile development platforms available to assist with creating “write once deploy everywhere” apps. The main constraints here are that you end up with deployments to many different stores, and that quite often you still have to write platform-specific code to take advantage of platform-specific features.

HTML5 has long been a strong candidate for mobile applications, but is it ready? Are mobile browsers up to date with HTML5?

The answer to this question can be a simple “No”: no mobile browser supports the full HTML5 specification. Or a “Maybe”: depending on which features of the phone you require (camera, phone book, GPS), HTML5 may already have the support you need.

Push that up to a resounding “Yes” if you want to move an application that currently runs on the web to mobile. Of course, I should also caveat the above with “there are grey areas” in between these responses, which is not very helpful, I know.

For corporates looking to support mobile users with line-of-business applications, I believe there are some great examples that prove HTML5 is ready for them. For a start, Facebook is one such application, taking full advantage of HTML5 and promoting its use for Facebook apps.

The key areas of HTML5 that are supported across mainstream mobile browsers are offline storage, geolocation, multimedia, graphics (canvas), touch events and large parts of CSS3. The mobile HTML5 site provides a list of mobile browser capabilities.
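Before relying on any of these areas, an app would typically feature-detect them at runtime. As a rough sketch (the function name and the exact checks are my own, not from any particular library), detection might look like this:

```javascript
// Minimal HTML5 feature-detection sketch (illustrative, not a library API).
// A window-like object is passed in so the checks can also run outside a browser.
function detectHtml5Features(win) {
  var doc = win.document || {};
  return {
    offlineStorage: typeof win.localStorage !== "undefined",
    geolocation: !!(win.navigator && win.navigator.geolocation),
    canvas: !!(doc.createElement && doc.createElement("canvas").getContext),
    touch: "ontouchstart" in win
  };
}
```

In a real page you would call `detectHtml5Features(window)` and degrade gracefully for any feature reported as `false`.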

In the past, marketers have argued that a presence on app stores adds value to “brand awareness”, and whilst this is true, there is nothing stopping an organisation using both native apps and HTML. For example, take LloydsTSB. You can download their app, which, once downloaded, effectively runs a “browser” version of their Internet banking service.

There are also some great libraries out there that make cross platform mobile development much easier and provide features that make your web applications feel much more like a native phone app. JQueryMobile is a great example.

So what are you waiting for?

HTML5 knows where you are!

May 12, 2011

A few years back I was deemed a heretic by many of my colleagues and friends when I suggested that HTML5 would remove the need for writing many mobile applications. I was pummelled with questions like:

  • But how will they work offline?
  • Are you saying a browser user experience can rival a platform native one like Apple’s?
  • You do realise that most games require “threading”; how are you going to do that?
  • What about storing data locally, can you do that?

I was able to fend off most of these, but the one I couldn’t answer at the time was about accessing device features like the camera and GPS. Well, things have moved on, and whilst I am no longer deemed a heretic, there is still some corridor whispering of doubt.

One of the big features of mobile technology used by many apps is the phone’s location, and location-based services and applications have already been through a huge hype cycle.

Under the catch-all banner of HTML5, although it is a separate sub-specification, the W3C Geolocation Working Group is making location-based applications a reality for web developers. The API has been around a while and is now fairly mature and stable.

A device (even a desktop) can provide location information in a number of ways:

  • IP address (this is typically the location of the ISP rather than your machine, but fine if you simply want to check which country the user is in)
  • Cell phone triangulation (only fairly accurate, and very dependent on the phone signal, so it can be problematic in the countryside or inside buildings)
  • GPS (very accurate, but takes longer to get a location, depends on hardware support and can be unreliable inside buildings)

Location data can also simply be user defined; however, this is dependent on the user entering accurate information.

Of course, one of the key concerns will be privacy, but the spec covers this with an approach that requires the user to give permission for location information to be passed to an application. Note that the application can only access location information through the browser, not directly from, say, the GPS device. Hence the browser enforces the user’s permissions for access.

The Geolocation API allows for both a one-off request for the user’s current location and repeated updates on the user’s position; developers write simple callback routines for both approaches. The key information provided includes latitude, longitude and accuracy. Accuracy is given in metres and indicates how close the reported latitude and longitude are to the user’s true position. Depending on the device you may also get additional information such as speed, heading (direction of travel) and altitude.

As in any quality application, you should handle errors accordingly, especially a failure to get hold of location data because of signal issues or other reasons. Retrieving location information is fairly simple; the real hard work is in processing that information, and that requires good old-fashioned quality programming ;o)
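The one-off and repeated-update patterns, with their success and error callbacks, can be sketched roughly as follows. The handler names are my own; only the commented-out calls at the end touch the real browser API:

```javascript
// Success callback: format the key fields the Geolocation API provides.
function showPosition(pos) {
  var c = pos.coords;
  return "lat " + c.latitude + ", lon " + c.longitude +
         " (accuracy " + c.accuracy + "m)";
}

// Error callback: err.code distinguishes permission denied,
// position unavailable (e.g. no signal) and timeout.
function showError(err) {
  return "location failed: " + err.message;
}

// In a browser, these callbacks would be wired up roughly like this:
// navigator.geolocation.getCurrentPosition(        // one-off request
//   function (pos) { console.log(showPosition(pos)); },
//   function (err) { console.log(showError(err)); });
// var watchId = navigator.geolocation.watchPosition(  // repeated updates
//   function (pos) { console.log(showPosition(pos)); },
//   function (err) { console.log(showError(err)); });
// navigator.geolocation.clearWatch(watchId);          // stop the updates
```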

This specification presents a huge opportunity for web developers to create applications once deemed only the domain of platform-specific code, and I for one am very excited!

The future is mass mobile and niche native apps

December 7, 2010

Design once and stop. That development strategy seems like a route to a software dead end, yet it is an approach that is representative of many apps created for mobile devices.

 Individuals and businesses are rushing to develop their specialist iPhone and Android apps, software that runs on one particular device and which fills a particular niche in the market. In the short-term, your development approach can afford to be based on point solutions.

 Such a development approach allows you to get used to the fast-developing market. For larger organisations, short-termism allows the IT team to dabble and create a marketing buzz. In many cases, the app is a means to show your company is cool, rather than a new and realistic revenue stream.

In the long-term, that strategy will fail. Mobile devices will be the home of web-enabled work and play. Betting your strategy on one particular platform is not a realistic approach. After all, the market is fracturing across multiple smart phone operating systems, such as Apple, Research in Motion, Symbian and Windows.

 That fracturing cannot last. Native mobile apps constructed for a single platform might feel better and run faster. But to quote Google’s DeWitt Clinton (see further reading, below), such nativity is a bug and not a feature.

 Just as in the case of the desktop, developers have had to find ways to make their software run across multiple operating systems. And in the mobile era, you and your business will have to move towards an integrated point.

 Do you really want different sets of developers for each and every platform? Do not differentiate too much because at some point you are going to have to aim for convergence.

 Advancements in mobile web browsing continue to take place. Take jQuery Mobile, a recently announced web framework for smart phones that will provide a unified user interface system across all popular mobile device platforms.

Further progress comes in the form of HTML5, currently under development as the next major revision of the hypertext markup language standard. The standard will promote deployment across multiple platforms and includes features that previously required third-party plug-ins, such as Flash.

 The result is that the dream of building once and deploying everywhere could soon become a reality. The future of development is the mobile web.

Further reading


Gestures to Help the Business

June 3, 2010

Business IT is now all about the consumer. The CIO faces a series of demands from employees keen to use high-end consumer hardware and software in the business.

Such demands present significant challenges, such as technology integration, fears over openness and potential security risks. As these challenges continue to develop, there is good news and bad news for leading executives.

 The bad news is that consumers – particularly those entering the business – are only likely to become more demanding. With converged technology in their pockets and detailed personas online, blue chip firms will find it difficult to lay down the law for tech-savvy users.

However, the good news is that the next wave of consumer technology is also likely to produce significant benefits for the business. Take Project Natal, Microsoft’s controller-free entertainment system for the Xbox 360 console that should be released by the end of the year.

Motion-controlled technology has been in vogue for gamers since the launch of Nintendo’s Wii in late 2006. The system, which allows the user to control in-game characters wirelessly, has been a huge commercial and technical success.

Natal is likely to take such developments to the next level, giving Xbox 360 users the opportunity to play without a game controller – and to interact through natural gestures, such as speaking, waving and pushing.

 Maybe that sounds a bit too far-fetched, a bit too much like a scene from The Matrix? Think again – early demonstrations show how the technology could be used in an interactive gaming environment.

But that’s really just the beginning. With Microsoft pulling the strings behind the technology, Natal is likely to provide a giant step towards augmented business reality – where in-depth information can be added and layered on top of a real physical environment.

 The future of the desktop, for example, will be interactive. Employees will be able to use gestures to bring up video conferencing conversations and touch items on the desktop to bring up knowledge and data.

Employees in the field, on the other hand, will be able to scan engineering parts using their mobile devices. Information sent back to the head office will allow workers to call in specific parts and rectify faults.

The implications for specific occupations are almost bewildering. Surgeons will be able to use Natal-like interactions to gain background information on ill patients; teachers will be able to scan artefacts and provide in-depth historical knowledge to students.

 The end result is more information that can be used to help serve customers better. And that is surely the most important benefit of next-generation consumerisation.

 Further reading


Voice Interaction: The Next User Interface?

April 29, 2010

Let’s start with the obvious: voice interaction – despite recent developments discussed below – is nothing new. It could, however, be the next user interface.

Software that can convert spoken words into written text has been available since the early 1980s, with modern commercially available systems claiming accuracy upwards of 98% (see further reading, below). Such results are not bad, but not great either.

In almost 30 years of use and refined development, speech recognition still can’t match the recognition levels of human beings. For the most part, such issues mean most people are still confined to the traditional input methods of keyboard and mouse.

But lurking on the horizon is another upheaval and a potential boon for speech recognition. Google’s recently announced Nexus One handset includes voice recognition technology that allows the individual to control the device (see further reading).

The intuitive system – which learns in relation to individual queries – allows the user to interact with a range of services, from composing an email in Gmail to navigating the world via Google Maps.

The search giant isn’t alone in developing more intuitive voice services. Microsoft’s Windows operating system uses voice control technology to help the user control Vista. The technology has been refined further in the recently released Windows 7 platform.

Such developments, however, have not been without issues. A famously unsuccessful demonstration of Vista voice recognition software in 2006 led to a simple note of “Dear Mom” being translated as “Dear Aunt, let’s set so double the killer delete select all” (see further reading).

Reuters reported that Microsoft chief executive Steve Ballmer blamed the failed speech recognition product demonstration on “a little bit of echo” in the room, which confused the speech-to-text system.

For users of voice technology, such confusion has been a common concern. But do the refinements in Windows 7, and the progress made by Google, show that we are actually getting to a point where voice recognition is usable?

Voice controlled car stereo systems – which have often suffered due to background noise – are now viewed as increasingly reliable. And early feedback from Nexus One users suggests the voice recognition technology is “amazing” (see further reading).

After three decades, then, we might finally have reached the tipping point for voice control. But as the speech interface becomes commonplace, one final word of warning: be careful where you talk.

Straining ears could pick up confidential information from spoken dictation. Worse still, confused members of the public might question your sanity! So, be careful as you embark on the road to a spoken revolution.

Further reading


Write once, present many times

January 12, 2010

More and more organisations, especially in financial services, are looking to take existing products and deliver them to different brands, a business approach called “white-labelling”. For example, LV provides car insurance under its own brand but also for a number of other brands like Nationwide and What Car.

Whilst this isn’t a new concept, how it is supported by IT is certainly changing. In the early days the product provider would simply take their existing pages and change the logos, phone number and possibly the font and background. Everything else – layout, question style, text and so on – simply stayed exactly the same.

However, today it is the user experience that differentiates an organisation online and is core to the branding. No longer is an online brand defined simply by logo, font and colours. Thus a “white-labelled” product has to be designed specifically for each brand’s user experience.

However, this presents a challenge for IT, because under the old approach, to white-label a product you simply changed the style sheet and logo files for each brand; the form stayed the same. That way, any time a screen had to change it could be maintained for all brands with one change.

So how do we get the flexibility to have a different form layout, pagination, question order and question text – a “custom user experience” – per brand, without duplicating form logic, so that changes can still be maintained in one place?

This problem is not solved by style sheets alone, nor by frameworks built on patterns like MVC.

The answer is not easy unless you take a tools-based approach. Such an approach separates the form logic (what questions, validation and so on) from the form presentation (font, colour, text, layout and so on).

A tools-based approach is required to manage the link between the logical form and all the individual presentations of that form.
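As a rough illustration of that separation (the data shapes and function names here are invented for this sketch, not the format of any particular tool): the questions and their logic live in one place, while each brand supplies only ordering, wording and styling hints.

```javascript
// Logical form: questions and rules, defined once for all brands.
var quoteForm = {
  questions: [
    { id: "age", label: "Driver age", required: true },
    { id: "postcode", label: "Postcode", required: true }
  ]
};

// Per-brand presentation: order and wording overrides, no duplicated logic.
var brandViews = {
  brandA: { order: ["age", "postcode"], labels: {} },
  brandB: { order: ["postcode", "age"], labels: { age: "How old are you?" } }
};

// Render the single logical form through one brand's presentation settings,
// returning the question labels in that brand's chosen order.
function renderForm(form, view) {
  return view.order.map(function (id) {
    var q = form.questions.find(function (x) { return x.id === id; });
    return view.labels[id] || q.label;
  });
}
```

A change to the form logic (say, a new validation rule on `age`) is then made once and picked up by every brand automatically.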

This problem is actually not specific to white-labelling but applies to any screens that have to be presented in different ways, e.g. to manage different views of a form for different channels, user types, or countries and languages.

 Financial services firms have already taken a lead in white labelling and multi-channel strategies; other sectors are likely to follow, branding applications for particular purposes. It is not difficult to see why the need to “write once present many times” will increase in importance.

Modifying individual applications for different user experiences is an expensive (in both time and money) approach to IT development. White-labelling your products – as in the case of financial services – makes implementation cheaper and faster.

 Such capability increases flexibility and allows the IT team to write once and present many times. And with employees becoming more demanding, flexibility really is the biggest selling point for under pressure IT leaders.


Small is most definitely beautiful

July 1, 2009

Compliance remains a crucial technology issue. IT leaders have been smothered by a raft of regulatory requirements in the last few years, and the combined hit of environmental concerns and the economic downturn is only likely to make matters worse.

Take the finance sector, where a recent survey by the International Securities Association for Institutional Trade Communication noted that 25% of firms have already been affected by increased compliance requirements due to the economic crisis.

Understanding and dealing with compliance is, therefore, crucial. But be warned, big vendors and system integrators are likely to push issues like governance, quality assurance and lifecycle management.

While important in the right business context, such issues are also likely to tie you to processes and standards. And an obsession with standards creates the need for big models and increased complexity.

Such an obsession is likely to be a hindrance to what is actually useful for the business. And at a time of increased regulatory compliance, further processes and standards are just what your business does not need.

The chief executive will need you to cut through the waffle and provide a simple means for staying up-to-date and compliant. Thankfully, the composite nature of service-oriented architecture (SOA) provides a way round complex compliance and allows you to create small, successful systems.

Rather than creating vast and unconnected applications, SOA allows the IT leader to re-use resources and create applications on-demand. Such agility will allow you to promote a flexible architecture that is ready for fast-changing compliance requirements.

Forget the fear that you will have to fit systems to laws retrospectively. SOA will allow the IT department to integrate with the business and create compliant systems as new regulations emerge.

And the front-tier of SOA will be particularly crucial, allowing you to create a useful presentation layer that allows line-of-business executives to monitor information and ensure new targets are being met.

Take note, then, of agility, integration, presentation – the three watchwords that will help you use SOA to ensure your business responds flexibly to changing compliance demands.


Wake up to the power of the web browser

May 11, 2009

Are you still using the desktop; still choosing to access enterprise applications through Windows?

It can be difficult to break away from accepted ways of working. Managing such a break is even more complicated when the business is bamboozled by a series of marketing buzzwords.

The big hype of the moment is cloud computing, a generic term used to describe the provision of scalable enterprise services over the web. Rather than having to access applications through a traditional desktop interface, businesses can use the cloud to host applications and store data.

As many as nine out of ten C-level executives know what cloud computing is and what it can do, according to a recent survey by consultancy Avanade and Kelton Research (see further reading, below).

But at the same time, 61% of senior managers are not currently using cloud technologies. For the majority, it is probably time you woke up to the power of the web browser.

Working through a web browser is no longer a niche activity. Google Apps is a high-profile and popular example of how users can access applications through a web browser.

Such cloud-based software suites mean users can enter the browser and work collaboratively on essential documents. The high quality of these services means users can also benefit from the functionality of traditional desktop software, such as drag and drop, and multiple interfaces.

There are still issues to overcome, of course. Some businesses remained concerned about hosting information outside the corporate firewall. And recent problems with Google Mail show how failure of the cloud could derail essential business processes.

Such issues mean providers will have to develop secure methods for accessing browser-based applications offline, as well as online. However, such problems are minimal given the quick development of cloud computing.

Businesses often need a high profile sponsor to help push new technologies. When it comes to browser-based apps, there can be no more prestigious supporter than Vivek Kundra, the new CIO of the United States and a confirmed fan of Google Apps (see further reading).

What’s more, the recession is likely to push interest in cost effective and hosted applications. The Avanade and Kelton research also found that 54% of executives use technology to cut costs.

In these economically sensitive times and with an increasing high level of functionality, the web browser can help your IT department provide a great customer experience.

Further reading

Cloud computing is a two-edged sword

The new US CIO is a fan of Google Apps

The rise of interactive convergence

February 4, 2009

“Games are the convergence of everything,” began a recent article in business magazine Forbes, the premise of which was that the integration of social and internet components is allowing players to experience a new era of immersive game play.


Sounds exciting, but what does such interactive convergence mean for business? How will the convergence of user interfaces – such as touch, voice and gesture – change the way we work and use information?


Let’s return briefly to the games industry, where media specialist iConecto reports that health electronic games represent 16% of the video gaming industry, amounting to a $6.6bn annual global market.


Owners of a Nintendo games console will have already seen the potential fun and health benefits that can be garnered by playing such games as Wii Sports and Wii Fit. These converged simulations require individuals to make best use of a combination of movements and gestures.


But Nintendo is not the only business wise to the positive effects of convergence. North American insurers are also realising the benefits of gaming, with healthcare specialist Cigna creating Remission – a game that helps young cancer patients build an adherence to oral chemotherapy.


Beyond gaming, some organisations are already developing a leading edge in convergence. Take the BBC, which is planning to work alongside ITV and British Telecom to further develop its popular iPlayer service.


Internet TV subscribers will grow five-fold to 71.6 million worldwide by 2012, according to analyst Research and Markets. The firm reports that convergence features, linking TVs with PCs and mobile phones, are helping to push demand for an increasing range of content.


Over the Atlantic, the recently completed CES conference was dominated by issues surrounding the convergence of content and technology. The conference showed technology companies finding innovative ways to push information across a series of converged platforms.


IT leaders that are not already thinking about developing their own converged systems will also need to prepare for a raft of new converged applications and technologies. The result will be multiple methods for using mashed-up content, many of which have not even yet been considered.


Planning for an unplannable future just got even more difficult. Good luck.


Why create portals when you can create a composite application?

January 21, 2009

As I mentioned in an earlier post on usability issues, next generation portals will be unlikely to provide useful access to information unless IT managers take control of infrastructure issues.


By their inherent nature, portals require users to run multiple sessions on screen; each portal application requires a different connection to the back-end infrastructure.


If you’re running many applications in one portal, the strain on your network and hardware can be unbearable. So, why bother with portals?


Well, a well-implemented portal can help present essential information to essential users. But you have to get your approach right.


First things first – identify the process you want to solve. Whether it’s supply-chain management or customer service, recognise the information that will help drive increased business intelligence.


As you strive to create the right approach, don’t think of a portal as a bunch of separate applications that are best served by in-house storage assets.


If you do, the aforementioned information strain on your servers is likely to be unbearable. Instead, look to client side session management and keep data in the browser, rather than on the server (see my earlier blog posting in ‘Further reading’, below, for more details).
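A minimal sketch of that client-side approach, assuming a Web Storage-style object with `getItem`/`setItem` (the wrapper name and key prefix are illustrative, not a real portal API):

```javascript
// Keep per-session portal data in the browser rather than on the server.
// Works with any storage object exposing the Web Storage getItem/setItem shape.
function createSessionCache(storage) {
  return {
    save: function (key, value) {
      // Web Storage only holds strings, so serialise structured data.
      storage.setItem("portal:" + key, JSON.stringify(value));
    },
    load: function (key) {
      var raw = storage.getItem("portal:" + key);
      return raw === null ? undefined : JSON.parse(raw);
    }
  };
}

// In a browser: var cache = createSessionCache(window.sessionStorage);
// cache.save("filters", { region: "EU" }); cache.load("filters");
```

Each portlet then reads and writes its working state locally, and the server is only contacted when data genuinely needs to be fetched or committed.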


Then recognise that most of your users will have a small screen estate that cannot readily support four or five open windows.


Even if your back-end infrastructure can stand the strain of running simultaneous applications, your users’ eyes won’t – and as soon as key executives are straining to see detail, hopes for increased usability and high efficiency start to disappear.


Remember that one process should mean one application. Pushing multiple sessions is not an intelligent way to provide clarity on your key business process.


Don’t think of your portal as a jigsaw, where small elements create a bigger and more effective whole. Instead, start with the process in-mind, and create a composite picture that includes all the functionality needed to solve your business concern through a single mashed-up application.


Analyst Gartner suggests mash-ups could emerge as an alternative to horizontal portals. I would go further and suggest that composite applications are the future of portals.


Giving users the power to create mash-ups through the browser will increase the effectiveness of your information push and reduce the strain on your servers.



Further reading: