Archive for the ‘UXP & RIA’ Category

The end of silo architectures

June 28, 2012

From my discussions with customers and prospects it is clear that the final layer in their architectures is being defined by UXP (see my previous posts). So whether you have a Service-Oriented or Web-Oriented architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that provides more agility and breaks down the closed silo architectures they previously owned.

However, solution vendors that provide “out-of-the-box” business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by “service enabling” their solution, many still have proprietary dependencies on specific application servers, databases, middleware or orchestration solutions.

However, recently I have come across two vendors, Temenos (global core banking) and CCS (a leading insurance platform), that are breaking the mould.

CCS have developed Roundcube as a flexible, componentised solution to address the full lifecycle of insurance, from product definition and policy administration to claims. Their solution is clearly layered, service enabled and uses leading third-party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. This approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions it may have. By using leading third-party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (Temenos Enterprise Framework Architecture) which allows you to use any database, application server or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.
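
To illustrate what an OData-enabled framework buys you, here is a minimal sketch of querying such a service from Python. The endpoint URL, entity set and field names are hypothetical assumptions; only the OData query options themselves are standard.

    import requests

    BASE_URL = "https://bank.example.com/odata"  # hypothetical endpoint

    # OData exposes standard query options ($filter, $orderby, $top) so a
    # client can filter, sort and page server-side, whatever the back end.
    response = requests.get(
        BASE_URL + "/Accounts",  # hypothetical entity set
        params={
            "$filter": "Balance gt 1000",
            "$orderby": "Balance desc",
            "$top": "10",
            "$format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()

    # Response shape varies by OData version; v4 returns a "value" array.
    for account in response.json()["value"]:
        print(account["AccountId"], account["Balance"])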

Whilst these are both evolving solutions, they have a clear strategy and path to being more open and therefore more flexible. Both are also providing a solution that can scale from the smallest business to the largest enterprise. Their solutions will therefore blend more naturally into organisations rather than dictate requirements.

Whilst packaged solutions are often enforced by business sponsors, this new breed of vendor provides the flexibility that will deliver the agility the business requires going forward. It’s starting to feel like organisations can “have their cake and eat it” if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

Programming with soldering irons

June 21, 2012

The last time I can remember seeing a programmer with a soldering iron was a long time ago, in the heyday of micro-computing in the late ’70s: picture Steve Wozniak and Steve Jobs in a garage having built the Apple 1 micro-computer. The component parts totalled a few hundred pounds, and the machine sold for $666.66. At that price this was a niche phenomenon; however, a handful of very successful entrepreneurs emerged out of this era.

So, fast forward to 2012: the Raspberry Pi Foundation http://www.raspberrypi.org have launched a $25 computer that runs Linux and has sockets for Ethernet, HDMI, USB, RCA video and SD cards. At this price and capability this is unlikely to be a niche phenomenon; indeed, in February this year more people were searching for Raspberry Pi than for the world’s most famous pop artist, Lady Gaga! On launch, interest in the Raspberry Pi flooded their website, which had to be taken down, and the first batch sold out in an hour.

So is this just about a cheap computer to surf the internet on your TV and to do basic home computing tasks like word processing? Well, by the time you add a keyboard, mouse and decent memory card, the cost will be very close to a low-end Android tablet, so this isn’t the reason to buy one.

The target market for the Raspberry Pi Foundation is the education sector, where it is hoped these devices will encourage children (11 years and up) to learn programming, which in turn should produce more developers from university. At this cost it is hoped that schools and parents can afford to provide a platform upon which kids will want to learn to program.

However, I would say the bulk of the demand will come from techies who want to build low-cost solutions that require compute power and software applications. Home automation projects are likely to be popular: creating a solution that allows you to switch lights and other things on/off remotely via the internet. Solutions that require integration with other hardware, such as a GPS or camera, will also be popular.
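
As a concrete illustration, here is a minimal sketch of the remote light-switching idea: a tiny HTTP server on the Pi that toggles a GPIO pin wired to a relay. The pin number, port and wiring are illustrative assumptions, not a recommended design.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import RPi.GPIO as GPIO  # standard GPIO library on the Pi

    LIGHT_PIN = 17  # illustrative BCM pin driving a relay

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LIGHT_PIN, GPIO.OUT)

    class LightHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # /on and /off switch the relay; anything else is a 404
            if self.path == "/on":
                GPIO.output(LIGHT_PIN, GPIO.HIGH)
            elif self.path == "/off":
                GPIO.output(LIGHT_PIN, GPIO.LOW)
            else:
                self.send_error(404)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"light " + self.path[1:].encode())

    HTTPServer(("0.0.0.0", 8080), LightHandler).serve_forever()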

Already, projects looking to create weather balloons, remote-controlled robots, music studios and many more have started ahead of people actually having the kit in their hands. You can see more ideas here: http://www.raspberrypi.org/forum/projects-and-collaboration-general/the-projects-list-look-here-for-some-ideas .

It is early days, but to me it’s obvious the options, and therefore the opportunities for innovation, are going to be huge. Some will be very practical, some will be fun (for example, the guy who created a device that allows him to feed his dog via the internet) and some will be useless, but in all there will be lots of new ideas using these devices in the coming years.

The next Steve Jobs, Mark Zuckerberg or Bill Gates is amongst our children already, so the time is right to get in the queue for your piece of Raspberry Pi.

Future of mobile: Part 3

May 13, 2012

Today I have 3 GPS devices, 4 cameras, 3 video cameras, 3 movie players, 5 music players, and the list goes on. All of these are in a variety of devices that I use in different places for different purposes.

Drilling down into the detail, what I actually have is a phone, a desktop home PC, a laptop, an iPod, a car stereo, an in-car GPS, a TV with HD/DVD player and a digital SLR. And that’s just me, not including what the family has.

This presents a number of challenges and risks, as well as a lot of cost…

Most of us want as little duplication of cost as possible. Even though cars come with stereos, many people are already plugging in their MP3 players and using only the car’s speakers. Many people will also use their phone’s GPS rather than the car’s. Newer TVs have wireless access to browsing and social apps. I’m tempted by the hype of tablet computing, but have to ask myself: why, when I have all the compute options I need?

More devices mean more synchronisation issues for personal settings and personal data. While cloud-based services will resolve many of these issues, it is still early days to move everything into the cloud, as users of MegaUpload found.

In 1999 I went to a tech show in Vegas, where I saw a potential solution to the problem from Sony. They were demonstrating the concept of “apps on sticks”. Basically these were memory sticks (max 32MB at the time) with other devices, like GPS, radio and even a camera, on the stick. The idea was simple: you’d plug your GPS stick into your phone, laptop, car or any other device, rather than have that function in multiple devices. This approach would have required a lot of standardisation and clearly is a concept that never came to fruition.

More recently, Asus have launched their PadFone, a smartphone that comes with a tablet screen. When you need to work with a bit more screen real estate, you simply slot your phone into the back of the screen and hey presto, you have a tablet that can use the 3G or wireless connection on your phone. Apart from being able to charge your phone, the tablet screen also integrates with the phone itself, so voice and video calls can be made and received using the tablet screen.

This concept really works for me, and I could see myself buying into a family of such products: TV, car stereo, projector. This, combined with having my data in the cloud so that losing the phone is not the end of the world, makes for a great solution. Whether the phone slots in or connects wirelessly, the ability to drive a different screen from my phone works for me as a concept. Maybe the idea could be taken even further, so that the circuitry for the device could be slotted into the phone itself?

As I’ve discussed in my previous blogs, there are many new avenues for phones in shape, size and function. It would be difficult to predict the future with so many possibilities, but one thing is for sure: for gadget geeks like me, the phone is going to be the constant source of innovation we thrive on.

Click, touch, wave and talk: UI of the future

May 10, 2012

First there was the Character User Interface (CUI, pronounced cooo-eey) typified by green letters on a black background screen. Then the Graphical User Interface (GUI, pronounced goo-eey) came along with a mouse and icons. Pen interfaces existed in the era of GUI, but now smartphones and tablets are driving many more interaction approaches using touch interfaces.

Now the GUI itself is going through a re-birth on mobile platforms, with many more new types of user interface controls than we have seen in the past; we have gone way beyond simple buttons, drop-down lists and edit fields.

Many devices also support voice-driven operations, and although voice recognition has been around for over two decades, the experience is poor and has more recently been drastically oversold by the likes of Apple. However, this is an area that is likely to improve radically in the coming years.

The Microsoft Kinect gaming platform provides yet another innovation in user interaction: a touchless interface using a camera to recognise gestures and movement. Microsoft are already making moves to take this form of user interaction into the mainstream outside of gaming (http://www.bbc.co.uk/news/technology-16836031), as are many other suppliers, and we should see phones and TVs supporting these this year.

However, even some old methods of interaction are being given a new lease of life, such as Sony’s inclusion of a rear touchpad and dual joysticks on the PlayStation Vita.

So, with all these modes of interaction, what does this mean for User Interface Designers? Shouldn’t they really be called User Interaction Designers? How do you decide what is the best mode of interaction for an application? Should you support multiple modes of interaction? Should you use different widgets for different interactions? Should the user choose their preferred mode of interaction and the application respond accordingly? Should the mode of interaction be decided by what the device supports? Are there standards for ALL these modes of interaction?

This emerging complexity of different user interaction methods will raise many more questions than I’ve listed above. So far I have found little research in this area, but this is a moving target. The other evidence from the mobile world is the rapid change in user behaviour as users get used to working in different ways.

Initially I would imagine most applications will use basic interactions like touch/click so that the widest possible range of devices can be used. However, those targeting specific devices will be the “early adopters” of the common interaction mode for that specific device (e.g. 3D gestures on Xbox Kinect).

In the very long term, standards will evolve, and interaction designers and usability experts will combine to design compelling new applications that are “multi-interactive”, choosing the most appropriate interaction method for each action and sometimes supporting multiple types of interaction method for a single action.
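
As a thought experiment, here is a minimal sketch of what multi-interactive bindings might look like: each action declares the interaction methods it supports, in order of preference, and the application binds whichever methods the device actually offers. The action names and capability flags are invented for illustration.

    DEVICE_CAPABILITIES = {"touch", "voice"}  # e.g. detected at start-up

    # Preferred interaction methods per action, best first
    ACTION_BINDINGS = {
        "scroll_page": ["gesture", "touch", "click"],
        "dictate_note": ["voice", "touch"],
        "confirm_payment": ["touch", "click", "voice"],
    }

    def bind_interactions(capabilities):
        """Keep every supported method per action (multi-interactive);
        the first match is treated as the primary one."""
        bindings = {}
        for action, preferred in ACTION_BINDINGS.items():
            supported = [m for m in preferred if m in capabilities]
            bindings[action] = supported or ["click"]  # safe fallback
        return bindings

    print(bind_interactions(DEVICE_CAPABILITIES))
    # {'scroll_page': ['touch'], 'dictate_note': ['voice', 'touch'],
    #  'confirm_payment': ['touch', 'voice']}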

Multi-interactive interfaces will make users’ lives easier, but are you ready to provide them?

Is Apple Siri-ous about voice?

May 4, 2012

When I first saw the new iPhone ads that featured voice interaction, all I thought was WOW, Apple have done it: they have mastered voice interaction. What appeared to be natural voice interaction is the nirvana many people have been waiting for to replace point-and-click interfaces.

Speech recognition is not new, and certainly both Microsoft and Android had speech-driven interfaces before Apple. However, it was the ability to talk naturally, without breaks and without having to use specific keywords, that seemed to set Apple apart from the competition.

Instead we were all fooled by another Jobs skill, his “reality distortion field”. Indeed one person went as far as to sue Apple for selling something that did not perform as advertised.

What exactly is the difference? Well, both Windows and Android recognise specific commands and actions, rather like just “talking the menu”, e.g. saying “File, Open, text.doc”. The Apple promise was that you could simply talk as you would normally speak, so instead of “File, Open, text.doc” you could just say “get me text.doc”.

So why aren’t we there? The challenge is creating a dictionary that can understand the synonyms and colloquialisms that people may use in conversational speech, as opposed to the very specific commands used in menus and buttons in graphical user interfaces.

Whilst this may seem like a daunting task, I believe the first step to solving this puzzle is to reduce the problem. So rather than creating this super dictionary for absolutely any application, dictionaries should be created for specific types of applications, e.g. word processing or banking. This way the work of creating synonyms, finding colloquialisms and linking them is more manageable.

The next step would be to garner the help of the user community to build the dictionary, so that as words are identified, the user is asked what alternatives could be used and synonyms and colloquialisms are captured.
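
To make the approach concrete, here is a toy sketch of a domain-specific dictionary with a crowd-sourcing step. The commands and phrases are invented for illustration.

    # Canonical commands for a (hypothetical) banking app, each with
    # user-contributed synonyms and colloquialisms, stored lower-case.
    BANKING_DICTIONARY = {
        "show balance": {"what's my balance", "how much have i got",
                         "am i in the red"},
        "transfer funds": {"send money", "pay someone", "move cash"},
    }

    def resolve_command(utterance, dictionary):
        """Map a spoken phrase to a canonical command, if known."""
        phrase = utterance.lower().strip()
        for command, synonyms in dictionary.items():
            if phrase == command or phrase in synonyms:
                return command
        return None  # unknown: ask the user, then grow the dictionary

    def learn_synonym(utterance, command, dictionary):
        """Crowd-sourcing step: the user tells us what they meant."""
        dictionary.setdefault(command, set()).add(utterance.lower().strip())

    print(resolve_command("How much have I got", BANKING_DICTIONARY))
    # -> "show balance"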

I know this is not a detailed specification, but this is an approach that Apple might use to give us what they promised – and what the world is waiting for… speech-driven user interfaces.

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolios of applications. This simplification is driven by the need to reduce the costs of multiple platforms, costs largely caused by duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’ but both needs had been met by a ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution like a rules engine or user security solution.

Having been in that position, it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know, though, these separations aren’t that easy to make, because even these categories overlap. For example, you will find rules in integration technology as well as in business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations also.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of having to build a business solution from the ground up, even using these tools, is too great, and the business prefers to de-risk implementation with a packaged solution. This packaged solution may itself contain a number of these components, but the advantage is they are pre-integrated to provide the business with what they need.

For some components duplication may be okay, if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated, so a ‘single view of users’ can be achieved. Similar approaches can be achieved for document management, but in the case of process management I believe this has been far less successful.
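
As a simple illustration of federation, here is a sketch that merges a user’s attributes from several independent stores into a single view. The stores and attributes are invented; a real implementation would sit behind directory interfaces such as LDAP rather than in-memory dictionaries.

    # Two independent user stores, keyed on a shared user id
    CRM_USERS = {"jsmith": {"email": "j.smith@example.com"}}
    HR_USERS = {"jsmith": {"department": "Claims"},
                "apatel": {"department": "IT"}}

    def federated_lookup(user_id, stores):
        """Merge a user's attributes from every store that knows them."""
        merged = {}
        for store in stores:
            merged.update(store.get(user_id, {}))
        return merged or None  # None if no store knows the user

    print(federated_lookup("jsmith", [CRM_USERS, HR_USERS]))
    # {'email': 'j.smith@example.com', 'department': 'Claims'}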

Another issue often faced in simplification is that tools often have a particular strength and therefore weaknesses in other areas of their solution. For example, SharePoint is great at site management and content management, but poorer at creating enterprise applications. Hence a decision has to be made as to whether the tool’s weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, this decision is less often made on technology and for the greater good of the enterprise, and more often on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time, many methods of re-use have been devised, along with design patterns to drive re-use.

Meanwhile, business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive, because it means there is no time to build re-usability into applications, and the time that re-use would have saved is simply added on to future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I’m going to draw an analogy from a conversation I had with an elderly relative who had a paraffin heater. This relative had owned the heater for many years, and is still using it today because it works. When I questioned the cost of paraffin versus buying an energy-efficient electric heater that was cheaper to run, the response was: this one works and it’s not broken yet, so why replace it? Yet for most appliances we are now in a world where we don’t fix things, we replace them.

This gave me the idea, which I’m sure is not new, of disposable applications. Shouldn’t some applications just be developed quickly, without designing for re-use, flexibility and maintainability? With this approach, the application would be developed for maximum speed to meet requirements rather than for elegant design, knowing that it will be re-developed within a short time (2-3 years).

So can there be many applications that could be thrown away and re-developed from scratch? Well, in today’s world of ‘layered’ applications it could be that only the front-end screens need to be ‘disposable’, with business services and databases designed for the long term, since after all there is generally less change in those areas.

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could typically be developed as disposable applications, because the customer experience evolves and the business likes to ‘refresh the shop front’ regularly.

My experience of the insurance world is that consumer applications typically get refreshed on average every 18-24 months, so if it takes you longer than 12 months to develop your solution, it won’t be very long before you are re-building it.

Looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, because user experiences tend to evolve regularly. So is it time you revised your thinking on re-use?

What’s a UXP?

March 29, 2012

Gartner are defining a category they call the UXP (User Experience Platform) to help organisations manage all their user experience requirements.

Gartner defines the UXP as “an integrated collection of technologies and methodologies that provides the ability to design and deliver user interface/presentation capabilities for a wide variety of interaction channels (including features such as web, portal, mashup, content management, collaboration, social computing, mobile, analytics, search, context, rich Internet application, e-commerce, an application platform and an overall user experience design and management framework)”.

There is currently no precise definition of the set of technologies a UXP encompasses, but Gartner identify the following list as candidates:

  • Web analytics
  • Search
  • Social
  • Programming frameworks and APIs
  • UX design and management
  • Rich internet applications
  • E-commerce
  • Mobile
  • Content management
  • Collaboration, with portal and mashups being core.

With the growing importance of web interfaces on all devices, the UXP has come not a moment too soon, as organisations need to get a grip on not just these technologies, but the underlying supporting business processes and the skills they require to define, create, manage and measure their user and customer experiences.

It’s clear that from an architectural perspective the UXP covers everything in the “Presentation layer”, and maybe a few things in the grey areas between the Presentation layer and the Business layer.

As Gartner have identified, this is a growing list of technologies. From my perspective, some of these need to be integrated and some are standalone, and it would be helpful to have some broader categories within the UXP to help focus efforts towards implementation.

Social and collaboration technologies facilitate interaction between two or more users, and so could be grouped into a category called UXP-Collaboration.

Content is the core of any web platform, and content management, search and analytics could be grouped into a category called UXP-Content.

Portal, mobile apps, RIA and mashups are essentially application development technologies, so could be grouped as UXP-Apps.

From a process perspective these categories also make sense: UXP-Collaboration technologies are installed and then require mediation processes to manage the implementation, UXP-Content technologies require a publishing and monitoring lifecycle, and UXP-Apps technologies are implemented by IT and go through an IT development lifecycle.

However, UXP is an evolving field, and as with any technology it is clear that selection and implementation cannot be done without a full understanding of business requirements, the underlying implementation and management processes and skills required.

Given the size, complexity and importance of this task I would not be surprised to see some organisations appoint a Chief eXperience Officer (CXO).

Is the growth of Mobile Apps overhyped?

March 22, 2012

There are numerous statistics on the growth of mobile apps in the various stores, and also about the number of downloads. Apple claims over 500,000 apps in its store and Google claims over 450,000 (this time last year it had only 150,000). The number of apps, downloads and rate of growth is phenomenal.

Is this just a temporary fever or will this growth continue, and if so what will drive it?

I believe this growth has only just started and that there are two key trends that will drive this growth further.

Firstly, development for smartphones will get simpler. VisionMobile’s latest survey profiles over a hundred development tools for creating mobile apps. My guess is that this is a very conservative estimate of the actual number of tools out there.

A common goal for many of these providers is to make programming simpler so that more people can code. For some, this goes further, to the extent that tools are being created for children to develop apps at school. So more developers will mean more apps!

Secondly, and this for me is the more exciting aspect, phones will do more, which means that apps will get more innovative.

Today there is a wide variety of apps already, some of which use features of the phone itself like the camera, GPS or microphone. Coming down the line are many more features that will get embedded into phones, for example the ability to detect a user’s emotions or to monitor a user’s health. Such features will drive yet more applications and innovations, from personal healthcare to fraud detection.

Apart from new features, phones will start to interact with other devices such as your TV. At a simple level, your smartphone can already be used as a remote control for your TV or to join in with live TV quiz shows. Phones are already interacting with cars, and this integration will inevitably go further, so that your engine management system feeds your phone with data that an app can use.

Recent surveys from recruitment agencies highlight the growing demand for mobile developers, and more interestingly the re-skilling of developers to position themselves for this growth.

Exciting times are ahead for developers and entrepreneurs who will show that Angry Birds isn’t the only way to make big money in mobile.

Mobile Apps: When to go native

March 15, 2012

Let me say from the outset, that there is no right answer for everybody. The battle between cross-platform solutions and native mobile applications is going to continue for years to come; I know I have blogged about this before, and probably will again.

For many corporate applications, native code offers the marketing group richer customer experiences, the business the chance to innovate using device-specific features, and IT some new development tools.

However, if an organisation has to support the widest range of phones possible, the development of native apps becomes cumbersome, since you then need to write apps for each of the major mobile platforms available.

Part of this decision depends on whether you decide to support older phones, i.e. non-smartphones. For non-smartphone support you’ll need to build in support for features from SMS text services to basic text browsers.

Typically this is aimed at operating in developing countries. In developed countries like the UK, the growth of smartphones means that there is now a critical mass of users crowding out lower-featured handsets.

If you decide to target smartphones, then you still have a choice. You can either:

  1. Build for each platform, using its own development tools
  2. Use a cross platform mobile development solution, or
  3. Write your app as a browser solution.

So how does an organisation decide which way to go?

I found this useful little questionnaire developed by InfoTech. It takes you through a set of questions about your needs, and then suggests the best way forward between a native solution and a web-based solution.

As a quick guide for reviewing a specific tactical requirement, I thought it was pretty good and asked very pertinent questions. Obviously this is something that an IT department could expand or specialise for its own needs, and so it provides a useful structured approach to making impartial decisions without any emotional bias.
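
For illustration, here is a sketch of how an IT department might encode such a questionnaire as a weighted score. The questions and weights are invented for this example, not taken from InfoTech.

    QUESTIONS = [
        # (question, weight towards native if answered "yes")
        ("Do you need device hardware (camera, GPS, sensors)?", 3),
        ("Is offline operation essential?", 2),
        ("Do you need app-store distribution and discovery?", 1),
        ("Must one codebase cover many platforms cheaply?", -3),
        ("Is rapid, frequent updating of the app critical?", -2),
    ]

    def recommend(answers):
        """answers: one boolean per question, True meaning 'yes'."""
        total = sum(w for (_, w), yes in zip(QUESTIONS, answers) if yes)
        return "native" if total > 0 else "web/cross-platform"

    print(recommend([True, True, False, False, True]))  # -> "native"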

Where support for multiple platforms is crucial, a more difficult decision will be whether to use a cross-platform mobile development solution or to go for a pure web (and possibly HTML5) solution.

I’ll discuss this issue in a future blog, but for the time being, check out the questionnaire to start thinking about your mobile approach.