Archive for the ‘Front Tier of SOA’ Category

Vertical User Experience Platform

July 5, 2012

Whilst discussing with a customer what a UXP is and who the key players are, I was asked an interesting question: “is there a need for an industry-specific (banking, retail, government…) UXP?”

My immediate reaction was that the technologies in a UXP are generic, horizontal solutions that should be agnostic to the industry they are implemented in. The fact that they are specialised solutions rather than industry-specific ones seemed to me a key advantage. So why would you want a content management solution or collaboration tool that was specific to banking or retail?

The response was interesting: for many smaller companies the complexity of managing their web presence is huge. Even if they buy into a single-vendor approach, for example Microsoft SharePoint, they still have a huge task to set up the individual components (content management, collaboration, social tools and apps), and this is only made harder by the need to support an increasing array of devices (phone, tablet, TV, etc.).

It seems there is a need for an offering that provides an integrated, full UXP that can be set up easily and quickly without the need for an army of developers. Compromises on absolute flexibility are acceptable provided a rich set of templates (or the ability to create custom templates) is supplied, such that the templates handle device support automatically. Further, the UXP might offer vertical-specific content feeds out of the box.

As in my previous blog “The End of Silo Architectures”, using a UXP front-end technology to create industry-specific apps is a great idea. Such a solution could provide not only the business functionality (e.g. Internet banking, insurance quotes/claims, stock trading) but also solutions to the technical issues of cross-device and browser support, security and performance.

So whilst I can understand the requirement and the obvious benefit, to me the idea of a vertical UXP seems like providing a vertical-specific CRM or accounting package. The real answer is that it makes sense to provide vertical apps and use generic content, collaboration and social tools from a UXP. Ideally the generic components are integrated and have easy-to-configure templates.

As I have highlighted before, though, the UXP is complex not just from a technology perspective but also from the perspective of skills, processes and standards. The first step for any organisation must be to create a strategy for UXP: audit what you currently have, document what you need (taking into consideration current trends like social, gamification and mobile) and then decide how you move forward.

Unfortunately this area currently seems ill-served by the consultancy companies, so it may just be up to you to roll your own strategy.

The end of silo architectures

June 28, 2012

From my discussions with customers and prospects it is clear that the final layer in their architectures is being defined by the UXP (see my previous posts). So whether you have a Service- or Web-Oriented Architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that provides more agility and breaks down the closed silo architectures they previously owned.

However, solution vendors that provide “out of the box” business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by “service enabling” their solution, many still have proprietary dependencies on specific application servers, databases, middleware or orchestration solutions.

However recently I have come across two vendors, Temenos (global core banking) and CCS (leading insurance platform) who are breaking the mould.

CCS have developed Roundcube to be a flexible, componentised solution addressing the full lifecycle of insurance, from product definition and policy administration to claims. Their solution is clearly layered, service-enabled and uses leading 3rd-party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. This approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions. By using leading 3rd-party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (the Temenos Enterprise Framework Architecture), which allows you to use any database, application server or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.

Whilst these are both evolving solutions, they have a clear strategy and path to being more open and therefore more flexible. Both also provide a solution that can be scaled from the smallest businesses to the largest enterprises. Their solutions will therefore blend into organisations naturally rather than dictate requirements.

Whilst packaged solutions are often enforced by business sponsors, this new breed of vendor provides the flexibility that will ensure the agility of change the business requires going forward. It’s starting to feel like organisations can “have their cake and eat it” if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolios of applications. This simplification is driven by the need to reduce the cost of multiple platforms, a cost largely caused by duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’ but both needs were met by the same kind of ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution, like a rules engine or user security solution.

Having been in that position it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know though, these separations aren’t that easy to make, because even some of these have overlaps. For example, you will find rules in integration technology as well as business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations also.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of building a business solution from the ground up, even using these tools, is too great, and the business prefers to de-risk implementation with a packaged solution. The packaged solution may itself contain a number of these components, but the advantage is that they are pre-integrated to provide the business with what it needs.

For some components duplication may be acceptable if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated so that a ‘single view of users’ can be achieved. Similar approaches work for document management, but in the case of process management I believe this has been far less successful.

Another issue often faced in simplification is that a tool usually has particular strengths, and therefore weaknesses in other areas. For example, SharePoint is great at site management and content management, but poorer at creating enterprise applications. Hence a decision has to be made as to whether the tool’s weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, this decision is less often made on technology and the greater good of the enterprise, and more often on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time many methods of re-use have been devised, along with design patterns to drive re-use.

Meanwhile business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive, because it means there is no time to build re-usability into applications, and the cost is simply pushed on to future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I’m going to draw an analogy with a conversation I had with an elderly relative who owns a paraffin heater. The relative has had the heater for many years and is still using it today because it works. When I questioned the cost of paraffin against buying an energy-efficient electric heater that was cheaper to run, the response was: this one works and it’s not broken yet, so why replace it? Yet for most appliances we are now in a world where we don’t fix things, we replace them.

This gave me the idea, which I’m sure is not new, of disposable applications. Shouldn’t some applications just be developed quickly without designing for re-use, flexibility and maintainability? With this approach, the application would be developed with maximum speed to meet requirements rather than for elegant design, in the knowledge that it will be re-developed within a short time (2–3 years).

So could many applications be thrown away and re-developed from scratch? Well, in today’s world of ‘layered’ applications it may be that only the front-end screens need to be ‘disposable’, with business services and databases designed for the long term, since there is generally less change in those areas.

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could typically be developed as disposable applications, because the customer experience evolves and the business likes to ‘refresh the shop front’ regularly.

My experience of the insurance world is that consumer applications typically get refreshed every 18–24 months on average, so if it takes you longer than 12 months to develop your solution it won’t be long before you are re-building it.

Looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, because user experiences tend to evolve regularly. So is it time you revised your thinking on re-use?

What’s a UXP?

March 29, 2012

Gartner are defining a category they call UXP to help organisations manage all their user experience requirements.

Gartner defines the UXP as “an integrated collection of technologies and methodologies that provides the ability to design and deliver user interface/presentation capabilities for a wide variety of interaction channels (including features such as web, portal, mashup, content management, collaboration, social computing, mobile, analytics, search, context, rich Internet application, e-commerce, an application platform and an overall user experience design and management framework)”.

There is currently no precise definition of the set of technologies a UXP encompasses, but Gartner identify the following list as candidates:

  • Web analytics
  • Search
  • Social
  • Programming frameworks and APIs
  • UX design and management
  • Rich internet applications
  • E-commerce
  • Mobile
  • Content management
  • Collaboration, with portal and mashups being core.

With the growing importance of web interfaces on all devices, the UXP comes not a moment too soon, as organisations need to get a grip on not just these technologies but also the underlying business processes and skills required to define, create, manage and measure their user and customer experiences.

It’s clear from an architectural perspective that the UXP covers everything in the “Presentation layer”, and maybe a few things in the grey areas between the Presentation layer and the Business layer.

As Gartner have identified, this is a growing list of technologies. From my perspective, some of these need to be integrated and some are standalone, and it would be helpful to have some broader categories within the UXP to help focus efforts towards implementation.

Social and collaboration technologies facilitate interaction between two or more users, and so could be grouped into a category called UXP-Collaboration.

Content is the core of any web platform and content management, search and analytics could be grouped into a category called UXP-Content.

Portal, mobile apps, RIA and mashups are essentially application development technologies, so they could be grouped as UXP-Apps.

From a process perspective these categories also make sense: UXP-Collaboration technologies are installed and then require mediation processes to manage the implementation; UXP-Content technologies require a publishing and monitoring lifecycle; and UXP-Apps technologies are implemented by IT and go through an IT development lifecycle.

However, UXP is an evolving field, and as with any technology it is clear that selection and implementation cannot be done without a full understanding of business requirements, the underlying implementation and management processes and skills required.

Given the size, complexity and importance of this task I would not be surprised to see some organisations appoint a Chief eXperience Officer (CXO).

HTML5 will be key to a MAD world

February 16, 2012

According to researchers, over 5 billion devices connect to the Internet today, and by 2020 over 22 billion devices, including 6 billion phones and 2 billion TVs, will be connected. By 2014 sales of new internet-connected devices, excluding PCs, will be over 500 million units a year.

We’re moving into a connected world where people expect internet access any time, any place and on anything, and so many of us will have Multiple Access Devices (MAD).

Therefore it still amazes me to find large corporates with a separate Internet strategy and Mobile strategy. I won’t name and shame, but you know who you are! What next, a strategy for tablets and a separate one for Internet TVs?

One of the key principles of HTML5 is that it aims to give you the tools to write once, deploy everywhere; that is, to create applications and content that run appropriately on every device. Of course, where it makes sense an application might need to take advantage of a specific device’s capabilities (e.g. a camera or GPS), but even then conditional behaviour can be developed to provide such differentiation, rather than developing a whole new application for a specific device.

One of the key enablers here is adopting an approach whereby the content/application VIEW (look and feel) is fully controlled by CSS, namely version 3. CSS3 has a number of features that allow you to control layout and look and feel according to the available screen real estate. The enabling technology is Media Queries, which allow you to create different VIEWs of an application based on screen dimensions. I’ll be writing more about this soon.
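As a minimal sketch, the view-switching that a Media Query expresses declaratively in CSS looks like this in script form (the breakpoint values are illustrative assumptions, not a standard; in a browser the same test is available via window.matchMedia):

```javascript
// Sketch of the breakpoint logic a CSS3 Media Query expresses declaratively,
// e.g. @media (max-width: 480px) { ... }. Breakpoint values are illustrative.
// In a browser: window.matchMedia("(max-width: 480px)").matches
function pickView(widthPx) {
  if (widthPx <= 480) return "phone";   // small screens get the phone VIEW
  if (widthPx <= 1024) return "tablet"; // mid-size screens
  return "desktop";                     // laptops, PCs, Internet TVs
}
```

The point of Media Queries is that the browser re-evaluates these conditions for you as the viewport changes, so the same markup can carry all three VIEWs.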

Creating a MAD (Multiple Access Devices) strategy is not all about technology though. Organisations will have to monitor, and sometimes drive, changes in customer behaviour; look at how much more time young people spend on smartphones than watching TV or using any other device. With each person owning multiple access devices, different devices will likely be used for different parts of the customer buying cycle: if a purchase requires research and thought it will most likely happen on PCs/laptops; for instant updates (news, stock prices, weather, scores and so on) smartphones will dominate; and for entertainment, maybe tablets will prevail.

There are many more challenges to be discussed, and I hope to cover these in more depth in follow-up blogs, but for now organisations small or large need to create a single MAD strategy that encompasses customer/user needs, monitors behavioural changes and trends, and investigates the capabilities of enabling technologies.

The change will be profound, as organisations realise the total impact on processes, skills and technologies of really mastering Customer Experience in a MAD world, a journey which the visionaries have already started.

HTML5: The right time right place for mobile?

February 9, 2012

Most people by now understand that the main challenge in developing mobile applications is creating a solution that runs on as many platforms as possible. This challenge can range from supporting browsers that only render text, up to fully fledged smartphones.

For organisations targeting users in the developed world, many simplify this challenge by targeting smartphones only. However, even here creating native applications requires solutions that support Apple’s iOS, Windows, Android and Java (BlackBerry).

There are many mobile development platforms available to assist with creating “write once, deploy everywhere” apps. The main constraints here are that you end up with deployments to many different stores, and that quite often you still write platform-specific code to take advantage of platform-specific features.

HTML5 has long been a strong candidate for mobile applications, but is it ready? Are mobile browsers up to date with HTML5?

The answer to this question can be a simple “No”: no mobile browser supports the full HTML5 specification. Or a “Maybe”: depending on which features of the phone you require (camera, phone book, GPS), you may have support from HTML5.

Push that up to a resounding “Yes” if you want to move an application that currently runs on the web onto mobile. Of course, I should also caveat the above with ‘there are grey areas’ in between these responses; not very helpful, I know.

For corporates looking to support mobile users with line-of-business applications, I believe there are some great examples that prove HTML5 is ready for them. For a start, Facebook is one such application taking full advantage of HTML5 and promoting its use for Facebook apps.

The key areas of HTML5 that are supported across mainstream mobile browsers are offline storage, geolocation, multimedia, graphics (canvas), touch events and large parts of CSS3. The mobile HTML5 site provides a list of mobile browser capabilities.
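A hedged sketch of how checking for these capabilities looks in practice; the window-like object is passed in as a parameter purely so the checks can run outside a browser, and the property names are the standard ones each feature exposes:

```javascript
// Illustrative feature detection for some of the capabilities listed above.
// `win` stands in for the browser's window object.
function detectFeatures(win) {
  var nav = win.navigator || {};
  return {
    offlineStorage: "localStorage" in win,                // Web Storage
    geolocation: "geolocation" in nav,                    // Geolocation API
    canvas: typeof win.HTMLCanvasElement !== "undefined", // <canvas> graphics
    touch: "ontouchstart" in win                          // touch events
  };
}
```

Libraries like Modernizr do essentially this, but far more thoroughly.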

In the past marketers have argued that a presence on app stores adds value to “brand awareness”, and whilst this is true, there is nothing stopping an organisation using both native apps and HTML. For example, take Lloyds TSB: you can download their app, which, once downloaded, effectively runs a “browser” version of their Internet banking service.

There are also some great libraries out there that make cross-platform mobile development much easier and provide features that make your web applications feel much more like a native phone app. jQuery Mobile is a great example.

So what are you waiting for?

HTML 5 makes the browser smarter

January 26, 2012

The unsung hero of the web has always been Javascript, without which the standards-based web would be completely static. Javascript enables functionality to be executed in the browser, and has been used to create all sorts of effects otherwise not possible with HTML alone.

In the early days, Javascript implementations weren’t entirely standard, requiring developers to write variants for different browsers; this isn’t really an issue any more.

For applications, developers will either use libraries or develop their own validation routines. Either way, this Javascript code adds significantly to the amount of code downloaded.

With HTML5, developers will need to write less Javascript, as the browser provides features to do things for itself rather than relying on extra scripting.

Validation is the main area of improvement. HTML5 now provides a number of new validation features such as mandatory checking, type checking, and range and field-length validation. The validation is done within the browser, and developers can decide how to process errors.
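To illustrate the kind of checking the browser now performs from markup alone, here is a sketch (illustrative only; the result names mirror the browser's ValidityState properties, and the rules mirror the required, maxlength and pattern attributes):

```javascript
// Sketch of the constraint checks HTML5 browsers apply from markup such as
// <input required maxlength="6" pattern="[0-9]+">. Illustrative only; result
// names echo ValidityState (valueMissing, tooLong, patternMismatch).
function validateField(value, rules) {
  if (rules.required && value === "") return "valueMissing";
  if (rules.maxLength !== undefined && value.length > rules.maxLength) return "tooLong";
  if (rules.pattern && !new RegExp("^(?:" + rules.pattern + ")$").test(value)) return "patternMismatch";
  return "valid";
}
```

The win is that none of this script has to be written or downloaded: the browser applies it directly from the attributes.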

Obviously validation has to be repeated on the server for security, to ensure that data hasn’t been tampered with in the browser or in transmission. This means that validation has to be maintained in two places and kept in sync.

HTML5 also provides a number of new input field types such as tel, email, color and datetime. This empowers the browser to display, for example, a date picker or a colour chooser. More importantly for mobile applications, it allows the browser to show an appropriate keyboard layout, e.g. a numeric layout for tel and an alphabetic keyboard for the email type.

There are also a number of new attributes which previously required Javascript such as autocomplete, placeholder and pattern which will prove very useful.

There will be some organisations that will not want the browser to affect their carefully designed user experience; for them the answer is simple: just don’t use the new features.

For the rest, you will enjoy writing less Javascript for HTML5 browsers, though of course you will still need backwards compatibility for non-HTML5 browsers, which will rely on Javascript.

HTML5 Gets pushy

January 20, 2012

In the past there have been a number of clever approaches to get around web technology’s inability to let servers broadcast messages/data to clients. One of the more popular approaches for such capability has been Comet.

Whilst providing a solution where the standard lacked functionality, Comet is quite inefficient – both in terms of network bandwidth and performance.

Comet effectively posts a message to the server, and the server replies if there is data waiting to go; if there is not, the call is left waiting, and if it subsequently times out the client is notified, which in turn makes another call. Its inefficiency is that it is constantly making calls to the server regardless of whether there is data to be sent back. Also, because it uses HTTP, the requests are relatively large.
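The cycle just described can be sketched as follows. This is an illustration of the control flow only: `poll` stands in for the HTTP request that the server holds open (a real client would use an XMLHttpRequest), with null representing a timeout:

```javascript
// Sketch of the Comet long-polling cycle described above. `poll` is a
// stand-in for an HTTP request held open by the server, returning data,
// or null when the request timed out with nothing to send.
function longPoll(poll, onData, rounds) {
  for (var i = 0; i < rounds; i++) {
    var data = poll();               // call is "held open" until data/timeout
    if (data !== null) onData(data); // on timeout, just loop and call again
  }
}
```

Every iteration is a full HTTP round trip, whether or not data arrives, which is exactly the inefficiency described above.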

In a previous blog I mentioned how the WebSockets API allows you to have bi-directional messaging between a browser and server. The use of WebSockets requires a socket server to facilitate messaging using the socket protocol (which is far more efficient in terms of request size than HTTP). The most popular example of a good use of WebSockets is online chat.

However there is another option: Server Sent Events (SSE).

The main advantage of SSE is that no additional software is required; a standard web server can be used, as SSE relies only on the HTTP protocol. The main downsides, therefore, are that SSE is uni-directional (server-to-client broadcast only) and, because it uses HTTP, the message requests are much larger than socket calls.

SSE also allows arbitrary events, and even allows messages to be received from different domains. Care needs to be taken, as this can be open to abuse, so developers should check that the domain from which an event originates is one they are expecting.
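As an illustration of what travels over the wire, here is a sketch of the text/event-stream framing; a browser's EventSource does this parsing for you, and this version handles only the `event` and `data` fields:

```javascript
// Minimal parser for text/event-stream frames: each event is a block of
// "field: value" lines terminated by a blank line. Sketch only; a real
// EventSource also handles "id", "retry" and incremental buffering.
function parseSSE(stream) {
  return stream.split("\n\n").filter(Boolean).map(function (block) {
    var evt = { event: "message", data: [] }; // "message" is the default type
    block.split("\n").forEach(function (line) {
      if (line.indexOf("data:") === 0) evt.data.push(line.slice(5).trim());
      else if (line.indexOf("event:") === 0) evt.event = line.slice(6).trim();
    });
    evt.data = evt.data.join("\n"); // multiple data: lines join with newlines
    return evt;
  });
}
```

In the page itself you would simply subscribe, e.g. `new EventSource("/updates")` (the URL is hypothetical), and, as noted above, check the event's origin before trusting the message.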

This is a powerful capability, and for anyone looking at “lean portal” implementations, could be a way of creating functionality like “inter-portlet communication” without requiring a portal server to facilitate the sharing of data.

They are also relatively simple to implement which again gives them an advantage over WebSockets.

SSE will be most useful to applications like stock tickers, live news, weather updates, sports scores and any other application where the server needs to push data to a client. So does this mean the end of Comet? No, because to provide backwards compatibility with non-HTML5 browsers, Comet can be used as a polyfill.

Using Polyfill to cover up the cracks in HTML5

October 23, 2011

Long gone are the days when Internet Explorer had 95% of the browser market; we have lived in a multi-browser world since the start of the web. Whilst this has its plus points, it also has its downsides, none more so than ensuring backwards compatibility. Using HTML5 today is not simply a case of whether the browser supports it or not, but of which aspects of the huge specification it supports and to what extent. A good site for seeing the various levels of support across browser releases, against different areas of the HTML5 specification, is CanIUse.com.

The W3C’s answer for developers creating solutions with HTML5 is that the new features of the spec should “gracefully degrade” when used in older browsers. Essentially this means the new markup or API is ignored and doesn’t cause the page to crash. Developers should then test and develop backwards compatibility themselves, which can be an onerous task. However, help is at hand: with libraries like Modernizr you can detect which features of HTML5 the browser supports.

Once you know that the browser doesn’t support an HTML5 feature you have used, you can write or use a 3rd-party “polyfill”. In HTML, a polyfill is essentially code that provides alternative behaviour to simulate an HTML5 feature in a browser that does not support it. There are lots of sites providing polyfills for different parts of the HTML5 spec; a pretty good one lists libraries covering almost all parts of the specification.
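The pattern itself is simple: detect, then fill. String.prototype.trim is used below purely as a small, self-contained illustration; polyfills for bigger HTML5 features (canvas, storage, video) follow the same detect-then-supply shape:

```javascript
// The polyfill pattern: feature-detect first, and only supply a script
// implementation when the native one is missing, keeping the same interface
// so calling code never needs to know which version it is using.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    return this.replace(/^\s+|\s+$/g, ""); // strip leading/trailing whitespace
  };
}
```

Because the fill is only installed when the native feature is absent, modern browsers pay no cost and older ones gain the capability.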

For me a big concern is that I’ve not yet found a single provider that gives you polyfills for the whole of HTML5, or even the majority of the specification. This could mean that you have to use several different libraries, which may or may not be compatible with each other. Another big concern is that each polyfill provides a varying level of browser backwards compatibility, i.e. some will support back to IE 6 and some will not.

With users moving more of their browsing to smartphones and tablets, which typically have the latest browser technology supporting HTML5, backwards compatibility may become less of an issue. However, it will be several years before the HTML5 spec is complete, and even then new specs are being created all the time within the W3C. So, far from being a temporary fix, the use of polyfills will become standard practice in web development, unless of course you take the brave stance of saying your application is supported only on HTML5 browsers.

However, this does raise another question: if you can simulate HTML5 behaviour, do you need to use HTML5 at all to create richer applications? The answer is quite possibly not, but an HTML5 browser will certainly improve your user experience and make development of your rich internet applications simpler.