Posts Tagged ‘Service oriented architecture’

Vertical User Experience Platform

July 5, 2012

Whilst discussing with a customer what a UXP is and who the key players are, I was asked an interesting question: “is there a need for an industry-specific (banking, retail, government …) UXP?”

My immediate reaction was that the technologies in a UXP are generic, horizontal solutions that should be agnostic to the industry they are implemented in. The fact that they are specialised solutions rather than industry-specific ones is, to me, a key advantage. So why would you want a content management solution or collaboration tool that was specific to banking or retail?

The response was interesting: for many smaller companies the complexity of managing their web presence is huge. Even if they buy into a single-vendor approach, for example using Microsoft SharePoint, they still have a huge task to set up the individual components (content management, collaboration, social tools and apps), and this is only made harder by the need to support an increasing array of devices (phone, tablet, TV etc…).

It seems there is a need for an offering that provides an integrated, full UXP that can be set up easily and quickly without the need for an army of developers. Compromises on absolute flexibility are acceptable provided a rich set of templates (or the ability to create custom templates) is available, such that the templates handle device support automatically. Further, the UXP might offer vertical-specific content feeds out of the box.

As in my previous blog “The End of Silo Architectures”, using a UXP front-end technology to create industry-specific apps is a great idea. Such a solution could not only provide the business functionality (e.g. internet banking, insurance quotes/claims, stock trading) but also address the technical issues of cross-device and browser support, security and performance.

So whilst I can understand the requirement and the obvious benefit, the idea of a vertical UXP to me seems like providing a vertical-specific CRM or accounting package. The real answer is that it makes sense to provide vertical apps and use generic content, collaboration and social tools from a UXP. Ideally the generic components are integrated and have easy-to-configure templates.

As I have highlighted before, though, the UXP is complex not just from a technology perspective but also from the perspective of skills, processes and standards. The first step for any organisation must be to create a strategy for UXP: audit what you currently have, document what you need (taking into consideration current trends like social, gamification and mobile) and then decide how you move forward.

Unfortunately this area currently seems ill served by the consultancy companies, so it may just be up to you to roll your own strategy.

The end of silo architectures

June 28, 2012

From my discussions with customers and prospects it is clear that the final layer in their architectures is being defined by the UXP (see my previous posts). So whether you have a service-oriented or web-oriented architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that provides more agility and breaks down the closed silo architectures they previously owned.

However, solution vendors that provide “out of the box” business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by “service enabling” their solution, many still have proprietary requirements for specific application servers, databases, middleware or orchestration solutions.

However, recently I have come across two vendors, Temenos (global core banking) and CCS (a leading insurance platform), who are breaking the mould.

CCS have developed Roundcube as a flexible, componentised solution to address the full lifecycle of insurance, from product definition and policy administration to claims. Their solution is clearly layered, service enabled and uses leading third-party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. Their approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions they may have. By using leading third-party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (Temenos Enterprise Framework Architecture) which allows you to use any database, application server or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.

Whilst these are both evolving solutions, they have a clear strategy and path to being more open and therefore more flexible. Both are also providing a solution that can scale from the smallest business to the largest enterprise. Their solutions will therefore blend more naturally into organisations rather than dictate requirements.

Whilst packaged solutions are often mandated by business sponsors, this new breed of vendor provides the flexibility to support the changes the business will require going forward. It’s starting to feel like organisations can “have their cake and eat it” if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolio of applications. This simplification is driven by the need to reduce the cost of running multiple platforms, a cost that arises largely through duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’ but each need had been met by its own ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution, like a rules engine or user security solution.

Having been in that position, it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know, though, these separations aren’t that easy to make, because even these categories overlap. For example, you will find rules in integration technology as well as in business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations as well.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of building a business solution from the ground up, even using these tools, is too great, and the business prefers to de-risk implementation with a packaged solution. This packaged solution may itself contain a number of these components, but the advantage is that they come pre-integrated to provide the business with what it needs.

For some components duplication may be okay, if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated so that a ‘single view of users’ can be achieved. Similar approaches can be achieved for document management, but in the case of process management I believe this has been far less successful.

Another issue often faced in simplification is that the tools tend to have a particular strength and therefore weaknesses in other areas of their solution. For example, SharePoint is great at site management and content management, but poorer at creating enterprise applications. Hence a decision has to be made as to whether the tool’s weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, this decision is more often than not made not on the technology or the greater good of the enterprise, but on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time, many methods of re-use have been devised, along with design patterns to drive re-use.

Meanwhile business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive, because it means there is no time to build re-usability into applications, and the time saved now is simply added on to future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I’m going to draw an analogy from a conversation I had with an elderly relative who has a paraffin heater. This relative has had the heater for many years and is still using it today because it works. When I questioned the cost of paraffin compared with buying an energy-efficient electric heater that would be cheaper to run, the response was: this one works and it’s not broken yet, so why replace it? For most appliances we now live in a world where we don’t fix things, we replace them.

This gave me the idea, which I’m sure is not new, of disposable applications. Shouldn’t some applications just be developed quickly without designing for re-use, flexibility and maintainability? With this approach, the application would be developed with maximum speed to meet requirements rather than for elegant design, in the knowledge that it will be re-developed within a short time (2-3 years).

So could there be many applications that could be thrown away and re-developed from scratch? Well, in today’s world of ‘layered’ applications it could be that only the front-end screens need to be ‘disposable’, with business services and databases being designed for the long term, since after all there is generally less change in those areas.

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could certainly be developed as disposable applications, because the customer experience evolves and the business likes to ‘refresh the shop front’ regularly.

My experience of the insurance world is that consumer applications typically get refreshed on average every 18-24 months, so if it takes you longer than 12 months to develop your solution it won’t be very long before you are re-building it.

When looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a forgotten corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, because user experiences tend to evolve regularly. So is it time you revised your thinking on re-use?

Using Polyfill to cover up the cracks in HTML5

October 23, 2011

Long gone are the days when Internet Explorer had 95% of the browser market. We have lived in a multi-browser world since the start of the web. Whilst this has its plus points, it also has its downsides – none more so than ensuring backwards compatibility. Using HTML5 today is not simply a case of whether the browser supports it or not, but which aspects of the huge specification it supports and to what extent. A good site for seeing the various levels of support across browser releases, against different areas of the HTML5 specification, is CanIUse.com.

The W3C’s answer for developers creating solutions with HTML5 is that the new features of the spec should “gracefully degrade” when used in older browsers. Essentially this means the new markup or API is ignored and doesn’t cause the page to crash. Developers still have to test for and work around missing features, which can be an onerous task. However, help is at hand: with libraries like Modernizr you can detect which HTML5 features the browser supports.
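
To make that concrete, here is a minimal sketch, assuming Modernizr has already been included on the page; Modernizr.localstorage and Modernizr.canvas are its standard detection flags, while the polyfill path is purely illustrative:

    // Each Modernizr property is a boolean, set when the page loads.
    if (Modernizr.localstorage) {
      // Safe to use window.localStorage directly.
      localStorage.setItem('visited', 'true');
    } else {
      // Fall back to an alternative such as cookies or a storage polyfill.
      document.cookie = 'visited=true';
    }

    if (!Modernizr.canvas) {
      // Load a canvas polyfill only for browsers that need it.
      var script = document.createElement('script');
      script.src = '/js/canvas-polyfill.js'; // illustrative path
      document.getElementsByTagName('head')[0].appendChild(script);
    }

Modernizr can also be paired with a script loader so that such polyfills are only downloaded by the browsers that actually need them.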

Once you know that the browser doesn’t support an HTML5 feature you have used, you can write or use a third-party “polyfill”. A polyfill is essentially code that provides alternative behaviour to simulate an HTML5 feature in a browser that does not support that particular feature. There are lots of sites providing polyfills for different parts of the HTML5 spec; a pretty good one lists libraries covering almost all parts of the specification.
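
As a rough illustration of the pattern rather than a production-ready library, a hand-rolled polyfill for the HTML5 placeholder attribute might look something like this (it assumes it runs after the DOM has loaded):

    // Browsers that support the placeholder attribute expose it as a
    // property on newly created input elements.
    var supportsPlaceholder = 'placeholder' in document.createElement('input');

    if (!supportsPlaceholder) {
      // Crude simulation: show the hint text when the field is empty
      // and clear it when the user focuses the field.
      var inputs = document.getElementsByTagName('input');
      for (var i = 0; i < inputs.length; i++) {
        (function (input) {
          var hint = input.getAttribute('placeholder');
          if (!hint) { return; }
          if (input.value === '') { input.value = hint; }
          input.onfocus = function () {
            if (input.value === hint) { input.value = ''; }
          };
          input.onblur = function () {
            if (input.value === '') { input.value = hint; }
          };
        })(inputs[i]);
      }
    }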

For me a big concern is that I’ve not yet been able to find a single provider that gives you polyfills for the whole of HTML5, or even the majority of the specification. This could mean that you have to use several different libraries, which may or may not be compatible with each other. Another big concern is that each polyfill provides a varying level of browser backwards compatibility, i.e. some will support back to IE 6 and some will not.

With users moving more of their browsing to smartphones and tablets, which typically have the latest browser technology supporting HTML5, backwards compatibility may become less of an issue. However, it will be several years before the HTML5 spec is complete, and even then new specs are being created all the time within the W3C. So, far from being a temporary fix, the use of polyfills will become standard practice in web development – unless of course you take the brave stance of saying your application is only supported on HTML5 browsers.

However, this does raise another question: if you can simulate HTML5 behaviour, do you need to start using HTML5 at all to create richer applications? The answer is quite possibly not, but adopting it will certainly improve your user experience and make development of your rich internet applications simpler.

HTML5 gets a database

June 9, 2011

As a relative latecomer to HTML5, trying to catch up on a spec that spans over 1,000 pages is no mean feat, let alone the fact that the definition of what makes up HTML5 is spread across several specs (see my previous blog on standards spaghetti). If you’ve been following this series then you’ll have worked out that I have a few favourite features that I think will radically change the perception of web applications, and, you guessed it, HTML5’s support for database access is another.

Work on browser databases started as early as 2006, and the Web SQL Database specification went as far as implementation in WebKit-based browsers including Safari and Chrome. From what I can find, Oracle made the original WebSimpleDB proposal in 2009 and the W3C switched its efforts to Indexed DB sometime in 2010 (although Mozilla already had their own SQLite-backed storage, they too preferred IndexedDB). The current status of the IndexedDB spec, as of April 2011, is that it is still in draft, and according to www.caniuse.com early implementations exist in Chrome 11 and Firefox 4. Microsoft have released a prototype on their HTML Labs site to show their current support.

Clearly it is not ready for live commercial applications in the short term, but it is certainly something worth keeping your eye on and planning for. When an application requires more than simple key-value pairs, or requires large amounts of data, IndexedDB should be your choice over HTML5’s Web Storage APIs (localStorage and sessionStorage).

The first important thing about IndexedDB is that it is not a relational database but an object store. Hence there are no tables, rows or columns, and there is no SQL for querying the data. Instead, data is stored as JavaScript objects and navigated using cursors. The database can, however, have indexes defined.

Next, there are two modes of interaction: asynchronous and synchronous APIs. As you would imagine, the synchronous APIs DO block the calling thread (i.e. each call waits for a response before returning control and data). It follows that the asynchronous APIs do NOT block the calling thread. When using the asynchronous APIs, a callback function is required to respond to the events fired by the database after an instruction has completed.
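
As a sketch of the asynchronous style – written against the current unprefixed shape of the draft API, whereas today’s early implementations use vendor prefixes (mozIndexedDB, webkitIndexedDB) and may differ in detail – opening a database and defining its object store looks roughly like this:

    // Open (or create) version 1 of a database; the request fires events
    // rather than returning the database directly.
    var openRequest = indexedDB.open('warehouse', 1);

    // Fired when the database is created or its version increases - the
    // only place the schema (object stores and indexes) can be changed.
    openRequest.onupgradeneeded = function (event) {
      var db = event.target.result;
      var store = db.createObjectStore('orders', { keyPath: 'id' });
      store.createIndex('byCustomer', 'customer', { unique: false });
    };

    openRequest.onsuccess = function (event) {
      var db = event.target.result;
      // The database is ready to use here (see the transaction sketch below).
    };

    openRequest.onerror = function (event) {
      console.error('Could not open database', event);
    };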

Both approaches provide APIs for opening, closing and deleting a database. Databases are versioned, and each database can have one or more object stores. There are CRUD APIs for object store access (put, get, add, delete) as well as APIs to create and delete indexes.

Access to the object stores is wrapped in transactions; a single transaction can be used to access multiple object stores, as well as to perform multiple actions on one store.
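
Continuing the sketch above, and with the same caveat that the draft spec is still moving, a read/write transaction against a single object store might look like this:

    function saveAndReadOrder(db) {
      // A transaction can span several object stores; here we only need
      // 'orders', opened in read/write mode.
      var tx = db.transaction(['orders'], 'readwrite');
      var store = tx.objectStore('orders');

      // CRUD-style calls: add/put to write, get/delete to read and remove.
      store.put({ id: 1, customer: 'ACME', total: 99.50 });

      var getRequest = store.get(1);
      getRequest.onsuccess = function () {
        console.log('Stored order:', getRequest.result);
      };

      tx.oncomplete = function () {
        console.log('Transaction committed');
      };
      tx.onerror = function (event) {
        console.error('Transaction failed', event);
      };
    }

Note that in the current draft a transaction commits automatically once its outstanding requests have completed; there is no explicit commit call.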

At a very high level, there you have it: IndexedDB is a feature that allows you to manage data in the browser. This will be useful not only for online applications (e.g. a server-based warehouse could export data cubes for local access) but also for offline applications, to hold data until a connection can be established. I’d fully expect a slew of JavaScript frameworks to add value on top of what the standards provide; indeed persistence.js is one such example.

It’s good to see early implementations and prototypes for IndexedDB, and whilst the date for finalising this spec is unclear, I for one will be monitoring its progress closely and waiting with bated breath for its finalisation.

Further reading:

http://www.w3.org/TR/webdatabase/

http://www.w3.org/TR/IndexedDB/

http://hacks.mozilla.org/2010/06/beyond-html5-database-apis-and-the-road-to-indexeddb/

http://trac.webkit.org/export/70913/trunk/LayoutTests/storage/indexeddb/tutorial.html

Birth of the User Experience Platform (UXP)

January 15, 2011

Regular readers will know I have an interest in the user experience. Actually, it’s more like a passion – so, what’s next for web and user interaction technologies?

 Gartner has answered that question in their recently released hype cycle paper on the next generation web (see further reading, below). The cycle itself raises some interesting issues and trends, not least the potential horror of ‘Web 3.0’ – which the analyst suggests could be an ambiguous and unhelpful term.

 In other areas, Gartner is able to be more precise. The analyst recognises that the web continues to evolve along multiple dimensions, such as social, mobile, programmable and real time. Such developments are taking place outside and within the business, causing growth on an unprecedented scale.

 Much work, however, still needs to be done. Too many workers at too many companies remain unaware of methodologies and processes that can be used to help improve the user experience.

 Understanding the user is everything. Giving users the platform that meets their needs – and inevitably the power to tweak that platform via end-user computing – will sort the web-enabled wheat from the business chaff.

 Once again, that is a trend recognised by Gartner. The analyst suggests that a series of trends, such as context-aware computing, the mobile web and the cloud, are of particular interest right now. However, it is their take on user experience platforms (UXP) that is most significant.

 Earlier in the year, I said I expected the pendulum to swing towards UXP in 2010 (see further reading). That foresight now looks spot on, with Gartner tagging the emerging concept of integrated technologies that help deliver user interaction in its hype cycle.

 The analyst suggests the UXP is developing as a critical platform, which represents the convergence of presentation layer technology. It suggests the UXP helps provide consistency and integration, helping users to have a similar experience across multiple platforms. A UXP, in short, provides significant efficiencies.

 Gartner suggests vendors have been slow to match demand and that the market will emerge through 2013. Some specialists, however, are ahead of the game – and the analyst’s hype cycle identifies edge IPK as a UXP vendor.

 Once again, it’s nice to be proven correct and even better that our good work is recognised. My advice is to take a look at the UXP now; it’s increasingly a business necessity and you will be way ahead of your competitors.

Further reading:

 http://www.gartner.com/DisplayDocument?id=1407814&ref=g_sitelink

 http://blogs.computerworlduk.com/facing-up-to-it/2010/04/user-experience-platforms-uxp/index.htm

Fat client / Rich client / Mobile client

October 8, 2010

It’s a given that you’d better get online if you want to reach out to your customers. With more and more people having mobile access to the internet, firms need software that can help clients to interact on the move.

Step forward web-based rich internet applications (RIAs): online tools that have many of the features of their desktop counterparts. The use of RIAs dates back a decade, but it continues to evolve.

As analyst Gartner concludes in respect of enterprise-level adoption (see further reading, below), RIA platforms are still in a dynamic, early-adopter phase of market evolution. What is certain is that the RIA market is highly competitive.

As well as the most distinct and prominent flavours of RIA technology, Apple pushes the use of its own software. Such divisions are inherent to the RIA market, and competition is now taking a specific route.

Most RIAs are splitting into two distinct groups: client technology, where a specific runtime – such as Silverlight or Flex – is installed on the client; and the server-based Ajax route, where users need only a browser and no other client-side components.

The distinction between the two approaches is such that Gartner considers Ajax and client-based RIAs as similar but separate technologies. Many firms choose to opt for the client approach – but for me, going with the client approach seems like a backwards step. It’s like we’re re-inventing the battle between desktop and browser apps, only this time both options are in the browser.

First, users normally need to install a specific framework that executes the RIA before an application can launch. With JavaScript-based alternatives like Ajax, there is no installation requirement – built-in browser functionality means the required components are kept server-side.

 Second, the line between the desktop and the browser is blurring (see further reading). The browser is increasingly seen as the operating system, with individuals able to securely access social networking, music streaming and enterprise applications via the browser.

 Take note, however, that going for development via the browser is not a standalone decision. Businesses must also consider mobile development – and must avoid relying on a specific toolset for mobile development.

Get the decision wrong and you can find your business in a similar platform-specific cul-de-sac, this time on mobile rather than the desktop. By going with a mix of HTML/JavaScript and server-based Ajax technologies, your business can use the same developers across desktop and mobile environments.

HTML/JavaScript and server-based Ajax is the route that will allow you to reach out to an increasingly mobile and browser-based audience. And in the future, it’s a combination that will help your business cope with the increasing range of screen sizes.

 Open source development frameworks like Rhodes and PhoneGap allow skilled web specialists to write once and deploy anywhere, creating mobile apps that have access to local device functions like camera, contacts and GPS.
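
As a hedged sketch of what that looks like with PhoneGap – the deviceready event and the navigator.geolocation and navigator.camera objects are part of its JavaScript bridge, while the callbacks and options shown here are illustrative:

    // PhoneGap signals that its native bridge is ready via 'deviceready';
    // device APIs should not be called before this event fires.
    document.addEventListener('deviceready', function () {

      // Standard W3C geolocation, backed by the device GPS.
      navigator.geolocation.getCurrentPosition(
        function (position) {
          console.log('Position: ' + position.coords.latitude + ', ' +
                      position.coords.longitude);
        },
        function (error) {
          console.log('GPS error: ' + error.message);
        }
      );

      // PhoneGap's camera API: opens the native camera and hands the
      // captured image to the success callback.
      navigator.camera.getPicture(
        function (imageData) {
          console.log('Picture captured');
        },
        function (message) {
          console.log('Camera failed: ' + message);
        },
        { quality: 50 }
      );
    }, false);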

 If you want to give your software the greatest reach, make sure your web-based developments take a direction that allows you to serve your savvy customers.

Further reading

 http://www.gartner.com/DisplayDocument?doc_cd=164266

 http://www.computerworlduk.com/TOOLBOX/OPEN-SOURCE/blogs/index.cfm?entryid=2389&blogid=22&tsb=comment


Mobile Delusions Part Two

June 21, 2010

Lest you forget, 2010 is the year of the mobile device. It’s a subject I blogged about earlier this month, and it’s a subject I’m returning to now in order to add further clarity.

That first blog on mobile delusions tackled the thorny issue of return on investment (ROI). In that posting, I suggested that businesses must think carefully before rushing headfirst into mobile development.

Well thought-out business plans, I suggested, will always win in the long run. That statement remains as true as ever; in fact, its resonance has only increased as the proliferation of different smartphones continues.

 If you’re asked to think about handheld devices, it’s more than likely you’ll think of one specific phone and operating platform. That selection might be pushed by your personal preference for a BlackBerry or Nexus One, but the vast majority of people will immediately think of an Apple iPhone.

 Why is the iPhone so all consuming? Consumer and media hype certainly helps: you don’t see national TV news coverage of people queuing round the block to get hold of a new BlackBerry device. For Apple, every new device is a national – no, global – event (see earlier blog on the iPad).

Apple has been smart. Its beautifully designed gadgets appeal to a ‘fanboy’ mentality, where the ‘Twittering’ elite will have you believe that each new Apple device represents a new era of social and technical development.

To be fair, some of Apple’s devices are great. The iPod helped make digital music a portable reality, and its continual development through to the iPhone showed how openness can spawn great application development.

 But the iPhone is just one device in an increasingly crowded marketplace. According to analyst Gartner, Apple’s iPhone represents just under 15% of the global smart phone operating system market.

 The proportion, although significant, lags well behind Research in Motion and Symbian, the latter of which still accounts for almost half of the smart phone market. The conclusion is simple: developers will have to look beyond the Apple ‘fanboy’.

 An app designed for an iPhone should be easily portable across all mobile operating systems. Considerable market fragmentation means an app that has limited appeal for one group might be more attractive on another platform, particularly for BlackBerry users that desire enterprise interactivity.

Which brings me back to the difficulties of getting an ROI from mobile development. Fragmentation and differentiation mean you need to be learning how you can make mobile pay.

 And the best way to create an ROI is through web-based apps that can easily cross platforms, rather than platform-specific software.


Gestures to Help the Business

June 3, 2010

Business IT is now all about the consumer. The CIO faces a series of demands from employees keen to use high-end consumer hardware and software in the business.

Such demands present significant challenges, such as technology integration, fears over openness and potential security risks. When it comes to how these challenges will develop for leading executives, there is good news and bad news.

 The bad news is that consumers – particularly those entering the business – are only likely to become more demanding. With converged technology in their pockets and detailed personas online, blue chip firms will find it difficult to lay down the law for tech-savvy users.

However, the good news is that the next wave of consumer technology is also likely to produce significant benefits for the business. Take Project Natal, Microsoft’s controller-free entertainment system for the Xbox 360 console, which should be released by the end of the year.

Motion-controlled technology has been in vogue for gamers since the launch of Nintendo’s Wii in late 2006. The system, which allows the user to control in-game characters wirelessly, has been a huge commercial and technical success.

Natal is likely to take such developments to the next level, giving Xbox 360 users the opportunity to play without a game controller – and to interact through natural gestures, such as speaking, waving and pushing.

 Maybe that sounds a bit too far-fetched, a bit too much like a scene from The Matrix? Think again – early demonstrations show how the technology could be used in an interactive gaming environment.

But that’s really just the beginning. With Microsoft pulling the strings behind the technology, Natal is likely to provide a giant step towards augmented business reality – where in-depth information can be added and layered on top of a real physical environment.

 The future of the desktop, for example, will be interactive. Employees will be able to use gestures to bring up video conferencing conversations and touch items on the desktop to bring up knowledge and data.

Employees in the field, on the other hand, will be able to scan engineering parts using their mobile devices. Information sent back to the head office will allow workers to call in specific parts and rectify faults.

The implications for specific occupations are almost bewildering. Surgeons will be able to use Natal-like interactions to gain background information on ill patients; teachers will be able to scan artefacts and provide in-depth historical knowledge to students.

 The end result is more information that can be used to help serve customers better. And that is surely the most important benefit of next-generation consumerisation.

 Further reading

 http://www.xbox.com/en-US/live/projectnatal/

