Posts Tagged ‘Layered Architecture’

Vertical User Experience Platform

July 5, 2012

Whilst discussing with a customer what a UXP is and who the key players are, I was asked an interesting question: "is there a need for an industry-specific (banking, retail, government…) UXP?"

My immediate reaction was that the technologies in a UXP are generic, horizontal solutions that should be agnostic to the industry they are implemented in. To me, the fact that they are specialised solutions rather than industry-specific ones is a key advantage. So why would you want a content management solution or collaboration tool that was specific to banking or retail?

The response was interesting: for many smaller companies the complexity of managing their web presence is huge. Even if they buy into a single-vendor approach, for example using Microsoft SharePoint, they still have a huge task to set up the individual components (content management, collaboration, social tools and apps), and this is only made harder by the need to support an increasing array of devices (phone, tablet, TV etc.).

It seems there is a need for an offering that provides an integrated, full UXP that can be set up easily and quickly without the need for an army of developers. Compromises on absolute flexibility are acceptable provided a rich set of templates (or the ability to create custom templates) is available, such that the templates handle device support automatically. Further, the UXP might offer vertical-specific content feeds out of the box.

As in my previous blog "The End of Silo Architectures", using a UXP front-end technology to create industry-specific apps is a great idea. Such a solution could not only provide the business functionality (e.g. internet banking, insurance quotes/claims, stock trading) but also address the technical issues of cross-device and browser support, security and performance.

So whilst I can understand the requirement and the obvious benefit, the idea of a vertical UXP to me seems like providing a vertical-specific CRM or accounting package. The real answer is that it makes sense to provide vertical apps and use generic content, collaboration and social tools from a UXP. Ideally the generic components are integrated and have easy-to-configure templates.

As I have highlighted before though the UXP is complex not just from a technology perspective but also from the perspective of skills, processes and standards. The first step for any organisation must be to create a strategy for UXP: audit what you currently have, document what you need (take into consideration current trends like social, gamification and mobile) and then decide how you move forward.

Unfortunately this area currently seems ill-served by the consultancy companies, so it may just be up to you to roll your own strategy.

The end of silo architectures

June 28, 2012

From my discussions with customers and prospects it is clear that the final layer in their architectures is being defined by the UXP (see my previous posts). So whether you have a service-oriented or web-oriented architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that will provide more agility and break down the closed silo architectures they previously owned.

However, solution vendors that provide "out-of-the-box" business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by "service enabling" their solution, many still have proprietary dependencies on specific application servers, databases, middleware or orchestration solutions.

However recently I have come across two vendors, Temenos (global core banking) and CCS (leading insurance platform) who are breaking the mould.

CCS have developed Roundcube as a flexible, componentised solution to address the full lifecycle of insurance, from product definition and policy administration to claims. Their solution is clearly layered, service-enabled and uses leading third-party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. Their approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions. By using leading third-party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (Temenos Enterprise Framework Architecture), which allows you to use any database, application server or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.

Whilst these are both evolving solutions, they have a clear strategy and path to being more open and therefore more flexible. Both also provide a solution that can scale from the smallest business to the largest enterprise. Their solutions will therefore blend more naturally into organisations rather than dictate requirements.

Whilst packaged solutions are often imposed by business sponsors, this new breed of vendor provides the flexibility that will ensure the agility the business requires going forward. It's starting to feel like organisations can "have their cake and eat it" if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolio of applications. This simplification is driven by the need to reduce the costs of multiple platforms, costs largely caused by duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’ but both needs had been met by a ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution like a rules engine or user security solution.

Having been in that position it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know though, these separations aren’t that easy to make, because even some of these have overlaps. For example, you will find rules in integration technology as well as business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations also.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of building a business solution from the ground up, even using these tools, is too great, and the business prefers to de-risk implementation with a packaged solution. This packaged solution may itself contain a number of these components, but the advantage is that they are pre-integrated to provide the business with what it needs.

For some components duplication may be okay, if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated so that a 'single view of users' can be achieved. Similar approaches can work for document management, but in the case of process management I believe this has been far less successful.

Another issue often faced in simplification is that tools have particular strengths, and therefore weaknesses, in different areas. For example, SharePoint is great at site management and content management, but poorer at creating enterprise applications. Hence a decision has to be made as to whether a tool's weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, this decision is less often made on technology merits and the greater good of the enterprise, and more often on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time, many methods of re-use have been devised, along with design patterns to drive it.

Meanwhile business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive, because it means there is no time to build re-usability into applications, and the time saved now is simply paid for on future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I'm going to draw an analogy from a conversation I had with an elderly relative who has a paraffin heater. This relative has had the heater for many years, and is still using it today because it works. When I questioned the cost of paraffin against buying an energy-efficient electric heater that was cheaper to run, the response was: this one works and it's not broken yet, so why replace it? Yet for most appliances we now live in a world where we don't fix things, we replace them.

This gave me the idea, which I'm sure is not new, of disposable applications. Shouldn't some applications just be developed quickly, without designing for re-use, flexibility and maintainability? With this approach, the application would be developed for maximum speed to meet requirements rather than for elegant design, in the knowledge that it will be re-developed within a short time (2-3 years).

So could there be many applications that could be thrown away and re-developed from scratch? Well, in today's world of 'layered' applications it may be that only the front-end screens need to be 'disposable', with business services and databases designed for the long term, since there is generally less change in those areas.

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could typically be developed as disposable applications, because the customer experience evolves and the business likes to 'refresh the shop front' regularly.

My experience of the insurance world is that consumer applications typically get refreshed on average every 18-24 months, so if it takes you longer than 12 months to develop your solution it won’t be very long before you are re-building it.

When looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a forgotten corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, which tends to evolve regularly. So is it time you revised your thinking on re-use?

HTML5 gets a database

June 9, 2011

As a relative latecomer to HTML5, trying to catch up on a spec that spans over 1,000 pages is no mean feat, let alone the fact that the definition of what makes up HTML5 is spread across several specs (see my previous blog on standards spaghetti). If you've been following this series then you'll have worked out that I have a few favourite features that I think will radically change the perception of web applications, and, you guessed it, HTML5's support for database access is another.

The story started as early as 2006 with the Web SQL Database specification (WebSQL), which went as far as implementations in several WebKit-based browsers, including Safari and Chrome. From what I can find, Oracle made the original proposal for an alternative, WebSimpleDB, in 2009, and the W3C switched its focus to Indexed Database (IndexedDB) sometime in 2010. Although Mozilla already used SQLite internally, they too preferred IndexedDB. The current status as of April 2011 is that the IndexedDB spec is still in draft, and according to www.caniuse.com early implementations exist in Chrome 11 and Firefox 4. Microsoft have released a prototype on their HTML5 Labs site to show their current support.

Clearly it is not ready for live commercial applications in the short term, but it is certainly worth keeping an eye on and planning for. When an application requires more than simple key-value pairs, or needs to handle large amounts of data, IndexedDB should be your choice over HTML5's Web Storage APIs (localStorage and sessionStorage).
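For the simple key-value case, the Web Storage APIs really are trivial to use, which is why they remain the right tool for small amounts of data. A quick illustrative sketch (browser-only, so run it in a page rather than a server-side runtime):

```javascript
// Web Storage: a synchronous, string-only key-value store.
// localStorage persists across sessions; sessionStorage does not.
localStorage.setItem("theme", "dark");

// Anything richer than a string must be serialised by hand.
localStorage.setItem("cart", JSON.stringify([{ sku: "A-100", qty: 2 }]));

var theme = localStorage.getItem("theme");           // the stored string
var cart = JSON.parse(localStorage.getItem("cart")); // back to objects

localStorage.removeItem("theme");
```

The manual JSON round-tripping is exactly the friction that makes an object store like IndexedDB attractive once the data grows beyond a handful of values.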

The first important thing about IndexedDB is that it is not a relational database but an object store. Hence there are no tables, rows or columns, and there is no SQL for querying the data. Instead, data is stored as JavaScript objects and navigated using cursors. The database can, however, have indexes defined.

Next, there are two modes of interaction: asynchronous and synchronous APIs. As you would imagine, the synchronous APIs block the calling thread (i.e. each call waits for a response before returning control and data), while the asynchronous APIs do not. When using the asynchronous APIs, a callback function is required to respond to the events fired by the database after an instruction has completed.
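To make the callback pattern concrete, here is a minimal sketch of opening a database asynchronously. This is illustrative only: it uses the unprefixed names the spec is heading towards, whereas today's early implementations expose vendor-prefixed versions (e.g. mozIndexedDB, webkitIndexedDB), and the database and store names are made up for the example.

```javascript
// Fall back through the vendor-prefixed implementations in early browsers.
var idb = window.indexedDB || window.mozIndexedDB || window.webkitIndexedDB;

// Opening is asynchronous: we get a request object back immediately
// and attach callbacks that fire when the database responds.
var request = idb.open("warehouse", 1);

// Fired when the database is created or its version number increases;
// this is the only place object stores and indexes may be defined.
request.onupgradeneeded = function (event) {
  var db = event.target.result;
  var store = db.createObjectStore("products", { keyPath: "sku" });
  store.createIndex("byName", "name", { unique: false });
};

request.onsuccess = function (event) {
  var db = event.target.result; // ready to start transactions
};

request.onerror = function (event) {
  console.error("Failed to open database", event);
};
```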

Both approaches provide APIs for opening, closing and deleting a database. Databases are versioned, and each database can have one or more object stores. There are CRUD APIs for object store access (put, get, add, delete) as well as APIs to create and delete indexes.

Access to the object stores is wrapped in transactions, and a single transaction can span multiple object stores as well as multiple actions on one store.
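A sketch of what those transactional CRUD and cursor calls look like, again using the API shape of the current drafts rather than any one browser's prefixed implementation, and assuming an already-open db handle and a hypothetical "products" store:

```javascript
// A read-write transaction covering one object store; the first argument
// may list several stores, and several operations can be batched together.
var tx = db.transaction(["products"], "readwrite");
var store = tx.objectStore("products");

// CRUD calls: add (insert), put (insert-or-update), get, delete.
store.add({ sku: "A-100", name: "Widget", qty: 3 });
store.put({ sku: "A-100", name: "Widget", qty: 5 }); // update in place

// Every call is asynchronous and answers through an event callback.
var getReq = store.get("A-100");
getReq.onsuccess = function (event) {
  console.log("qty is", event.target.result.qty);
};

// No SQL: to scan records we walk a cursor over the store (or an index).
store.openCursor().onsuccess = function (event) {
  var cursor = event.target.result;
  if (cursor) {
    console.log(cursor.key, cursor.value.name);
    cursor.continue();
  }
};

// The transaction commits automatically once all its requests settle.
tx.oncomplete = function () { console.log("done"); };
```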

And there, at a very high level, you have it: IndexedDB is a feature that allows you to manage data in the browser. This will be useful not only for online applications (e.g. a server-based warehouse could export data cubes for local access) but also for offline applications, to hold data until a connection can be established. I'd fully expect a slew of JavaScript frameworks to add value on top of what the standards provide; indeed persistence.js is one such example.

It's good to see early implementations and prototypes for IndexedDB, and whilst the date for finalising this spec is unclear, I for one will be monitoring its progress closely and waiting with bated breath for its finalisation.

Further reading:

http://www.w3.org/TR/webdatabase/

http://www.w3.org/TR/IndexedDB/

http://hacks.mozilla.org/2010/06/beyond-html5-database-apis-and-the-road-to-indexeddb/

http://trac.webkit.org/export/70913/trunk/LayoutTests/storage/indexeddb/tutorial.html

Does HTML5 mean the end of Silverlight: Yes

March 31, 2011

If you’re like me, you might have a dream that surfers will soon not have to rely on plug-ins to enjoy browsing the web. For fellow dreamers, the forthcoming and latest round of browser wars might lead to a better web experience rather than yet another plug-in based nightmare.

Microsoft has recently had to grin and bear the pain while its dominant Internet Explorer browser has seen its market share attacked by a series of rivals, including Mozilla Firefox, Apple Safari and, most notably, Google Chrome. With market share now hovering at around 60% (see further reading, below), it's almost as if the top guys at Redmond have decided that enough is enough.

The result is a return of the browser wars, with Microsoft set to preview the final beta of Internet Explorer 9 (IE9) in September. Chrome is clean, simple and fast – and expectations will be that IE9 provides a much quicker browsing experience.

Initial signs look good. Graphics performance is enhanced through hardware acceleration. But the real story is the heavy use of HTML5, showing that researchers in Redmond also feel the next-generation mark-up language is the best way forward for development.

“The future of the web is HTML5,” suggested Dean Hachamovitch, the general manager for Internet Explorer in a blog post earlier this year (see further reading). With Apple and Google also throwing their weight behind HTML5, much debate has rightly centred on the tricky situation facing Adobe’s video plug-in Flash.

But Microsoft’s support for HTML5 potentially creates another set of circumstances and another high profile conflict. This conflict surrounds Silverlight, a web framework that integrates multimedia and graphical elements in a single environment.

More intriguingly, Silverlight is Microsoft's own framework and, since April 2007, it has formed the backbone of the company's presentation technology. So, where does Microsoft's support for HTML5 leave Silverlight? That, for web developers, is the key question.

Online publication The Register recently referred to the clash as “The Silverlight Paradox”, suggesting that a high quality and HTML5-ready IE9 will surely make many of the features of Silverlight and Flash redundant (see further reading).

Such a paradox, however, is fraught with complications. IE9 might look like it provides new fuel for Microsoft's browser battle, but the true level of optimisation will not be clear until web developers get their hands on the beta.

As The Register article suggests, legacy requirements mean the use of plug-ins will persist for many years – even if IE9 delivers everything it promises. But the move towards HTML5 shows that the captive strength of plug-ins is waning and businesses must develop web platforms with capability across all levels, from the desktop through to the mobile. The new web experience is emerging.

Further reading:

http://arstechnica.com/microsoft/news/2010/03/firefox-may-never-hit-25-percent-market-share.ars

http://blogs.msdn.com/b/ie/archive/2010/04/29/html5-video.aspx

http://www.theregister.co.uk/2010/08/05/inside_ie9/

The new legacy is HTML !

November 22, 2010

The legacy of past computing decisions is one of the biggest technology challenges facing businesses. What’s more, lessons from the past are not being heeded.

Let's start with the most famous legacy code of them all, because if you've encountered COBOL, you've encountered legacy. Invented in 1959, the language became a mainstay of business computing for the next four decades.

The legacy, however, quickly turned into a significant burden. Gartner reported that 80% of the world’s business ran on COBOL in 1997, with more than 200 billion lines of code in existence and an estimated 5 billion lines of new code produced annually (see further reading, below).

The reliance on that rate of production came home to roost towards the end of the last century, when date-handling problems led to the panic associated with Y2K. The story since then has been one of decline. The continued move of business online has led to a clamour for newer, sleeker, internet-ready programming languages.

First specified in the early 1990s, HyperText Markup Language (HTML) became the predominant web development language. Its use ran alongside the development of open standards such as JavaScript and Cascading Style Sheets (CSS).

Such languages and styles helped to define the layout of the Web. But that is far from the end of the story. Online development in the era of HTML has become increasingly patchy, with more and more developers using varying styles of code.

Additional online tools, such as Silverlight and Flex, create further complexity. The result is that HTML, and an associated collection of standards and tools, are fast becoming the new legacy.

Just as in the case of COBOL, more and more lines of code are being produced. The disparate pace of online development is such that we will end up with reams of legacy HTML, JavaScript and CSS code.

Learn from history and get to grips with the problem now. Make sure you have proper documentation and standards. Select tools that are integrated with the rest of your business processes and which allow users to make the most of earlier development projects.

Think about how certain approaches – such as a mix of HTML/JavaScript and Ajax-server based technologies – will allow your business, and even your end-users, to use the same development techniques on desktop and mobile environments.

Also look to the future and take a look at HTML5, which is currently under development as the next major revision of the HTML standard, including features that previously required third-party plug-ins such as Flash. Don't stop there: carry on with CSS3, Web Workers and WebFonts, all evolutions of current web technologies that will be mainstream tomorrow.

The end result should be the end of fragmented development and a legacy of useful web applications, rather than unusable and unidentifiable code.
Further reading:

http://www.wired.com/thisdayintech/2010/05/0528cobol-conference/



Take your IT department forward by putting end user development at the front

November 19, 2010

Here's a wake-up call for the IT department: end-user computing will definitely become dominant; it's just a matter of time.

 Proof comes in the form of modern business practices. Increasing numbers of executives are now saying that time to market is absolutely critical. A slow moving organisation is one that loses.

 For many firms, the ability to move quickly is underpinned by technology. The pace of change and centrality of IT to contemporary business means every organisation, whatever the sector, relies on technology to help maintain information flows and to help its employees deal with customers.

 Such reliance should be good news for the traditional technology team. But there’s a significant catch. The business wants to make changes and add products quickly. Technology, as the underpinning structure, should be set up to create speed.

 Unfortunately, this simply is not the case for many businesses.  The integral nature of IT to business processes means that line-of-business executives have to go through IT when they want to make changes.

 In many organisations, the traditional cycle of IT delivery is far too slow. One step forwards – in the form of the business’ recognition of the need to create a new product offering – is often several steps back for the IT department.

Rather than being able to respond with agility to business needs, IT development takes place across an elongated cycle, where each change needs to be checked, re-checked and checked again. Businesses, if they are going to be agile, need to stop such lethargy.

 Focus remains on the IT department – and the focus has to be on technology because it is at the core of modern business practice. But smart executives are beginning to ask what can be done so that business change can swerve round the elongated cycle of IT delivery.

 For technology workers, such transformations might seem like a coup d’etat. But there is no need to be scared. IT workers that embrace the change and help the business move towards end-user computing will not be overthrown.

 Your role should be at the higher level, helping the businesses to understand how web interfaces – the new desktop – can be used to help executives avoid the traditional IT cycle of checking and testing.

 Employees want to be able to create instant changes to text that can help inform customers. They want to be able to manage data using their own business rules, creating drop down lists of crucial information.

 Permissions need to be granted and re-granted; workflow needs to be easily manageable, so that the business can use the web to drive agile processes. True agility comes in the form of end-user development.

 And the forward-thinking IT department will recognise it needs to help drive the end-user revolution, not hold it back.



Ditch your mobile strategy…for a multi-device one !

November 1, 2010

Blue-chip enterprises are doing it, technology providers are doing it and networking giants are preparing for it: the smart guys are already moving from a mobile strategy to a multi-device strategy.

 Long gone are the days when you would expect your development team to create a single application for a single device. Rather than converge on to one device, the world – both in the consumer and enterprise space – is going multi-device and multi-screen.

 Let’s take three recent examples (see further reading for more details). First, media company Blockbuster has announced it is using APIs to deliver movies, product reviews and real-time inventory availability to customers on various devices including phones, set top boxes and gaming consoles.

 Second, technology provider AT&T has launched U-verse Online, part of a strategy to make content available to consumers across multiple screens, including the TV, PC and mobile devices. Finally, network giant Verizon has announced plans to charge for a block of data and the allowance to share it across as many devices as the user owns.

 Such broader developments help to show that the media’s skewed attention towards individual device launches is misguided. The media would have us believe that the nature of a single device is all-important; that a new device is crucial because it provides a new platform to receive and view information.

 Apple’s iPad and new iPhone, for example, are beautifully thought-through computers. But while the launch of such devices is important, they are simply stepping stones towards a multi-device future.

 What is important – rather than the device itself – is the wider approach being taken by companies like Apple, which is demonstrating how applications and data can be accessed in a similar format on different devices.

Apple's iBooks application, which is coming to the iPhone and iPod Touch, received more than five million book downloads in the first 65 days after its iPad launch (see further reading). For its part, Amazon is also pursuing a multi-device strategy, releasing free Kindle apps for Apple devices, PCs, BlackBerrys and Google Android.

 Access to data, then, is becoming significant across different kinds of devices. And that importance will only increase. What is perhaps perplexing is that the media is not dedicating more time to the importance of the multi-device strategy.

 At the time of writing, a Google News search for “multi-device strategy” returns just 12 results. Expect that to change and quickly. After all, the smart guys are already preparing for a multi-device future.

Further reading

 http://www.marketwire.com/press-release/Blockbuster-Selects-Sonoa-Systems-to-Power-Multi-Device-Strategy-1160292.htm

 http://www.von.com/news/2010/05/at-t-debuts-u-verse-online-touts-multi-device-str.aspx

 http://www.electronista.com/articles/10/05/27/verizon.4g.may.cost.for.bandwidth.not.devices/

 http://www.techflash.com/seattle/2010/06/apple_brings_ibooks_to_iphone_stepping_up_competiton_with_amazon.html



Some screens are better than others…

October 5, 2010

What’s your most important screen? Which device – regardless of application and information – is most important?

 Your business has probably spent years developing multi-channel strategies that allow customers to interact with your firm online, offline and by phone. But now, the level of online interaction is changing and organisations need to prepare multi-screen strategies.

 Microsoft has clearly been considering such strategies and started talking about a three-screen strategy towards the end of last year (see further reading, below).

 The company’s “three screens and a cloud” vision concentrates on how software experiences will be delivered through cloud-based services across PCs, phones and TVs.

 The software giant believes the approach will lead to a programming model that helps create a new generation of applications for businesses and consumers. That belief is spot on.

 Non-believers only have to think about how providers have worked to ensure the new generation of social apps – Facebook, LinkedIn, Spotify – are accessible online through various platforms with different screens.

 As I have mentioned elsewhere, the message for developers is clear: do not make the mistake of creating an application for a single platform. In the future, successful developers will have to accommodate applications to fit more than one screen size.

 In fact, the multiplicity of variable screen sizes is such that Microsoft’s three-screen strategy might be a few screens short. While the underlying sentiment behind the theory is right, big name providers are creating new ways to present information.

 Apple’s iPad is an obvious example, a device that sits somewhere between the pocket size smart phone and the laptop computer. Other less-hyped innovations are always entering the market.

 Take Intel’s recently announced Classmate PC, a hybrid device for education that offers the capabilities of a touch screen tablet and the usability of a netbook (see further reading).

 Some developments leave me to conclude that it’s too early to state that the three screens of PCs, phones and TVs will dominate our lives. Information is being provided in a series of ways across a range of forms.

 Convergence of screens is still far from a reality. Personally, I think we will be using far more than three screens – and the way that most people use a screen will vary depending on the device, location and a range of other contexts. As I have regularly suggested, context awareness is going to be a crucial element in the ongoing development of devices.

 While some people will like the option of having a phone on their watch, other individuals will want a different type of portable device that offers the option of a high quality, rollout screen.

 The end point, of course, will be convergence. Think forward and you can begin to imagine a situation where information on various screen forms is holographically projected. For now, however, such concepts remain dreams for the next generation.

Further reading

 http://www.microsoft.com/presspass/press/2009/nov09/11-17pdc1pr.mspx

 http://www.techradar.com/news/mobile-computing/intel-launches-new-tablet-netbook-classmate-pc-hybrid-685842

