Posts Tagged ‘SOA’

Vertical User Experience Platform

July 5, 2012

Whilst discussing with a customer what a UXP is and who the key players are, I was asked an interesting question: “is there a need for an industry-specific (banking, retail, government…) UXP?”.

My immediate reaction was that the technologies in a UXP are generic, horizontal solutions that should be agnostic to the industry they are implemented in. The fact that they are specialised solutions but not industry-specific was, to me, a key advantage. So why would you want a content management solution or collaboration tool that was specific to banking or retail?

The response was interesting: for many smaller companies the complexity of managing their web presence is huge. Even if they buy into a single-vendor approach, for example using Microsoft SharePoint, they still have a huge task to set up the individual components (content management, collaboration, social tools and apps), and this is only made harder by the need to support an increasing array of devices (phone, tablet, TV etc.).

It seems there is a need for an offering that provides an integrated, full UXP that can be set up easily and quickly without the need for an army of developers. Compromises on absolute flexibility are acceptable provided a rich set of templates (or the ability to create custom templates) is available, with the templates handling device support automatically. Further, the UXP might offer vertical-specific content feeds out of the box.

As in my previous blog “The End of Silo Architectures”, using a UXP front-end technology to create industry-specific apps is a great idea. Such a solution could not only provide the business functionality (e.g. internet banking, insurance quotes/claims, stock trading) but also handle the technical issues of cross-device and browser support, security and performance.

So whilst I can understand the requirement and the obvious benefit, the idea of a vertical UXP to me seems like providing a vertical-specific CRM or accounting package. The real answer is that it makes sense to provide vertical apps and use generic content, collaboration and social tools from a UXP. Ideally the generic components are integrated and have easy-to-configure templates.

As I have highlighted before, though, the UXP is complex not just from a technology perspective but also from the perspective of skills, processes and standards. The first step for any organisation must be to create a strategy for UXP: audit what you currently have, document what you need (taking into consideration current trends like social, gamification and mobile) and then decide how you move forward.

Unfortunately this area currently seems ill-served by the consultancy companies, so it may just be up to you to roll your own strategy.

The end of silo architectures

June 28, 2012

From my discussions with customers and prospects it is clear that the final layer in their architectures is being defined by the UXP (see my previous posts). Whether they have a service-oriented or web-oriented architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that provides more agility and breaks down the closed silo architectures they previously owned.

However, solution vendors that provide “out of the box” business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by “service enabling” their solution, many still have proprietary dependencies on specific application servers, databases, middleware or orchestration solutions.

However, recently I have come across two vendors, Temenos (global core banking) and CCS (a leading insurance platform), that are breaking the mould.

CCS have developed Roundcube to be a flexible, componentised solution addressing the full lifecycle of insurance, from product definition and policy administration to claims. Their solution is clearly layered, service-enabled and uses leading third-party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. This approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions. By using leading third-party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (Temenos Enterprise Framework Architecture), which allows you to use any database, application server or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.
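
To make the OData point concrete, here is a minimal sketch of how a front end might query an OData-enabled service over plain HTTP. The endpoint, entity and field names are hypothetical, not Temenos’ actual API.

    // Hypothetical OData endpoint; real entity names depend on the vendor's model.
    const base = "https://bank.example.com/odata";

    // OData exposes entities over plain HTTP, so any front end can query them
    // with standard system query options such as $filter, $select and $top.
    async function recentAccounts(customerId: string) {
      const url = `${base}/Accounts?` + new URLSearchParams({
        $filter: `CustomerId eq '${customerId}'`, // server-side filtering
        $select: "AccountId,Balance,Currency",    // only the fields the screen needs
        $top: "10",
        $format: "json",
      });
      const response = await fetch(url);
      if (!response.ok) throw new Error(`OData request failed: ${response.status}`);
      return response.json();
    }

Because the protocol is just HTTP plus conventions, the same service can be consumed by a web front end, a mobile app or another back-end system, which is what makes it useful for opening up a packaged solution.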

Whilst these are both evolving solutions, they have a clear strategy and path to becoming more open and therefore more flexible. Both also provide a solution that can be scaled from the smallest business to the largest enterprise. Their solutions will therefore blend more naturally into organisations rather than dictate requirements.

Whilst packaged solutions are often imposed by business sponsors, this new breed of vendor provides the flexibility to accommodate the changes the business will require going forward. It’s starting to feel like organisations can “have their cake and eat it” if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolio of applications. This simplification is driven by the need to reduce the costs of running multiple platforms, costs that arise largely through duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’, but both needs were met by the same kind of ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution, like a rules engine or user security solution.

Having been in that position, it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know, though, these separations aren’t that easy to make, because even these categories overlap. For example, you will find rules in integration technology as well as in business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations also.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of building a business solution from the ground up, even with these tools, is too great, and the business prefers to de-risk implementation with a packaged solution. The package itself may contain a number of these components, but the advantage is that they are pre-integrated to provide the business with what it needs.

For some components duplication may be okay, if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated, so a ‘single view of users’ can be achieved. Similar approaches work for document management, but in the case of process management I believe this has been far less successful.
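
As a sketch of that federation idea (the store interface and merge rule below are illustrative, not any particular product’s API): each user management solution keeps its own records, and a thin layer merges them into a single view keyed on a shared identifier.

    // Illustrative only: independent user stores federated into one view.
    interface UserRecord {
      id: string;              // shared identifier, e.g. an email address
      roles: string[];
      source: string;          // which store the record came from
    }

    interface UserStore {
      lookup(id: string): Promise<UserRecord | null>;
    }

    // The federation layer queries every store and merges what it finds,
    // so duplication survives underneath but callers see one user.
    async function singleViewOfUser(stores: UserStore[], id: string): Promise<UserRecord | null> {
      const records = (await Promise.all(stores.map(s => s.lookup(id))))
        .filter((r): r is UserRecord => r !== null);
      if (records.length === 0) return null;
      return {
        id,
        roles: [...new Set(records.flatMap(r => r.roles))], // union of roles
        source: records.map(r => r.source).join("+"),
      };
    }

The design choice is that no store has to change: the duplication is tolerated and hidden behind the merge, which is why this works better for users and documents than for long-running processes.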

Another issue often faced in simplification is that each tool has particular strengths, and therefore weaknesses in other areas. For example, SharePoint is great at site management and content management, but poorer at creating enterprise applications. Hence a decision has to be made as to whether the tool’s weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, the decision is less often made on the technology or for the greater good of the enterprise, and more often on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time many methods of re-use have been devised, along with design patterns to drive re-use.

Meanwhile business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive: there is no time to build re-usability into applications, and the time saved now simply adds time to future projects.

Could we use that pressure to take a different approach? One that focuses on productivity and time to market, rather than the design and flexibility typically sought by IT?

I’m going to draw an analogy from a conversation I had with an elderly relative who has a paraffin heater. This relative has had the heater for many years and is still using it today because it works. When I questioned the cost of paraffin against buying an energy-efficient electric heater that was cheaper to run, the response was: this one works and it’s not broken yet, so why replace it? Yet for most appliances we are now in a world where we don’t fix things, we replace them.

This gave me the idea, which I’m sure is not new, of disposable applications. Shouldn’t some applications just be developed quickly, without designing for re-use, flexibility and maintainability? With this approach, the application would be developed with maximum speed to meet requirements rather than with elegant design, in the knowledge that it will be re-developed within a short time (2-3 years).

So could there be many applications that are thrown away and re-developed from scratch? Well, in today’s world of ‘layered’ applications it could be that only the front-end screens need to be ‘disposable’, with business services and databases designed for the long term, since there is generally less change in those areas.
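
A minimal sketch of that split, with a hypothetical service contract and screen: the interface below is the long-term asset, while the rendering code can be rewritten wholesale without touching it.

    // Long-lived contract: designed carefully, changes rarely.
    interface QuoteService {
      getQuote(productId: string, customerId: string): Promise<{ premium: number; currency: string }>;
    }

    // Disposable screen: built quickly against the contract and thrown away
    // at the next refresh. No re-use is attempted here, and that is the point.
    async function renderQuoteScreen(service: QuoteService, productId: string, customerId: string) {
      const quote = await service.getQuote(productId, customerId);
      document.body.innerHTML = `<p>Your premium: ${quote.premium} ${quote.currency}</p>`;
    }

The investment in design goes into the interface and the services behind it; the screen is deliberately cheap.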

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could certainly be developed as disposable applications, because the customer experience evolves and the business likes to ‘refresh the shop front’ regularly.

My experience of the insurance world is that consumer applications typically get refreshed on average every 18-24 months, so if it takes you longer than 12 months to develop your solution it won’t be very long before you are re-building it.

When looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a forgotten corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, because user experiences tend to evolve regularly. So is it time you revised your thinking on re-use?

The new legacy is HTML!

November 22, 2010

The legacy of past computing decisions is one of the biggest technology challenges facing businesses. What’s more, lessons from the past are not being heeded.

Let’s start with the most famous legacy code of them all – because if you’ve encountered COBOL, you’ve encountered legacy. Invented in 1959, the procedural language became a mainstay of business computing for the next four decades.

The legacy, however, quickly turned into a significant burden. Gartner reported that 80% of the world’s business ran on COBOL in 1997, with more than 200 billion lines of code in existence and an estimated 5 billion lines of new code produced annually (see further reading, below).

That reliance came home to roost towards the end of the last century, when the language’s date-handling shortcuts led to the panic associated with Y2K. The story since then has been one of decline. The continued move of business online has led to a clamour for new, sleeker and internet-ready programming languages.

First specified in 1990, HyperText Markup Language (HTML) became the predominant web development language. Its use ran alongside the development of open standards such as JavaScript and Cascading Style Sheets (CSS).

Such languages and styles helped to define the layout of the Web. But that is far from the end of the story. Online development in the era of HTML has become increasingly patchy, with more and more developers using varying styles of code.

Additional online tools, such as Silverlight and Flex, create further complexity. The result is that HTML and its associated collection of standards and tools are fast becoming the new legacy.

Just as in the case of COBOL, more and more lines of code are being produced. The disparate pace of online development is such that we will end up with reams of legacy HTML, JavaScript and CSS code.

Learn from history and get to grips with the problem now. Make sure you have proper documentation and standards. Select tools that are integrated with the rest of your business processes and which allow users to make the most of earlier development projects.

Think about how certain approaches – such as a mix of HTML/JavaScript and Ajax-based server technologies – will allow your business, and even your end-users, to use the same development techniques in desktop and mobile environments.
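
As a rough illustration (the URL and response shape are invented), the same Ajax call works unchanged in a desktop or mobile browser; only the styling around it needs to differ.

    // One code path for desktop and mobile: the browser differences live in
    // CSS, not in the data access. URL and response shape are hypothetical.
    function loadOrders(onDone: (orders: unknown) => void) {
      const xhr = new XMLHttpRequest();         // the "Ajax" in Ajax
      xhr.open("GET", "/api/orders", true);     // same endpoint for every device
      xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
          onDone(JSON.parse(xhr.responseText)); // server returns JSON
        }
      };
      xhr.send();
    }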

Also look to the future and take a look at HTML5, currently under development as the next major revision of the HTML standard, which includes features that previously required third-party plug-ins such as Flash. Don’t stop there: carry on with CSS3, Web Workers and WebFonts, all evolutions of current web technologies that will be mainstream tomorrow.
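
For example, here is a minimal Web Worker sketch, with hypothetical file names: heavy computation moves off the user-interface thread, something that previously tended to require a plug-in or a server round trip.

    // main.ts: hand heavy work to a background thread so the page stays responsive.
    const worker = new Worker("worker.js");
    worker.onmessage = (event: MessageEvent) => {
      console.log("result from worker:", event.data);
    };
    worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

    // worker.js (a separate file, shown here as comments): runs off the UI thread.
    // onmessage = (event) => {
    //   const sum = event.data.numbers.reduce((a, b) => a + b, 0);
    //   postMessage(sum);
    // };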

The end result should be the end of fragmented development and a legacy of useful web applications, rather than unusable and unidentifiable code.
Further reading:

http://www.wired.com/thisdayintech/2010/05/0528cobol-conference/



Gestures to Help the Business

June 3, 2010

Business IT is now all about the consumer. The CIO faces a series of demands from employees keen to use high-end consumer hardware and software in the business.

Such demands present significant challenges, such as technology integration, fears over openness and potential security risks. For the executives facing these challenges, there is good news and bad news.

 The bad news is that consumers – particularly those entering the business – are only likely to become more demanding. With converged technology in their pockets and detailed personas online, blue chip firms will find it difficult to lay down the law for tech-savvy users.

However, the good news is that the next wave of consumer technology is also likely to produce significant benefits for the business. Take Project Natal, Microsoft’s controller-free entertainment system for the Xbox 360 console, which should be released by the end of the year.

Motion-controlled technology has been in vogue for gamers since the launch of Nintendo’s Wii in late 2006. The system, which allows the user to control in-game characters wirelessly, has been a huge commercial and technical success.

Natal is likely to take such developments to the next level, giving Xbox 360 users the opportunity to play without a game controller – and to interact through natural gestures, such as speaking, waving and pushing.

 Maybe that sounds a bit too far-fetched, a bit too much like a scene from The Matrix? Think again – early demonstrations show how the technology could be used in an interactive gaming environment.

But that’s really just the beginning. With Microsoft pulling the strings behind the technology, Natal is likely to provide a giant step towards augmented business reality – where in-depth information can be added and layered on top of a real physical environment.

 The future of the desktop, for example, will be interactive. Employees will be able to use gestures to bring up video conferencing conversations and touch items on the desktop to bring up knowledge and data.

Employees in the field, on the other hand, will be able to scan engineering parts using their mobile devices. Information sent back to the head office will allow workers to call in specific parts and rectify faults.

The implications for specific occupations are almost bewildering. Surgeons will be able to use Natal-like interactions to gain background information on ill patients; teachers will be able to scan artefacts and provide in-depth historical knowledge to students.

 The end result is more information that can be used to help serve customers better. And that is surely the most important benefit of next-generation consumerisation.

 Further reading

 http://www.xbox.com/en-US/live/projectnatal/



Act Intelligently To Ensure E-Forms Are Applications

May 26, 2010

 This might be the information age but we are still obsessed with paper. As much as 62% of important documents are still archived as paper, according to content management association AIIM.

 Worse still, we often fail to use electronic forms of delivery when our fascination with the printed word is broken. Rather than simplifying business processes, electronic formatting too often adds a further layer of complication.

Electronic forms – or e-forms – are digital versions of paper forms that should help eliminate the reliance on paper. E-forms should also help firms encode data for multiple purposes, so that electronic data can be used in various ways to improve information processing.
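
As a sketch of ‘encode once, use many times’ (the form and field names below are invented): describe the form as data, and the same definition can drive the on-screen form, validation and downstream processing.

    // One definition drives capture, validation and downstream processing.
    interface Field {
      name: string;
      label: string;
      type: "text" | "date" | "number";
      required: boolean;
    }

    const claimForm: Field[] = [
      { name: "policyNumber", label: "Policy number",    type: "text",   required: true },
      { name: "incidentDate", label: "Date of incident", type: "date",   required: true },
      { name: "amount",       label: "Amount claimed",   type: "number", required: false },
    ];

    // Re-use 1: validate submitted data against the same definition.
    function validate(form: Field[], data: Record<string, string>): string[] {
      return form
        .filter(f => f.required && !data[f.name])
        .map(f => `${f.label} is required`);
    }

    // Re-use 2: the same definition could equally drive HTML rendering or
    // archiving, so the data is captured once and never re-keyed from paper.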

But rather than implementing a simple and disciplined approach to electronic form-filling, too many companies have multiple versions of ‘the truth’ stored in many different formats, from paper to spreadsheets to online forms.

The simple message to technology leaders facing this morass of information is: stop and think about the way your firm processes data. Analyst firm Gartner has produced a five-stage maturity model to help businesses bridge the paper-to-digital divide.

 The model provides a way for firms to deliver a return on investment as they move from paper-based data gathering to an optimised management process at the final level.

 However, there is some worrying news. Just 20% of large enterprises will have reached the fifth level of Gartner’s maturity model for e-forms by the end of 2010, according to the analyst.

More attention, then, needs to be paid to e-forms. Even more critically, attention must be focused on how information is collected as part of the end-user experience.

 Gartner’s fifth stage of maturity suggests that the e-form should become the single graphical user interface. Such an approach optimises data gathering, database management and customer engagement. To quote Gartner: “It’s a form, but it’s also a rich application”.

Using rich internet applications (RIAs) – fully featured software that runs in a browser – should allow your business to gather relevant data and complete transactions more quickly. The whole user experience will be centred in one place, without the need for individuals to complete extraneous e-forms.

 Such speed and convenience will mean your business can move towards an optimised management process. But becoming one of the firms that has a mature approach to e-forms will mean you need to act intelligently.



Voice Interaction: The Next User Interface?

April 29, 2010

Let’s start with the obvious: voice interaction – despite the recent developments discussed below – is nothing new. It could, however, be the next user interface.

Software that can convert spoken words into written text has been available since the early 1980s, with modern commercially available systems claiming accuracy upwards of 98% (see further reading, below). Such results are not bad, but not great either.

In almost 30 years of use and refined development, speech recognition still can’t match the recognition levels of human beings. For the most part, such issues mean most people are still confined to the traditional input methods of keyboard and mouse.

But lurking on the horizon is another upheaval and a potential boon for speech recognition. Google’s recently announced Nexus One handset includes voice recognition technology that allows the individual to control the device (see further reading).

The intuitive system – which learns from individual queries – allows the user to interact with a range of services, from composing an email in Gmail to navigating the world via Google Maps.

The search giant isn’t alone in developing more intuitive voice services. Microsoft’s Windows Vista operating system uses voice control technology to let the user control the system, and the technology has been refined further in the recently released Windows 7 platform.

Such developments, however, have not been without issues. A famously unsuccessful demonstration of Vista voice recognition software in 2006 led to a simple note of “Dear Mom” being translated as “Dear Aunt, let’s set so double the killer delete select all” (see further reading).

Reuters reported that Microsoft chief executive Steve Ballmer blamed the failed speech recognition product demonstration on “a little bit of echo” in the room, which confused the speech-to-text system.

For users of voice technology, such confusion has been a common concern. But do the refinements in Windows 7, and the progress made by Google, show that we are actually getting to a point where voice recognition is usable?

Voice controlled car stereo systems – which have often suffered due to background noise – are now viewed as increasingly reliable. And early feedback from Nexus One users suggests the voice recognition technology is “amazing” (see further reading).

After three decades, then, we might finally have reached the tipping point for voice control. But as the speech interface becomes commonplace, one final word of warning: be careful where you talk.

Straining ears could pick up confidential information from spoken dictation. Worse still, confused members of the public might question your sanity! So, be careful as you embark on the road to a spoken revolution.

Further reading

http://www.springerlink.com/content/6705248320770312/

http://news.bbc.co.uk/1/hi/technology/8443256.stm

http://discuss.gdgt.com/htc/google/nexus-one/general/Voice-recognition-is-AMAZING/



Multi-Interaction Interfaces

April 7, 2010

What’s your favourite interface for communication? Maybe it’s speech, gesture, or touch, or maybe – when you’re working online – it’s the humble keyboard?

 If it’s the latter interface, you’re not alone – most people still choose to communicate with a system through the traditional input means of keyboard and mouse. But modes of interaction are changing.

 The reasons for such change are twofold. First, alternative forms of communication – such as speech, pen and body movements – are becoming viable methods for controlling devices. Second, the up-and-coming cadre of online youngsters demand multiple modes of interaction.

 Such multiplicity might seem strange to Generation X – usually people born between 1960 and 1980 – who spend most of their time communicating through email, phone or SMS text messaging.

 When they’re using one particular platform, individuals from Generation X don’t like to be interrupted by a message on another system; such multiplicity creates a sense of panic rather than satisfaction. The opposite is true for Generation Y.

 New millennials are used to working with a range of interfaces at the same time. While logged on to Facebook, youngsters will be undertaking simultaneous conversations through instant messaging (IM) and video communication through Skype.

 But what do such changes mean for business? More than just logging on through the keyboard, Generation Y individuals are looking to maintain multiple sessions with companies on a range of platforms.

A consumer applying for car insurance through one particular portal will be contrasting quotes through a comparison site, talking to a call centre agent through a headset and analysing the opinions of friends through IM.

 Businesses need to be ready to offer their customers multiple means for interaction. More to the point, they also need to monitor a range of collaboration channels – because that is what their Generation Y clients will be doing.

 The fast-changing nature of technology means the modes for interaction will only continue to evolve. Research on human-computer interaction means individuals will be able to input information through diverse formats, increasing usability and access to digital media.

 Such change all means your business will need to be ready for a new era of multi-interaction interfaces. Because if you’re not ready, your new customers certainly will be.

Context-Aware Computing

March 24, 2010

2009 was the year of cloud computing, but what will dominate business technology in 2010?

 Social media will continue to be pushed, as will related areas like customer engagement and personalisation. But there’s always an area that develops quickly and catches the media unaware.

 I’m going to take an early stab at a fast-emerging area known as context-aware computing (CAC). In its broadest sense, CAC deals with the concept that technology can sense, and then react to, the environment.

The area has received significant research interest since the mid-1990s but is beginning to gain traction as a potentially significant area for business computing. Gartner, often a barometer for hype in technology, has already started looking at CAC.

 In September 2009, the analyst stated that businesses could use CAC to target prospects, increase intimacy and enhance collaboration. Gartner defines context-aware computing as using information about the user to improve the quality of an interaction.

 The analyst expects CAC to develop quickly. The typical blue-chip company will manage between two and 10 business relationships with context providers by 2012, and context will be as influential in mobile consumer services and relationships as search engines are to the web by 2015.

The figures sound impressive, but what do they actually mean? Rather than hype, what companies really need are clear implications. Cloud computing, for example, is expected to have a huge impact on UK plc, but the majority of firms are still hanging back and waiting to see how the concept might be realised in business.

 If implications are not clear, CAC could quickly become the latest technology that is hyped before its problem-solving potential is properly defined.  What we don’t need is yet another technology looking for a problem; what we do need is an idea of how CAC might be exploited.

Gartner suggests emerging context-enriched services will use location, presence and social attributes to anticipate an end user’s immediate needs, offering more sophisticated and usable functions.

 Such definitions make CAC seem like the next generation of location-based services. I would hope context services will offer much more than that, helping firms to collate client information – such as areas of specialism and customer preferences – to create a much more personalised service.
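
A small sketch of that idea, with invented attributes and rules: a context-enriched service combines location, presence and stored preferences to decide how a single piece of content is delivered.

    // Hypothetical context attributes a service might hold for a user.
    interface Context {
      location: "home" | "office" | "travelling";
      presence: "available" | "busy";
      preferredChannel: "email" | "sms" | "call";
    }

    // The same request produces a different interaction depending on context:
    // the content stays the same, but context decides how and when it arrives.
    function routeOffer(offer: string, ctx: Context): string {
      if (ctx.presence === "busy") {
        return `queue ${offer} for later via ${ctx.preferredChannel}`;
      }
      if (ctx.location === "travelling") {
        return `send ${offer} by sms now`; // short-form suits mobile use
      }
      return `send ${offer} by ${ctx.preferredChannel} now`;
    }

    // Example: routeOffer("renewal discount",
    //   { location: "office", presence: "available", preferredChannel: "email" })
    //   -> "send renewal discount by email now"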

 What is sure in this fast-developing area is that context, not just content, will be king during the next decade. And you would be well advised to think about how customer information can be used to help serve your clients better.

Further reading

 http://www.gartner.com/it/page.jsp?id=1190313

