Posts Tagged ‘Front end development’

The end of silo architectures

June 28, 2012

From my discussions with customers and prospects, it is clear that the final layer in their architectures is being defined by UXP (see my previous posts). So whether you have a service-oriented or a web-oriented architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that provides more agility and breaks down the closed silo architectures they previously owned.

However, solution vendors that provide “out of the box” business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by “service enabling” their solution, many still have proprietary requirements for specific application servers, databases, middleware or orchestration solutions.

However, recently I have come across two vendors, Temenos (global core banking) and CCS (a leading insurance platform), that are breaking the mould.

CCS have developed Roundcube to be a flexible, componentised solution that addresses the full lifecycle of insurance, from product definition and policy administration through to claims. Their solution is clearly layered, service enabled and uses leading 3rd party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. This approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions it may have. By using leading 3rd party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (Temenos Enterprise Framework Architecture), which allows you to use any database, application server or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.

Whilst these are both evolving solutions, they have a clear strategy and path to being more open and therefore more flexible. Both are also providing a solution that can scale from the smallest business to the largest enterprise. Their solutions will therefore blend naturally into organisations rather than dictate requirements.

Whilst packaged solutions are often enforced by business sponsors, this new breed of vendor provides the flexibility that will ensure the agility the business requires going forward. It’s starting to feel like organisations can “have their cake and eat it” if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

Future of mobile: Part 3

May 13, 2012

Today I have 3 GPS devices, 4 Cameras, 3 Video cameras, 3 movie players, 5 music players and the list goes on. All of these are in a variety of devices that I use in different places for different purposes.

Drilling down into the detail, what I actually have is a phone, a desktop home PC, a laptop, an iPod, a car stereo, in-car GPS, a TV with HD/DVD player and a digital SLR, and that’s just me, not including what the family has.

This presents a number of challenges and risks, as well as a lot of cost.

Most of us want as little duplication of cost as possible. Even though cars come with stereos, many people now plug in their MP3 players, using only the car’s speakers. Many people will also use their phone’s GPS rather than the car’s. Newer TVs have wireless access to browsing and social apps. I’m tempted by the hype of tablet computing, but have to ask myself: why? I already have all the computing options I need.

More devices mean more synchronisation issues for personal settings and personal data. While cloud-based services will resolve many of these issues, it is still early days to move everything into the cloud, as users of Megaupload found.

In 1999 I went to a tech show in Vegas, where I saw a potential solution to the problem from Sony. They were demonstrating the concept of “apps on sticks”. Basically these were memory sticks (32MB maximum at the time) with other devices, like GPS, radio and even a camera, on the stick. The idea was simple: you’d plug your GPS stick into your phone, laptop, car or any other device, rather than have that function duplicated in multiple devices. This approach would have required a lot of standardisation, and it is clearly a concept that never came to fruition.

More recently, Asus have launched their PadFone, a smartphone that comes with a tablet screen. When you need a bit more screen real estate, you simply slot your phone into the back of the screen and hey presto, you have a tablet that can use the 3G or wireless connection on your phone. Apart from being able to charge your phone, the tablet screen also integrates with the phone itself, so voice and video calls can be made and received using the tablet screen.

This concept really works for me, and I could see myself buying into the family of products: TV, car stereo, projector. Combined with the ability to have my data in the cloud, so that losing the phone is not the end of the world, it makes for a great solution. Whether the phone slots in or connects wirelessly, the ability to drive a different screen from my phone works for me as a concept. Maybe the idea could be taken even further, so that the circuitry for the device could be slotted into the phone itself?

As I’ve discussed in my previous blogs, there are many new avenues for phones, in shape, size and function. It would be difficult to predict the future with so many possibilities, but one thing is for sure: for gadget geeks like me, the phone is going to be the constant source of innovation we thrive on.

Is Apple Siri-ous about voice?

May 4, 2012

When I first saw the new iPhone ads that featured voice interaction, all I thought was: WOW, Apple have done it, they have mastered voice interaction. What appeared to be natural voice interaction is the nirvana many people have been waiting for to replace point-and-click interfaces.

Speech recognition is not new, and certainly both Microsoft and Android had speech-driven interfaces before Apple. However, it was the ability to talk naturally, without breaks and without having to use specific keywords, that seemed to set Apple apart from the competition.

Instead, we were all fooled by another Jobs skill: his “reality distortion field”. Indeed, one person went as far as to sue Apple for selling something that did not perform as advertised.

What exactly is the difference? Well, both Windows and Android recognise specific commands and actions, rather like “talking the menu”, e.g. saying “File, Open, text.doc”. The Apple promise was that you could simply talk as you would normally speak: instead of “File, Open, text.doc”, you could just say “get me text.doc”.

So why aren’t we there? The challenge is creating a dictionary that can understand the synonyms and colloquialisms people may use in conversational speech, as opposed to the very specific commands used in the menus and buttons of graphical user interfaces.

Whilst this may seem like a daunting task, I believe the first step to solving this puzzle is to reduce the problem. Rather than creating a super-dictionary for absolutely any application, dictionaries should be created for specific types of application, e.g. word processing or banking. This way, the work of creating synonyms, finding colloquialisms and linking them together is more manageable.

The next step would be to garner the help of the user community to build the dictionary, so that as words are identified, the user is asked what alternatives could be used and synonyms and colloquialisms are captured.
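To make the idea concrete, here is a minimal sketch of such a domain dictionary in Javascript; the banking commands and synonyms are invented purely for illustration.

```javascript
// Illustrative domain dictionary for a banking app: each canonical
// command maps to synonyms and colloquialisms a user might say.
const bankingDictionary = {
  showBalance: ['balance', 'how much do i have', 'what am i worth'],
  transfer: ['transfer', 'send money', 'pay'],
};

// Resolve a spoken phrase to a canonical command by matching any synonym.
function resolveCommand(phrase) {
  const spoken = phrase.toLowerCase();
  for (const [command, synonyms] of Object.entries(bankingDictionary)) {
    if (synonyms.some((s) => spoken.includes(s))) return command;
  }
  return null; // unknown phrase: ask the user, then grow the dictionary
}

console.log(resolveCommand('How much do I have in my account?')); // "showBalance"
```

An unmatched phrase returning null is exactly the point at which the user community could be asked for an alternative wording, feeding the dictionary.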

I know this is not a detailed specification, but it is an approach that Apple might use to give us what they promised – and what the world is waiting for: speech-driven user interfaces.

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time, both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolios of applications. This simplification is driven by the need to reduce the cost of running multiple platforms, a cost that arises largely through duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’ but both needs had been met by a ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution like a rules engine or user security solution.

Having been in that position it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know though, these separations aren’t that easy to make, because even some of these have overlaps. For example, you will find rules in integration technology as well as business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations also.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of building a business solution from the ground up, even using these tools, is too great, and the business prefers to de-risk delivery with a packaged implementation. This packaged solution may itself contain a number of these components, but the advantage is that they are pre-integrated to provide the business with what it needs.

For some components duplication may be okay, if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated, so that a ‘single view of users’ can be achieved. Similar approaches can be achieved for document management, but in the case of process management I believe this has been far less successful.
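As a minimal sketch of the federation idea, the snippet below merges user records from two separate stores into a single view; the store names and fields are invented for illustration.

```javascript
// Two independent user stores (invented examples): each knows something
// different about the same person.
const crmUsers = [{ email: 'ann@example.com', role: 'agent' }];
const hrUsers = [{ email: 'ann@example.com', department: 'claims' }];

// Federate the stores into a 'single view of users', merging records that
// share the same identity key (email here) instead of duplicating them.
function federate(...stores) {
  const byEmail = new Map();
  for (const store of stores) {
    for (const user of store) {
      byEmail.set(user.email, { ...byEmail.get(user.email), ...user });
    }
  }
  return [...byEmail.values()];
}

console.log(federate(crmUsers, hrUsers));
// one record combining role and department for ann@example.com
```

The same pattern generalises: each solution keeps its own user store, and the federation layer presents the merged view.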

Another issue often faced in simplification is that tools tend to have a particular strength, and therefore weaknesses in other areas. For example, SharePoint is great at site management and content management, but weaker at creating enterprise applications. Hence a decision has to be made as to whether the tool’s weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, this decision is more often than not made not on the technology or the greater good of the enterprise, but on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time, many methods of re-use have been devised, along with design patterns to drive it.

Meanwhile, business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive, because it means there is no time to build re-usability into applications, and the time saved now is simply added back onto future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I’m going to draw an analogy from a conversation I had with an elderly relative who had a paraffin heater. This relative had owned the heater for many years, and is still using it today because it works. When I questioned the cost of paraffin versus buying an energy-efficient electric heater that would be cheaper to run, the response was: this one works and it’s not broken yet, so why replace it? Yet for most appliances we are now in a world where we don’t fix things, we replace them.

This gave me the idea, which I’m sure is not new, of disposable applications. Shouldn’t some applications just be developed quickly, without designing for re-use, flexibility and maintainability? With this approach, the application would be developed with maximum speed to meet requirements rather than for elegant design, in the knowledge that it will be re-developed within a short time (2-3 years).

So can there be many applications that could be thrown away and re-developed from scratch? Well, in today’s world of ‘layered’ applications it could be that only the front-end screens need to be ‘disposable’, with business services and databases designed for the long term, since there is generally less change in those areas.

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could certainly be developed as disposable applications, because the customer experience evolves and the business likes to ‘refresh the shop front’ regularly.

My experience of the insurance world is that consumer applications typically get refreshed on average every 18-24 months, so if it takes you longer than 12 months to develop your solution it won’t be very long before you are re-building it.

Looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, because user experiences tend to evolve regularly. So is it time you revised your thinking on re-use?

Is the growth of Mobile Apps overhyped?

March 22, 2012

There are numerous statistics on the growth of mobile apps in the various stores, and also about the number of downloads. Apple claims over 500,000 apps in its store and Google claims over 450,000 (this time last year it had only 150,000). The number of apps, downloads and rate of growth is phenomenal.

Is this just a temporary fever or will this growth continue, and if so what will drive it?

I believe this growth has only just started and that there are two key trends that will drive this growth further.

Firstly, development for smartphones will get simpler. VisionMobile’s latest survey profiles over a hundred development tools for creating mobile apps. My guess is that this is a very conservative estimate of the actual number of tools out there.

A common goal for many of these providers is to make programming simpler so that more people can code. For some, this goes further, to the extent that tools are being created for children to develop apps at school. So more developers will mean more apps!

Secondly, and for me this is the more exciting aspect, phones will do more, which means that apps will get more innovative.

Today there is already a wide variety of apps, some of which use features of the phone itself like the camera, GPS or microphone. Coming down the line are many more features that will get embedded into phones, for example the ability to detect a user’s emotions or to monitor a user’s health. Such features will drive yet more applications and innovations, from personal healthcare to fraud detection.

Apart from new features, phones will start to interact with other devices, such as your TV. At a simple level, your smartphone can already be used as a remote control for your TV or to join in with live TV quiz shows. Phones are already interacting with cars, and this integration will inevitably go further, so that your engine management system feeds your phone with data that an app can use.

Recent surveys from recruitment agencies highlight the growing demand for mobile developers, and more interestingly the re-skilling of developers to position themselves for this growth.

Exciting times are ahead for developers and entrepreneurs who will show that Angry Birds isn’t the only way to make big money in mobile.

Mobile Apps: When to go native

March 15, 2012

Let me say from the outset, that there is no right answer for everybody. The battle between cross-platform solutions and native mobile applications is going to continue for years to come; I know I have blogged about this before, and probably will again.

For many corporate applications, native code offers the marketing group richer customer experiences, the business the chance to innovate solutions using device-specific solutions, and IT some new development tools.

However, if an organisation has to support the widest range of phones possible, the development of native apps becomes cumbersome, since you then need to write apps for each of the major mobile platforms available.

Part of this decision depends on whether you decide to support older phones, i.e. non-smartphones. For non-smartphone support you’ll need to build in support for features from SMS text services to basic text browsers.

Typically this is aimed at operating in developing countries. In developed countries like the UK, the growth of smartphones means that there is now a critical mass of users crowding out lower-featured handsets.

If you decide to target smartphones, then you still have a choice. You can either:

  1. Build for each platform, using its own development tools
  2. Use a cross platform mobile development solution, or
  3. Write your app as a browser solution.

So how does an organisation decide which way to go?

I found this useful little questionnaire developed by InfoTech. It takes you through a set of questions about your needs, and then suggests the best way forward between a native solution and a web-based solution.

As a quick guide to review a specific tactical requirement, I thought it was pretty good and asked very pertinent questions. Obviously this is something that an IT department could expand or specialise for their own needs, and so provides a useful structured approach to making impartial decisions without any emotional bias.

Where support for multiple platforms is crucial, a more difficult decision will be whether to use a cross-platform mobile development solution or to go for a pure web (and possibly HTML5) solution.

I’ll discuss this issue in a future blog, but for the time being, check out the questionnaire to start thinking about your mobile approach.

HTML 5 makes the browser smarter

January 26, 2012

The unsung hero of the web has always been Javascript, without which the standards-based web would be completely static. Javascript enables functionality to be executed in the browser, and has been used to create all sorts of effects otherwise not possible with HTML alone.

In the early days, Javascript implementations weren’t entirely standard, requiring developers to write variants for different browsers; this isn’t really an issue any more.

For applications, developers will either use libraries or develop their own validation routines. This Javascript code adds significantly to the amount of code downloaded.

With HTML5, developers will need to write less Javascript, as the browser provides features to do things for itself rather than rely on extra scripting.

Validation is the main area of improvement. HTML5 now provides a number of new validation features, such as mandatory checking, type checking, and range and field-length validation. The validation is done within the browser, and developers can decide how to process errors.

Obviously validation has to be repeated on the server for security, to ensure that data hasn’t been hacked in the browser or in transmission. This then means that validation has to be maintained in two places and kept in sync.

HTML5 also provides a number of new input field types, such as tel, email, color and datetime. These empower the browser to display a date picker or a colour chooser, for example. More importantly for mobile applications, they allow the browser to show an appropriate keyboard layout, e.g. a numeric layout for tel and an alphabetic keyboard for the email type.

There are also a number of new attributes, such as autocomplete, placeholder and pattern, which previously required Javascript and which will prove very useful.
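Pulling these features together, a form along the following lines needs no validation Javascript at all in an HTML5 browser (the field names and pattern are invented for illustration):

```html
<!-- The browser enforces required, type, pattern and length checks itself. -->
<form>
  <input type="email" name="email" required placeholder="you@example.com">
  <input type="tel" name="phone" pattern="[0-9]{11}" maxlength="11">
  <input type="date" name="renewalDate">
  <input type="color" name="themeColour">
  <button type="submit">Send</button>
</form>
```

On submission, an HTML5 browser blocks the post and flags the offending fields; older browsers simply ignore the new attributes, which is where backwards-compatibility scripting comes in.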

There will be some organisations that will not want the browser to affect their carefully designed user experience; for these people the answer is simple, just don’t use the new features.

For the rest, you will enjoy writing less Javascript for HTML5 browsers, though of course you will still need backwards compatibility for non-HTML5 browsers, which will continue to rely on Javascript.

Using Polyfill to cover up the cracks in HTML5

October 23, 2011

Long gone are the days when Internet Explorer had 95% of the browser market. We have lived in a multi-browser world since the start of the web. Whilst this has its plus points, it also has its downsides – none more so than ensuring backwards compatibility. Using HTML5 today is not simply a case of asking whether the browser supports it or not, but which aspects of the huge specification it supports, and to what extent. A good site for seeing the various levels of support across browser releases, for the different areas of the HTML5 specification, is CanIUse.com.

The W3C’s answer for developers creating solutions with HTML5 is that the new features of the spec should “gracefully degrade” when used in older browsers. Essentially this means the new markup or API is ignored and doesn’t cause the page to crash. Developers must then test for, and develop, backwards compatibility, which can be an onerous task. However, help is at hand: with libraries like Modernizr you can detect which features of HTML5 the browser supports.

Once you know that the browser doesn’t support an HTML5 feature you have used, you can write or use a 3rd party “polyfill”. In HTML, a polyfill is essentially code that provides alternative behaviour to simulate an HTML5 feature in a browser that does not support it. There are lots of sites providing polyfills for different parts of the HTML5 spec; a pretty good one can be found here, listing libraries that cover almost all parts of the specification.
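The pattern every polyfill follows is detect, then patch. As a minimal sketch, the snippet below uses String.prototype.trim purely as a stand-in for any missing browser feature:

```javascript
// Feature-detect first, and patch only when the feature is missing,
// so browsers with native support are left untouched.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    // Regular-expression fallback with the same behaviour as native trim.
    return this.replace(/^\s+|\s+$/g, '');
  };
}

// Callers now get identical behaviour whether trim is native or polyfilled.
console.log('  polyfill  '.trim()); // "polyfill"
```

Real HTML5 polyfills, say for localStorage or the placeholder attribute, are more involved, but they all hinge on this detect-then-patch shape.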

For me, a big concern is that I’ve not yet been able to find a single provider that offers polyfills for the whole of HTML5, or even the majority of the specification. This could mean having to use several different libraries, which may or may not be compatible with each other. Another big concern is that each polyfill provides a varying level of browser backwards compatibility, i.e. some will support back to IE 6 and some will not.

With users moving more of their browsing to smartphones and tablets, which typically have the latest browser technology supporting HTML5, backwards compatibility may become less of an issue. However, it will be several years before the HTML5 spec is complete, and even then new specs are being created all the time within the W3C. So, far from being a temporary fix, the use of polyfills will become standard practice in web development, unless of course you take the brave stance of saying your application is only supported on HTML5 browsers.

However, this does raise another question: if you can simulate HTML5 behaviour, do you need to use HTML5 at all to create richer applications? The answer is quite possibly not, but native HTML5 support will certainly improve your user experience and make development of your rich internet applications simpler.

HTML5: the proprietary standard

October 16, 2011

The good thing about standards is that they are uniform across different vendor implementations. Well, that at least is the primary goal. So how does a vendor make a standard proprietary?

Well, it’s quite easy really: you provide extensions to the standard for features that are not yet implemented in it. Vendors wouldn’t be that unscrupulous, would they? For example, would they create application servers following standards but add their own extensions to “hook you in”, sorry, I mean to add value beyond what the standards provide ;o)

I’m sure Microsoft’s announcement at Build to allow developers to create Windows 8 Metro applications using HTML5 and Javascript took many Microsoft developers by surprise. What is Microsoft’s game plan with this?

Optimists will cry that it opens Metro development out to the wider base of web developers, rather than just the closed Microsoft community. Cynics will argue that it is an evil ploy for Microsoft to play the open card whilst actually hooking you into their proprietary OS. In the cynics’ corner, a good example is Microsoft’s defiant stance on Direct3D versus the open standard alternative, OpenGL. This has led to Google developing ANGLE, which effectively allows OpenGL calls to be translated into Direct3D ones so that the same programs can run on Microsoft platforms.

Whatever it is, developers aiming for cross-platform conformance will need to stay sharp to ensure that proprietary extensions do not make their applications incompatible across environments.

Adobe’s recent donation of CSS Shaders shows a more charitable approach, whereby extensions are donated back to the standards bodies to make the “value added” features available to every platform. This is largely how standards evolve, with independent committees validating vendor contributions.

So what is Microsoft’s game? It’s too early to say whether there is an altruistic angle to their support for HTML5 and JS, but history has shown us that the empire is not afraid to strike back. Look at their collaboration with IBM on OS/2, which ended with Microsoft leaving IBM in the lurch by launching Windows NT. A similar approach occurred not long after with Sybase and SQL Server.

I may be a cynic, but having been a Windows developer from Windows 1.0 to Windows NT, and having followed a road of promises and U-turns, has made me that way when it comes to Microsoft. It’s great to see increasing support for HTML5, but I am always a little concerned about the motivations of the Redmond camp. However, perhaps I myself need to be “open” to a different Microsoft, one that is embracing standards even though it may cannibalise its own Silverlight technology.