Posts Tagged ‘Self Build’

A dirty IT architecture may not be such a bad thing

April 19, 2012

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolios of applications. This simplification is driven by the need to reduce the cost of running multiple platforms, a cost that stems largely from duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’, but both needs were met by the same kind of ‘technical solution’ – for example, a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution, like a rules engine or a user security solution.

Having been in that position, it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know, though, these separations aren’t that easy to make, because the categories themselves overlap. For example, you will find rules in integration technology as well as in business process management and content management (and probably many other places too). The notion of users, roles and permissions is also often required in multiple locations.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of building a business solution from the ground up, even with these tools, is too great, and the business prefers to de-risk implementation by buying a packaged solution. That package may itself contain a number of these components, but the advantage is that they come pre-integrated to provide the business with what it needs.

For some components duplication may be okay, if a federated approach can be taken. In the case of user management, for example, it is possible to have multiple user management solutions that are then federated so that a ‘single view of users’ can be achieved. Similar approaches work for document management, but in the case of process management I believe this has been far less successful.
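
As a rough sketch of the federation idea – in TypeScript, with entirely hypothetical store and field names – each user store stays in place and a thin layer merges results at query time:

    // A minimal sketch of federated user lookup. Both stores are hypothetical;
    // in practice these might be LDAP directories or application databases.
    interface UserRecord {
      id: string;
      name: string;
      source: string; // which store the record came from
    }

    interface UserStore {
      findByEmail(email: string): Promise<UserRecord | null>;
    }

    // Each store remains authoritative for its own users; the federation layer
    // queries them all and presents the combined result as a 'single view'.
    async function federatedLookup(stores: UserStore[], email: string): Promise<UserRecord[]> {
      const results = await Promise.all(stores.map(store => store.findByEmail(email)));
      return results.filter((r): r is UserRecord => r !== null);
    }

Reconciling duplicate identities across stores – the same person appearing in two systems – is the hard part, and is where federation projects typically earn their keep.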

Another issue often faced in simplification is that tools tend to have a particular strength, and therefore weaknesses in other areas. For example, SharePoint is great at site management and content management, but weaker at creating enterprise applications. A decision therefore has to be made as to whether the tool’s weaknesses are enough of an issue to justify buying an alternative, or whether workarounds can complement the tool.

The technical task of simplification is not a simple problem in itself. And from bitter experience, the decision is less often made on technical merit and the greater good of the enterprise than on who owns the budget for the project.

Is the dream of re-use outdated?

April 12, 2012

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time, many methods of re-use have been devised, along with design patterns to drive it.

Meanwhile, business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive: there is no time to build re-usability into applications, so the time saved now is simply paid for on future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I’m going to draw an analogy from a conversation I had with an elderly relative who owns a paraffin heater. This relative has had the heater for many years and is still using it today, because it works. When I questioned the cost of paraffin compared with buying an energy-efficient electric heater that would be cheaper to run, the response was: this one works and it isn’t broken yet, so why replace it? Yet for most appliances we now live in a world where we don’t fix things, we replace them.

This gave me the idea, which I’m sure is not new, of disposable applications. Shouldn’t some applications just be developed quickly, without designing for re-use, flexibility and maintainability? With this approach, the application would be developed for maximum speed to meet requirements rather than for elegant design, in the knowledge that it will be re-developed within a short time (two to three years).

So could many applications be thrown away and re-developed from scratch? Well, in today’s world of ‘layered’ applications it may be that only the front-end screens need to be ‘disposable’, with business services and databases designed for the long term – after all, there is generally less change in those areas.
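
As a hedged sketch of that split – TypeScript, with a made-up service contract – the disposable part is a thin screen over a long-lived service, so a rewrite in two years touches only the top layer:

    // The long-lived layer: a stable, re-usable service contract.
    interface QuoteService {
      getQuote(productId: string): Promise<{ price: number; currency: string }>;
    }

    // The disposable layer: a quick screen written for speed, not elegance,
    // and expected to be thrown away when the shop front is next refreshed.
    async function renderQuoteScreen(service: QuoteService, productId: string): Promise<string> {
      const quote = await service.getQuote(productId);
      return `<h1>Your quote</h1><p>${quote.price.toFixed(2)} ${quote.currency}</p>`;
    }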

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could typically be developed as disposable applications, because the customer experience evolves and the business likes to ‘refresh the shop front’ regularly.

My experience of the insurance world is that consumer applications typically get refreshed every 18-24 months on average, so if it takes you longer than 12 months to develop your solution, it won’t be long before you are re-building it.

Looking at the average lifetime of a mobile app, it is clear that end users already see some software as disposable – using it a few times, then either uninstalling it or letting it gather dust in a forgotten corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, because it tends to evolve regularly. So is it time you revised your thinking on re-use?

AppInventor to drop out of school

February 3, 2011

Something odd is happening. While children have never been more involved in computing, fewer and fewer young people are studying technology.

Any parent of young children will be able to regale you with tales of their offspring multitasking across various devices and apps. The modern, younger generation has grown up knowing only a technology-enabled world, and is a product of that interaction.

 However, that high level of interactivity has not created a rise in interest in the academic side of IT. Just 4,065 students were awarded computing A-levels this year, compared with 4,710 this time last year – a drop of 13.7% (see further reading, below).

 The jury is out on what such developments mean for the UK: while companies continue to offshore certain technology tasks, a core of highly-skilled technicians must exist in the UK. So, how can we get kids interested in the behind-the-scenes coding that supports their multi-tasking lifestyle?

 One possibility comes in the form of Google’s App Inventor, a system that claims to enable non-coders to develop Android software. Instead of writing code, interested individuals visually design the way an app looks and use blocks to specify software behaviour.

 The plus point, at least as far as getting junior programmers on board, is that App Inventor is easy to use. Code is simply snapped together to allow basic events to take place.

 That, however, is also part of the problem. As developers become more adept, the limitations of snapping blocks together – in comparison to being able to write code – become exposed.

As Darien Graham-Smith concluded in a recent review of App Inventor for PC Pro (see further reading, below): “Anyone with the programming nous to make full use of App Inventor’s abilities will surely prefer a language that doesn’t force you to pedantically assemble every function, procedure and event out of multicoloured blocks.”

Google acknowledges App Inventor’s educational roots, paying deference to MIT’s Scratch project. But while the system is driven by an educational perspective, it remains restricted by its approach. In fact, Graham-Smith believes App Inventor could actually drive people away from programming unless the Blocks Editor improves.

 The system is, in short, a nice attempt to get people interested in the finer elements of programming. But successful apps are inherently much more complex than pushing Lego together.

Further reading:

 http://www.computerweekly.com/Articles/2010/08/19/242454/A-level-results-mark-39worrying-trend39-for-IT.htm

 http://www.pcpro.co.uk/blogs/2010/09/07/googles-app-inventor/

 http://appinventor.googlelabs.com/about/

AppInventor won’t unlock your end user development opportunity

November 29, 2010

Once again, don’t believe the hype. Google recently launched App Inventor, a system that claims to enable non-coders to develop Android software.

The principle is sound enough – instead of writing code, interested individuals visually design the way an app looks and use blocks to specify software behaviour. The open platform, meanwhile, could lead to a vast array of specialised apps from people who are traditionally viewed as non-developers (see further reading, below).

However, don’t get the party bunting out just yet. The hype might suggest Google has created end-user computing for Android but the reality is slightly more complex.

Yes, the system allows individuals to work with blocks of code. And the system should be intuitive – it has been in development for more than a year and user testing has been mainly completed in schools (see further reading).

But while the drag-and-drop system of App Inventor is reminiscent of fitting Lego blocks together, experienced reviewers believe the fit is not quite as snug as it could be.

TechCrunch writer Jason Kincaid, for example, has programming experience and attempted to put together a couple of apps. He concludes that the Google software is far from perfect and is by no means a shortcut to back-room smartphone development (see further reading).

App Inventor, then, is a neat, graphical programming tool. The concept is innovative and refreshing. It is not, however, a tool for non-programmers. Google have created another step towards end-user development but this is by no means an end-point.

Senior executives should not be swayed by the hype and should not expect non-technical employees to start creating powerful Android apps. In fact, there is a strong argument for suggesting that the focus should not just be on the creation of new apps.

For some employees, end-user development is a real possibility – and Google’s App Inventor represents another staging post. At the same time, more apps create more maintenance, especially if increasing numbers of non-programmers are really going to get their hands on code.

 Proper end-user development must consider how apps can be maintained without the need for IT to run modifications and changes. Once again, good end-user development comes down to good management.

 End-users can create apps but only if the IT department is able to support such computing easily and cost effectively.

Further reading

 http://mashable.com/2010/07/12/google-app-inventor/

http://www.nytimes.com/2010/07/12/technology/12google.html?_r=2

 http://techcrunch.com/2010/07/12/android-app-inventor-demo/



The new legacy is HTML!

November 22, 2010

The legacy of past computing decisions is one of the biggest technology challenges facing businesses. What’s more, lessons from the past are not being heeded.

Let’s start with the most famous legacy code of them all – because if you’ve encountered COBOL, you’ve encountered legacy. Invented in 1959, the procedural language became a mainstay of business computing for the next four decades.

The legacy, however, quickly turned into a significant burden. Gartner reported that 80% of the world’s business ran on COBOL in 1997, with more than 200 billion lines of code in existence and an estimated 5 billion lines of new code produced annually (see further reading, below).

That reliance came home to roost towards the end of the last century, when date-handling problems in ageing code led to the panic associated with Y2K. The story since then has been one of decline: the continued move of business online has led to a clamour for new, sleeker, internet-ready programming languages.

First developed at the start of the 1990s, HyperText Markup Language (HTML) became the predominant web development language. Its use ran alongside the development of open standards such as JavaScript and Cascading Style Sheets (CSS).

Such languages and styles helped to define the layout of the Web. But that is far from the end of the story. Online development in the era of HTML has become increasingly patchy, with more and more developers using varying styles of code.

Additional technologies, such as Silverlight and Flex, create further complexity. The result is that HTML, and its associated collection of standards and tools, is fast becoming the new legacy.

Just as in the case of COBOL, more and more lines of code are being produced. The disparate pace of online development is such that we will end up with reams of legacy HTML, JavaScript and CSS code.

Learn from history and get to grips with the problem now. Make sure you have proper documentation and standards. Select tools that are integrated with the rest of your business processes and which allow users to make the most of earlier development projects.

Think about how certain approaches – such as a mix of HTML/JavaScript and Ajax-based server technologies – will allow your business, and even your end-users, to use the same development techniques in desktop and mobile environments.

Also look to the future and take a look at HTML5, currently under development as the next major revision of the HTML standard, which includes features that previously required third-party plug-ins such as Flash. And don’t stop there: carry on to CSS3, Web Workers and WebFonts, all evolutions of current web technologies that will be mainstream tomorrow.
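
As a small, hedged illustration of one of those evolutions – TypeScript, assuming a browser environment and a hypothetical worker.js file – a Web Worker moves a long-running calculation off the UI thread, the kind of job that once needed a plug-in:

    // main.ts – hands a calculation to a background thread.
    const worker = new Worker('worker.js'); // hypothetical compiled worker file

    worker.onmessage = (event: MessageEvent) => {
      // The UI thread stays responsive while the worker does the heavy lifting.
      console.log('Result from worker:', event.data);
    };

    worker.postMessage([1, 2, 3, 4, 5]);

    // worker.ts – compiled to worker.js; runs in its own thread:
    //   self.onmessage = (event) => {
    //     const total = (event.data as number[]).reduce((a, b) => a + b, 0);
    //     (self as any).postMessage(total);
    //   };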

The end result should be the end of fragmented development and a legacy of useful web applications, rather than unusable and unidentifiable code.

Further reading:

http://www.wired.com/thisdayintech/2010/05/0528cobol-conference/



Take your IT department forward by putting end user development at the front

November 19, 2010

Here’s a wake-up call for the IT department – end-user computing will definitely become dominant; it’s just a matter of time.

 Proof comes in the form of modern business practices. Increasing numbers of executives are now saying that time to market is absolutely critical. A slow moving organisation is one that loses.

 For many firms, the ability to move quickly is underpinned by technology. The pace of change and centrality of IT to contemporary business means every organisation, whatever the sector, relies on technology to help maintain information flows and to help its employees deal with customers.

 Such reliance should be good news for the traditional technology team. But there’s a significant catch. The business wants to make changes and add products quickly. Technology, as the underpinning structure, should be set up to create speed.

 Unfortunately, this simply is not the case for many businesses.  The integral nature of IT to business processes means that line-of-business executives have to go through IT when they want to make changes.

 In many organisations, the traditional cycle of IT delivery is far too slow. One step forwards – in the form of the business’ recognition of the need to create a new product offering – is often several steps back for the IT department.

Rather than being able to respond with agility to business need, IT development takes place across an elongated cycle, where each change needs to be checked, re-checked and checked again. Businesses, if they are going to be agile, need to stop such lethargy.

Focus remains on the IT department – and it has to, because technology is at the core of modern business practice. But smart executives are beginning to ask what can be done so that business change can swerve round the elongated cycle of IT delivery.

 For technology workers, such transformations might seem like a coup d’etat. But there is no need to be scared. IT workers that embrace the change and help the business move towards end-user computing will not be overthrown.

 Your role should be at the higher level, helping the businesses to understand how web interfaces – the new desktop – can be used to help executives avoid the traditional IT cycle of checking and testing.

 Employees want to be able to create instant changes to text that can help inform customers. They want to be able to manage data using their own business rules, creating drop down lists of crucial information.

 Permissions need to be granted and re-granted; workflow needs to be easily manageable, so that the business can use the web to drive agile processes. True agility comes in the form of end-user development.
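
As a hedged sketch of what ‘end-user manageable’ might mean in practice – TypeScript, with hypothetical names throughout – the drop-down below is driven by configuration the business can edit, rather than by code IT has to redeploy:

    // The business edits this configuration (say, through an admin screen);
    // the application simply renders whatever it finds at runtime.
    interface DropDownConfig {
      field: string;
      options: string[];
      editableBy: string[]; // roles allowed to change the list
    }

    const productTypes: DropDownConfig = {
      field: 'productType',
      options: ['Home', 'Motor', 'Travel'], // business-maintained values
      editableBy: ['product-manager'],
    };

    // A change to the options changes the screen – no IT release cycle needed.
    function renderOptions(config: DropDownConfig): string {
      return config.options.map(option => `<option>${option}</option>`).join('');
    }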

 And the forward-thinking IT department will recognise it needs to help drive the end-user revolution, not hold it back.



Does CTO mean “career transition occurring”?

January 4, 2010

As IT professionals, we are not great at defining what we do and why we do it. In an attempt to show the importance of our work, we hide beneath a collection of buzz phrases and three letter acronyms.

Chief technology officer (CTO) could be seen as yet another layer of obfuscation. After all, according to online encyclopaedia Wikipedia, “there is currently no commonly-shared definition of a CTO’s responsibilities, apart from that of acting as the senior-most technologist in an organisation.”

So much for the definition; what of practice? Traditionally, CTOs oversee the technical staff involved in architecture, design and development – but times, and role definitions, are changing.

Underneath the hyped up talk of agility, value and innovation, something real and tangible is happening. Businesses are finally waking up to the need to use and re-use technology resources in an open and integrated fashion.

Service-oriented architecture (SOA) – with its focus on component re-use in new and interesting combinations – provides a way for the business to stay alert in this new era of business technology.

In short, systems, software and services are becoming more specialised. By its inherent nature, SOA demands an integration of resources through a series of layers, such as operational systems, component-based developments, composite services, business processes and, finally, through to the presentation layer.
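
As a loose illustration of that layering – TypeScript, with hypothetical component services – a composite service re-uses two operational components to serve a business process, leaving each one independently replaceable:

    // Two re-usable operational components, each with a narrow contract.
    interface CustomerService {
      getCustomer(id: string): Promise<{ name: string }>;
    }

    interface OrderService {
      getOrders(customerId: string): Promise<string[]>;
    }

    // The composite layer: combines components into a business-level service
    // without either component knowing about the other.
    async function customerSummary(
      customers: CustomerService,
      orders: OrderService,
      id: string,
    ): Promise<string> {
      const [customer, orderList] = await Promise.all([
        customers.getCustomer(id),
        orders.getOrders(id),
      ]);
      return `${customer.name} has ${orderList.length} open orders`;
    }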

And to roll with this revolution, businesses must realise that the management of architecture is no longer a simple role for a single individual. Instead of managing a group of general architects, the CTO’s technology team – working to an SOA framework – should become more and more specialised.

Firms should ensure they have architects that focus simply on integration, process, presentation, security and business intelligence. Such layered leaders should monitor standards, vendors and open source offerings, creating a roadmap for their specific area.

Leading-edge finance firms are already undertaking such a transition and new roles – such as chief process architect and chief presentation architect – are beginning to emerge.

Fail to take a similarly deep appreciation and you risk your firm being left behind. So, what does CTO really stand for? In the service-oriented age and for the architects that serve the business, ‘career transition occurring’ would seem an appropriate tag.

Further reading

http://en.wikipedia.org/wiki/Chief_technical_officer



Providing structure through model-driven development

November 10, 2009

When you think of end-user development, you might think of IT taking a back seat as the business defines the type of applications it uses. That approach is all well and good in theory, but what about practice?

While employees might have loads of great ideas about the type of tools that could help the business work more efficiently, they are unlikely to have the requisite knowledge of programming and standards.

And unless you have the right background in place, users will not be able to create the applications that can make a real difference to day-to-day operations.

At that point, you should consider a turn towards model-driven development (MDD) – a design approach that allows your technology team to assert their presence, while providing a structured guideline to help end-users gain the software they really need.

The key to MDD is ensuring the building blocks of a business problem are understood before users take action. While MDD should aim to allow the business to create applications, the approach should rely on IT specialists using programming techniques to create the underlying components.

Open and vendor-neutral, MDD – also known as model-driven architecture – is based on the Object Management Group’s (OMG’s) established standards, including the Unified Modelling Language (UML) and the Meta-Object Facility (MOF). OMG’s model-driven approach separates business logic from the underlying technology and allows the business to create platform-independent applications.

Rather than being built with general-purpose technologies such as CORBA, XML, Java or .NET, MDD applications are created in a domain-specific language dedicated to a particular business problem. The break from a reliance on a particular technical flavour means users can specify the applications they require and then work with the IT team to create the tools.
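
To make the separation concrete, here is a minimal sketch – TypeScript rather than a true OMG toolchain, with all names hypothetical – of the MDD idea: the business describes a platform-independent model, and a generator owned by IT turns it into platform-specific output:

    // The platform-independent model: business-facing, no technology detail.
    interface FieldModel {
      name: string;
      label: string;
      required: boolean;
    }

    const claimForm: FieldModel[] = [
      { name: 'policyNumber', label: 'Policy number', required: true },
      { name: 'incidentDate', label: 'Date of incident', required: true },
    ];

    // A platform-specific generator, maintained by IT. Swapping this function
    // for, say, a mobile renderer changes the platform without touching the model.
    function toHtml(fields: FieldModel[]): string {
      return fields
        .map(f => `<label>${f.label}${f.required ? ' *' : ''}</label><input name="${f.name}">`)
        .join('\n');
    }

    console.log(toHtml(claimForm));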

Such independence means underlying technology can be updated without affecting the business aspects of an application. Likewise, such platform independence means the business can generate the applications it needs without fear of a potential impact on underlying code.

So, what does the emergence of MDD – with big companies such as Microsoft backing its development – mean for the future of development? If the IT organisation creates applications in line with the business-specific demands of MDD, the answer is simple: software that can make a real difference to business operations.

 


WYSIWYG is dead – go with the flow

October 12, 2009

Since the birth of window-based user interfaces (Mac, Microsoft Windows), application designers have adopted the What You See Is What You Get (WYSIWYG) approach to creating user interfaces. Visual Basic was one of the early tools to provide a canvas onto which a screen could be drawn by simply dragging and dropping screen elements. ‘Property’ sheets allowed these controls to be specialised further – for example, changing the font, size or captions. This paradigm of development has stuck with us ever since, and this post questions whether that is right and whether it is the future.

Most corporates have started to standardise on front-end screens developed in browser technology, for the right reasons: cross-platform support, ease of distribution, zero install. As expected, tool vendors have provided good support for browser application development. But does the WYSIWYG paradigm still apply? Should you create browser screens in the same way as desktop applications?

Browser applications typically use a ‘flow layout’, whereby the screen layout changes according to the size of the browser window. This is very useful, because users may have different screen sizes or browser settings (e.g. lots of toolbars), or may even be viewing the application on a mobile device. Using a flow layout means the screen layout adapts to the user’s browser window size, automatically handling each of these differences.

Using this approach, however, means that creating a screen by dragging and dropping onto a canvas does not necessarily give you a view of the final screen layout, so you have to question whether WYSIWYG is still the right development paradigm for browser applications.

Another issue is that different browsers sometimes interpret the markup differently, causing screens to appear differently across browsers.

There is also the issue that ‘look and feel’ is separated from the screen code into a style sheet, and a screen may be presented using different style sheets. The same form can therefore look drastically different depending on the stylesheet used (some great examples of this can be seen at http://www.csszengarden.com/).

With the above in mind, is it time for a new approach – perhaps a more ‘real-time design’ approach? With such a tool, users would create screens and then run them to see how they would be rendered in different browsers, on different devices and at different screen sizes. With the proliferation of devices, a multi-channel approach is becoming core to many organisations, and in such a world screen sizes will vary greatly. A new approach to creating screens is required, because the paradigm has changed to What You See Is What You Might And Most Probably Won’t Get.



Project failures can be good news

October 5, 2009

When it comes to software development, the latest research from the Standish Group presents very little in the way of good news. Failures are up and projects are considered less successful.

Just 32% of all projects deliver on time and on budget with the required features and functions (see further reading, below). Standish estimates that 44% of software projects are late and over budget, and another 24% fail – cancelled prior to completion, or delivered and never used.

The figures do not make impressive reading for IT executives, especially at a time when the business is putting pressure on the technology department to deliver more with less.

One thing is certain: the current economic climate definitely does not help. Standish suggests the recession has helped push IT project failure rates higher and estimates that as much as 20-25% of failures during the last two years could have been caused by the economy forcing project cancellations (see further reading).

The upside is that IT departments are being persuaded, or even forced, to re-evaluate technology initiatives. Projects that might previously have stumbled towards completion are being canned as a result of the recession.

Good IT can help users work more effectively and efficiently, saving the business time and money. Bad technology is a money pit and too many IT executives end up pouring good money after bad, attempting to fix projects that do not provide a usable interface.

But it doesn’t have to be like this. While new economic realities help executives cull costly IT projects, remaining projects will still regularly fail to meet user expectations, as the Standish report confirms.

For your remaining projects, look for specialist approaches and tools that can help ensure your projects run in line with user demands. An agile development approach will help you test against those demands on an iterative basis.

edge IPK offers such a strategy: its Early Visualisation Approach (EVA) provides an agile development lifecycle that allows business analysts to focus on online and offline front-end applications.

Supported by the edgeConnect platform, which enables much faster entry points to development than traditional tools, analysts estimate EVA can reduce development cycles by as much as 85%.

With project failure rates rising and IT executives struggling to justify the cost of technology initiatives, investing in an iterative development approach could be your most successful decision of the year.

Further reading

http://www.cbronline.com/news/software_project_failures_hit_5_year_high_220609

http://www.cio.com/article/495306/Recession_Causes_Rising_IT_Project_Failure_Rates_?page=2

