Posts Tagged ‘Software Development’

Whilst discussing with a customer what a UXP is and who the key players are, I was asked an interesting question: "is there a need for an industry-specific (banking, retail, government…) UXP?"

My immediate reaction was that the technologies in a UXP are generic, horizontal solutions that should be agnostic to the industry in which they are implemented. To me, the fact that they are specialised yet not industry-specific is a key advantage. So why would you want a content management solution or collaboration tool that was specific to banking or retail?

The response was interesting: for many smaller companies the complexity of managing their web presence is huge. Even if they buy into a single-vendor approach, for example using Microsoft SharePoint, they still have a huge task to set up the individual components (content management, collaboration, social tools and apps), and this is only made harder by the need to support an increasing array of devices (phone, tablet, TV etc.).

It seems there is a need for an offering that provides an integrated, full UXP that can be set up easily and quickly without the need for an army of developers. Compromises on absolute flexibility are acceptable provided a rich set of templates (or the ability to create custom templates) is supplied, such that the templates handle device support automatically. Further, the UXP might offer vertical-specific content feeds out of the box.

As in my previous blog, "The End of Silo Architectures", using a UXP front-end technology to create industry-specific apps is a great idea. Such a solution could not only provide the business functionality (e.g. internet banking, insurance quotes/claims, stock trading) but also address the technical issues of cross-device and browser support, security and performance.

So whilst I can understand the requirement and the obvious benefit, the idea of a vertical UXP to me seems like providing a vertical-specific CRM or accounting package. The real answer is that it makes sense to provide vertical apps and use generic content, collaboration and social tools from a UXP. Ideally the generic components are integrated and have easy-to-configure templates.

As I have highlighted before, though, the UXP is complex not just from a technology perspective but also from the perspective of skills, processes and standards. The first step for any organisation must be to create a strategy for UXP: audit what you currently have, document what you need (taking into consideration current trends like social, gamification and mobile) and then decide how to move forward.

Unfortunately this area currently seems ill served by the consultancy companies, so it may just be up to you to roll your own strategy.

From my discussions with customers and prospects it is clear that the final layer in their architectures is being defined by UXP (see my previous posts). So whether you have a service- or web-oriented architecture, most organisations have already moved, or are in the middle of moving, towards a new flexible layered architecture that provides more agility and breaks down the closed silo architectures they previously owned.

However, solution vendors that provide "out of the box" business solutions, whether vertical (banking, insurance, pharmaceutical, retail or other) or horizontal (CRM, ERP, supply chain management), have not necessarily been as quick to open up their solutions. Whilst many will claim that they have broken out of the silos by "service enabling" their solution, many still have proprietary dependencies on specific application servers, databases, middleware or orchestration solutions.

However recently I have come across two vendors, Temenos (global core banking) and CCS (leading insurance platform) who are breaking the mould.

CCS have developed Roundcube to be a flexible, componentised solution addressing the full lifecycle of insurance, from product definition and policy administration through to claims. Their solution is clearly layered, service enabled and uses leading third-party solutions to manage orchestration, integration and presentation, whilst they focus on their data model and services. Their approach allows an organisation to buy into the whole integrated suite or just blend specific components into existing solutions they may have. By using leading third-party solutions, their architecture is open for integration into other solutions like CRM or financial ledgers.

Temenos too has an open architecture (Temenos Enterprise Framework Architecture) which allows you to use any database, application server, or integration solution. Their OData-enabled interaction framework allows flexibility at the front end too.

Whilst these are both evolving solutions, they have a clear strategy and path to being more open and therefore more flexible. Both are also providing a solution that can scale from the smallest business to the largest enterprise. Their solutions will therefore blend into organisations more naturally, rather than dictate requirements.

Whilst packaged solutions are often enforced by business sponsors, this new breed of vendor provides the flexibility that will ensure the agility the business requires going forward. It's starting to feel like organisations can "have their cake and eat it" if they make the right choices when selecting business solutions.

If you’ve seen other solutions in different verticals providing similar open architectures I would be very happy to hear about them at dharmesh@edgeipk.com.

For some time both CTOs and architects have looked at enterprise architectures and sought to simplify their portfolio of applications. This simplification is driven by the need to reduce the cost of multiple platforms, a cost largely caused by duplication.

Duplication often occurs because two areas of the business had very separate ‘business needs’ but both needs had been met by a ‘technical solution’, for example a business process management tool or some integration technology. Sometimes the duplication is a smaller element of the overall solution like a rules engine or user security solution.

Having been in that position it’s quite easy to look at an enterprise and say “we only need one BPM solution, one integration platform, one rules engine”. As most architects know though, these separations aren’t that easy to make, because even some of these have overlaps. For example, you will find rules in integration technology as well as business process management and content management (and probably many other places too). The notion of users, roles and permissions is often required in multiple locations also.

Getting into the detail of simplification, it’s not always possible to eradicate duplication altogether, and quite often it won’t make financial sense to build a solution from a ‘toolbox’ of components.

Often the risk of having to build a business solution from the ground up, even with these tools, is too great, and the business prefers to de-risk implementation with a packaged solution. This packaged solution may itself contain a number of these components, but the advantage is that they are pre-integrated to provide the business with what it needs.

For some components duplication may be okay, if a federated approach can be taken. For example, in the case of user management it is possible to have multiple user management solutions that are then federated so a 'single view of users' can be achieved. Similar approaches can work for document management, but in the case of process management I believe this has been far less successful.

Another issue often faced in simplification is that the tools often have a particular strength, and therefore weaknesses in other areas. For example, SharePoint is great at site management and content management, but weaker at creating enterprise applications. Hence a decision has to be made as to whether the tool's weaknesses are enough of an issue to necessitate buying an alternative, or whether workarounds can be used to complement the tool.

The technical task of simplification is not a simple problem in itself. From bitter experience, this decision is less often made on technology merits or for the greater good of the enterprise, and more often on who owns the budget for the project.

Since the early days of programming, developers have chased the dream of creating code that can be used by other developers, so that valuable time can be saved by not re-inventing the wheel. Over time, many methods and design patterns have been devised to drive re-use.

Meanwhile the business users are demanding more applications and expecting them delivered faster, creating pressure for IT departments. Sometimes this pressure is counter-productive, because it means there is no time to build re-usability into applications, and the time saved now is simply added on to future projects.

Could we use the pressure to take a different approach? One that focuses on productivity and time to market, rather than design and flexibility as typically sought by IT?

I'm going to draw an analogy from a conversation I had with an elderly relative who had a paraffin heater. This relative had owned the heater for many years and is still using it today, because it works. When I questioned the cost of paraffin versus buying an energy-efficient electric heater that was cheaper to run, the response was: "this one works and it's not broken yet, why replace it?" Yet for most appliances we now live in a world where we don't fix things, we replace them.

This gave me the idea, which I'm sure is not new, of disposable applications. Shouldn't some applications just be developed quickly, without designing for re-use, flexibility and maintainability? With this approach, the application would be developed for maximum speed to meet requirements rather than for elegant design, in the knowledge that it will be re-developed within a short time (2-3 years).

So can there be many applications that could be thrown away and re-developed from scratch? Well, in today's world of 'layered' applications it could be that only the front-end screens need to be 'disposable', with business services and databases designed for the long term, since after all there is generally less change in those areas.

Looking at many business-to-consumer sites, self-service applications and point-of-sale forms could typically be developed as disposable applications, because the customer experience generally evolves and the business likes to 'refresh the shop front' regularly.

My experience of the insurance world is that consumer applications typically get refreshed on average every 18-24 months, so if it takes you longer than 12 months to develop your solution it won’t be very long before you are re-building it.

When looking at the average lifetime of a mobile app, it is clear that end users see some software as disposable, using it a few times and then either uninstalling it or letting it gather dust in a forgotten corner.

So there may be a place for disposable apps, and not everything has to be designed for re-use. This is most likely in the area of the user experience, which tends to evolve regularly. So is it time you revised your thinking on re-use?

The good thing about standards is that they are uniform across different vendor implementations. Well, that is at least the primary goal. So how does a vendor make a standard proprietary?

Well, it's quite easy really: you provide extensions to the standard for features that are not yet implemented in the standard. Vendors wouldn't be that unscrupulous, would they? For example, would they create application servers following standards but add their own extensions to "hook you in", sorry, I mean to add value beyond what the standards provide? ;o)

I'm sure Microsoft's announcement at Build, allowing developers to create Windows 8 Metro applications using HTML5 and JavaScript, took many Microsoft developers by surprise. What is Microsoft's game plan with this?

Optimists will cry that it opens Metro development out to the wider base of web developers rather than just to the closed Microsoft community. Cynics will argue that it is an evil ploy for Microsoft to play the open card whilst actually hooking you into their proprietary OS. In the cynics' corner, a good example is Microsoft's defiant stance on Direct3D versus the open standard alternative, OpenGL. This has led to Google developing ANGLE, effectively allowing OpenGL calls to be translated into Direct3D ones so that the same programs can run on Microsoft platforms.

Whatever it is, developers aiming for cross-platform conformance will need to stay sharp to ensure that proprietary extensions do not make their applications incompatible across environments.

Adobe's recent donation of CSS Shaders shows a more charitable approach, whereby extensions are donated back to the standards bodies to make the "value added" features available to every platform. This is largely the way standards evolve, with independent committees validating vendor contributions.

So what is Microsoft's game? It's too early to say whether there is an altruistic angle to their support for HTML5 and JS, but history has shown us that the empire is not afraid to strike back. Look at their collaboration with IBM on OS/2, which ended with them leaving IBM in the lurch with their own launch of Windows NT. A similar story played out not long after with Sybase and SQL Server.

I may be a cynic, but having been a Windows developer from Windows 1.0 to Windows NT, and having followed a road of promises and U-turns, has made me that way when it comes to Microsoft. It's great to see increasing support for HTML5, but I am always a little concerned about the motivations of the Redmond camp. However, perhaps I myself need to be "open" to a different Microsoft, one that is embracing standards even though it may cannibalise its own Silverlight technology.

At the time I thought this was a brave statement to make, especially as the late Steve Jobs had already announced in April that year that, despite a billion app downloads, HTML5 negated the need for many proprietary browser plug-ins. It was clear at the time this was aimed squarely at Flash (and possibly Silverlight too).

For me this was a faux pas too far, as Huggers continued his blog with statements about proprietary implementations of HTML5 by Apple and factions of opinion within the W3C and WHATWG (who initiated the development of HTML5).

If we take a slight diversion and look at the developer conferences for both Microsoft and Adobe in the last four weeks, both made big announcements about tools and support for HTML5. However, committed developers with years of invested skills in Silverlight and Flash were left deflated by the lack of announcements on the future of these technologies.

So have the sleeping giants finally woken up? It seems like it to me.

However, in the case of the BBC, Summerfield's blog states that they will also launch new versions of the iPlayer for Flash and AIR. This may be a short-term decision while waiting for wider support for HTML5, but there is little clarity about what they see as the future for iPlayer.

To Huggers' credit, he did foresee the benefits HTML5 could bring to the BBC in reducing development timescales and having a common skill set.

However, I applaud those that have the courage and conviction to take bold steps forward and put their money where their mouth is. The FT is a shining example, ditching their App Store versions for iDevices and moving completely to HTML5.

There is work to be done on HTML5 and it will evolve for some time yet, but the bandwagon has started to roll, and as a good friend of mine said to me at the start of the .com era, "When you see the bandwagon starting to move, you have a choice: jump on, or stand in the way of a tonne of metal!"

For me it is clear. I'm not standing in the middle of the road, I'm jumping squarely onto the HTML5 bandwagon. The question is, are you?

My previous posts "The end of Silverlight" and "The end of Flash" both raised active debate. The general view was that I knew too little about Silverlight and Flash to make such brash claims, and whilst there is some truth in that, it also transpired that general awareness of what HTML5 can do today, and what it promises when complete, is poor. That is the issue my run of posts on HTML5 has really sought to address. Hopefully, for those that haven't had any exposure to HTML5, my posts have been of value.

The longer term is much more difficult to forecast. There is a place for both, especially for rich multimedia applications and gaming, but among business applications there is going to be only a small minority that could possibly require them. In their report "The (not so) Future Web", Gartner agree, saying that "Gartner expects leading RIA vendors to maintain a pace of innovation that keeps them relevant, but for a gradually shrinking percentage of Web applications."

However, one can't completely ignore that web technology is evolving fast and that new specs are already filling in the gaps around HTML5; for example, work is in progress for TV and gestures, as well as the previously mentioned 3D graphics. We are seeing major new releases of browsers with greater support for HTML5 being launched at a faster rate than ever before, coupled with a battle for the fastest JavaScript engine. A new release of JavaScript promises much better standardisation as well as new features.

The developer forums are now awash with an outcry from loyal Microsoft developers demanding to know the future of Silverlight in Microsoft's grand plans, where once there was no doubt that Silverlight was core to Microsoft. IMHO I doubt Microsoft will make a U-turn on Silverlight, but I will re-iterate that the need for Silverlight in business applications will lessen as HTML5 matures.

Whilst I've been an active follower and advocate of HTML5, what I see lacking is a roadmap and vision for HTML: much more detail about how the semantic web will evolve and what it means to developers in the short and medium term. This is something the vendors seem much better at, so it is no wonder developers buy in to certain technologies over others.

In the end as always the real question is not which is the better technology but what is the appropriate technology for what you need to achieve and the audience and platforms you are targeting.

Movie and audio features in HTML5 are like many of the features I have discussed previously, in that they:
• Have a history of controversy, in this case over codec support
• Have specifications too large to do real justice in these short posts
• Are an exciting, powerful new addition that will transform the web

To date the most popular media player on the web has been Adobe's Flash player, and arguably playing media has been its most popular use. Apple's lack of support for Flash on their devices has created a small crack in Adobe's armour, but this crack could open further into a chasm that Flash drops into! There have been many other shenanigans in this story, but rather than delve into those murky waters I'm going to again give a brief overview of the capabilities of these new features. The good news is that HTML5 will remove the need for proprietary plug-ins like Flash and QuickTime for playing sound and movies.

audio and video are both media elements in HTML5, and as such share common APIs for their control. In fact you can load video content into an audio element and, vice versa, audio into a video element; the only difference is that the video element has a display area for content, whereas the audio element does not. Defining an audio element and source file is pretty straightforward:

<audio controls src="mymusicfile.mp3">
My audio clip
</audio>

You can actually provide multiple sources, using child source elements rather than the src attribute. This allows you to supply the audio in multiple formats, so that you can support the widest array of browsers. The browser will go through the list in sequential order and play the first file it can support, so it's important you list them in order of preference, best quality first, rather than by most popular format.

To load a movie you simply replace the audio element with video. Videos can also define multiple sources. You may additionally specify the height and width of the video display area.
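Putting those two points together, a video with multiple sources can be sketched in markup like this (the file names are illustrative):

```html
<video controls width="640" height="360">
  <!-- Listed in order of preference; the browser plays the first it supports -->
  <source src="mymovie.webm" type="video/webm">
  <source src="mymovie.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>
```

The text inside the element is only shown by browsers that do not understand video at all, so it doubles as a graceful fallback.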

Next, to control media you can use the following APIs: load(), play() and pause(); what they do is self-explanatory. canPlayType(type) can be used to check whether a specific format is supported.
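As a minimal sketch of these calls (the MIME type and the toggle helper are my own examples, not part of the spec):

```javascript
// Toggle playback on a media element. audio and video share this API, and
// the function only relies on play()/pause()/paused, so any object with
// those members will do.
function togglePlayback(media) {
  if (media.paused) {
    media.play();
  } else {
    media.pause();
  }
  return media.paused ? "paused" : "playing";
}

// Browser-only usage (guarded so the sketch is inert outside a browser):
if (typeof document !== "undefined") {
  const player = document.querySelector("audio");
  // canPlayType returns "", "maybe" or "probably"
  if (player && player.canPlayType("audio/mpeg") !== "") {
    togglePlayback(player);
  }
}
```

Note that canPlayType deliberately returns a vague answer ("maybe"/"probably") because the browser cannot always be certain without fetching the media.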

Some read-only attributes can be queried, such as duration, paused, ended and currentSrc, to check the duration of the media, whether it has been paused or has ended, and which src is being played.

You can also set a number of attributes, such as autoplay, loop, controls and volume, to automatically start the media, play it repeatedly, show or hide the media controls and set the volume.

These aren't exhaustive lists of APIs or attributes, as there are many more, but they are some of the most common features of audio and video people will use. With video especially there are many more great things you can achieve, like creating timelines and displaying dynamic content at specific points in the video (no doubt this will be used for advertising, amongst other more interesting uses).

Clearly the web will get richer with full multimedia content without the prerequisite of plug-ins. However, developers should be aware of the various formats supported by specific browsers and aim to provide media in as many formats as possible.

Many sites today do use sound and movies, but I believe that with native support and greater imagination a new world of dynamic rich media sites will change the user experience, in the same way that Ajax transformed static content into the dynamic web. With it we will see new online behaviours, a topic I will cover soon, and whilst some have said the future of TV is online, the web may just give it a new lease of life!

As a relative latecomer to HTML5, trying to catch up on a spec that spans over 1,000 pages is no mean feat, let alone the fact that the definition of what makes up HTML5 is spread across several specs (see my previous blog on standards spaghetti). If you've been following this series then you'll have worked out that I have a few favourite features that I think will radically change the perception of web applications, and, you guessed it, HTML5's support for database access is another.

The specification started out as early as 2006 with WebSimpleDB, while the rival WebSQL approach went as far as implementation in WebKit-based browsers including Safari and Chrome. From what I can find, Oracle made the original proposal in 2009 and the W3C made the switch to IndexedDB sometime in 2010. Although Mozilla already had their own implementation using SQLite, they too preferred IndexedDB. The current status of the IndexedDB spec, as of April 2011, is that it is still in draft, and according to www.caniuse.com early implementations exist in Chrome 11 and Firefox 4. Microsoft have released a prototype on their HTML5 Labs site to show their current support.

Clearly it is not ready for live commercial applications in the short term, but it is certainly something worth keeping your eye on and planning for. When an application requires more than simple key/value pairs, or requires large amounts of data, IndexedDB should be your choice over HTML5's WebStorage APIs (localStorage and sessionStorage).
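For comparison, the WebStorage APIs really are just string key/value pairs, which is why anything structured has to be serialised by hand (a sketch; the key name and preference object are hypothetical):

```javascript
// WebStorage only stores strings, so objects must round-trip through JSON.
function toStored(value) {
  return JSON.stringify(value);
}

function fromStored(text) {
  // getItem returns null for a missing key
  return text === null ? null : JSON.parse(text);
}

// Browser-only usage (localStorage does not exist outside the browser):
if (typeof localStorage !== "undefined") {
  localStorage.setItem("userPrefs", toStored({ theme: "dark", fontSize: 14 }));
  const prefs = fromStored(localStorage.getItem("userPrefs"));
  console.log(prefs.theme);
}
```

This works fine for small settings-style data, but it is synchronous and string-only, which is exactly the gap IndexedDB is designed to fill.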

The first important feature of IndexedDB is that it is not a relational database but in fact an object store. Hence there are no tables, rows or columns, and there is no SQL for querying the data. Instead, data is stored as JavaScript objects and navigated using cursors. The database can, however, have indexes defined.

Next, there are two modes of interaction: asynchronous and synchronous APIs. As you would imagine, the synchronous APIs DO block the calling thread (i.e. each call waits for a response before returning control and data). It follows that the asynchronous APIs do NOT block the calling thread. When using the asynchronous APIs, a callback function is required to respond to the events fired by the database after an instruction has completed.

Both approaches provide APIs for opening, closing and deleting a database. Databases are versioned, and each database can have one or more object stores. There are CRUD APIs for object store access (put, get, add, delete) as well as APIs to create and delete indexes.

Access to the data is enveloped in transactions, and a transaction can be used to access multiple object stores, as well as to perform multiple actions on a store.
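The asynchronous flavour described above can be sketched as follows (the database name, store name and record shape are all my own assumptions):

```javascript
// A record is just a plain JavaScript object -- IndexedDB stores objects,
// not rows and columns.
function makeCustomer(id, name) {
  return { id: id, name: name, created: Date.now() };
}

// Browser-only: guarded so the sketch is inert outside a browser.
if (typeof indexedDB !== "undefined") {
  const request = indexedDB.open("shopDB", 1); // database name, version

  request.onupgradeneeded = event => {
    // Fires when the database is created or its version increases;
    // this is the only place object stores and indexes may be defined.
    const db = event.target.result;
    const store = db.createObjectStore("customers", { keyPath: "id" });
    store.createIndex("byName", "name");
  };

  request.onsuccess = event => {
    const db = event.target.result;
    // All access happens inside a transaction.
    const tx = db.transaction("customers", "readwrite");
    const store = tx.objectStore("customers");
    store.put(makeCustomer(1, "Alice"));
    const get = store.get(1);
    get.onsuccess = () => console.log(get.result.name);
  };

  request.onerror = () => console.error("could not open database");
}
```

Every operation returns a request object whose onsuccess/onerror callbacks fire later, which is the non-blocking, event-driven style described above.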

At a very high level, there you have it: IndexedDB is a feature that allows you to manage data in the browser. This will be useful not only for online applications (e.g. a server-based warehouse could export data cubes for local access) but also for offline applications, holding data until a connection can be established. I'd fully expect a slew of JavaScript frameworks to add value on top of what the standards provide; indeed, persistence.js is one such example.

It's good to see early implementations and prototypes for IndexedDB, and whilst the date for finalising this spec is unclear, I for one will be monitoring its progress closely and waiting with bated breath for its finalisation.

A few years back I was deemed a heretic by many of my colleagues and friends when I suggested that HTML5 would remove the need for writing many mobile applications. I was pummelled with questions like:

But how will they work offline?

Are you saying a browser user experience can rival a platform-native one like Apple's?

You do realise that most games require "threading", how are you going to do that?

What about storing data locally, can you do that?

I was able to fend off most of these, but the one I couldn't answer at the time was about accessing device features like the camera and GPS. Well, things have moved on, and whilst I am no longer deemed a heretic, there are still whispers of doubt in some corridors.

One of the big features of mobile technology used by many apps is the phone's location, and location-based services and applications have already been through a huge hype cycle.

Under the catch-all banner of HTML5, although it is a separate sub-spec, the W3C Geolocation working group are making location-based applications a reality for web developers. The spec has been around a while and hence is fairly mature and stable now.

A device (even a desktop) can provide location information in a number of ways:
• IP address (this is typically the location of the ISP rather than your machine, but okay if you simply want to check which country the user is in)
• Cell-tower triangulation (only fairly accurate, and very dependent on the phone signal, so it could be problematic in the countryside or inside buildings)
• GPS (very accurate, but takes longer to get a location, depends on hardware support and can be unreliable inside buildings)
• User-defined (the user simply enters their location, which depends on them entering accurate information)

Of course, one of the key concerns will be privacy, but the spec covers this with an approach that requires the user to give permission for location information to be passed to an application. Note that the application can only access location information through the browser, and not directly, e.g. from the GPS device. Hence the browser enforces the user's permissions for access.

The Geolocation API allows both one-off requests to get the user's current location and repeated updates on the user's position; developers write simple callback routines for both approaches. The key information provided includes latitude, longitude and accuracy. Accuracy indicates, in metres, how close the reported latitude and longitude are likely to be to the user's true position. Depending on the device you may also get additional information such as speed, heading (direction of travel) and altitude.
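Both styles of request can be sketched like this (getCurrentPosition and watchPosition are the spec's names; the formatting helper is my own):

```javascript
// Format the coordinates the API hands to our callback. Note accuracy is
// a radius in metres, not a percentage.
function describePosition(coords) {
  return "lat " + coords.latitude.toFixed(4) +
         ", lon " + coords.longitude.toFixed(4) +
         " (+/-" + Math.round(coords.accuracy) + "m)";
}

// Browser-only usage (guarded so the sketch is inert outside a browser):
if (typeof navigator !== "undefined" && navigator.geolocation) {
  // One-off request; the browser will first ask the user for permission.
  navigator.geolocation.getCurrentPosition(
    pos => console.log(describePosition(pos.coords)),
    err => console.error("location unavailable: " + err.message)
  );

  // Repeated updates; pass the returned id to clearWatch(id) to stop.
  const watchId = navigator.geolocation.watchPosition(
    pos => console.log("update: " + describePosition(pos.coords))
  );
}
```

The error callback is worth wiring up from day one, since a position request can fail for permission, signal or timeout reasons.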

As with any quality application, you should process errors accordingly, especially responding to a failure to get hold of location data because of signal issues or other reasons. So retrieving location information is fairly simple; the real hard work is in processing that information, and that requires good old-fashioned quality programming ;o)

This specification presents a huge opportunity for web developers to create applications once deemed to be solely the domain of platform-specific code, and I for one am very excited!