Yonder Innovation Blog
a blog about innovation and solutions

A world where time is not patient with people. More than ever, software products companies must accept that pro-active innovation is the only strategy for long-term success
The peculiarity of software products: the continuous need to adapt to change. A peculiarity of software products companies such as Microsoft (with Windows, Office or Internet Explorer), Google (with Android, Google Apps or Chrome) or Salesforce.com is their need to continuously adapt their offer to changes in the market. For many companies (see Nokia vs Apple, or IBM vs “Wintel”) this challenge can be anything from dangerous to deadly.

The IT industry is special in many respects, but when it comes to the speed at which change takes place it is the reigning champion, with indisputable chances of holding the title for the coming decade and a half. Software products have gone from development cycles of several years (in the 70s and 80s) to iterations of only two weeks. The speed of adoption of new technologies has reached record levels (compare how long radio took to be adopted with how quickly tablets were). The competitive space is increasingly fierce: a new software products company can be set up by two students in a dorm, starting with negligible capital and a box of open-source components.

In short, there are three needs that create strategic urgency in software products companies:

The need to adapt quickly to changes in the clients’ and the market’s demands.

The need to adapt quickly to the evolution of technology.

The need to adapt quickly to new competitors or to new offers from existing ones.

Innovation: adaptation to change through new solutions

The typical story of a software products company goes like this: an entrepreneur identifies a need in the market and gathers a technical team around a product idea that answers that need. If the stars align, the product is a success and generates income that funds further development and enough profit for the entrepreneur (and later the investors) to find the effort worthwhile in the long term as well.

In a world in which change of any nature (in the clients’ demands, in technology or in the competitive landscape) occurs slowly, or, in the words of Marin Preda, a world in which time is patient with people, the company would feel no need to change the recipe for success it discovered at the beginning. The product would evolve following the same development cycle, the technical team would program against the same architecture, and the lack of competition would act as a sedative on all these habits, dulling even further the once-sharp reflexes of the young company. And everything would be perfect, if only such a world existed.

The problem is that we live in a world where time is not patient with people, and more than ever software products companies must come to terms with the fact that the only successful long-term strategy is pro-active innovation or, put more bluntly, adaptation to change through new solutions (as opposed to adapting only when change leaves no alternative).

Innovation starts with continuously taking on risks

Let me return briefly to the typical story of the software products company. A critical aspect that is usually overlooked is that the entrepreneur took on a high risk (often of the all-in variety) in attempting a new concept. Success stories begin once that moment is behind and the concept is working, but before the first client, the first paid invoice and the first press coverage, there is a concept that must be defined, tested and, if there is enough proof to validate it, launched into production.

Most software products companies, or at least their product teams, abandon this entrepreneurial spirit and drift towards a mindset in which the need for certainty eliminates risk-taking. And why blame them? Why take on risks when you can get things done without attempting a triple toe loop?

I would say there is no point, if the company does not plan to remain in the economic landscape for more than five years. For everyone else, taking on risks is what oxygen is to aerobic micro-organisms: vital. And just like oxygen, risk had better arrive in a continuous, controlled flow, not on the brink of cerebral hypoxia (which is what risk-taking as a last resort, or recreational innovation, amounts to).

The pro-active innovation team

If the success strategy is pro-active innovation, then the tactic takes the shape of a pro-active innovation team, attached to a product and reporting directly to someone with a stake in the long-term performance of the company. This team would be busy experimenting with new concepts, taking on risks with the aim of turning assumptions into knowledge. It would be characterized by creativity, passion for problem-solving and tolerance for failure. It would explore Continuous Delivery, Behaviour Driven Development, mobile-centric user applications, User Experience Design or Big Data, and would develop plans for bringing these concepts into the commercial product. It would be evaluated by the speed at which it turns assumptions into knowledge, and its members would be motivated to be leaders in the industry their product serves, not followers of others.

In short, this is the team that would preserve the spirit that made the product’s success possible in the first place: taking on risks in introducing new concepts, a critical need for the long-term success of any software products company.

Short-term optimization inhibits long-term potential

Working with many software products companies across Europe, I have noticed that they understand the need for pro-active innovation, but act in the spirit of short-term cost optimization.

Of course, a pro-active innovation team is expensive and the short-term advantages are limited. Nevertheless, the same companies end up investing millions of euros in modernization projects triggered by market changes that impact the product’s technical architecture, changes that would have been significantly cheaper to absorb had the oxygen reached the patient before he needed a transplant.

In conclusion, now, on the verge of budgetary planning for 2014, software products companies should assign budget to a pro-active innovation team, one that would start with an evaluation of the existing product and develop a continuous innovation programme to keep the blade of competitiveness forever sharp.

Professionalism in the development of software products. Or, what a CTO expects from the development teams
During interviews, I often get to talking about SOLID. The expectation is that OOP programmers have heard of and apply the SOLID principles in their day-to-day work. SOLID is a set of five fundamental principles that form the basis of well-written object-oriented code.

Experience shows that, unfortunately, the acronym covering the five principles introduced by Robert Cecil Martin (an emblematic name in the software field, known mostly as Uncle Bob) is more often than not missing from the education of many programmers. This touches on a wider problem concerning the nature of the profession and, ultimately, professionalism in the development of software products.
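Since the principles themselves are worth recalling, here is a minimal sketch of one of the five, the Dependency Inversion Principle (the “D” in SOLID): high-level modules should depend on abstractions, not on concrete implementations. The example is in Java and all class names are invented for illustration:

```java
// Dependency Inversion Principle: the high-level ReportService depends on
// an abstraction (ReportSender), not on a concrete delivery mechanism.
interface ReportSender {
    void send(String report);
}

// One concrete implementation; others (SmtpSender, SlackSender, ...) can be
// added without touching ReportService.
class ConsoleSender implements ReportSender {
    @Override
    public void send(String report) {
        System.out.println("Sending report: " + report);
    }
}

class ReportService {
    private final ReportSender sender;

    // The dependency is injected, so it can be swapped in tests or in production.
    ReportService(ReportSender sender) {
        this.sender = sender;
    }

    void publishDailyReport() {
        sender.send("daily sales figures");
    }
}

public class Demo {
    public static void main(String[] args) {
        new ReportService(new ConsoleSender()).publishDailyReport();
    }
}
```

The pay-off is exactly the kind of cheap adaptability discussed later in this post: a new delivery channel is a new class, not a change rippling through the high-level logic.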

Paul Graham, founder of the famous Y Combinator accelerator and author of many essays on excellence in software development, wrote in an essay titled “Great Hackers” that extraordinary programmers find passion in what they do and that, more than anything (including money), they expect to be challenged, to feel that every day there is a new problem to solve. “Ordinary programmers write code in order to be able to pay their bills. Extraordinary programmers see their activity as a pleasure and are happy to find there are people willing to pay for it,” as Graham put it.

Not all programmers are extraordinary, but all programmers should be professional. All programmers should be able to go home, look at themselves in the mirror and say proudly, “Today I did a good job.” The wild growth of the IT industry has left little time for promoting the fundamental values of professionalism in software development, and I would like to use this opportunity to discuss some of them.

15 things a CTO expects from the development teams

At the Software Craftsmanship North America conference in 2012, Uncle Bob talked about 15 things he would expect, as a CTO, from his development teams. I believe these points deserve to be promoted, since their adoption by development teams could have a massive impact on delivering quality software products.

So here are the 15 expectations Uncle Bob defines for professional product teams:

#1 – We will not ship shit

Why does this even have to be said? Simply because, in the past, software development teams have shipped poor-quality code, and this has to end. It comes down to everyone’s responsibility, to the essence of a professional attitude.

#2 – We will always be ready

What does this mean? It means that when we are asked to deploy, we will be ready to deploy. The peak of unprofessionalism is the famous line “we have to wait for it to stabilize”, as if we were working with jelly. A professional team writes code that can go into production after every single iteration. A professional team writes code on which it would win every bet made with the QA team. A professional team is always ready.

#3 – Stable Productivity

Most projects have high productivity in the beginning, but only a few months in, productivity falls to the point where someone decides to bring in another batch of programmers, which, of course, only makes the problem worse. The expectation is that a professional development team maintains stable, high productivity. This is where well-thought-out design comes into play: an architecture that does not make work harder after the first few months of development.

#4 – Inexpensive Adaptability

Openness to change is essential. We call it software for a reason: it is supposed to be soft. Why should a change in requirements cost “a fortune” just because the design has to change? A well-designed software product can absorb changes cheaply.

#5 – Continuous Improvement

The expectation is that a professional team constantly focuses on improvement: of the code, of the processes, of the technology. The improvement must come from each individual; otherwise it will descend from “heaven” wearing a long black cape with bureaucracy written on it, or it will not come at all, and we will be left going home wondering why we have learned nothing new.

#6 – Fearless Competence

When a programmer sees a line of code and says “this line is so ugly”, but does not change it because changing it would make the line theirs, that programmer is a fearful incompetent. Professionalism means the opposite of this behaviour: fearless competence. When fearlessly competent programmers see badly written code, they change it and make sure nothing was broken. The code won’t bite; incompetence will.

#7 – Extreme Quality

Software quality is poor. Development teams say it’s the sales teams’ fault, that sales creates the pressure that degrades product quality. Nonsense! Quality is poor because the code that was written is of poor quality, and when code quality is poor, productivity drops and the risk of new errors rises. Extreme quality expresses a basic principle of software professionals: quality goes up, not down, under pressure, because it is only under pressure that a professional’s true values show. If someone under pressure takes shortcuts, that is what they truly believe in: shortcuts.

#8 – We will not dump on QA

It is not the QA team’s job to find bugs. In fact, QA should find nothing. After every iteration, QA should be asking themselves “Why do I still have a job?”, and when QA does find something, the development team’s reaction should be “how on earth did that bug get there?”, not “ah, ok, let me fix it”. The role of QA is to confirm that the product is valid and ready to be delivered after the development team has already checked it. Code checked by professionals will simply be validated by QA.

#9 – Automation

Test automation is vital. Manual testing should be the exception, not the rule. The cost of manual testing is enormous, while automation has the disadvantage of looking like an unnecessary expense in the short run. A professional team will not accept shipping products without automated tests.
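As a small illustration of what such automation looks like in practice, here is a minimal automated test in Java with JUnit 5; the class under test is invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// A hypothetical class under test: computes the gross price from a net price.
class PriceCalculator {
    double gross(double net, double vatRate) {
        if (net < 0 || vatRate < 0) {
            throw new IllegalArgumentException("negative input");
        }
        return net * (1 + vatRate);
    }
}

// These tests run on every build, so a regression is caught minutes after it
// is introduced instead of weeks later by a manual test pass.
class PriceCalculatorTest {
    private final PriceCalculator calc = new PriceCalculator();

    @Test
    void addsVatToNetPrice() {
        assertEquals(119.0, calc.gross(100.0, 0.19), 1e-9);
    }

    @Test
    void rejectsNegativePrices() {
        assertThrows(IllegalArgumentException.class, () -> calc.gross(-1.0, 0.19));
    }
}
```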

#10 – Nothing Fragile

“We cannot touch that module because it’s fragile.” What does that even mean, fragile? A professional development team is always in control of the code it creates and maintains. A fragile module is proof of lack of control and, consequently, of lack of professionalism.

#11 – We cover for each other

A product team is first of all a team, and a team helps its members and can stand in for them when needed. On a ship, each crew member has clearly defined tasks, but under demanding circumstances any member of the crew can take over from any other. Product teams should make sure that for every member there is at least one other person who can step in; it is their duty to do so. The situation where a team member goes on holiday (or, as the saying goes, gets hit by a bus) and the whole project stalls is completely unprofessional.

#12 – Honest Estimates

When a programmer says “it will take me 50 hours to finish this”, that programmer is lying. Nobody can be that precise in an estimate. Estimates have to be honest, and an honest estimate comes as an interval with a probability distribution attached. Honest estimates are correct, and they are also immune to pressure: when someone asks “Can anything be done to finish this sooner?”, the programmer who gave an honest estimate can answer “No.”
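Uncle Bob’s talk does not prescribe a particular technique, but one simple way to produce such an interval is a three-point estimate fed through a Monte Carlo simulation. A sketch in Java, with made-up task durations:

```java
import java.util.Arrays;
import java.util.Random;

// An illustrative Monte Carlo estimate: each task has optimistic / most
// likely / pessimistic durations; we sample totals and report percentiles
// instead of a single, falsely precise number.
public class HonestEstimate {
    // {optimistic, mostLikely, pessimistic} in hours, per task (made-up data).
    static final double[][] TASKS = {{8, 12, 30}, {16, 24, 60}, {4, 6, 20}};

    // Sample one duration from a triangular distribution (inverse CDF method).
    static double sampleTriangular(Random rng, double a, double m, double b) {
        double u = rng.nextDouble();
        double cut = (m - a) / (b - a);
        return u < cut
                ? a + Math.sqrt(u * (b - a) * (m - a))
                : b - Math.sqrt((1 - u) * (b - a) * (b - m));
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int runs = 100_000;
        double[] totals = new double[runs];
        for (int i = 0; i < runs; i++) {
            double total = 0;
            for (double[] t : TASKS) {
                total += sampleTriangular(rng, t[0], t[1], t[2]);
            }
            totals[i] = total;
        }
        Arrays.sort(totals);
        // An honest answer: "very likely between P10 and P90 hours".
        System.out.printf("P10=%.0fh  P50=%.0fh  P90=%.0fh%n",
                totals[runs / 10], totals[runs / 2], totals[9 * runs / 10]);
    }
}
```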

#13 – You have to say “NO”

Saying “No” is the most important duty of a professional, and when a company hires a professional, it implicitly expects to be told “No” as well. When does one have to say “No”? Obviously not whenever one doesn’t feel like working. “No” has to be said every time “Yes” would be a lie. Professionals never lie, so saying “No” is part of being a professional.

#14 – Continuous Aggressive Learning

One of the main duties of a software developer is to learn. The software industry moves at a very fast pace: new languages and new processes keep appearing, which is why a programmer who falls just a little behind may never catch up. A career in programming is like surfing: to stay afloat you have to stay on the wave, and staying on the wave takes continuous, aggressive learning.

#15 – Mentoring

The software industry has a big problem: schools are not in the business of creating professionals. Of course, in school one learns a programming language, algorithms, a few key principles, and maybe the students get to do one project (finished at 3 a.m., after going through a six-pack of beers). But schools are mostly concerned with turning people who know nothing into people who know something.

This is probably the most important role of software professionals: to turn IT graduates into professionals, which can be done only through mentoring, through education and through the power of personal example.

The chefs of the chefs

At Yonder we develop software products for other software product companies. A colleague once described us as “chefs cooking for chefs”. If the analogy holds, then professionalism in software product development matters even more for Yonder than for our client companies. This is why we are in a process of continuous improvement and why we focus increasingly on performance, continuous development and professionalism.

How does the organizational culture around professionalism change inside a software company through the adoption of Uncle Bob’s 15 principles? Everything starts with the individual. Whether a programmer wants to be a professional is an individual decision. Some will decide to be professionals, others will imitate those who already are, and those who remain fearful incompetents will swim to shore without their surfboard, which will have been swept away by the aggressive wave of continuous change.

Big Data, Big Confusion
In an era when storage and processing costs keep falling, the traditional view of how we work with data is changing fundamentally.

The hunt for information in the data forest

In “Big Data: A Revolution That Will Transform How We Live, Work and Think”, Viktor Mayer-Schonberger and Kenneth Cukier open with the situation in 2009, when the H1N1 virus was a major concern for the World Health Organisation and, in particular, for the American government. The rapid evolution of the epidemic created difficulties for the CDC (Centers for Disease Control and Prevention), the governmental agency, whose reports lagged roughly two weeks behind the reality in the field, partly because people did not contact medical personnel as soon as the first symptoms appeared. Real-time reporting would have allowed a better understanding of the scale of the epidemic and an optimisation of prevention and treatment tactics, actions with the potential to save lives in a disaster that ultimately claimed some 284,000 victims.

Incidentally, a few weeks before H1N1 reached the front pages, Google published a paper in the scientific journal Nature presenting the results of a study that started from the question: is there a correlation between the spread of an epidemic and searches on Google? Google’s assumption was that when someone feels the effects of a newly contracted disease, they will use the Internet to search for information about the symptoms (e.g. “medicine for flu and fever”). Using the data published between 2003 and 2008 by the CDC together with the 50 million most frequent searches from the same period, Google managed to identify a mathematical model (iterating through over 400 million candidates) that demonstrated the correlation between the evolution of an epidemic and the way people search on the Internet. With the help of this new technology, named Google Flu Trends, the CDC was able to monitor the spread of H1N1 far more effectively in 2009.

The story of Google Flu Trends is in many ways the archetypal example, both of the benefits and of the technology and challenges involved in solving a problem in the Big Data space. Starting from a hypothesis about a correlation, and using large volumes of unstructured data together with modern processing technologies, one attempts to validate that correlation, which ultimately creates value by turning data into new information.
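As a toy illustration of that core step, validating a hypothesised correlation between two time series, here is a Pearson correlation computed in Java over invented weekly data; the real Flu Trends work evaluated hundreds of millions of candidate models over vastly larger data sets:

```java
// A toy version of the core Big Data step: test whether two time series
// (e.g. weekly search volume for a symptom vs. reported flu cases) correlate.
public class CorrelationSketch {
    // Pearson correlation coefficient between two equal-length series.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n;
        double vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Made-up weekly numbers: searches for "fever medicine" vs. flu cases.
        double[] searches = {120, 150, 400, 900, 1500, 1100, 600, 300};
        double[] fluCases = {10, 14, 45, 110, 190, 140, 70, 30};
        System.out.printf("correlation = %.3f%n", pearson(searches, fluCases));
    }
}
```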

Big Data: The New “Cloud Computing”

Big Data is still in its infancy. One proof is the confusion visible in the market when it comes to defining the problem Big Data addresses and the manner (or manners) in which it does so. When I talked about Cloud Computing in 2009, I was constantly amused that the question “What is Cloud Computing?”, addressed to a room of 50 participants, had the potential of receiving 52 answers, of which, go figure, many were correct. The situation with Big Data today is similar, because we are close to what Gartner calls the “peak of inflated expectations”. In other words, Big Data is discussed everywhere, and the entire industry is busy discovering benefits in a wide range of technologies and concepts, from those with a higher degree of maturity and applicability (e.g. Predictive Analytics, Web Analytics) to Star Trek inspired scenarios (e.g. Internet of Things, Information Valuation, Semantic Web).

“Cloud Computing” has already passed its peak, judging by the volume of Google searches, while “Big Data” is still growing. The fundamental source of the confusion, and implicitly of the unrealistic expectations, is that Big Data consists, according to Gartner’s Hype Cycle model, of over 45 concepts in various stages, from pioneering (i.e. “Technology Trigger”) to mature (i.e. “Plateau of Productivity”). Big Data therefore cannot be treated holistically at a tactical level, but only in principle, at a strategic level.

Figure 2 – Big Data “Hype Cycle” (source: Gartner, 2012)

Small Data Thinking, Small Data Results

Mayer-Schonberger and Cukier identify three fundamental principles that allow a shift from a Small Data approach to a Big Data approach.

“More”: keep and do not throw away

Data storage costs reached a historic low in 2013. At present, storing 1 gigabyte (GB) of data costs less than 9 cents per month using a cloud storage service (e.g. Windows Azure), and archival storage goes down to 1 cent per month (e.g. Amazon Glacier), bringing the cost of storing a petabyte (1,048,576 GB) down to roughly $10,000 per month (about $10 per terabyte), a million times cheaper than at the start of the 1990s, when the average storage cost was approximately $10,000 per GB. In this context, erasing the digital data accumulated by IT processes makes less and less sense. Google, Facebook and Twitter elevate this principle to a fundamental law; it is their ticket to new dimensions of development and innovation, an opportunity now open to those who were previously limited by prohibitive costs.

“Messy”: quantity precedes quality

Google Flu Trends worked because Google fed the 50 million most frequent searches into the process of iterating over mathematical models. Many of those searches were irrelevant, but the volume was necessary to find the model that finally demonstrated the correlation. Peter Norvig, Google’s artificial intelligence expert, argued in the article “The Unreasonable Effectiveness of Data” that “simple models supplied with a big volume of data are going to eclipse more elaborate models based on less data”, a principle also used in building Google Translate, an automated translation service based on a corpus of over 95 billion English sentences and capable of translating to and from some 60 languages.

“Correlation”: facts and not explanations

We have been taught, and have grown used to the idea, that every effect has a cause, which is why we are naturally tempted to ask “why?”. In the Big Data world, correlation becomes more important than causality. In 1997, Amazon had on its payroll an entire department responsible for drawing up lists of reading recommendations for visitors to the online bookshop: a manual process, expensive and with limited impact on sales. Today, thanks to an algorithm developed by Amazon called item-to-item collaborative filtering, recommendations are generated completely automatically, dynamically and with a massive impact on sales (about a third of Amazon’s e-commerce income is attributed to automated recommendations). Amazon does not need to know why customers buying “The Lord of the Rings” by J. R. R. Tolkien are also interested in “Friendship and the Moral Life” by Paul J. Wadell; what matters is that there is a strong correlation between the two titles, and that this correlation generates income that would not exist without such a system.
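The following is a minimal sketch of the general idea behind item-to-item collaborative filtering, not Amazon’s actual implementation: represent each item by the set of customers who bought it and treat items as similar when those sets overlap (binary cosine similarity). Data and the non-Tolkien titles are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// A toy item-to-item collaborative filter: items are similar when they tend
// to be bought by the same customers (cosine similarity of purchase sets).
public class ItemToItemSketch {
    // item -> set of customer ids who bought it (made-up purchase data)
    static final Map<String, Set<Integer>> PURCHASES = new LinkedHashMap<>();
    static {
        PURCHASES.put("The Lord of the Rings", Set.of(1, 2, 3, 4, 5));
        PURCHASES.put("Friendship and the Moral Life", Set.of(2, 3, 4));
        PURCHASES.put("Cooking for Beginners", Set.of(6, 7));
    }

    // Binary cosine similarity: shared buyers / sqrt(|buyers A| * |buyers B|).
    static double cosine(Set<Integer> a, Set<Integer> b) {
        long common = a.stream().filter(b::contains).count();
        return common / Math.sqrt((double) a.size() * b.size());
    }

    public static void main(String[] args) {
        String target = "The Lord of the Rings";
        // Score every other item against the target; the highest scores
        // become the automated recommendations.
        for (Map.Entry<String, Set<Integer>> e : PURCHASES.entrySet()) {
            if (!e.getKey().equals(target)) {
                System.out.printf("%-32s %.2f%n", e.getKey(),
                        cosine(PURCHASES.get(target), e.getValue()));
            }
        }
    }
}
```

Note that nothing in the computation asks why the sets overlap; the correlation alone drives the recommendation.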

Conclusions

At this time, Big Data is the most abused trend on the market, and as a result the degree of confusion generated by the plethora of opinions encountered at every step (a category from which this article is not excluded) is extremely high, leading to unrealistic expectations and correspondingly deep disappointments. Clarity, however, comes from understanding the potential, adopting the principles (more, messy, correlation) and acting preventively to adapt current systems to the new way of thinking, in terms of computing infrastructure, architecture and the technical competences of those operating them. The stake is identifying new, addressable opportunities to transform data into information that increases the efficiency of a product or a business, as Google did with Flu Trends and Amazon with its automated recommendation system.

Yonder has been accumulating Big Data experience, investing strategically in applied research projects together with product companies that understood the vision we outlined and the benefits such an investment can generate in both the short and the long term; this trend is one of the four technological directions we chose as innovation topics for 2013.

On predictability in software development. How can one make order from chaos, or 5 steps for growing the maturity of an IT company
“A badly planned project will take three times longer than expected; a well-planned project, only twice as long” or “there is never enough time to do things well from the beginning, but there is always enough time to do everything over again properly”: these are just two of the sayings circulating among project managers in the IT industry.

From a project manager’s point of view these could sound truly funny, if they were not so much at odds with what clients expect: “We want predictability from IT managers”, “We want IT estimates with an acceptable deviation”, “We want predictable budgets, costs and deadlines”, and so on.

Under such circumstances, the first natural reaction is to reach for excuses that seem well-founded at first sight, especially since the field offers a host of them, from the idea that software development is a creative process to comparisons with other, more mature industries, with the IT manager praising intellectual work over physical work.

But beyond making fun of a bad situation or adding a few witty words, we must ask in all seriousness: why do we get into such situations, and how can we find a solution that makes us predictable?

What is “Predictability in IT”?

So let’s see what hides behind the phrase “Predictability in IT”. For the sake of example, we will draw an analogy (http://calleam.com/WTPF/wp-content/uploads/articles/What-makes.pdf) with the construction industry (we know some readers might be surprised by it). Suppose a client wishes to build a house: he goes to a team of specialists who give him a price quote and a deadline within a +/- 30% margin of error. In contrast with the construction industry, in IT the majority of customers complain that they cannot obtain such estimates from IT companies. So where does this difference come from? We might think, at first sight, that it lies only in the difference between physical work and intellectual, creative work, which is harder to estimate. In reality, things are somewhat different. The conclusion of the study quoted above is that in construction few decisions are made after the project starts (compared to those made beforehand, and they are made centrally, by only a few people), while in software development many decisions are made, at every level and during every stage of the project.

Hence the need to create a process that describes, broadly, the activities included in an IT project, leads to correct and realistic estimates of both deadlines and costs, and delivers the expected quality of the developed product.

To come back to the IT field: predictability and quality in IT are given by the combination of three elements:

Well-defined development processes, formalised at organisation level.

Well-trained people who execute those processes.

Performant technology and tools used as support for process implementation.

Naturally, there is no universally applicable recipe for growing the maturity of an organisation along the three directions above, but the following five steps are certainly of great help, if not essential.

1. Vision

First, a company needs to commit at policy level to improving along the three directions above. One needs to consider the cost, which is not negligible in terms of both human and financial resources, with the mention that, like a boomerang, the investment will come back, bringing long-term benefits.

2. Transition from personal processes to project processes

Once the vision is set, along with the drive at company level, the second step is unifying the processes at the level of each project. Why, and especially how? We may start from the idea that “the person makes the place”, but some people make a place better than others; in other words, there are cases where experience, expertise and interest in completing a quality project come into play. So why not take the best practices already proven at individual level and extrapolate them to project level, where everyone in the team can benefit from them?

This step, which falls to the project leader, is one of collecting information; it takes a lot of time and requires maturity in order to distinguish what is good, relevant and applicable from what is not. Nevertheless, its benefits show immediately in product quality and, especially, in the team’s productivity.

3. Transition from processes and good project practices to those formalised at organisation level

Even though the previous step brings a series of improvements to the working model, a problem remains: some projects go better than others. There is still no steadiness and predictability at company level. Why? Because some projects are already guided by more performant processes and others are not, which may depend both on the specifics of a project and on the experience of the people involved. Under such conditions, knowledge needs to be shared between projects, and good practices and project processes need to be transferred to the level of the organisation.

4. Formalising the processes at company level, organising a group which would concentrate on process improvement and tools

In this step arises the need to centralise, refine and formalise all processes coming from project level. As a best practice we recommend setting up an internal working group responsible for formalising these processes. The group would also refine the tools that facilitate process implementation. The greatest danger here is ending up with a bureaucratic process that increases costs and impairs the way of working. How can bureaucracy be avoided? With the help of tools that automate process activities.

Examples: automated testing tools, issue tracking tools, automatic code inspection, project management tools, etc. Experience has shown us that, in this stage, it is important to create a database that includes all the good practices identified in steps 1 through 4.

Furthermore, this database (the Company Knowledge Base) must also include the KPI measurements of existing projects. It gathers all the information collected during project development, good practices and measurements, becoming a reference “library” for future projects. Thus, all information remains within the company, can be easily consulted, and future projects are built on proven good practices rather than on individual knowledge.
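To make the idea concrete, a knowledge-base record might pair the practices a project used with the KPIs it actually achieved. A minimal sketch in Java; the field names and figures are invented for illustration:

```java
import java.util.List;
import java.util.Map;

// An illustrative shape for a Company Knowledge Base record: the practices a
// project used, paired with the KPIs it actually achieved, so future projects
// can base their estimates on measured history instead of individual memory.
public class KnowledgeBaseEntry {
    record ProjectRecord(
            String projectName,
            List<String> goodPractices,   // e.g. "code review before merge"
            Map<String, Double> kpis) {   // measured outcomes, by KPI name
    }

    public static void main(String[] args) {
        ProjectRecord record = new ProjectRecord(
                "Invoicing v2",
                List.of("automated regression suite", "two-week iterations"),
                Map.of("estimateDeviation", 0.12,       // 12% over estimate
                       "defectsPerKLoc", 0.8,
                       "velocityPointsPerSprint", 34.0));
        System.out.println(record);
    }
}
```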

5. Share the best practice

The last step is implementing the processes collected at company level back at project level. This phase begins with trainings and workshops, both at project management level and at team level for each project. As a result, processes become uniform across the company. This step also includes adapting the organisational processes to the particular situation of each project (because there is no perfect process that fits them all).

You succeeded, go on

The five steps above can guide a company still relying on individual or project-level processes towards standard processes supported by tools and executed by highly trained people, delivering the much-coveted predictability in IT.

After the company has gone through these steps, it is essential to keep improving the now-standardised way of working. Improvement must be continuous.

Instead of conclusions

In conclusion, the road towards quality improvement and predictability is, for Romanian IT companies, one of the “secrets” that can offer a competitive advantage, especially against the background of price pressure from India and China, but also because of the increased expectations of customers who want quality software products at the estimated prices and deadlines.

I’ve got a .NET product on-premises and I want to move it to Windows Azure. How much will it cost me?
“Show me the money”

A business of hundreds of millions, if not billions
Windows Azure, the cloud computing platform Microsoft launched in 2010, celebrated three years of existence on February 1st, 2013. Public information on revenue exclusively from the Windows Azure business is missing; however, looking at the last quarter (Q2 2013) we notice that the “Server & Tools” division (which Windows Azure is part of) reached revenue of $5.88 billion, 33% more than in Q2 2011, the last quarter in which Microsoft had no offer on the public cloud market. We can assume that part of this growth came from Windows Azure, which already places the product in the big league of the Redmond company’s portfolio.

In a recent statement, Bob Kelly, a Microsoft vice-president responsible for marketing on the Windows Azure team, said that the platform has tens of thousands of subscribers and that their number is growing by a few hundred every day. Even if imprecise, these numbers, together with the division’s turnover growing by more than a third since Windows Azure was introduced, should inspire confidence in those seriously considering moving their products into the cloud. Beyond this, at the end of 2012 Microsoft Dynamics CRM Online became a full citizen of the Windows Azure platform. A direct competitor to Salesforce.com, Dynamics CRM Online has been rated by Gartner as the fastest-growing CRM product, a move that should further reassure those who want to take the step towards the public cloud.

The issue of cost or ROI
The issue of cost, or ROI
Personally, I am a supporter of the move to the cloud and I believe that many of today’s on-premises products will have to find their place at high altitude within the coming 5 to 8 years if they are to survive. Furthermore, for new products, the first question one needs to answer is “Why NOT the cloud?”. This is an established trend, and experience shows that every company I have worked with lately has considered at least some form of cloud when implementing a new product.

Forrester, a research company, estimates that by 2020 the global cloud computing market will reach $160 billion, 83% of which will be SaaS solutions. Regardless of the accuracy of the forecast, one thing is certain: SaaS will have a major impact on ISV companies whose income today is based on selling licences.

The truly difficult question concerns the size of the investment required from a company that already has an on-premises product it wants to move to the cloud and, ultimately, the return on investment (ROI) over the medium and long term. In 2011, Forrester Consulting carried out a study based on the TEI (Total Economic Impact) methodology on six ISV companies that had migrated on-premises solutions to Windows Azure, in order to answer this question.

The companies included in the study develop solutions for energy management, PoS (point of sale) systems, reservation systems and IT infrastructure management; they have between 3 and 24 years of market experience, offices in both the USA and the EU, and before the migration they all had on-premises products sold under a traditional licensing model. Five of the products were developed using Microsoft (.NET) technologies, one using Java. The study followed the migration projects for three years, during which all of them reached the market and generated income.

There are at least five important conclusions to this study:

Up to 80% of the product code could be ported through a simple recompilation. The remaining 20% had to be rewritten or adapted to use the services offered by the Windows Azure platform;

The initial porting, which allowed a first run of the on-premises product in Windows Azure (without PaaS-specific optimisations), took between 8 and 12 man-months;

The production version (which includes optimisations, among them the implementation of SaaS-specific multi-tenant concepts) added between 5 and 24 man-months on top of the initial effort;

The operational costs of hosting the application on Windows Azure varied between $400 and $2,500 per month (the average after three years being $953 per month), a decrease of 70% to 80% compared to the previous hosting costs based on rented servers or own infrastructure (including administration);

The annual growth owed to the SaaS solutions implemented on Windows Azure varied between 20% and 250% in the first 9 to 14 months after launch.

Naturally, there are many variables at play, and the risk is that these numbers get interpreted as if Windows Azure were the key to business success and anyone, regardless of market conditions, could get rich through a mere migration to the cloud. Obviously, such a conclusion would be fundamentally wrong. Nevertheless, the results are relevant: they give a reasonably precise answer to those who have an on-premises product they want to migrate to the cloud. Are they enough to write a business case around? Probably not. But they are a good starting point for some initial calculations and certainly a basis for planning a POC (proof of concept). Before all of this, however, one must check whether the market accepts the idea of a SaaS product and whether such an investment serves business objectives, such as addressing new markets (otherwise hard to reach for geographical, logistical or competitive reasons) or building a competitive advantage based on innovation and operational excellence.

There, I’ve shown you the money
Finally, the Forrester report presents a summary table for the archetypal ISV company representing the six included in the study.

Development costs are based on the average salary of an experienced .NET developer, taken as $85,000, increased by a total cost factor of 1.2 (an overhead multiplier). Granted, for Romania these costs are different. Even though the operational profit is not presented, we may conclude that, at least for the archetype, we are talking about a positive ROI.
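Using only the figures quoted above, a rough, illustrative back-of-the-envelope calculation of the migration effort (this is not the report’s own table, which is not reproduced here) looks like this:

```java
// Back-of-the-envelope migration cost using the study's published figures:
// fully loaded developer cost and the man-month ranges quoted above.
public class MigrationCostSketch {
    public static void main(String[] args) {
        double yearlySalary = 85_000;          // experienced .NET developer
        double loaded = yearlySalary * 1.2;    // overhead multiplier from the study
        double manMonth = loaded / 12;         // fully loaded cost per man-month

        // Initial port: 8-12 man-months; production version adds 5-24 more.
        double minCost = (8 + 5) * manMonth;
        double maxCost = (12 + 24) * manMonth;
        System.out.printf("man-month cost: $%,.0f%n", manMonth);
        System.out.printf("migration cost: $%,.0f to $%,.0f%n", minCost, maxCost);
        // Prints roughly $110,500 to $306,000 for the full migration effort.
    }
}
```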

Even though, for some of the clients I interact with, the fear of stepping into the cloud still outweighs the perceived value of the opportunities it offers, I notice more and more that the ISV companies that made this decision despite a sometimes reticent market have come out ahead. Exact Software, one of the largest ISV companies in the Netherlands, recorded in 2011 a 13% drop in Benelux income from selling software licences, while its SaaS-based solution grew by 46%, bringing in 11.6 million euros compared to 7.9 million in 2010. The same goes for UNIT4, another Dutch ISV that bet early on SaaS and now earns over 9 million dollars from this type of solution.

The road ahead
Windows Azure will continue to grow in the coming period, mainly thanks to the new products that will be built as SaaS on the platform and to the success of the Microsoft products that have recently adopted it. To me, it is a certainty that the future of software products as we know them today is closely tied to the cloud. It remains to be seen how many ISV companies will manage the change, but whatever the outcome, the numbers presented above show that for some, sublimation is already a solution generating growth and profit.

Why is 2013 a good year for software?
In 2013 the software industry will grow by 6%, almost twice as much as in 2012. Beyond that, we are going to witness a turning point in the digital era.

2013. A better year?
A potential military conflict in South-East Asia between China and Japan, escalating tensions in the Middle East caused by the Iranian nuclear programme, the troubles of the euro zone or the Washington administration’s inability to redress the deficit are just a few valid reasons for which triskaidekaphobia (fear of the number 13) might cast a shadow over the opportunities that could make 2013 a better year, at least from a technological point of view.

The year 2013 will be marked by a key moment for technology in general and for the software industry in particular: the internet is becoming mobile. Beyond this, with Google’s help, the phrase “wearable computing” acquires a new dimension, and the internet of things (or the “thingternet”) is credited with an attempt, successful this time, to make an impact at global level.

Communication technology has passed through a few key moments. Starting with the postal service, continuing with the telegraph, then the telephone and recently the internet, humanity witnessed a small industrial revolution with every leap in the series. In 2002 the number of mobile phones exceeded the number of landlines, so instead of phoning places we now contact people, depriving the new generation of the experience of calling the house of their latest crush and having to talk to the suspicious father.

In 2013, 44 years after the launch of its forefather ARPANET, the internet becomes mobile. According to estimates published in a study by Morgan Stanley (an investment bank), the number of mobile devices with internet access (mobile phones and tablets) will exceed the number of PCs. The same study states that by 2016 the ratio will be almost 2:1, marking the moment when the majority of the population connected to the internet will be mobile (at this time the ratio of mobile devices to individuals has not yet reached 1:1).

But what does this mean for the business environment? The repercussions are felt across the entire spectrum. First, the companies that built their offer around the potential dominance of mobile devices stand to gain. Apple leads the pack, orchestrating a strategy that has captured about 75% of the entire profit of the mobile phone industry, despite a market share of less than 10%.

On the other side of the barricade are Microsoft, Dell and HP (whose success was built on the now-declining PC market), Nokia and RIM (owing to their slow migration towards smartphones), and Sony and Nintendo (owing to the move of the gaming experience from consoles to smartphones).

For the vast majority of technology companies, the decision to adopt a mobile-first strategy is still under debate. I recently discussed this topic with one of the most important tour operators in Europe and, paradoxically, even though people are aware of the phenomenon, the lack of vision and the preconceptions about the behaviour of the mobile internet consumer lead to a slow change of market strategy.

Among ISV companies, activity is more feverish than ever. These companies understand that a mobile dimension to the products they offer can secure their success for years to come. The approach either complements the functionality of existing products (on-premises or SaaS) or launches products dedicated exclusively to mobile scenarios. In healthcare we are likely to see applications helping patients follow their treatment correctly; municipalities will be able to interact more easily (and more cheaply) with citizens; and banks will have a new opportunity to present a friendlier face to their customers.

As usual, however, Europe will follow the US path with some delay. In the US, companies built on products riding the mobile internet trend attracted investments of $3.9 billion in the first half of 2012, about 46% of all invested capital, a spectacular increase from only 17% in 2011.

As far as ISV companies are concerned…
According to Truffle, a European investment company that ranks and analyses the top 100 European software companies, the major trends for 2013 remain cloud computing, mobile applications and adaptation to the evolution of web technologies (where standardization and the growing adoption of the JavaScript programming language are driving major changes). In 2013, Yonder will focus its research efforts in these directions, adding the family of big data technologies, given the still unexplored potential of problems involving the speed, volume and increasing variety of data across industries, from healthcare to HORECA.

Conclusions
The IT services and software industry will grow globally in 2013: software consumption by 6% (almost double the 2012 rate) and, though less spectacularly, IT services by about 3%. As a direct consequence of the internet becoming mobile, income from mobile applications will grow by over 50% annually until 2016, according to Juniper Research. Furthermore, Gartner estimates that over 80 billion mobile applications will be downloaded in 2013, almost double compared to 2012.

Without any doubt, mobile internet will be at the core of many companies’ growth strategies. Experience teaches us, however, that even the most obvious moves may take quite a few years to penetrate the market: web interfaces, SOA (Service Oriented Architecture) and SaaS (Software as a Service) are still not on the agenda in the architecture meetings and board meetings of thousands of companies worldwide. At the end of the day, it all comes down to vision and risk-taking, qualities usually reserved for leaders. It is up to each individual to decide on which side of the river they want to be.

Mobile deployment options for the enterprise world
Like it or not, mobility is a central part of our lives. We as a species are designed with mobility in mind: we move each day between our homes and our offices, between our desks and various meeting rooms and project premises, to perform our duties. In evolutionary terms, we are in the information age; we left the industrial revolution long behind, we mostly produce value and wealth based on information, and we are in the middle of the informational revolution.

So what we have been witnessing in recent years, with the evolution of mobile phones, mobile devices like tablets, and the mobile applications that make them useful, is simply natural evolution. As technology evolves, the way we interact with it becomes closer and more natural to the way we interact with the world. We need to be able to communicate and process information no matter where we are and what we are doing.

Certainly we have had mobile phones for quite some time; what changed is the accessibility of Internet and intranet information on mobile devices, and the multitude of mobile applications that gave the mobile world a new dimension.

The traditional lifecycle of a consumer mobile application is that, once procured or developed and provisioned by the company that wants to distribute it, it enters the standard flow of the app store being used, be that Apple iTunes or Google Play (formerly Android Market). In some distribution channels, such as Apple iTunes, the application goes through a review process and, if approved, is distributed to end users. Each new version then goes through the same process.

Apart from the fact that applications are provided through a central store, this model closely resembles how software was distributed in the era of fat-client applications, when each software vendor had its own online store where it sold software that end users downloaded and installed.

Fortunately, we have lots of experience with the traditional distribution model, and by now we know that while it may have been appropriate for consumers, it does not answer all the needs of an enterprise, where additional issues must be addressed.

To cope with these enterprise-specific issues, a whole new concept was created, first with Application Service Providers (ASP) and later with Software as a Service (SaaS), which allowed enterprises all over the world to start using software with no large upfront investment, no maintenance headaches and no outdated versions.

Unfortunately, when it comes to mobile applications in enterprises, we are in most situations still in the pre-ASP/SaaS era.

While end users can decide for themselves which applications to install and upgrade, and how sensitive their information is, businesses are usually stricter about these requirements. Enterprises will most often face at least the following questions:

User authorization and license management. How is user access to applications controlled? What happens when users change roles? If applications are licensed per user, how are licenses enforced and tracked?

Application and data security. How is integrity assured? What happens to the data and to application access if a device is stolen, lost or hacked?

Application distribution. How can the application be distributed to multiple device types? How are policies such as mandatory installs and/or mandatory upgrades handled?

Application and device tracking. How can corporate and legal compliance be established? What is the method for inventorying and auditing the mobile devices?

As can already be seen, the lifecycle of a mobile application dedicated to business use can be quite complicated.

Procurement. Given the display and interaction limitations of mobile devices, mobile applications are most effective when they serve a very specific function rather than the broad range of functions most enterprise software offers. In the end, what is needed is enterprise-class control with consumer app store convenience. While more companies are looking to bring mobile development expertise in-house, many still turn to outside developers to source these specialized applications, which often makes sense considering that most companies deal with a myriad of mobile operating systems and device types.

Provisioning. To run almost any mobile application on a device, the application needs to be signed with a certificate. While on some platforms, like Android, these certificates can be self-signed and generated by the developer, on others, like iOS, certificates are provided by Apple based on a yearly developer or enterprise license. On Android, a development subscription or license is only needed for applications deployed through the mainstream Google Play market; on iOS, any application installed on a device needs to be signed under a provisioning profile.

On iOS there are three kinds of distribution profiles:

Developer / Ad-Hoc. iOS developers enrolled in the Standard program can distribute their application outside of the App Store on up to 100 different devices.

Developer / App Store. Allows applications to be uploaded to the App Store after review by Apple.

Enterprise / Ad-Hoc. iOS developers enrolled in the Enterprise program can distribute their application outside of the App Store, for internal use only and without the 100-device limit. The "Enterprise" provisioning profile exists to allow applications to be distributed internally within a company; enrollment is linked to the company's DUNS number (https://iupdate.dnb.com). The certificate has to be renewed each year, and so do the applications that were signed with it.

Review and Authorization. After a mobile application is sourced or developed, it is subject to an approval process. Depending on the company, this step may require different levels of trials, reviews and sign-offs. For instance, testing with limited deployment to small groups might be a first step. Then the app may need to move through other approval workflows, such as technical, security, financial and legal reviews, before widespread deployment.

Deployment. Once the mobile applications are approved, they must be deployed to the right set of users. If a company is large enough, or if there are many existing mobile applications, this is not a trivial matter. Most companies have multiple departments with differing application needs. Users may need access to applications based not only on their department but also on their role and level within the company. To simplify the management of and access to mobile apps, mobile application management systems should be integrated with a company's existing directory services. As an employee changes roles, their access to applications should be automatically revoked or granted based on that change.

Also, with the multitude of mobile OS platforms and versions, a company needs to make it simple for users to get the right apps for the right devices.
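
To make the directory-driven access described above concrete, here is a minimal sketch; the types and the group-to-app mapping are hypothetical illustrations, not taken from any particular product:

    using System.Collections.Generic;

    // Hypothetical sketch: resolve the apps a user is entitled to from the
    // groups assigned to that user in the corporate directory (e.g. Active
    // Directory). A role change in the directory changes the groups, which
    // automatically grants or revokes application access.
    public class AppEntitlementService
    {
        private readonly IDictionary<string, ISet<string>> appsByGroup;

        public AppEntitlementService(IDictionary<string, ISet<string>> appsByGroup)
        {
            this.appsByGroup = appsByGroup;
        }

        // Returns the union of the apps mapped to each of the user's groups.
        public ISet<string> GetEntitledApps(IEnumerable<string> userGroups)
        {
            var apps = new HashSet<string>();
            foreach (var group in userGroups)
            {
                ISet<string> groupApps;
                if (appsByGroup.TryGetValue(group, out groupApps))
                    apps.UnionWith(groupApps);
            }
            return apps;
        }
    }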

Usage tracking. Companies purchasing enterprise mobile apps want to understand which of their employees have downloaded their applications and which ones haven’t. This is especially true for any required or featured applications within the company. This information can be used both for license tracking and auditing capabilities, as well as to track compliance with corporate application policies. If an employee has not yet downloaded a required application, department managers should have visibility in order to contact that user and ensure compliance.

Updating. Any application that provides a useful service usually evolves, either because business processes change or because the underlying data changes. Application updates must be made available to users as simply as possible. It is also important to understand which versions of the applications are deployed throughout the enterprise. Sometimes it is necessary to push updates to users, such as when security vulnerabilities are discovered. In other cases, older versions of apps become unusable as the backend systems evolve. By setting up a clear and enforceable policy for app versioning in the enterprise, IT's role in distributing and enforcing application updates can be dramatically simplified.

Decommissioning. When an employee leaves, or a device is lost or stolen, it is critical that the company's sensitive application data is not put at risk. IT teams must have the ability to remotely lock and wipe application data. With different security needs for different applications, it is important to have app-level security profiles, independent of any device policies in place.

Now that the non-functional requirements are clear, let's go over the major deployment methods generally available to Android- and iOS-based mobile applications.

There are four major ways to deploy mobile applications on iOS or Android smartphones today: consumer app stores, side loading, web-based applications and MDM/MAM/EAS-based solutions.

Consumer App Stores

The easiest and most cost-effective way to deploy mobile applications is through the dedicated consumer channels: Apple's App Store and Google's Android Market. While the initial investment is very low, these app stores do come at a significant cost. First, their use requires giving up any meaningful control over the release cycle, as companies must wind their way through vendor approvals for the initial submission as well as for all subsequent updates.

If the mobile application provides a generic service and uses a single backend system regardless of which companies or users are using it, then it may make sense to deliver the enterprise application through a regular consumer app store.

Most of the time, however, a mobile application is a front end to a subset of an already existing software solution from a vendor's repertoire, a solution that might have an on-premise or cloud-based deployment.

Having a per-customer deployment of the backend solution, or a dependency on such a backend, poses a serious limitation on a single-app deployment.

There are, however, ways to deal with these scenarios. One would be to create or configure a different application for each customer, each with its own backend address and perhaps its own branding.

If the number of customers is manageable and the number of deployments is thus kept well under control, this can prove a convenient method, especially where deployment costs matter.

If the number of customers could create a logistical problem, an alternative solution is to deliver a single application in the store that configures access to different backend services based on a PIN code mechanism and a central dispatching server.

At installation time or on first start, the application would request a PIN code, which it would use to ask the central dispatching server for the proper backend URL. This allows a single mobile application deployment to work with an indefinite number of backend deployments, in a way that also protects the identity of the corporate customers where that is sensitive.
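
A minimal sketch of the client side of such a mechanism, assuming a purely hypothetical dispatching endpoint that maps PIN codes to backend URLs:

    using System;
    using System.Net;

    // Hypothetical sketch: on first start, exchange the PIN entered by the
    // user for the customer-specific backend URL. The dispatcher address
    // and its response format are assumptions for illustration only.
    public static class BackendDispatcher
    {
        private const string DispatchServer = "https://dispatch.example.com/resolve?pin=";

        public static Uri ResolveBackend(string pinCode)
        {
            using (var client = new WebClient())
            {
                // The server looks up the PIN and answers with the backend
                // URL, without revealing which customer the PIN belongs to.
                string backendUrl = client.DownloadString(DispatchServer + Uri.EscapeDataString(pinCode));
                return new Uri(backendUrl.Trim());
            }
        }
    }

The resolved URL can then be cached locally, so the user is only asked for the PIN once.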

Perhaps the most important aspect is that companies must give up control over the app itself: it becomes publicly available on these consumer application stores, and there are no "private" applications.

These generic app stores provide very limited auditing capabilities, with no ability to see specific users and versions. There is no usage tracking to help developers understand which features are being used, and user feedback is diluted by irrelevant public comments. Critically for security, a public app store offers no way to decommission an app for individual users when they leave the company or misplace a device.

Side Loading

This approach offers significant control and addresses several of the issues with consumer app stores, but it comes at a very high cost and effort. The burdens introduced by side-loading limit the frequency of releases and updates, as every device must be touched every time a new app is installed or updated. While this approach may work for small-group testing, it is not scalable beyond a handful of users. In addition, side-loading fails to provide critical enterprise features and controls, such as app usage monitoring or decommissioning.

Web Based Applications

The technology stack used to build the application can have a major impact on the deployment options.

Under iOS and Android there are easy ways to create shortcuts to web pages or web applications on the smartphone's home screen, just like native applications have, thus providing easy access to these applications in the conventional manner.

In cases where offline mode is not desired or required, a mobile web application has the extra benefit that it can be deployed like a regular web application, eliminating installation problems altogether. Users always get the latest version of the software, with no upgrade hassles.

While web-based applications tend to scale very well and benefit from all the advantages that SaaS or cloud-based applications offer, just like the methods above they fail to provide critical enterprise features and controls, such as app usage monitoring and inventory.

On top of that, on iOS version 4 and below there is a significant performance penalty when a web application is started from the home screen as opposed to the web browser, because the JavaScript engine does not run in its fastest mode. While this issue was addressed in iOS version 5 (http://www.guypo.com/mobile/ios5-top10-performance-changes/), there are still small performance differences between a web page launched in the smartphone's browser and one launched from the home screen.

MDM, MAM, EAS Based Solutions

With each passing day it becomes harder to believe that a policy forcing employees to use a standard corporate device, as a way of controlling access and mobile equipment, would have any success in the future. The whole BYOD (Bring Your Own Device) trend is so widespread in the corporate environment that, according to a 2011 survey, at least half of businesses have had to accept it as a given.

Some organizations, especially those in government and health care, face new legal questions. Who actually needs to own the device? There is no universally clear answer, but there are at least three major directions:

Shared management. The employee owns the device, but in order to access corporate resources from it, the employee grants the company the right to manage the device or parts of it (such as certain applications), and sometimes even to lock and wipe it, even if personal data is lost.

Corporate ownership. The organization buys and owns the device, allowing non-business use of it or not. Employees who don't like the device or the services can carry their own devices, which will not have any corporate access.

Legal transfer. The organization buys the device from the user, keeping control of and managing it just as in the approaches above, while committing to sell it back when the employee leaves the company.

Organizations trying to answer the ownership question, or needing to address the extended lifecycle of enterprise mobile applications, have quite a substantial toolset available from three sometimes confusing market segments:

Mobile Device Management software, also known as MDM

Mobile Application Management software, also known as MAM

Enterprise Application Stores, known as EAS

Mobile Device Management (MDM) solutions are device-centric. The focus is on the device and on what happens to the apps and data on it in case of an event: temporary or permanent access restrictions, data plan restrictions, roaming restrictions, loss or theft. Most MDM solutions include a level of application management, most of the time involving an inventory of the applications on the devices and an enforcement model for installing and removing files and data.

Mobile Application Management (MAM) focuses on the applications and on the users of those applications; therefore MAM supports license management, application updates and complex application lifecycles, and protects end-user privacy. MAM is also linked to the application store, by checking in code during the publishing of the app in the store and by managing application provisioning. MAM enforces policies at the application level: provisioning, deployment, installation rollback, data security and configuration management.

An Enterprise App Store is a company-internal marketplace for browsing, categorizing and installing applications. Strictly speaking, an enterprise app store may or may not provide the functionality of a Mobile Application Management solution. Enterprise App Stores can be provided either as an application installed on the device or as a web service offering a similar catalogue. An Enterprise App Store should allow end users to install applications from a listing of the available ones.

There are many players in the field, each with its own competitive advantage, attacking different zones of the MDM, MAM and EAS segments.

Conclusions

There does not seem to be a universal solution or silver bullet when it comes to deploying mobile applications in the enterprise. The complexity of the deployment depends very much on the technology chosen (native versus web), on the specifics of the application (such as the need for enforceable updates when a new backend version is deployed) and on the specific needs of the customers (such as core requirements around decommissioning management and application usage tracking).

Generic applications deployable through the regular channels (iTunes and the app stores) and mobile web solutions tend to be the easiest to manage, as enterprises that already need and employ an MDM or MAM solution will usually be able to integrate them easily into their existing workflow and track usage if needed.

Custom mobile solutions individualized with a different backend per deployment and per enterprise would probably require investigating and investing in an Enterprise Application Store solution, so that a new version can be easily provisioned, pushed and monitored across multiple enterprise customers that do not yet employ or require any MDM or MAM solution. At the same time, software vendors would need to be prepared to push the same applications through the customers' MDM or MAM solutions where those are in use.

The provisioning rules imposed by some platforms, such as iOS, aggravate the matter considerably: in these circumstances the enterprises would need to be enrolled in the iOS Developer Enterprise Program, and the software vendors would need to re-provision the applications with each customer's certificates at each release and, even worse, at each customer's yearly license renewal.

RavenDB – yet another NoSQL DBMS … or not?
http://innovation.tss-yonder.com/2012/05/25/ravendb-yet-another-nosql-dbms-or-not/
Fri, 25 May 2012 06:56:00 +0000

Nowadays we see more and more non-relational database management systems put forward, especially in the PaaS/IaaS field: DynamoDB and SimpleDB (on Amazon), MongoDB, Apache Cassandra, Microsoft Azure Table Storage, CouchDB and so on. If we look at the .NET world, what alternatives do we have to Azure NoSQL storage? Many of the most popular NoSQL dbms interface with .NET via various means, such as web-oriented APIs, Thrift interfaces, COM interop and LINQ adapters built on top of services, but this article will focus on one that was built natively for the .NET platform in terms of API, deployment and underlying technologies: RavenDB. This dbms drew our attention because it is stated to be transactional, and in the NoSQL world perhaps the biggest challenge is enforcing transactional writes, which in general are not supported.

For the busy ones who only want to scratch the surface of RavenDB, here are the conclusions we have drawn after a hands-on approach: RavenDB is an easy-to-use NoSQL dbms for those familiar with .NET APIs, it fully supports LINQ for both queries and indexes, it scales out easily and, above all, it's ACID. Even though it's open source, verify that the licensing model suits you. If you plan on building an application which:

needs transactional support

has big data as one of its main concerns, hence scale-out capabilities are a must

requires advanced search capabilities

runs on .NET

then it's worth taking a look at RavenDB. The supporting documentation is well structured and you can have it up and running in no time.

For those who want to dive into the details, read on.

Under the hood

RavenDB is a document database management system built on top of Lucene.NET for search and ESENT for storage. Lucene.NET is the port of the famous Apache Lucene text search engine to the .NET platform, and ESENT is Microsoft's Extensible Storage Engine (aka the Jet Blue engine, used in products such as Exchange Server and Active Directory), which is optimized for fast data access (read: an ISAM database engine).

Documents in RavenDB are digested by the dbms as JSON, and binary data is stored as attachments (an attachment can be stored across multiple instances of RavenDB, called shards).

Some of the cool features

Dynamic indexes:

The dbms relies on indexes for serving queries. This means that for every submitted query there has to be an index from which the data can be retrieved. The user can define indexes (called static indexes) and save them on the server, but that's not mandatory: if you omit it, the server creates a temporary index suitable for the submitted query and caches it for you. If that index is used multiple times, it gets promoted to a permanent index. It's still highly recommended that you design the indexes yourself (there is a nice lambda-expression-based interface for building the map and reduce functions), but if you do run ad-hoc queries, you will see their performance improve as they hit the dbms and RavenDB warms up.
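
As a quick illustration, a static index over a hypothetical Article document, defined with the lambda-based API mentioned above, looks roughly like this:

    using System.Linq;
    using Raven.Client.Indexes;

    // Hypothetical document type used only for these sketches.
    public class Article
    {
        public string Id { get; set; }
        public string Title { get; set; }
        public string Region { get; set; }
    }

    // A static index: the Map function tells RavenDB which fields to index.
    public class Articles_ByTitle : AbstractIndexCreationTask<Article>
    {
        public Articles_ByTitle()
        {
            Map = articles => from article in articles
                              select new { article.Title };
        }
    }

Such indexes are typically registered once at application startup (via IndexCreation.CreateIndexes), after which queries can target them by name.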

Unit of work pattern:

When you design your application in an OO manner, you also expect data manipulation to follow the same pattern. Relational databases by their nature don't comply with that, so ORMs fill the gap. In the .NET world relational DBMSs are a common choice, hence ORM support is everywhere. Those ORMs usually follow the unit of work pattern (which relies on the ACIDness of the underlying dbms). With RavenDB you get the same approach without requiring an ORM: you open a work session and, as you interact with the database, documents are cached in memory, changes to those documents are made in memory, and everything is persisted on demand with a single call to the atomic SaveChanges() operation. Identity is also preserved: different calls retrieving the same document return the same instance (per shard).
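
A typical session with the client API of that era, using the hypothetical Article type from the index sketch above:

    using Raven.Client;
    using Raven.Client.Document;

    public static class ArticleEditor
    {
        // Open a unit-of-work session, modify a document in memory and
        // persist all pending changes with one atomic call.
        public static void UpdateTitle()
        {
            using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
            using (IDocumentSession session = store.OpenSession())
            {
                var article = session.Load<Article>("articles/1");
                article.Title = "A better title";

                // Loading the same id again returns the very same instance,
                // so in-memory changes are never lost between calls.
                var same = session.Load<Article>("articles/1");

                session.SaveChanges(); // one atomic write for all pending changes
            }
        }
    }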

Search capabilities:

Since RavenDB delegates search functionality to the Lucene.NET engine, you have the option of supplying one of Lucene.NET's built-in analyzers, which are used when tokenizing the text to be indexed. This lets you benefit from advanced search features out of the box, such as free-text search with an English thesaurus.
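
For example, with the query API of that era one could run a search against the index using raw Lucene syntax; treat this as a rough sketch of that API rather than a verified snippet (the field must be marked as analyzed in the index definition for real free-text matching):

    using System;
    using Raven.Client;

    public static class ArticleSearch
    {
        // Query the hypothetical index from above with a raw Lucene clause.
        public static void Run(IDocumentSession session)
        {
            var hits = session.Advanced
                .LuceneQuery<Article>("Articles/ByTitle")
                .Where("Title:nosql"); // raw Lucene syntax, executed by Lucene.NET

            foreach (var article in hits)
                Console.WriteLine(article.Title);
        }
    }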

Partial document update:

It's often the case that you want to update only part of a document's structure (only a subset of the properties the JSON structure has). RavenDB has the concept of patching, which supports editing a portion of the stored document without having to load the whole document into memory, change it and save it back.
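
A sketch of a patch command using the commands API as we recall it from this version (the document id and value are illustrative):

    using Raven.Abstractions.Data;
    using Raven.Client;
    using Raven.Json.Linq;

    public static class ArticlePatcher
    {
        // Set a single property on a stored document without loading it first.
        public static void Rename(IDocumentStore store)
        {
            store.DatabaseCommands.Patch("articles/1", new[]
            {
                new PatchRequest
                {
                    Type = PatchCommandType.Set,
                    Name = "Title",
                    Value = new RavenJValue("Updated title")
                }
            });
        }
    }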

Deployment model

The database engine can be hosted within your own application, as a separate application hosted in IIS, or as a Windows service. The client talks to the server over HTTP when the server is remote, and through direct calls when the server is embedded in the caller's process.
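
The hosting choice shows up only where the document store is created; the rest of the client code is identical (a sketch based on the client API of the time):

    using Raven.Client;
    using Raven.Client.Document;
    using Raven.Client.Embedded;

    public static class StoreFactory
    {
        // Remote server: the client talks to a separately hosted engine over HTTP.
        public static IDocumentStore CreateRemote()
        {
            return new DocumentStore { Url = "http://localhost:8080" }.Initialize();
        }

        // Embedded server: the engine runs in-process and is called directly.
        public static IDocumentStore CreateEmbedded()
        {
            return new EmbeddableDocumentStore { DataDirectory = "Data" }.Initialize();
        }
    }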

Security

Depending on where you host the RavenDB engine, you can set up the first layer of authentication. At the second level, RavenDB supports authentication using OAuth, which integrates well with the RESTful API, and it also has a plugin for authorization at the document level.

Scaling out

RavenDB is by nature a distributed dbms. This means that your documents can be split across different instances of the dbms, called shards, which run on different machines. Although it supports a feature called autosharding, which is supposed to take the burden of splitting documents across shards off the designer's shoulders, it's recommended that you partition the documents yourself, keeping in mind application-logic factors such as multitenancy (it's better for documents specific to a regional location to go on the same shard) or transactional operations (transactions should affect documents stored on the same shard, in order to avoid MSDTC kicking in, which brings performance concerns).
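
Setting up a sharded store looked roughly like the following; the ShardStrategy details varied between releases, so this is a hedged sketch rather than a recipe:

    using System.Collections.Generic;
    using Raven.Client;
    using Raven.Client.Document;
    using Raven.Client.Shard;

    public static class ShardedStoreFactory
    {
        // Two shards on different machines; documents are routed by region so
        // that regional data (and the transactions touching it) stays on one shard.
        public static IDocumentStore Create()
        {
            var shards = new Dictionary<string, IDocumentStore>
            {
                { "europe",  new DocumentStore { Url = "http://server1:8080" } },
                { "america", new DocumentStore { Url = "http://server2:8080" } },
            };

            var strategy = new ShardStrategy(shards).ShardingOn<Article>(a => a.Region);
            return new ShardedDocumentStore(strategy).Initialize();
        }
    }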

Extensibility

RavenDB's implementation follows a pluggable architecture pattern (built with MEF, the Managed Extensibility Framework), which means that the core engine can be extended with different features, called bundles. In fact, some of RavenDB's core features have been developed as bundles, e.g. Sharding & Replication, Authorization and Cascading Delete.

Cross-platform & vendor lock-in concerns

RavenDB was built for the .NET world, with everything that implies. Although Mono exists for running .NET code on other platforms, MSDTC (Microsoft Distributed Transaction Coordinator) is a prerequisite for the transactional storage of multiple documents across multiple shards (read: across separate machines), hence this will not run outside Windows. So if you opt for RavenDB, it's highly probable that you will be tied to Windows.
RavenDB is open source, but depending on your project you'll have to pay for a suitable license (it's free for OSS projects).

Watchouts

When talking about scalability we often think of cloud deployment. RavenDB was designed for scaling in the .NET world, but it is not ready out of the box for Microsoft's PaaS: additional setup with Azure Cloud Drive is needed to enable RavenDB's persistent storage.
Reporting is also not its strong point, since there is a lack of tooling in this area. Although you can pull out whatever aggregate information you need directly through a query using map-reduce techniques, you'll have to find or build reporting tools that can interface with REST APIs for pulling data.
There ain't no such thing as a free lunch: RavenDB's support for transactions relies on MSDTC, which can raise performance issues for transactions that span different servers. From this point of view, extra attention needs to be paid when designing the model.

Another article will follow, in which we will share insights from one of the experiments we are running with RavenDB in Yonder Labs, so stay tuned.

The future native cross platform UI technology that may not be
http://innovation.tss-yonder.com/2012/05/14/the-future-native-cross-platform-ui-technology-that-may-not-be/
Mon, 14 May 2012 07:24:30 +0000

Everybody is beating the World Wide Web horse when it comes to cross-platform software UI. With so many devices and operating systems on the market today, developers are confronted with a difficult problem: choosing which one to develop their applications for. We've blogged about this before, recommending the web as a safe and future-proof target, especially for enterprise software user interfaces.

The web is not the only answer

It is unquestionable that native software development delivers the best possible user experience for an application. This makes a lot of sense, since a platform vendor such as Apple, Microsoft or Google will tailor its development tools to its ecosystem and capabilities. This is most visible in the iOS app world: popular services such as YouTube and Facebook are generally accessed through dedicated apps, thoroughly optimized and designed for the smartphone or tablet experience. Developers jump through hoops as high as specially designed graphical artwork that goes beyond anything consumers experienced a decade ago, something enabled by a unified system, a visible marketplace and fierce competition.

The web will have a hard time competing quality-wise with native development, and by quality I mean performance and user-experience potential. Hardcore web developers will most likely refuse to admit this, but a web browser is a much more restricted environment than an operating system. It's getting better and better, but with the current browser vendors having completely different interests in the web space, I don't have much hope for fast-paced development of the web. It will get stronger, but if you look at the scope of the HTML5 specification and at current browser support, it really doesn't contain enough to put web apps on par with native ones.

Native means fragmented, but native also means fast

Microsoft is currently the underdog in the modern client platform battle, but their influence is tremendous. The entire IT industry is still dependent on their operating system and office tools. They have an enormous war chest and impressive talent, with a huge and dedicated developer following. Even though they haven't lately been able to attract much attention to platform developments such as Windows Phone 7, they aren't the kind of company that quits a fight after the first lost battle. Keyboards and mice might be a boring thing of the past, but they still haven't been replaced by the so-called "post-PC devices". Let's assume for a moment that tablets are the devices of the future. Even if this were the case, we are so far from using them instead of PCs that the transition is merely beginning. Apple is leading the race, with Android accelerating close behind, but I doubt they are going to remain the dominant platform for long. Software vendors are also very reluctant to get locked into a particular platform, so the chance of yet another monopoly coming into existence is fairly slim.

If you want to see how a native app behaves compared to a web app, open Google Maps in your smartphone's web browser and then try the dedicated app. Repeat the experiment with Facebook. If you had to choose between two software products offering the same functionality, but one has a native client for your platform, which one would you pick?

Cross-platform native rendering. Dream versus reality

Even though native development is desirable, it's not actually easy, and I would dare say not even possible at this moment, to do it in a cross-platform manner. There are efforts such as Appcelerator Titanium that claim to be "native", but the only native part is their runtime, which is based on platform components exposed through a common wrapper API. They fake native apps by running JavaScript code that calls native code; by this logic, web apps are also native, since DOM operations are executed natively by the web browser.

It's especially difficult to claim the cross-platform crown since Windows Phone doesn't even allow native code to be executed. This is going to change with Windows 8, because WinRT, Microsoft's modern application runtime, is going to be exposed to native code as well as managed code. The implications could be spectacular.

With mobile hardware getting more and more capable, running immersive 3D graphics on a handheld device is no longer science fiction; it's reality. The really nice thing is that ALL platforms will be capable of running this kind of application, either through OpenGL ES or some version of DirectX, and although the two are different, game developers have been writing game engines that work with both for a very long time. Let's do an imagination exercise: if high-quality 3D graphics can be rendered in real time on more than one platform, why shouldn't it be possible to render a basic table of information the same way?

iOS supports native apps by default, Android does through the NDK, and Windows 8 will as well. Rather than reusing UI components from the operating system, it is possible to synthesize cross-platform components directly through native accelerated graphics calls, the same way games draw their UI. For device-specific functionality, such as the camera or the phone book, a cross-platform wrapper could be developed, the way PhoneGap does it. The main difference with this approach is that applications would execute natively, with direct and unrestricted access to platform and hardware functionality.

What would a cross-platform development framework look like?

One important challenge is the fact that C++ is currently the only mature and well-supported cross-platform language for native development, which is undesirable to many developers. There are a few contenders, such as D and Go, so a potentially successful native development framework should carefully consider those languages instead of going with the established choice.

This idea is not completely detached from reality. The Qt development framework has already taken some steps in this direction, enabling seamless C++ and JavaScript integration with native UI rendering on multiple platforms. Recent efforts have focused on running Qt applications on iOS and Android, which proves the feasibility of the approach. There are some potential issues with Qt, since Nokia might be in a conflict of interest, and the licensing of Qt for mobile use cases is quite unclear. I believe there is a short window of opportunity in this space, because not everybody is satisfied with the current choices, and a better alternative is not only possible but would offer enormous value to those who want to target multiple platforms while still achieving fast performance, concurrency and a compelling user experience.

Conclusion

I hope this article has opened your mind to alternative future scenarios. These days everybody seems to be talking about the web (myself included), but to be honest, I don't expect graphically rich, professional-level 3D games to be developed in HTML and JavaScript any time soon. And let's not forget that games are not the only type of application ill-suited to running in a web browser; there are many more…

Platform of choice for enterprise UI
http://innovation.tss-yonder.com/2012/05/09/platform-of-choice-for-enterprise-ui/
Wed, 09 May 2012 08:49:12 +0000

Confidently adopting a framework for a new application is a difficult task, especially for the user interface part, with so many mature technology choices available. Desktop PCs are no longer the only devices used in organizations, and employees have growing expectations regarding the accessibility of the business infrastructure. This becomes an important challenge for ISVs in the coming period, as software products get renewed and migrated onto modern technologies. It's also a challenge for the developers here at Yonder, who have to keep up to date with technology and tool trends, since we're in the business of helping ISVs adapt to technical change and innovation.

Software vendors either adapt to innovation or slowly fade out of the market. The user interface has an enormous influence on the perception of value, and many decisions are based on how a piece of software looks and behaves rather than on obscure implementation details that might be important, but which are definitely harder to sell.

The post-PC UI problem

A decade-old software product is mature, stable, flexible and functional. Customers are happy and paying well. However, more and more users of the software would like to access it using their newly bought tablet devices. This is quite difficult, since the application architecture was designed against requirements established more than ten years ago.

You might fake it somehow, using desktop virtualization and a remote terminal connection from a "post-PC device", but at the end of the day the result remains just that: a fake attempt at leveraging the post-PC evolution. While selling fake solutions is possible, a real one would have better chances of success.

Naive solution (let’s develop for Windows)

The first impulse is to pick the dominant platform the same way one picks Windows when developing a desktop application. If it was all right ten years ago, it should be all right today…

Smartphones and tablets are relatively new to the technology game. Smartphones haven't yet settled on a dominant platform the way personal computers settled on Windows. There are two major choices of smartphone platform: Android and iOS. Apple still dominates the tablet market, but according to Gartner that might not be the case for much longer. Things are clearly not how they used to be, and it would be quite risky to execute the strategy of years ago when choosing a platform for an enterprise application's presentation layer today.

Open technologies to the rescue

The World Wide Web is the most obvious approach to cross-system accessibility. Although it was initially designed for linking rich text documents on the Internet, it has slowly evolved the capability to run complex user interfaces, enabled by JavaScript, a great programming language with a not-so-great name. The development of JavaScript and dynamic web pages has been very organic, driven by intense competition during the browser wars (Netscape vs Internet Explorer), which is unfortunately still visible in some aspects of the technology.

Internet Explorer has only recently started catching up with the other browsers, which are faster, more secure and richer in features. This causes a lot of headaches for web developers, who unfortunately still have to support that browser. It also drags down the adoption of the web as the platform of choice, because many developers associate web development with fixing cross-browser issues. Explaining why Internet Explorer has been such a barrier to the development of the web is unproductive and quite unpleasant, so I'll leave it for another time.

Fortunately, the HTML5 and ECMAScript standardization efforts are slowly fixing many incompatibilities and mismatches between browsers, which should speed up web adoption in the enterprise world. With increased performance, security and graphical capabilities (CSS3, Canvas, SVG), the web browser is slowly becoming the platform of choice for everything from conventional community and consumer-oriented applications to mobile web apps. The ecosystem is already mature, with dozens of web libraries and frameworks for building rich application interfaces, and development is more intense than ever.

The challenge

The web makes it possible to address a large number of users through simple accessibility and low barriers, but it doesn't make everything easy. There are sometimes too many ways of accomplishing something with web technologies, and doing it the right way requires comprehensive experience. It took us many man-years of development and iteration to really understand what makes a great web application great, and we plan on sharing some of that knowledge in future posts. Stay tuned…