Saturday, December 24, 2011

As I was driving down to a restaurant with a friend of mine,
we were chatting about a common friend and his new venture in mobile applications. The conversation soon took on a technical flavor, and it turned into
a nice drive down the fast-changing technology lane. Here are some excerpts from
our conversation during the drive.

On why enterprises are in a hurry to port existing
applications to the mobile platform...

Technology is evolving fast, and enterprises will soon
be embracing mobile devices ranging from smartphones to tablets. Every
tech worker owns a smart mobile device of his or her choice. Many of these workers
hold senior positions in the enterprise, are very keen to use their devices to perform
their work, and for that purpose try to influence the IT heads to allow such
devices in the work environment. This is in fact a challenge for CIOs in terms
of information security and confidentiality. But as this trend grows, the
IT heads have no option but to embrace it and start regulating it
with a formal BYOD (Bring Your Own Device) policy and a controls and governance framework
around it.

On how BYOD is relevant in the context of mobile
applications...

Yes, as BYOD gains acceptance, the next
big challenge is getting existing applications working on such devices, so that
employees don't have to be provided with a desktop or even a laptop. This in
turn drives the need to port applications to the mobile platform. Many
tools and methodologies are emerging in this space to facilitate building
mobile applications from the ground up and porting existing legacy applications
to the mobile platform. "Write once, deploy anywhere" is the USP for today's
development tool vendors.

On how legacy applications can be ported...

This is where Service Orientation gains importance.
Business functions are identified and exposed as reusable services, and a
portal application is then built on top of them to present them appropriately for end
user access on a variety of devices. Organizations would also consider embracing
cloud-based SaaS applications to replace their legacy applications. And yes,
migration to the cloud can be a daunting task, but CIOs see a longer-term
benefit in doing so. An alternative shorter-term solution could be to get a
virtual desktop on the mobile device and then work with whatever legacy app runs
on the desktop.
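To make the idea concrete, here is a minimal sketch in Python of wrapping a legacy routine as a reusable, device-neutral service. The `legacy_account_balance` function is entirely made up; it just stands in for whatever existing back-end code you would be exposing:

```python
import json

# Hypothetical legacy routine -- stands in for an existing back-end function.
def legacy_account_balance(account_id):
    balances = {"A-100": 2500.0, "A-101": 130.25}
    return balances[account_id]

# Service facade: wraps the legacy routine behind a JSON-returning business
# service that a portal, desktop client or mobile app can all consume alike.
def balance_service(request):
    try:
        amount = legacy_account_balance(request["account_id"])
        return json.dumps({"status": "ok", "balance": amount})
    except KeyError:
        return json.dumps({"status": "error", "reason": "unknown account"})
```

The point of the facade is that the presentation layer for each device only deals with the service contract, never with the legacy internals.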

About the concerns on cloud...

Yes, there still are certain concerns that keep organizations
away from the cloud. However, this trend is changing. Most organizations have
already moved less critical applications to the public cloud. Just as we have
central / reserve banks regulating the banking industry, it is time for an
industry consortium to come up with an independent regulatory body and framework,
which can help establish trust amongst enterprises and in turn
ease some of the security concerns. While industries like banking and healthcare
have reasons to be cautious about embracing the cloud, other industries are showing
serious signs of doing so.

On the amount of data that banks process and manage and
whether that could be a deterrent for cloud adoption...

Cloud or not, data quality and data maintenance are
going to emerge as critical functions. Dirty data and redundant data have been
identified as having a considerable impact on an organization's profits. Tools
have emerged for assuring data quality, data de-duplication and master data
management. Computing hardware and related technologies like virtualization have
made vertical and horizontal scaling very easy, thereby making the use of
these data-intensive tools a possibility.

We both enjoyed this conversation, and I am sure you will
also enjoy reading it.

Friday, December 16, 2011

As with any typical application development, performance is mostly,
and conveniently, ignored in all phases of the development life cycle. In spite of
being a key non-functional requirement, it mostly remains undocumented. This is
all the more so because the development, test and UAT environments may not really represent
the real-world production usage of the application, so some performance
problems cannot be spotted earlier. Even if the application is put through a load
test, there are certain factors in the production environment, like data growth, user
load, etc., which may lead to performance degradation over a period of time.

While most performance problems can easily be spotted and
resolved, some can be a challenge and may require sleepless nights to resolve.
A structured approach may help address such issues within a reasonably quick
time frame. Here is a step-by-step approach which should work in most cases.

1. Understand the production environment

It is important to understand the
production environment thoroughly, so as to identify the various hardware and networking
resources and the middleware components involved in the application delivery.
In a typical n-tiered application, there could be multiple
appliances and servers through which a request passes and gets
processed before a response is sent back to the user. Also understand which
of these components are capable of collecting logs / metrics or of
being monitored in real time.

2. Understand the specific feedback from the end users

Gather details like who noticed
the performance degradation, in what time frame, and whether it recurs in a
pattern or is simply pulling the system down. Also understand whether the entire
application is slowing down or only some specific application components are not
performing. Try to experience the problem first hand, sitting alongside an
end user, or if possible use appropriate user credentials to experience the
performance issue yourself. The 'who' also matters: in certain circumstances, the
application slowdown may affect only users with a specific role, as the
amount of data to be processed and transmitted may differ based on the user role.

3. Review available logs and metrics

Gather the logs and metrics
data collected by the various hardware and software components and look for information
relevant to the specific application, or more specifically to the
set of requests that demonstrate the performance issue. As logging itself
can be a performance overhead, production systems are often configured to switch
logs off or to collect only minimal logs. If that is the case, configure, or make
the necessary code changes to achieve, an appropriate level of logging, and then try to collect
the required details by re-deploying the application on to a production-equivalent
environment.
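As a simple illustration, a rough sketch like the following can sift the collected logs for slow requests. The log format assumed here (timestamp, URL, response time in milliseconds) is made up; the pattern would need to match what your servers actually emit:

```python
import re

# Assumed log line format: "<date> <time> <url> <response_ms>"
LINE = re.compile(r"^(\S+ \S+) (\S+) (\d+)$")

def slow_requests(log_lines, threshold_ms=2000):
    """Return (timestamp, url, ms) for every request over the threshold."""
    hits = []
    for line in log_lines:
        m = LINE.match(line)
        if m and int(m.group(3)) >= threshold_ms:
            hits.append((m.group(1), m.group(2), int(m.group(3))))
    return hits
```

Even a crude filter like this quickly shows whether the slowness is concentrated in particular URLs or time windows.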

4. Isolate the problem area

This step is very important and
can be very challenging too. Take the help of developers and of performance and
load testing tools to simulate the problem, and in the meanwhile monitor
key measurement data as the request and response pass through the various hardware
and software components.

By analyzing the data gathered
from the application end user or from first-hand experience, together with the
available logs and metrics, try to isolate the issue to a specific hardware or
software component. This is best done step by step, as follows:

a. Trace the request from the UI to the final
destination, which typically is the database.

b. If the request reaches the final
destination, measure the time taken for it to cross the various physical
and logical layers and look for any information that could explain the slowdown.
If a hardware resource is over-utilized, requests may be
queued up or rejected after a timeout. Look for such information in
the logs.

c. Then review the response cycle and try to spot
delays in the return path.

d. Try the elimination technique, whereby the components involved
are cleared of being the bottleneck one after the other, starting from
the bottom.
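The tracing and elimination steps can be sketched roughly as follows. The layers here are just placeholders for whatever tiers your request actually crosses:

```python
import time

def timed(layers, request):
    """Pass the request through each (name, fn) layer, timing each in ms."""
    timings = {}
    payload = request
    for name, fn in layers:
        start = time.perf_counter()
        payload = fn(payload)          # hand the payload to the next tier
        timings[name] = (time.perf_counter() - start) * 1000.0
    return payload, timings
```

With layers like `[("web", ...), ("app", ...), ("db", ...)]`, the slowest tier stands out immediately in the timings, and clearing tiers one by one is then just a matter of swapping a stub in for the suspect layer.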

Experience and expertise in the
application and the infrastructure architecture can come in handy to spot the
problem area quickly. It is possible that there are multiple problems, whether
contributing to the problem on hand or not. This situation may lead to a shifting
focus on different areas, resulting in a longer time to resolve the problem. It is
important to stay focused and keep proceeding in the right direction.

5. Simulate the problem in the Test / UAT environment

Make sure that the findings are
correct by simulating the problem multiple times. This will reveal much more data
and help characterize the problem better.

6. Perform reviews

If the problem area has already
been isolated in any of the steps above, then narrow the scope of the review to
the components involved in that area. If not, the scope of
review is a little wider: look for problem areas in every component involved
in the request-response cycle. Code reviews to debug performance issues require
unique skills. For instance, looping blocks, disk usage and processor-intensive
operations are candidates for a detailed review. Similarly, in the case of a
distributed application, too many back-and-forth calls to different
physical tiers can easily contribute to a performance problem. Good knowledge
of the various third-party components and operating system APIs consumed in the
application may sometimes be helpful.

When the problem is isolated to a
server and the application components seem to have no issues, it is
possible that other services or components running on the server are
loading the server resources, thereby impacting the application being
reviewed. If the problem is isolated to the database server, look for
deadlocks, appropriate indexes, etc. Sometimes, the lack of archival / data retention policies
can result in database tables growing at a much faster pace, leading to
performance degradation.
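As a rough sketch of how unchecked table growth might be flagged from periodic row-count samples (the numbers and the daily budget here are purely illustrative):

```python
def growth_rate(samples):
    """samples: chronological list of (day, row_count); returns rows/day."""
    (d0, c0), (d1, c1) = samples[0], samples[-1]
    return (c1 - c0) / float(d1 - d0)

def needs_archival(samples, rows_per_day_budget):
    """Flag a table whose growth exceeds what the retention policy assumed."""
    return growth_rate(samples) > rows_per_day_budget
```

A periodic job feeding such a check can surface runaway tables long before they show up as query slowdowns.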

7. Identify the root cause

By now one should have identified
the specific application procedure or function that is causing the
problem on hand. Have it validated by doing more simulations and tests in environments
equivalent to production.

8. Come up with a solution

It is not over yet, as root
cause identification should be followed by a solution. Sometimes, the solution
may require a change in the architecture and might have a larger
impact on the entire application. An ideal solution should prevent the problem from
recurring, should not introduce newer problems, and
should require minimal effort. Alternatively, if the ideal solution is not
possible given various constraints, a break-fix solution should be offered so
that the business continues, along with a plan to have the ideal solution
implemented in the longer term.

Hope this is a useful read for those of you in production
support. Feel free to share your thoughts on this subject in the
comments.

Sunday, November 27, 2011

Traditionally, each software application is developed to
maintain and manage identity and related permission information within itself. As more and more such applications get deployed, user provisioning and access control management can
soon become a nightmare. A well-managed Identity Management function within an enterprise
can alleviate the hassles around this and will also enable the enterprise to
better govern identity and resource provisioning activities.

An Identity Management solution comprises the following key functions, in addition to being technically capable of exposing the necessary automation APIs:

Account Provisioning

This is a core function within Identity Management and is where an identity gets created. The following are the typical activities
that need to be performed under this function.

Adding an Identity - includes receiving a request with the required data, performing the necessary verification and obtaining approval from the appropriate authority.

Modifying an Identity - involves changing certain attributes of an identity.

Deleting an Identity - when an identity is no longer associated with the organization, deletion may be required. Deletion may not mean actual deletion; instead it may mean de-activation.

Suspending / Resuming an Identity - usually when employees go on a long vacation, it is appropriate to suspend the identity and resume it again when the employee returns.
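The lifecycle above can be sketched as a small state machine. The states and attribute names here are illustrative, not taken from any particular IAM product:

```python
class Identity:
    """Minimal sketch of the provisioning lifecycle: add, modify,
    suspend/resume, and 'delete' as de-activation."""

    def __init__(self, user_id, attributes, approved_by):
        if not approved_by:
            raise ValueError("identity creation requires an approver")
        self.user_id = user_id
        self.attributes = dict(attributes)
        self.state = "active"

    def modify(self, **changes):
        self.attributes.update(changes)

    def suspend(self):          # e.g. employee goes on long vacation
        self.state = "suspended"

    def resume(self):
        self.state = "active"

    def delete(self):           # de-activation, not physical removal
        self.state = "deactivated"
```

Note how the constructor insists on an approver, mirroring the approval step in the provisioning process, and how "deletion" only flips a state so the audit trail survives.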

Resource Provisioning

An identity, once created, needs to be provisioned to access one
or more services, which could be computing or non-computing resources. For instance, a computing resource could mean access to the payroll
application, and a non-computing resource could mean physical access to the data center.

De-Provisioning

De-provisioning is an equally important function which, if not
done in a timely manner, can put the organization at big risk. For
instance, if an employee who has been granted access to critical systems is
not de-provisioned when he leaves the organization, he could cause potential loss
to the company.

Managing Permissions and Authorization

Provisioning only establishes that an identity may use the target
resource; access has to be further managed by
defining specific privileges like Read, Write and Delete. Similarly, the identity may have
to be granted different permissions for the different sub-functions that the
resource exposes. While a standards-based IAM solution would be extensible, the consuming application may require changes to interact with the IAM solution and make use of the authorization information it exposes.
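A bare-bones sketch of privilege management on top of provisioning might look like this (the users, resources and privilege names are illustrative):

```python
# A grant maps (identity, resource) to a set of privileges.
grants = {}

def grant(user, resource, *privileges):
    """Record that the user holds these privileges on the resource."""
    grants.setdefault((user, resource), set()).update(privileges)

def is_authorized(user, resource, privilege):
    """Authorization check: is this specific privilege held?"""
    return privilege in grants.get((user, resource), set())
```

The key point is the separation: provisioning creates the `(user, resource)` pairing, while authorization checks the specific privilege on every access.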

Governance

With a central identity management solution, it is important
that the related functions are well managed, monitored and audited. This requires defining, implementing and monitoring controls around people, process, and tools & technology.

People – The person performing one or more of the above
functions should be highly trustworthy, and appropriate separation of duties
and responsibilities should be put in place. For
instance, the person approving an identity creation should not be the same
person who creates it. The identity performing these functions should be at an appropriate level that ensures accountability.

Process – Policies and processes need to be defined for each
of the above functions. For instance, identity creation shall specify the
source of data, the required attributes for which data needs to be captured, a
process or methodology to have the identity information verified, and on top of
that an approval process. Typically the approving authority may be different
for different resources, which has to be unambiguously defined. There should
also be a process specifying the monitoring and audit requirements for the
above functions.

Tools &
Technology – Carrying out the above functions will certainly need an
appropriate tool and related technology. A comprehensive enterprise tool may
facilitate carrying out all the required functions, in addition to offering the
necessary APIs for the resources that consume the authentication services. It is important to specify how access to these tools and related infrastructure is protected and governed.

The following are the key control objectives that need to be
defined with respect to each activity performed under Identity Management:

Identification – the
security control process that creates an entity and verifies the credentials of
the individual, which together form a unique identity for authentication and authorization
purposes

Authentication – a security control process that verifies
credentials to support an interaction, transaction, message, or transmission

Authorization – a security control process that grants permissions
by verifying the authenticity of an individual’s identity and permissions to
access specific categories of information or functions exposed by a resource.

Accountability – a security control process that records the
linkage between an action and the identity of the individual or role who has
invoked the action, thus providing an evidence trail for audit or
non-repudiation purposes

Audit – a security control process that examines data
records, actions taken, changes made, and identities/roles invoking actions
which together provide a reconstruction of events for evidence purposes

All the control objectives above serve the requirement to
provide an auditable chain of evidence.

Identity management control processes should have an input,
one or more control activities, an output, feedback, management monitoring, and
an overall audit appraisal activity to ensure that they are fit-for-purpose.
The starting point is an individual who is enrolled into an organization and subsequently
acts in a function or role in the organization. The individual may be an
employee, partner, contractor, or third party. The output is the appropriate
degree of policy enforcement and individual accountability for the business
activity. Within the controls, the threats and vulnerabilities constituting the
business risk must be assessed and addressed. These include business, legal,
and technical aspects.

As with any system, the following are the key non-functional
requirements an Identity Management infrastructure should aim to offer:

Responsiveness and security.

Interoperability with a multitude of systems requiring
identity information.

Support for multiple authentication mechanisms, like two-factor,
biometric, etc.

Interfaces and APIs for automation, which could result in a reduction
in operational costs.

A governance framework would not be complete if it does not define
the measurements that indicate efficiency and effectiveness. The following
are some of the metrics that could be considered:

Password reset volume – A well-managed Identity Management
system is expected to considerably reduce help desk calls for forgotten
passwords. A measure of this activity could be a key metric to
establish that there is a considerable saving in such help desk activities.

Number of distinct credentials per user – With Single Sign-On
implemented, there should be only one distinct credential per user.

Average time taken for each of the identity management
functions could be another useful metric to establish that the investment is
worthwhile.

Sunday, November 20, 2011

The cloud is catching on amongst enterprises. Amidst the
security and other concerns that are still to be addressed, CIOs are seeing
a clear benefit in shifting towards cloud offerings. That means there is
an increasing number of enterprises seriously engaging with cloud-based applications.
This necessitates a model to measure or assess a SaaS application.
Just as we have the Capability Maturity Model to assess a software development shop
as being at a particular level, we need a maturity model to assess SaaS
applications.

Way back in 2006, Microsoft suggested a 4-level maturity
model using scalability, multi-tenancy and configuration to define the various
stages. Level 1 describes an ad-hoc hosted application which
lacks all three of these fundamental characteristics, while at Level 4 a SaaS
application is expected to exhibit all three. This
model has its deficiencies, as it does not consider a few other important
characteristics of a SaaS application, like, for instance, managing releases,
data isolation, etc.

Forrester has come up with a six-level definition of SaaS
maturity. Let us examine each of these levels:

Level 0: Just Outsourcing, not SaaS. This is a typical
scenario where a service provider operates a software installation for a large
customer and cannot leverage this setup for another customer. This is just
outsourcing and not SaaS.

Level 1: Manual ASP (Application Service Provider). In this
case, the service provider has established a unique skill in operating a similar
service for multiple customers, but each client has a dedicated instance, and
each instance is manually customized by the service provider to the needs of
the customer.

Level 2: Industrial ASP, still not SaaS. The Application
Service Provider uses techniques to package and deploy the application with
different configurations for different customers. In this case, the customer
still does not have the ability to customize their instance of the application.

Level 3: Single-app SaaS. This is when the provider is able
to offer the application as a service to multiple customers out of a single
packaged application. This is the initial level of SaaS, wherein the
application demonstrates some of the basic characteristics of Software as a
Service. At this level, the provider deploys the packaged application on a
scalable infrastructure and shares a single instance among multiple customers,
with customization limited to configuration.
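A toy sketch of what "customization limited to configuration" means in practice; the tenants and settings are of course made up:

```python
# One shared code path; per-tenant behaviour comes only from configuration.
TENANT_CONFIG = {
    "acme":   {"theme": "blue",  "currency": "USD", "modules": ["billing"]},
    "globex": {"theme": "green", "currency": "EUR", "modules": ["billing", "crm"]},
}

def render_dashboard(tenant_id):
    cfg = TENANT_CONFIG[tenant_id]   # same application logic for every tenant
    return "[%s] dashboard in %s showing %s" % (
        cfg["theme"], cfg["currency"], ", ".join(cfg["modules"]))
```

Every tenant runs through identical code; adding a tenant means adding a configuration entry, not a deployment, which is precisely what distinguishes Level 3 from the ASP levels below it.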

Level 4: Business Domain SaaS. At this level, the provider
offers a well-defined business application along with a host of packaged
application modules or third-party packages, with which the customer can
extend the business logic of the application.

Level 5: Dynamic Business Apps as a Service. This level is a
visionary target wherein the provider offers a comprehensive application and
integration platform on demand. With this ability, the provider can compose
tenant-specific and even user-specific business applications or services.

In line with the popular CMMI for Development, the SEI has
presented CMMI for Services, packing in some service-specific process
areas in addition to the typical development-related practice areas. This, however,
is used to assess an organization offering services, as opposed to assessing a SaaS
product.

As of now, none of these models is popular amongst the major SaaS vendors, maybe because there is not enough competition in the SaaS space. Once major players compete in the SaaS space, customers will for sure find a way to assess service maturity, and that could be the way forward.

Wednesday, October 26, 2011

Code quality is one of the issues most often talked about in
project status meetings. The magnitude of this issue is bigger in the case of
maintenance products, where end users are encountering defects. All the
members of the project team know very well that code review, if done well during
the build phase, can reduce code quality issues by a
significant percentage. The design and build process manual clearly calls
out code review as an exit criterion for the code to move on to QA for
testing. But still this issue surfaces every now and then.

The question that comes up is whether code review was done
at all, or done just for the sake of process compliance. The possible reasons for reluctance among developers to do a peer review are: lack of reviewing skills, the review findings being used against the developer for the reviewer's personal advantage, and the developer's inferiority complex. The most common reason cited by developers, though, is lack of time.

As all of us are very clear on the benefits that code review brings to the table, let us not try to list out and discuss the benefits.
Let us instead attempt to list the required skills of a code reviewer.

Subject matter (domain) expertise: Though most developers are not expected to be
domain experts, this skill will certainly be required if the code review is
expected to prevent functional defects from slipping into the next phase. The
very fact that developers need not have domain expertise could
mean that a developer has not understood a requirement as it was
intended, resulting in the injection of defects. A misinterpretation of the requirements could result in a functional
defect, and there is a chance to spot it if the reviewer possesses the domain
expertise. Some production defects can be unique and may not be reproducible, and in such cases code review is the recourse for troubleshooting. Done well during the build phase, it could have prevented such defects from surfacing in production at a later stage.

Technical skills: The reviewer should be an expert in the
technology and the programming language used. In software programming, code
can be written in innumerable ways for a given requirement. However, given
the standards and practices the team is expected to follow, the various quality
attributes identified for the project and the goals set for the specific
review, the reviewer should possess an appropriate level of knowledge to spot problem
areas. It is important for the reviewer to know the internal subsystems and the interdependencies on various local and external computing resources.

Positive attitude: This is a very important quality, and the
reviewer should never use the review findings against the developer as an individual. The issues should be treated as team issues. The reviewer
should acknowledge the capabilities of the developer, and the developer may have
valid reasons for having written code in a particular way. At the end of the
review, it is a good idea to discuss a summary of the findings with the
entire team, as some findings could be good learning for other team members.
The organizational standards and practices should also be revisited and revised
if necessary based on the nature of the findings. This may also result in
the identification of certain training needs for the team.

Team skills: Both the developer and the reviewer should have the
common objective of producing quality code out of the build phase, and they must
work as a team to achieve the best results. If not, the code review may happen just
for name's sake or may lead to personality issues, which in turn would affect the
project deliverables.

Attention to detail: This is an important skill a reviewer
should possess to carry out effective reviews. It is only human to
miss certain blocks of code, as parts of them may appear to be correct. Unit testing is not a substitute for code review. For each line of code, the
reviewer should ask questions like: what if this
statement fails to execute? Is there a better way to achieve the same
action? Could this lead to a potential performance issue? Does this
statement require more system resources, like memory, CPU time, etc.?
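As a small illustration of the kind of finding such a line-by-line review can surface, consider a repeated call inside a loop that the reviewer suggests hoisting out. The functions here are hypothetical:

```python
def total_price_original(items, tax_rate_lookup):
    total = 0.0
    for item in items:
        # Reviewer's question: does this lookup really need to run per item?
        total += item["price"] * (1 + tax_rate_lookup())
    return total

def total_price_reviewed(items, tax_rate_lookup):
    rate = 1 + tax_rate_lookup()   # hoisted: one call instead of one per item
    return sum(item["price"] * rate for item in items)
```

Both versions produce the same result, but if the lookup hits a database or a remote service, the reviewed version turns N calls into one, exactly the kind of issue a "what does this line cost?" mindset catches.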

Knowledge of tools: In addition to the above, code reviews
with certain specific goals will require knowledge of, and expertise in,
appropriate debugging and diagnostic tools.

One of the key metrics in the software engineering space is
the defect injection ratio. This helps identify the phase that has injected the
most defects. Many times, the stakeholders think that it is the
developers who inject defects into the delivered software. In reality, however,
many defects are injected in the
requirements and design phases as well. Code review, when rightly used,
helps the development team not only keep the defects injected in the build
phase under control, but also stop requirements and design phase defects from
slipping into the next phase.

Monday, October 24, 2011

I keep getting questions from some of my friends about what
the Solution Architect role is all about and whether they would be a fit for that
role. For the benefit of everyone out there, I thought of putting together my
thoughts on the role of the Solution Architect. Let us first examine what is
expected of this role and then look at the skills needed to be in it.

As the title indicates, the role is expected to
bring solutions to varying business problems, most of which could be a product
or project in themselves. But, as you know, it is always a challenge to come up
with the best solution, as it is intangible and there are many quality
attributes which are never completely identified and specified. There would be
lots of missing links in the areas of business domain, choice of technology,
hardware components, business processes, future domain and technology trends,
etc., which the Solution Architect should
be able to connect to come up with a solution that lasts long enough for
the organization to reap the return on investing in it.

Some of the key characteristics of a good software solution
are:

Longevity: While
the solution must solve the current business problem, it should also be reliable,
usable, secure and future proof. This means that the Solution Architect
should consider the industry and technology trends that could have an impact on
the problem and solution in the near and longer term.

Trade-offs: The
challenge with the various, mostly unspecified, quality attributes is that they
are interdependent, and meeting one of them may well mean compromising
on another. Obviously, a lot of trade-offs have to be made between the various quality
attributes, and such trade-offs should be justifiable in the context of the perceived
benefits for the organization. For instance, performance may have to be
compromised to achieve better security. The trade-offs have to be made carefully
after considering various factors, like the risk appetite of the organization, the target
users of the solution, the technology platform, the current IT investments of the
organization, etc.

Implementation view:
It is important that the solution be devised with the intended
deployment view in consideration. Without that, the solution
as designed and built may call for massive changes to the infrastructure
investments, which could be a total surprise for stakeholders. Such surprises
emerging towards the closing stages of the project could increase the cost
manifold or delay the project further.

The above is not an exhaustive list. There are many other
factors that will have to be given due consideration before coming up with the
best solution. Above all, the Solution Architect should be able to see that the
solution is successfully implemented and put to use. That means a lot of
work in terms of convincing the stakeholders as to why this solution and not an
alternative, hand-holding the design and development team, and also, to some
extent, the end users, so it is implemented the way it was intended.

With all the above, let us now try to identify the essential
skills of an aspiring Solution Architect:

Domain skills: A
thorough understanding of the business domain is required, first to understand
the problem better and second to know the potential future needs that may
emerge along the same lines as the problem space. It is also important that the
person has the ability to learn fast, as in most cases there won't be
lead time to gain the appropriate business skills. In that sense, the Solution
Architect should also be a Business Analyst.

Technical skills:
A thorough understanding of the technology currently in use in the
organization, the technology in use in similar industry domains, and
the emerging technology trends. This knowledge is essential to ensure
that the solution does not become obsolete soon and that the organization is in
a position to stay ahead of the competition in terms of IT-enabled capabilities. At
the same time, applying a new technology early in its evolution has its own
issues, and it is often better to wait for the technology to evolve and mature as
more and more organizations adopt it. It is important for a Solution
Architect to closely follow technology trends and gather enough knowledge to understand
what could be the best fit for solving the various business problems on hand. He
should have enough understanding of the chosen technology that his team
(mostly himself) can come up with a prototype to establish that the solution
really solves the problem. As the solution goes further down the implementation
lane, the Solution Architect should be able to demonstrate hands-on skills, so
that he can command expertise and be the go-to person for the resolution of
issues.

Team skills: Though the Solution Architect will mostly be an individual performer, some organizations have dedicated teams to assist the Architects. Even when acting as an individual performer, the solution is implemented by a project team. So the Solution Architect needs to be a team player and should, with his domain and technical expertise, lead the team by example.

Process / Project Management skills: Needless to say, Solution Architects have to have Project Management skills too, as one may have to manage the pre-solution activities as a project. For that purpose, he has to be familiar with the relevant processes as well.

That means the Solution Architect should be an all-rounder with moderate to expert level skills in all these areas. On top of these skills, one has to understand that solutioning is not just a science but also an art, mastered over years of experience across many projects involving various technologies and domains.

There could be different views on this and comments or
opinions are welcome.

Friday, October 21, 2011

It is good to note that electronic delivery of public services is proposed to be mandated in India. The Ministry of Information Technology has published a draft Electronic Service Delivery Bill, as per which every competent authority of the appropriate government shall publish a schedule for delivering public services in electronic mode. It also requires that all public services in India be delivered in electronic mode within 5 years from the date of commencement of the bill. The bill provides for extension of this term by another 3 years, provided it is supported by valid reasons. That means, within eight years from now, all public services in India should be delivered online. The draft bill can be downloaded from the Ministry of Information Technology website.

The bill is likely to be placed before Cabinet soon. Check out this news brief on Business Line.

Tuesday, October 11, 2011

For those who are not familiar with the term BYOD, it stands for "Bring Your Own Device": employees use their own devices to achieve their work goals, be it within the company or anywhere else. A simple example is an employee using his own iPad to access his corporate email, or connecting any other wi-fi enabled device to the corporate wi-fi network to perform certain work related tasks. This has long been the practice with education and training companies, where students / participants are expected to use their own devices, subject to meeting the required minimal hardware and software specifications. Thanks to the last recession and the recent explosion of smart personal gadgets, companies are increasingly considering allowing this.

The factors that drive BYOD amongst corporates are:

Increased Productivity – Employees are expected to be happy working on their favourite devices, and that in turn is likely to bring in increased productivity.

Better Mobility – Organizations with a mobile workforce, which typically works on the move, feel that BYOD could offer better mobility and flexibility.

Cost Savings – Though this may not be a real benefit, as organizations may end up spending considerably on mitigating the risks that BYOD brings on board, it is still considered a factor driving the increased adoption.

Influence from senior executives – Typically, when a senior executive buys the latest gadget and starts using it in the workplace for work, the demand to support such devices grows.

Decreasing client installs – With increased adoption of Cloud based applications, all that a user needs to access an enterprise application is a compatible web browser, and this favours BYOD.

Certainly BYOD brings a lot of challenges to the IT department; here are some of the key ones:

Support – The IT department has to start supporting varying makes and models of smart gadgets running different operating systems and web browsers. Unless the IT department comes up with a list of gadgets that it can support, this could soon become a nightmare.

Licensing – If certain third party components are to be installed on the smart devices, it is better to have the licensing terms of the component vendor verified, as some vendors may impose restrictions on installing such components on devices other than those owned by the organization.

Network and Application Security – When employees use organization provided devices, those devices are appropriately hardened in line with the security policies of the organization. In case of BYOD, however, employees would surely not like to have their devices hardened for work use; instead, they would like to be the administrators of their own devices and play with them in whatever way they want. Employees may even go ahead and install more and more mobile apps of their choice, some of which could be malware.

Data Security – Whatever data is cached or stored on the gadgets as they are used for work can easily be compromised.

For sure, this is yet another challenge that the IT managers
should be ready to face soon, if not now.

Saturday, September 10, 2011

The evolution of Cloud Computing has paved the way for enterprises to look at subscribing to SaaS applications as against licensing an application for exclusive use, the primary benefit being the cost savings due to centralization. As more and more enterprises look to the Cloud and the SaaS model for their application needs, product companies are exploring options to enhance their existing products so that they can be offered on the SaaS model. It is important to understand the key characteristics of SaaS applications before planning for the conversion.

While accessibility over the web is an important characteristic, the following characteristics are also important to look at:

1. Multi-tenancy

Typically all applications support multiple users, but a SaaS application should support multiple users from different organizations. This means there should be a mechanism to identify and appropriately segregate the users of each organization; that is, the application should support multiple tenants. Tenants would also want their data isolated and not mixed up with that of other tenants. At the very least, the SaaS application should be able to uniquely associate each and every data record with a tenant.
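As a minimal sketch of this idea (the class and field names here are purely illustrative, not from any specific product), the simplest shared-schema approach stamps every record with a tenant identifier and filters every query on it:

```python
class TenantDataStore:
    """Shared store where every record carries a tenant_id for isolation."""

    def __init__(self):
        self._records = []  # one "table" shared by all tenants

    def insert(self, tenant_id, record):
        # Every record is stamped with the owning tenant.
        self._records.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Queries are always filtered by tenant, so one tenant
        # can never see another tenant's rows.
        return [r for r in self._records if r["tenant_id"] == tenant_id]


store = TenantDataStore()
store.insert("acme", {"invoice": 101})
store.insert("globex", {"invoice": 201})
print(store.query("acme"))  # only acme's records come back
```

In a real database this translates to a tenant_id column on every table and a mandatory tenant filter on every query; stronger isolation options include a schema or a database per tenant.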

2. Subscription and billing mechanism

Organizations are embracing SaaS applications on the premise that they will pay far less, based on one or more parameters that measure the usage by the specific tenant. For instance, a SaaS application may be priced based on the number of users, or on subscription to and use of specific modules / features. Sometimes the pricing may be even more complex, based on transaction volume or a combination of such measures. So the application should be capable of tracking and logging these parameters so that billing can be automated.
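A rough sketch of such metering (the metric names and rates below are hypothetical examples, not a recommended price list): the application logs billable events per tenant, and the bill is computed from the metered quantities and per-unit rates.

```python
from collections import defaultdict

# Hypothetical per-unit rates; a real product would load these
# from each tenant's subscription plan.
RATES = {"active_user": 5.0, "transaction": 0.02}

class UsageMeter:
    """Logs billable usage per tenant so invoicing can be automated."""

    def __init__(self):
        # tenant_id -> metric -> accumulated quantity
        self._usage = defaultdict(lambda: defaultdict(int))

    def record(self, tenant_id, metric, quantity=1):
        self._usage[tenant_id][metric] += quantity

    def bill(self, tenant_id):
        # Charge = sum over metrics of metered quantity times unit rate.
        return sum(qty * RATES[m] for m, qty in self._usage[tenant_id].items())


meter = UsageMeter()
meter.record("acme", "active_user", 10)
meter.record("acme", "transaction", 500)
print(meter.bill("acme"))  # 10*5.0 + 500*0.02 = 60.0
```

The key point is that metering is built into the application itself; bolting usage tracking onto a product that never logged per-tenant activity is one of the harder parts of a SaaS conversion.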

3. Scalability

A typical web application is hosted on a separate instance owned and exclusively used by a specific tenant, whereas in case of a SaaS application, the provider owns the hosted instance, which is used by all the tenants. Though the provider has the option to host a separate instance for each tenant, the economy of scale is at its best when a single instance serves multiple tenants. Depending on the application's features and its reach amongst potential customers, the customer base could grow fast, and the application should be scalable both horizontally and vertically to support unexpected growth in volume.

4. Manageability

The tenants should have the ability to manage their part of the application, including managing users, roles, permissions, etc. As the subscription base grows, it would be ideal to leave this administration to the tenants themselves, which requires the application to have the necessary features / functions for use by the tenants.
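To illustrate the idea (again with hypothetical names), each tenant gets its own administration scope where its admins define roles and assign users, and the application checks permissions only within that scope:

```python
class TenantAdmin:
    """Per-tenant administration: each tenant manages its own
    users, roles and permissions without provider involvement."""

    def __init__(self):
        self._roles = {}  # role name -> set of permissions
        self._users = {}  # user name -> role name

    def define_role(self, role, permissions):
        self._roles[role] = set(permissions)

    def assign(self, user, role):
        self._users[user] = role

    def allowed(self, user, permission):
        # A user is allowed only what his role grants within this tenant.
        role = self._users.get(user)
        return role is not None and permission in self._roles.get(role, set())


# One TenantAdmin instance per tenant keeps administration delegated.
acme = TenantAdmin()
acme.define_role("manager", {"approve", "view"})
acme.assign("alice", "manager")
print(acme.allowed("alice", "approve"))  # True
```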

5. Self service sign-up

While self service sign-up is not a key characteristic, it is highly desirable when the customer base is expected to grow fast. Similarly, onboarding a customer may involve data migration from an application the tenant used before; the SaaS application should expose appropriate interfaces / APIs to facilitate this migration. It would also be desirable to expose APIs that let tenants export / back up their data themselves.

6. Tenant specific customization

Typically, product companies undertake to customize an application to meet the specific needs of a customer by enhancing the application. This would not work for a SaaS application, as all tenants typically use the same version of the application. That means the application should be highly customizable, so that it satisfies the specific needs of all the tenants. In a large scale SaaS application, this is achieved by allowing the tenants themselves to extend the application by defining and deploying tenant specific screens and scripts.
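One common way to achieve this without forking the code base is configuration-driven customization: a shared base configuration plus per-tenant overrides, merged at runtime. A minimal sketch, with purely illustrative settings:

```python
# Base product configuration shared by all tenants (illustrative fields).
BASE_CONFIG = {"theme": "default", "fields": ["name", "email"], "locale": "en"}

# Each tenant stores only the settings it overrides.
TENANT_OVERRIDES = {
    "acme":   {"theme": "dark", "fields": ["name", "email", "cost_centre"]},
    "globex": {"locale": "de"},
}

def config_for(tenant_id):
    """Merge a tenant's overrides onto the shared base configuration,
    so every tenant runs the same code but sees a customized application."""
    cfg = dict(BASE_CONFIG)
    cfg.update(TENANT_OVERRIDES.get(tenant_id, {}))
    return cfg

print(config_for("acme")["theme"])     # dark
print(config_for("globex")["fields"])  # base fields, untouched
```

The same pattern extends to tenant-defined screens and scripts: the application stores the tenant's definitions as data and interprets them at runtime, instead of shipping a different build per customer.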

That is not all. There are other characteristics too and some of them could be key depending on the nature and demands of the industry and the providers. Please feel free to share your thoughts.

Here are some useful reference links that deal with the SaaS application challenges and characteristics.

Saturday, September 3, 2011

With the evolution of smart phones and tablets, the survival of the Personal Computer could be under threat. Let us examine if there is anything that only a PC can do.

Thick Client Applications: We have started seeing an increasing number of applications moving to the cloud, and one just needs a browser, and maybe an appropriate plug-in, to run a cloud application. Even heavyweight applications like ERP suites and Business Intelligence suites are now being offered over the cloud. A few years from now, I don't think there will be any compelling need to use a thick client application.

User Convenience: Yes, a bigger monitor and a regular keyboard with a mouse really are convenient to work with on a PC. But do we need a PC just to have a bigger display and keyboard? Not really; some of today's smart phones are dockable onto a device which facilitates connecting to a bigger display and keyboard.

Higher Computing Power: When applications are served out of the Cloud, much of the processing happens elsewhere, on server(s) located in the cloud, and not much power is required on the client device. That is not all; a few years from now, smart phones / tablets will equally sport high end processors, even with multiple cores.

Extreme Gaming: Most of the popular games today are online games. Gamers also prefer online games, which connect buddies from all over the globe to join and play together. More so because the gaming service providers gain more in the form of Ad revenues from online games than from thick client games. Above all, there are special purpose gaming consoles in the market for extreme gaming.

Enterprise Computing: While it could be ideal to go with enterprise owned, secured and locked down personal computers to access and process enterprise information, that does not mean they have to be PCs. Even now, most enterprises encourage their employees to work from home on a Laptop, thereby saving energy costs at the physical location as well as commuting time for the employee. The evolving tablets could easily replace the Laptops.

Research firm Gartner slashed its growth forecast for the global PC market this year to 3.8 percent from 9.3 percent, citing the boom in media tablets.

You name one thing, and we can think of how tomorrow's personal gadgets could address it. Would like to hear from you on this trend.

Friday, September 2, 2011

Long ago, I was seriously preparing a process document on my Laptop, typing in the text as it flowed from my mind, without looking at the screen. When I looked up to see how it was coming along, I was surprised to find the sentences scrambled here and there. I then started observing what was happening while typing, and found that the typing position suddenly changes to an unexpected location, and the key strokes produce characters at an unwanted location! For a while I thought this could be a virus or malware problem, or maybe a problem with Microsoft Word.

But it did not take much time for me to figure out that this was the 'tap to click' feature of the touch pad. As you keep typing, your thumb or another finger taps on the touch pad surface and, as a result, the typing position shifts to wherever the mouse cursor was at that time. From then on, I have had it included in my Laptop build document to have the tap feature disabled. Maybe this feature is useful for some, but for me it is a hindrance. There are similar issues with the pointing stick, which is positioned amidst the keys; if you have it enabled, the chances of tapping on it are even higher. Share your experiences with this feature.

Friday, April 8, 2011

Interesting to note that the cars of the next decade will be one's smart terminals and will allow the drivers and / or passengers to interface with various devices at home or office and transact. This means travelling need not be a waste of time; one can be at work while driving!

Both cars and humans can be embedded with devices and technology that will authenticate the driver / passengers based on one or more personal traits before allowing them to drive the car. We have seen some such capabilities in science fiction movies, and they could become a reality in the coming decades. The possibilities are endless!

Wednesday, February 16, 2011

For the sustained success of an IT services organization, it is important to have a high performance workforce backing the leaders. A quick peek into various resources on what a high performance workforce is suggests that the following are the three fundamental building blocks for setting one up:

1. Accountability for Right Results, which requires the employees to have the ability to focus on the right priorities and in turn achieve the right results at the right time.

2. Earn Trust, which requires continuous mentoring and recognition of the people whom the organization depends on, so that they feel valued, confident and ready to give their best.

3. Talent Development, a continuous skill assessment and development program with which the workforce always stays on the edge of the needed skills and is ready to tap the opportunities that come its way.