Thursday, March 08, 2012

Before Marrying That Cloud
Provider, Be Sure It Isn’t a Bridezilla

When it comes to the cloud, selecting the right provider can be challenging
at best and nightmarish at worst. The cloud covers a variety of ways to
deliver technology services, from standalone software applications like
hosted email or databases to newer offerings such as disaster recovery as a
service (DRaaS), storage as a service, and platform as a service (PaaS).
While there are obvious advantages to entering into a cloud partnership — including decreased IT costs, increased flexibility and a
reduced focus on IT maintenance issues — selecting the
right cloud provider is a critical first step in realizing those benefits in a
way that creates a synergistic, long-term and fulfilling relationship.

That’s why I have put
together a list of the top questions IT pros should ask when they’re
considering a cloud partner for the long haul.

Is the cloud infrastructure
backed by at least a 99.9 percent uptime SLA, and is there financial
compensation for not meeting it?

Is there a defined change
control process for scheduled maintenance, upgrades and security patches?

Are there additional
charges for services like backup, OS licenses, security patch deployment and
system management/monitoring?

What type of security and
monitoring practices are in place at the data center (i.e., firewalls, IDS,
vulnerability scanning, etc.)?

Does the cloud provider
allow for easy scalability and addition of resources, including CPU, memory,
storage and bandwidth?

What kind of disaster
recovery services does the provider offer?
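On the uptime question, it helps to translate SLA percentages into minutes; 99.9 percent still leaves room for real outages. A quick sketch of the arithmetic (plain math, no provider-specific assumptions):

```python
# Translate an SLA percentage into allowed downtime, assuming a 30-day month.

def allowed_downtime_minutes(sla_percent, period_days):
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.9, 99.99, 99.999):
    minutes = allowed_downtime_minutes(sla, 30)
    print(f"{sla}% SLA allows about {minutes:.1f} minutes of downtime per month")
```

At 99.9 percent, roughly 43 minutes of downtime a month is still within the SLA, which is why the financial-compensation clause matters.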

“For a CIO or IT manager, choosing a cloud provider can feel like choosing a
spouse. Joking aside, the
relationship between the client organization and the cloud provider they choose
is a deep and meaningful one. The client trusts the cloud provider with what
oftentimes amounts to mission-critical data and information and relies on that
provider to keep that information secure and available in sometimes heavily
regulated environments. Therefore, choosing the right provider is a serious
decision that has to be made with the utmost care and concern.”

Like most things in life, cloud computing is a double-edged sword. It can either
eliminate internal IT or be used by IT to drive business innovation.

Speaking at the IBM Pulse 2012 conference today, Danny Sabbah, general manager for IBM Tivoli, strongly urged
IT organizations to be a lot more proactive about pursuing the latter path.
Clearly, many line-of-business executives are taking advantage of cloud
computing to sidestep IT. What internal IT organizations need to do in order to
make it less appealing for business executives to go around IT is to simplify,
standardize and then automate their IT environments.

According to Sabbah, it’s all the complexity within IT environments today that
is making it hard for IT organizations to dynamically respond to the needs of
the business. Instead of relying on a “tools du jour” strategy, Sabbah says
that IT organizations need to implement cloud computing in a way that
accelerates business innovation, or they risk being bypassed altogether.

Sabbah says that cloud computing done right should make the business more
resilient to rapidly changing business conditions, while at the same time
providing more choice and flexibility across a hybrid cloud computing
environment. Furthermore, Sabbah says that those cloud computing platforms
should not only be application workload-aware, but also come
with built-in security and analytics capabilities.

As IT has become more complex, the business has actually become more
susceptible to disruptions because of any number of IT issues, adds Sabbah. Not
only has the rise of mobile computing exponentially increased the number of end
points that need to be managed, Sabbah says there are roughly 13 billion
security events a day that need to be analyzed. And as physical systems become
more instrumented, the amount of data that needs to be managed is increasing at
rates that no IT organization can keep pace with.

None of these issues are ever going to go away. IT leaders need to figure out
how emerging technologies will ultimately better serve the needs of the
business. Otherwise, it’s only a matter of time before somebody else, for
better or worse, makes that decision for them.

Tuesday, February 14, 2012

Backup strategies are like rashes: people tend to ignore them for as long as
they can, then they ask a specialist if it’s going to be OK.

There are a lot of good ways to back up databases. The challenge is finding
the best strategy for your data. To find out whether your SQL Server
backup strategy measures up, ask yourself the following questions:

1. How Do You Know When Your Backups Aren’t Successful?

People deal with lots of databases, and it’s hard to keep track of every
last one. Different strategies are used to manage backups for all these
databases — sometimes a service external to the database runs the backup. Do
you know when that service fails, or doesn’t start? At other times, individual
SQL Server Agent jobs handle backups for each database. If a new database is
added and a backup job isn’t created, will you know it wasn’t run?

Make sure you have an alert system for failures.

Supplement your backups with regular checks of the last backup date. Make sure
you’re checking the last date of log backups if you’re not using the simple
recovery model.
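One way to implement that last-backup-date check is a small script. The database records and thresholds below are made up for illustration; in a real environment you would pull the dates from msdb.dbo.backupset or your monitoring tool.

```python
# A minimal sketch of a "last backup date" check against freshness thresholds.
from datetime import datetime, timedelta

FULL_MAX_AGE = timedelta(days=1)   # alert if the last full backup is older
LOG_MAX_AGE = timedelta(hours=1)   # alert if the last log backup is older

def stale_backups(databases, now):
    """Return (database, kind) pairs whose most recent backup is too old."""
    alerts = []
    for db in databases:
        if now - db["last_full"] > FULL_MAX_AGE:
            alerts.append((db["name"], "full"))
        if db["recovery_model"] != "SIMPLE" and now - db["last_log"] > LOG_MAX_AGE:
            alerts.append((db["name"], "log"))
    return alerts

now = datetime(2012, 2, 14, 9, 0)
dbs = [
    {"name": "Sales", "recovery_model": "FULL",
     "last_full": now - timedelta(hours=6), "last_log": now - timedelta(hours=3)},
    {"name": "Staging", "recovery_model": "SIMPLE",
     "last_full": now - timedelta(days=2), "last_log": None},
]
print(stale_backups(dbs, now))  # Sales' log and Staging's full are stale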

2. How Much Do You Lose if Even Just One Backup File Goes Bad?

Backups take up a lot of space. There are a lot of backups, and people
rarely use them. It’s human nature to go a little cheap, and keep them only in
one place. That place may have been provisioned using RAID 0. Wherever
it is, a backup file might get deleted or corrupted.

You need to look at all the different types of backups you run and consider
things like this:

If you’re using just full and transaction log backups, the chain ends at the
last good log backup.

If you’re doing full backups once a week and differentials on weeknights,
those differentials are no good without the full.

For critical databases, I prefer to keep backups in more than one place—
onsite and offsite. Think you can’t afford an offsite location? Check the
published rates for Amazon S3.
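Those chain dependencies can be made concrete with a small sketch. The file names and the week of backups below are hypothetical; real restore planning should walk your actual backup history.

```python
# Sketch: which point can you still restore to if a file goes bad?
# A differential needs its base full; the log chain breaks at the first
# missing or corrupt log backup.

def restorable_point(backups, bad_files):
    """Walk the chain in order; return the last backup we can still restore to."""
    last_good = None
    have_full = False
    for b in backups:
        if b["file"] in bad_files:
            if b["type"] == "full":
                have_full = False   # diffs and logs after this are orphaned
            if b["type"] == "log":
                break               # the log chain is broken from here on
            continue
        if b["type"] == "full":
            have_full = True
            last_good = b["file"]
        elif have_full:             # a diff or log is only useful with its full
            last_good = b["file"]
    return last_good

week = [
    {"file": "sun_full.bak", "type": "full"},
    {"file": "mon_diff.bak", "type": "diff"},
    {"file": "mon_log1.trn", "type": "log"},
    {"file": "mon_log2.trn", "type": "log"},
]
print(restorable_point(week, bad_files={"mon_log1.trn"}))  # -> mon_diff.bak
```

Losing one mid-week log file costs you everything after the Monday differential, which is exactly why a second copy of the files is cheap insurance.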

3. How Fast Do You Need to Restore Your Data?

If you’re a manager, this is your ‘Recovery Time Objective’. If you’re a
project manager, you just call this ‘RTO’. If you’re a database administrator,
this is the number of minutes until you’re pumped full of adrenaline and
sweating heavily when something goes wrong. Whenever I’m considering a
backup technology, I want to know how long it will take me to restore my data
if everything is gone except the backup files.

This means you need to know:

How long will it take to get to the backup files? If they’re on tape or in a
magical Data Domain device, can you access them yourself?

How long will it take to copy those files to a good place to restore them from?

If you don’t have a server to restore to, how long will it take to bring one
up and configure it?

How long will the restore take?

If my backup technology can’t meet my Recovery Time Objective, I need to
start thinking about high availability options that can mitigate my risk. Once
you’ve got high availability in place, you still want to keep a plan to restore
from backups and test it periodically, but you’re less likely to need to use
it.
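Those four questions add up, literally. A back-of-the-envelope sketch (all timings are invented; substitute measurements from your own restore tests):

```python
# Add up every step between "disaster" and "database online",
# then compare the total to the Recovery Time Objective.

steps_minutes = {
    "retrieve backup files from offsite": 45,
    "copy files to the restore server": 30,
    "provision and configure a server": 120,
    "run the restore itself": 90,
}

total = sum(steps_minutes.values())
rto = 240  # a four-hour Recovery Time Objective, as an example

print(f"estimated restore time: {total} minutes")
if total > rto:
    print("RTO missed -- time to look at high availability options")
```

Note that in this example the restore itself is only about a third of the total; the retrieval and provisioning steps are what blow the objective.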

4. How Much Data Can You Afford to Lose if Your Transaction Log File is
Vaporized?

When you stop and think about it, isn’t ‘Recovery Point Objective’ a strange
way of saying how much data it’s OK to lose? When you think about backup
strategies, you need to know:

Can you do incremental restores? (And have you tested it?)

Can you restore to a single point in time? (Example time: right before the
data was truncated. And have you tested it?)

If your database and transaction log files disappeared and you were left with
only the backup files, how much data would be lost?
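That last question is simple arithmetic once you know your log backup schedule. A minimal sketch with example times:

```python
# If the data and log files are gone, you lose everything committed after
# the last restorable log backup. The timestamps here are invented.
from datetime import datetime

def data_loss(failure_time, last_log_backup):
    """Everything committed after the last restorable log backup is lost."""
    return failure_time - last_log_backup

failure = datetime(2012, 2, 14, 14, 37)
last_log = datetime(2012, 2, 14, 14, 0)  # say log backups run on the hour
print(data_loss(failure, last_log))       # 37 minutes of committed work lost
```

If 37 minutes of lost transactions is unacceptable, the fix is a shorter log backup interval, not a bigger full backup.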

5. How Much Can Your Backup Impact Performance?

Whenever your backup is running, you’re burning precious resources: CPU,
disk IO, memory, and (depending on your configuration), network. In many cases,
it’s just fine to use this up during non-peak times. However, for VLDBs which
are backing up multiple terabytes and for critical OLTP databases which serve
customers around the globe, backups need to be run as fast as possible. After
all, backups are only one part of maintenance.

When your performance is critical, these are the questions you need to ask:

Are you backing up at the right frequency? If we’re talking about a slow
transaction log backup, check how often it runs.

Are you using all the available magic? If you’re using SAN storage, have you
explored all the options for backing up your databases from the SAN?

Are you using backup compression? Compression can burn more CPU, but reduce
the amount of writes and the overall duration of your backup.

Does your backup have enough throughput to the storage device? Whether you’re
using iSCSI or Fibre Channel, throughput is often a big part of the solution.

Have you read the SQL CAT Team’s case study on backing up VLDBs? It’s chock
full of configuration tips to make your backups blaze through faster.
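The compression question lends itself to rough math. The database size, throughput, and 3:1 compression ratio below are illustrative assumptions, not measurements; measure your own workload before deciding.

```python
# Rough effect of backup compression on duration: fewer bytes written,
# at the cost of extra CPU during the backup.

db_size_gb = 2000              # a 2 TB VLDB
write_throughput_mb_s = 400    # what the storage path can sustain
compression_ratio = 3.0        # 3:1 is plausible for row data; measure yours

uncompressed_min = db_size_gb * 1024 / write_throughput_mb_s / 60
compressed_min = uncompressed_min / compression_ratio
print(f"uncompressed: {uncompressed_min:.0f} min, compressed: {compressed_min:.0f} min")
```

Under these assumptions the backup window drops from roughly 85 minutes to under half an hour, which is why compression is usually the first knob to try on a VLDB.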

Sunday, January 22, 2012

Takeaway: The relational database model has prevailed for decades, but a new type of database — known as NoSQL — is gaining attention in the enterprise. Here’s an overview of its pros and cons.

For a quarter of a century, the relational database (RDBMS) has been the dominant model for database management. But, today, non-relational, “cloud,” or “NoSQL” databases are gaining mindshare as an alternative model for database management. In this article, we’ll look at the 10 key aspects of these non-relational NoSQL databases: the top five advantages and the top five challenges.

Five advantages of NoSQL

1: Elastic scaling

For years, database administrators have relied on scale up — buying bigger servers as database load increases — rather than scale out — distributing the database across multiple hosts as load increases. However, as transaction rates and availability requirements increase, and as databases move into the cloud or onto virtualized environments, the economic advantages of scaling out on commodity hardware become irresistible.

RDBMS might not scale out easily on commodity clusters, but the new breed of NoSQL databases are designed to expand transparently to take advantage of new nodes, and they’re usually designed with low-cost commodity hardware in mind.
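A toy sketch of the scale-out idea: hash each key to one of N commodity nodes. Real NoSQL systems use consistent hashing or range partitioning so that adding a node moves only a fraction of the keys; this naive modulo version is for illustration only.

```python
# Naive key-to-node routing across a cluster of commodity nodes.
import hashlib

def node_for(key, nodes):
    # Hash the key so routing is deterministic and roughly uniform.
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
for user in ("alice", "bob", "carol"):
    print(user, "->", node_for(user, nodes))
```

The point is that no single machine holds the whole dataset: capacity grows by appending to the `nodes` list rather than by buying a bigger server.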

2: Big data

Just as transaction rates have grown out of recognition over the last decade, the volumes of data that are being stored also have increased massively. O’Reilly has cleverly called this the “industrial revolution of data.” RDBMS capacity has been growing to match these increases, but as with transaction rates, the constraints of data volumes that can be practically managed by a single RDBMS are becoming intolerable for some enterprises. Today, the volumes of “big data” that can be handled by NoSQL systems, such as Hadoop, outstrip what can be handled by the biggest RDBMS.

3: Goodbye DBAs (see you later?)

Despite the many manageability improvements claimed by RDBMS vendors over the years, high-end RDBMS systems can be maintained only with the assistance of expensive, highly trained DBAs. DBAs are intimately involved in the design, installation, and ongoing tuning of high-end RDBMS systems.

NoSQL databases are generally designed from the ground up to require less management: automatic repair, data distribution, and simpler data models lead to lower administration and tuning requirements — in theory. In practice, it’s likely that rumors of the DBA’s death have been slightly exaggerated. Someone will always be accountable for the performance and availability of any mission-critical data store.

4: Economics

NoSQL databases typically use clusters of cheap commodity servers to manage the exploding data and transaction volumes, while RDBMS tends to rely on expensive proprietary servers and storage systems. The result is that the cost per gigabyte or transaction/second for NoSQL can be many times less than the cost for RDBMS, allowing you to store and process more data at a much lower price point.

5: Flexible data models

Change management is a big headache for large production RDBMS. Even minor changes to the data model of an RDBMS have to be carefully managed and may necessitate downtime or reduced service levels.

NoSQL databases have far more relaxed — or even nonexistent — data model restrictions. NoSQL Key Value stores and document databases allow the application to store virtually any structure it wants in a data element. Even the more rigidly defined BigTable-based NoSQL databases (Cassandra, HBase) typically allow new columns to be created without too much fuss.

The result is that application changes and database schema changes do not have to be managed as one complicated change unit. In theory, this will allow applications to iterate faster, though, clearly, there can be undesirable side effects if the application fails to manage data integrity.
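Plain Python dictionaries can stand in for the document model to show why schema changes are painless; this is an illustration, not a real NoSQL client.

```python
# Two records in the same "collection" with different fields --
# adding a field needs no ALTER TABLE and no migration window.

orders = []  # stands in for a document database collection

orders.append({"id": 1, "customer": "acme", "total": 99.0})
# Later, the application starts tracking a coupon code -- no schema change:
orders.append({"id": 2, "customer": "globex", "total": 45.0, "coupon": "SPRING"})

for order in orders:
    print(order["id"], order.get("coupon", "no coupon"))
```

The flip side, as noted above, is that the application now owns data integrity: nothing stops a typo like `"cupon"` from creating a silently divergent field.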

Five challenges of NoSQL

The promise of the NoSQL database has generated a lot of enthusiasm, but there are many obstacles to overcome before they can appeal to mainstream enterprises. Here are a few of the top challenges.

1: Maturity

RDBMS systems have been around for a long time. NoSQL advocates will argue that their advancing age is a sign of their obsolescence, but for most CIOs, the maturity of the RDBMS is reassuring. For the most part, RDBMS systems are stable and richly functional. In comparison, most NoSQL alternatives are in pre-production versions with many key features yet to be implemented.

Living on the technological leading edge is an exciting prospect for many developers, but enterprises should approach it with extreme caution.

2: Support

Enterprises want the reassurance that if a key system fails, they will be able to get timely and competent support. All RDBMS vendors go to great lengths to provide a high level of enterprise support.

In contrast, most NoSQL systems are open source projects, and although there are usually one or more firms offering support for each NoSQL database, these companies often are small start-ups without the global reach, support resources, or credibility of an Oracle, Microsoft, or IBM.

3: Analytics and business intelligence

NoSQL databases have evolved to meet the scaling demands of modern Web 2.0 applications. Consequently, most of their feature set is oriented toward the demands of these applications. However, data in an application has value to the business that goes beyond the insert-read-update-delete cycle of a typical Web application. Businesses mine information in corporate databases to improve their efficiency and competitiveness, and business intelligence (BI) is a key IT issue for all medium to large companies.

NoSQL databases offer few facilities for ad-hoc query and analysis. Even a simple query requires significant programming expertise, and commonly used BI tools do not provide connectivity to NoSQL.

Some relief is provided by the emergence of solutions such as Hive or Pig, which can provide easier access to data held in Hadoop clusters and perhaps, eventually, other NoSQL databases. Quest Software has developed a product — Toad for Cloud Databases — that can provide ad-hoc query capabilities to a variety of NoSQL databases.

4: Administration

The design goals for NoSQL may be to provide a zero-admin solution, but the current reality falls well short of that goal. NoSQL today requires a lot of skill to install and a lot of effort to maintain.

5: Expertise

There are literally millions of developers throughout the world, and in every business segment, who are familiar with RDBMS concepts and programming. In contrast, almost every NoSQL developer is in a learning mode. This situation will resolve itself naturally over time, but for now, it’s far easier to find an experienced RDBMS programmer or administrator than a NoSQL expert.

Conclusion

NoSQL databases are becoming an increasingly important part of the database landscape, and when used appropriately, can offer real benefits. However, enterprises should proceed with caution with full awareness of the legitimate limitations and issues that are associated with these databases.

Saturday, September 11, 2010

Business Week claimed in 2008 that ‘cloud computing is changing the world’. In the same year, commentators quoted on CNET.com suggested cloud computing represented a ‘paradigm shift’ and was ‘the new black’. Hyperbole surrounds this subject.

Cutting through the hype is vital if enterprise-level cloud services are to be brought to bear on the contact centre. Most people use “the cloud” to mean cloud computing. In fact, computing power is simply one service, albeit one of the most common, that can be delivered via the cloud. As a rule of thumb, cloud delivery means a service is pay-as-you-use, scalable, based on shorter contract terms (hours rather than years), and consumed via a web portal.

A cloud-delivered contact centre meets these criteria, combining hosted IP telephony and automated, voice-activated software-as-a-service to deliver a package that is deeply scalable and can be purchased in new and flexible ways, such as per concurrent agent and by the hour. This new level of cost granularity will allow chief operating officers and heads of customer service to measure more closely than ever the efficiency of their contact centre infrastructure, unlocking service improvements and additional cost savings in the future.

The cost efficiency of a contact centre is based largely on how successfully it can be operated at or near its capacity. This is the key advantage of a cloud-based contact centre. For example, imagine a travel company taking hotel bookings for the 2010 World Cup in South Africa. The company has invested in contact centre infrastructure sufficient to handle peak demand in a normal month. As the World Cup approaches, peak times may bring twice as many incoming calls as the contact centre has been staffed to cope with.

But imagine now the company employs an automated service that presents the customer with choices about the hotel they want, the room type, length of stay, whether breakfast is included or not, and so on. Critically, the service is based on applications hosted on virtualised servers in the cloud. If the travel company is using software-as-a-service in this way, scaling up services for only a few hours is suddenly feasible, as it no longer requires the company to have free server space itself, or trained customer service agents waiting on hand to process the calls.

In 2010, finding the right technology partner to move contact centres into the cloud, and the right commercial model to buy those services, will be vital. This year, in particular, purchasing decisions will be under scrutiny by finance departments, regulators and investors. What advice, then, can be given to organisations planning to negotiate increasingly complex and nuanced technology and payment models? Here are BT's tips – ‘ten for 2010’:

1) Don’t underestimate ‘self-service’ technologies. They can do more than you think, and customers don’t dislike them as much as they might say. Sixty per cent of US and UK customers would rather use a voice-based self-service system than an offshore contact centre to improve service efficiency.

2) Voice-based self-service can also improve efficiencies and stop avoidable manned contacts – two vital outcomes in the current climate. Remember that with self-service and virtualised options, you can deliver 24/7 customer service without employing a 24/7 rota of agents.

3) Hosted services help you cope. The advantages of hosted services are not just the obvious financial ones, such as the avoidance of capital expenditure. One real benefit is the ability to scale. If your demand fluctuates by 40 or 50 per cent, that means for much of the time you are paying for twice as much capacity as you need. Hosting services in the network means you pay only for what you use – and they can simply be scaled by a factor of 10 or 100 if that is required.

4) Take the call to the agent, not the agent to the call. Save money and the environment by using cloud-based contact centres, routing calls to agents instead of having your agents go to a place to receive calls. It saves money on facilities, increases the pool of skilled workers available, increases their productive hours and allows resources to be flexed as needed.

5) Read up on virtualisation! Virtualising services is happening all over the place, right now. It will provide agent savings (of up to 15 per cent, according to BT estimates) while improving the average answer time, thereby improving customer satisfaction.

Hosted contact centres, delivered from the network, are the quickest, most cost-efficient way to virtualise resources. This can improve service quality by connecting the best person with the right skill to the right enquiry every time. Moreover, costs can be optimised, by maximising the use of expensive skills and managing skills centrally as one big ‘pool’. Any agent can be available to any call, just by connecting to the hosted platform.

6) Choose non-geographic telephone numbers. That way, you are not tying your organisation to a specific location. The number can move with you as you grow, without you having to change your main contact numbers. Your contact number can even become part of your brand.

7) Consider looking beyond call recording as a ‘tick in the box’ for industry regulation. Instead, view it as an aid to understanding your customers and improving the quality of agent interaction. Let automated analytics loose on your archive of recorded calls and learn things you never knew about your customers, your agents, your IT infrastructure and your products and services.

8) As the delivery of customer contact services becomes increasingly technological, organisations must think carefully about what is mission-critical. Is operating contact centre infrastructure a central aspect of the organisation? Concentrating on core business competences is likely to meet the approval of your shareholders and investors, and letting a specialist provider operate your contact centre infrastructure takes away the problems of supplier management and service support.

9) Protect yourself against rapid change. Innovations such as cloud services, virtualisation and unified communications are likely to bring about a step change in the way customer contact is delivered and priced. The best way to remain at the leading edge of this change is to work with a partner focused on delivering these innovations with the stability and security required by international enterprises and Government bodies.

10) Pick a partner that understands the full range of contact centre technologies – from network management and telephony to cloud delivery of applications and services.
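The capacity argument in tip 3 can be put in rough numbers. The demand profile and per-agent-hour cost below are invented for illustration:

```python
# Fixed capacity is provisioned for the peak; pay-as-you-use pricing
# tracks actual demand hour by hour.

hourly_demand = [20, 20, 25, 40, 50, 45, 30, 20]  # concurrent agents needed
cost_per_agent_hour = 1.0

fixed = max(hourly_demand) * len(hourly_demand) * cost_per_agent_hour
usage_based = sum(hourly_demand) * cost_per_agent_hour

print(f"provisioned for peak: {fixed:.0f}, pay-as-you-use: {usage_based:.0f}")
```

With this (made-up) demand curve, peak provisioning costs 400 units against 250 for usage-based pricing; the gap widens as demand becomes spikier, which is exactly the World Cup scenario described above.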

One of the features of 2010 will be the increasingly innovative use of cloud services, and contact centres are no exception to this trend. For heads of customer service and chief operating officers, the focus this year is to balance exemplary customer service and cost containment. They may find the cloud – without the hype too often associated with it – is the way to achieve both.

Wednesday, September 08, 2010

An increasing number of enterprises are using desktop virtualization to reduce their cost of supporting PCs. But while a virtual desktop infrastructure provides the benefits of centralized applications, it also changes how individual users are supported. If your infrastructure is not suited for the VDI model, performance and stability issues can be profound -- and potentially disastrous.

In a virtual desktop infrastructure (VDI), applications run on a server virtual machine (VM) and are linked to the user via a desktop client app. The actual application execution is remote, and PC storage, memory, and CPU access are virtualized and hosted. The master virtual PC can be projected to any suitable client device, but only the user interface is projected. Since the desktop instances are hosted, they can be affected by the same problems as server virtualization applications -- and more.

The big challenge when planning for VDI hosting is the sheer number of virtual desktops. Most companies using server virtualization run two to five virtual servers per actual server: A large enterprise might host 2,000 to 3,000 virtual servers. However, that same company could have 10,000 or more desktop systems to virtualize. Predicting how all those virtual desktops will use the data center resource pool is a challenge.
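The scale gap can be sketched in rough numbers. The server figures come from the ranges in the paragraph above; the VDI density per host is an assumption for illustration:

```python
# Server consolidation ratios don't carry over to desktops:
# 10,000 desktops dwarf a 2,000-3,000 VM server estate.

virtual_servers = 2500     # mid-point of the 2,000-3,000 range above
servers_per_host = 4       # mid-point of the 2-5 VMs-per-server range
desktops = 10000
desktops_per_host = 40     # an assumed VDI density, not a benchmark

server_hosts = virtual_servers / servers_per_host
vdi_hosts = desktops / desktops_per_host
print(f"server hosts: {server_hosts:.0f}, additional VDI hosts: {vdi_hosts:.0f}")
```

Even at a generous 40 desktops per host, VDI adds hundreds of hosts to the data center, each drawing on the same shared storage and network pools.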

Instead of treating PCs as discrete systems with their own operating systems and middleware, software and storage, virtual desktop technology lets enterprises create machine images of various "classes" of systems and load those images on demand. In some cases, users can customize the configuration of the master image in the same way they would customize the configuration of a real system. But customization means more desktop master images to manage and changes to application requirements can make an old master incompatible with a worker's current usage.

In terms of resources, memory may be the toughest VDI issue to manage. Unlike server application components that can have short persistence -- particularly in service-oriented architecture software -- desktop applications are designed to load and stay running for hours: They must be paged out to be removed from memory. That paging can create non-sustainable disk I/O loading. Even if a given set of users run the same basic application, in most cases, they can't run the same exact copy. Therefore, a large memory pool that can hold as many discrete machine images as possible is essential.

Disk storage is another challenge with hosted VDI applications. On real distributed desktops, client system disk usage is supported on different devices and controllers, and thus would never collide. When desktops are virtualized and hosted, the host system has to field disk I/O for all of the virtual desktops at the same time, which can create congestion and performance problems, particularly if work schedules produce frequent synchronized behaviours. If every user starts his or her day by reviewing a task list, the 9 a.m. I/O impact can be profound. Therefore, it's critical to have very efficient I/O and storage systems on all VDI hosts.
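The synchronized-start problem is simple multiplication; all per-desktop figures here are assumptions for illustration, not measured values:

```python
# Per-desktop I/O that is trivial in isolation aggregates on the shared host.

desktops_per_host = 100
iops_per_desktop_normal = 5    # assumed steady-state load
iops_per_desktop_login = 50    # assumed boot/login burst

normal = desktops_per_host * iops_per_desktop_normal
storm = desktops_per_host * iops_per_desktop_login
print(f"steady-state: {normal} IOPS, login storm: {storm} IOPS ({storm // normal}x)")
```

A storage subsystem sized for the steady state will be an order of magnitude short at 9 a.m., which is why the paragraph above stresses efficient I/O on every VDI host.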

Affordable solid-state drives are an advance that impacts both memory and storage. Solid-state disks and effective multilayer managed caching of machine images and paging can reduce the memory requirements for a given level of application performance.

Multicore server technology is also an enhancement to VDI support. Remember that the total CPU power of 10,000 desktops was available to support the hypothetical enterprise in a standard client/server mode. Compressing those desktops into a set of VM resources is more likely to succeed if every server has several cores to which application loads can be allocated. Otherwise, a collision of activity could reduce performance to near-zero levels for all.

The biggest infrastructure challenge for the hosted virtual desktop model is sustaining the performance of the server-to-user connection. Unlike client-server computing, which exchanges basic data elements between the desktop and the server farm, virtual desktop computing must provide a remote display and keyboard interface that can be significantly more bandwidth intensive. Since the performance of the communication's connection is critical to user satisfaction, VDI management plans have to take the capacity of this link into account. When the desktop and server are in the same physical facility, only LAN capacity is consumed, and companies can improve virtual desktop performance by increasing the speed of their LAN connections (both to the user and between LAN switches). Enterprises can also flatten their LAN infrastructures to reduce the number of LAN switches between real desktop users and virtual desktop host systems.
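A rough link-sizing sketch for the display traffic; the per-session bandwidth figures and peak-concurrency fraction are assumptions, not protocol specifications:

```python
# Remote display protocols use far more bandwidth per user than a classic
# client/server data exchange, so the aggregate link must be sized for bursts.

users = 500
kbps_typical = 150          # assumed idle/office-work session
kbps_peak = 1000            # assumed video/scrolling burst
burst_fraction = 0.2        # fraction of sessions bursting at once

typical_mbps = users * kbps_typical / 1000
peak_mbps = typical_mbps + users * burst_fraction * (kbps_peak - kbps_typical) / 1000
print(f"typical: {typical_mbps:.0f} Mbps, peak: {peak_mbps:.0f} Mbps")
```

Under these assumptions, 500 users need roughly 75 Mbps in the typical case but more than double that when a fifth of them burst, which is the kind of headroom a flattened LAN has to provide.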

Many companies are now considering or deploying virtualization and cloud computing, and in the process, they're refining their data center networking. This is a good time to consider and address the network impact on VDI performance. Flattening the data center and headquarters network improves virtualization and cloud computing performance in the data center as well as VDI application performance.

In cases where VDI supports remote workers, performance will normally be linked to the capacity of the remote access connection. The explosion in consumer broadband has made "business Internet" services with access speeds of 10, 20, 50 or even 100 Mbps available at reasonable costs. Using a VPN with such a service may be the best way to ensure good application performance for remote VDI users.

VDI technology is justified by operational savings, but those savings cannot be realized if business operations are disrupted by performance issues. Invest in adequate VDI resources upfront to properly support your enterprise. As always, conduct a limited-scope pilot test to verify the conclusions of an infrastructure assessment. With careful planning, a VDI project can significantly reduce current costs and contain further cost increases associated with growing PC support demands.

Server virtualization is a common deployment in Indian companies. Businesses want to avoid racks of physical servers, paying through the nose for power and cooling, and ever-increasing real-estate prices. For virtual server training, you can start with a set of spare systems: pre-install them with virtualization software and test. This should be done purely from a learning and testing perspective, not on production systems. Once you have hands-on experience with the software, you can work on the implementation.

The next step is to install server virtualization software. You have a wide choice of vendors, including VMware, Microsoft, and Citrix. Whenever you are going in for a virtual server in your organization, the choice of software should be solution-oriented, not product-oriented.

Find out what your company’s requirements are. Choose the software based on your requirements and costs. The best way forward is to have an IT policy in place, as there will be no discipline without one. Ensure that whenever you go about server virtualization, there is a blueprint available in front of you. This will help you to know the various tasks and their dependencies, for example that a certain task ‘A’ must be completed before task ‘B’ can start.

After installing the servers, have a set of pilot testers who access them within the network. Virtual server testing should take place within a private domain, where the pilot testers can experiment and then give feedback. For maintaining the new infrastructure, your IT team can bring an IT implementer on board. In addition, there are product-based websites on the Internet that you can refer to.

One of the problems with server virtualization is storage. With multiple workloads coming from one physical source, hiccups over random input/output are possible. Treat the virtualized server as a real physical server. Whenever storage, network bandwidth or any other operating system resource is assigned by the virtualized server that hosts all the virtual machines being accessed by clients, administrators must keep in mind that a virtual server is limited to the capacity of the underlying physical server, so it shouldn’t be over-burdened.

Ensure that disk resource allocation (which matters when the server has a huge hard disk) uses shared network-attached storage. This will enable all the virtual machines to communicate with each other through common storage, saving the need for multiple dedicated hard drives on each virtual machine.

Bangalore International Airport Limited (BIAL), known for its smooth operations powered by IT, is a prime example of IT infrastructure setup and management. Reputed to be one of the best-operated airports in the world, BIAL has now completed its disaster recovery implementation, which plays a key role in the airport's business.

BIAL has its primary data center and disaster recovery (DR) site at the airport itself. There are about 60 servers of a heterogeneous nature (varying from Windows to Unix to Linux) that host different applications. These servers are spread across two data centers. The servers are mostly HP ProLiant, HP 9000, HP Integrity, Dell, and blade servers from IBM.

The disaster recovery implementation’s reach

The Bengaluru airport’s disaster recovery implementation is hosted in a different building, but within the same campus. As U Nedunchezhiyan, the senior manager of ICT (Infrastructure) at Bengaluru International Airport points out, “Disaster recovery implementation as a business requirement is such that we cannot have the DR site far from the airport. If there is an issue at the Bengaluru airport and the DR site is far away, it is not possible to immediately start it.”

The data center has standard BSES power and UPS backup (with N+1 redundancy and regular battery backups). In terms of connectivity, telecom providers terminate their fiber at BIAL's fiber hand-over point; from there it goes to BIAL's telecom center and then on to the users, so there is no direct connectivity between the users and the telecom service providers. Apart from running its own infrastructure, BIAL also acts as a service provider to many tenants within the airport, and provides a common IT infrastructure for all airlines.

The entire Bengaluru airport campus is connected by fiber with redundant loops. As part of the disaster recovery implementation, the airport has ensured both network and service redundancy. Arun S, the senior manager of ICT systems for BIAL, says, “Our IT needs are completely different from that of other verticals. Even a second’s difference means a huge loss for us.” That’s why BIAL’s disaster recovery implementation is considered very critical by the airport authorities.

BIAL’s DR setup is closely linked to its recent storage virtualization deployment. BIAL has now chosen Hitachi Data Systems to virtualize its storage infrastructure; earlier, it had been using HP EVA (Enterprise Virtual Array) storage.

Critical airport applications

Replication is performed differently for different applications in BIAL’s disaster recovery strategy. Business-critical applications are protected using synchronous replication; these include the airport’s database (the repository for airport operations), ERP, and middleware. BIAL’s DR site replicates all business-critical applications, but it is not an exact replica of the primary site.
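The synchronous replication idea can be illustrated with a toy sketch (this is not BIAL's actual system; the stores and key names are invented). The defining property is that a write is acknowledged only after both the primary and the replica have persisted it, so the DR copy never lags behind for business-critical data:

```python
class Store:
    """A toy key-value store standing in for one site's storage."""
    def __init__(self):
        self.data = {}

    def persist(self, key, value):
        self.data[key] = value
        return True

def synchronous_write(primary, replica, key, value):
    """Acknowledge the write only once BOTH sites have persisted it.

    This is the defining trait of synchronous replication: the caller
    never sees success while the replica is behind the primary.
    """
    ok_primary = primary.persist(key, value)
    ok_replica = replica.persist(key, value)
    return ok_primary and ok_replica

primary, replica = Store(), Store()
synchronous_write(primary, replica, "flight_BA118", "on_time")
print(primary.data == replica.data)  # True: both sites hold the same record
```

The trade-off, in real deployments, is write latency: every write pays for the round trip to the replica, which is one reason synchronous replication is usually reserved for a DR site close to the primary, as in BIAL's case.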

Security plays a crucial part in BIAL’s disaster recovery implementation plan. It begins with physical security: the data center is protected by biometric and digital access controls, and each room is programmed so that only the concerned person has access. An in-house audit team periodically checks for vulnerabilities, and BIAL follows the standard ITIL process.

Work on recovery time objective (RTO) and recovery point objective (RPO) is underway at the moment as part of the disaster recovery implementation. Nedunchezhiyan says, “Right now there is tape-based back-up. We are planning to move to disk-based backups in order to improve RTO and RPO.”
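The RPO half of that can be illustrated with a toy calculation (the timestamps below are hypothetical, not BIAL's figures). With nightly tape backups, the data at risk is roughly the gap between the failure and the last completed backup; moving to more frequent disk-based backups shrinks that window:

```python
from datetime import datetime, timedelta

# Hypothetical nightly tape backups and a failure later that day.
backups = [
    datetime(2012, 3, 8, 0, 0),
    datetime(2012, 3, 9, 0, 0),
]
failure = datetime(2012, 3, 9, 18, 30)

# The recovery point is the most recent backup completed before the failure;
# everything written after it would be lost on restore.
last_backup = max(t for t in backups if t <= failure)
data_loss_window = failure - last_backup
print(data_loss_window)  # 18:30:00
```

With backups taken every hour instead of every night, the same calculation would bound the loss window at under an hour, which is the kind of RPO improvement disk-based backups make practical.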

The airport keeps stringent control on downtime, since domestic flights dominate the day hours and a large number of international flights operate at night. As a result, permission is required from every group for any sort of disaster recovery work. With GVK’s entry into airport management, the scenario might change considerably; as Arun mentions, the IT team might have to rework the entire IT DR strategy with the DR site at a different location. But at the moment, that migration is a long way off for BIAL.

Saturday, May 29, 2010

Last night I went to see the new Robin Hood film (not highly recommended), and a few battle scenes made me think about how brutal going to war was back then. This led me to think about the evolution of technology and how it has revolutionized our lives, not just at home but also at school, work and even in war.

The new tale of Robin Hood is meant to explain how Robin Hood became an outlaw. The storyline follows a traitor who plots with the King of France to overthrow the King of England. Across the Channel, they manage to put a plan together through several trips to and from France and the use of several carrier pigeons.

Trust me

Back in the day you had to rely on people and trust them. If you were planning a war, you had to have faith that once something was arranged, the other side would keep their word.

I used to meet my friends on Sunday mornings near the main sports ground of the Ordnance Estate Ambernath. If they were late, you just had to wait. Nowadays you ring them and rearrange to meet somewhere else a bit later. Is our reliance on technology making us unreliable people? Are we becoming too dependent on technology?

I would say yes and no. Yes, because if I didn’t have internet access I would feel like I was living in the dark ages (which is a little excessive), and no, because of the convenience and countless advantages it brings.

A trip down memory lane

When I was really young I got my first computer, a Compaq, which I adored! I spent ages playing the old-skool adventure games, where you had to manually type “turn left”, “open the door”, carriage return and so on, and then there was our sweet old Mario. I will never forget the day we added an extra 4 MB of memory to it! A whole 4 MB!! We really thought we’d entered the 21st century! Now we carry our laptops everywhere, and technology has made our lives so much more convenient.

Meet in the middle

As with everything, you have to find a happy medium. I find it highly rude and annoying that people are tapping away on their crackberry on the train and bus, or even while you’re having a conversation! Not only that, everything has become more temporary. Instead of a love letter, we now get a text; instead of a photo lovingly developed and kept in a frame, we just have a file that could easily be lost or deleted, along with the memory.

On the other hand, you can now avoid the Saturday rush at the shops by ordering online, keep in contact with friends living abroad, share files instead of using snail mail and even search for a weekend date! We really have come a long, long way since my beloved Compaq days!

Why not share your memories of the good old days as well, or write your own IT blog!

Thursday, May 27, 2010

India Inc. is decidedly rescinding salary freezes and even making up for ground lost during the downturn. The increments that companies started talking about gingerly late last year are becoming a reality in 2010. Sectors that experienced maximum job squeeze are also making a recovery, though not to the same extent as fast-growing sectors.

A salary increase survey conducted by HR consulting firm Hewitt Associates across 465 companies points to organizations resuming pay raises. Salary increase on average for 2010 in India is projected to be 10.6 per cent, the highest in Asia-Pacific. China and the Philippines follow India with a projected increase of 6.7 per cent and 6.4 per cent respectively.

The projected increment of 10.6 per cent is a three-fifths increase over the actual increase of 6.6 per cent in 2009, according to the survey. Indian-owned companies will in all likelihood outperform MNCs, with a projected average increase of 11.4 per cent as against 10.2 per cent for the latter, it says.

Oil and gas, along with the power sector, have the highest projected salary hike of 12.8 per cent. "Not only have these sectors continued to grow, but are currently witnessing a talent gap," says Sandeep Chaudhary, Leader, Performance and Rewards Consulting for Hewitt in India, explaining the high level of pay hikes.

Salary hikes in banking, financial services and insurance (BFSI) have made a smart return: banking is set to give double-digit increments, while in financial services hikes are still in the region of 8-10 per cent. Average increments in banking were subdued last year, and salary freezes ruled across sectors. This year is a different story. "Since Indian arms are contributing to the growth of MNC banks, it's translating into rewards for employees," says E. Balaji, CEO, Ma Foi Management Consultants.

Merit-based increases are getting more aggressive. Says Gautam Chainani, Chief People Officer, Aditya Birla Financial Services, "In financial services there is a move towards aggressive variable pay." The company is handing out 8-12 per cent hikes as against five per cent last year.

Increments in IT services are muted, however.

"That's because business and cost productivity pressures are still high," says an HR executive in an IT services firm, adding, "hikes will be more in the range of six to eight per cent on average."