Introduction

I have been using IBM MobileFirst (formerly known as Worklight) for over a year now and have already published one hybrid app for the Android and iOS platforms. I used the jQuery Mobile UI library to build my UI. Recently I was surfing the internet to learn about AngularJS, and in the process I came across the Ionic framework. I read about both frameworks and found them much better than jQuery Mobile. So I took a step further and thought: why not leverage the features provided by these frameworks to build a hybrid app on the MobileFirst Platform? In this post I will describe the steps required to configure an MFP project to run with the Ionic framework. If you don't want to do any configuration, you can simply download the AngularIonicStarterProject.zip MFP project, in which everything is pre-configured. So let's get started.

Prerequisite

I am assuming you are already familiar with MFP. In this post I am using MFP V7.1 Consumer Edition.

In order to download the Ionic libraries, you will need Node.js installed on your system. You can download it from here. Make sure you download v4.2.4 and not the latest available version, since the latter causes problems.

I am also assuming you have basic knowledge of HTML5, CSS3, JavaScript, AngularJS, and the Ionic framework. Don't worry if you are not a guru in these (neither am I).

AngularJS

AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you would otherwise have to write. And it all happens within the browser, making it an ideal partner with any server technology. To learn more about AngularJS you can visit this website, which has a very good set of tutorials. As far as installation is concerned, AngularJS is installed as part of the Ionic framework dependencies, so you don't have to worry about it for now.

Ionic Framework

Ionic is an HTML5 mobile app development framework targeted at building hybrid mobile apps.

Think of Ionic as the front-end UI framework that handles all of the look and feel and UI interactions your app needs in order to be compelling. Kind of like “Bootstrap for Native,” but with support for a broad range of common native mobile components, slick animations, and beautiful design.

Unlike a responsive framework, Ionic comes with very native-styled mobile UI elements and layouts that you’d get with a native SDK on iOS or Android but didn’t really exist before on the web.

The bulk of an Ionic app will be written in HTML, Javascript, and CSS but Ionic also uses AngularJS for a lot of the core functionality of the framework.

Installing Ionic and Angular

Once Node.js is installed on the system, follow these steps to get the Ionic and Angular framework files.

Open the command prompt and run the following command: npm install -g ionic

Once this command finishes, create a new folder, say AngularJS, in your E drive and execute cd AngularJS.

Next you need to create a new Ionic project. By default, Ionic creates a complete Cordova project, but we don't need that. You can tell Ionic to create just the web assets by passing --no-cordova on the command line. Here is an example: ionic start --no-cordova ionicproject blank. This tells Ionic to create a new project named "ionicproject" from the "blank" template. When you open the folder, you can see the following files and folders:

Configuring Ionic and Angular into MobileFirst Project

Create a new MFP project and inside that project create a new hybrid application.

Copy the ionic folder from <Drive>:/AngularJS/ionicproject/www/lib/ and paste that folder into common folder of the MFP's Hybrid App folder.

Create a new folder named "pages" into common folder of the MFP's Hybrid App folder.

At the end of this activity, your folder structure will look like the one below:

The next step is to modify index.html to include the Ionic and Angular dependencies.

Open the index.html file and replace its contents with the contents of the index.html file from the attached zip file.

The Ionic project provides a convenient package named ionic.bundle.js to help you load a basic Ionic environment. Instead of loading ionic.bundle.js, you can also load these files individually, which may be easier to manage for complex applications. For our example, I have loaded the bundled file, which contains:

ionic.js

ionic-angular.js

angular/angular.js

angular/angular-animate.js

angular/angular-sanitize.js

angular/angular-ui-router.js

Now that we've included the Ionic framework and its dependencies in the application, we need to start it up. Because Ionic is a set of extensions to AngularJS, the loading pattern is the same as for AngularJS, which is called bootstrapping AngularJS. However, you need to let the IBM MobileFirst Platform (MFP) complete its own loading sequence before bootstrapping AngularJS, which you can do by instructing MFP to bootstrap AngularJS when it is ready. In order to do so, edit the js/main.js file and adjust the wlCommonInit() function as follows:

function wlCommonInit() {
    // Pre-existing wlCommonInit() content here
    // New lines to bootstrap AngularJS
    angular.element(document).ready(function() {
        angular.bootstrap(document, ['app-main']);
    });
}
This code tells AngularJS to bootstrap your application using a base module named app-main.

Every AngularJS application needs an application module. In our example, we referred to one named app-main when bootstrapping AngularJS. Create that module now in common/js/app-main.js, and put this line of code in it:

angular.module('app-main', ['ionic']);

This line defines an app-main module with a single dependency: ionic. Because we've defined app-main as our application module, ionic will be loaded automatically as one of its dependencies. We also need to ensure that app-main.js itself is loaded, by adding the following line to index.html:

<script src="js/app-main.js"></script>
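A common AngularJS pitfall here: calling angular.module with two arguments defines a module, while calling it with one argument only retrieves an already-defined module. The sketch below illustrates that semantic with a stubbed angular object so it can run outside a browser; the stub is purely illustrative and is not the real AngularJS implementation.

```javascript
// Illustrative stub of AngularJS's module registry (NOT the real library):
// a two-argument call defines a module, a one-argument call looks it up.
const registry = {};
const angular = {
  module(name, deps) {
    if (deps !== undefined) {
      registry[name] = { name, requires: deps };   // define
    } else if (!registry[name]) {
      throw new Error('Module "' + name + '" is not available');
    }
    return registry[name];                          // retrieve
  }
};

// As in common/js/app-main.js: define the application module with
// 'ionic' as its single dependency.
angular.module('app-main', ['ionic']);

// Later files retrieve the same module with the one-argument form.
const app = angular.module('app-main');
console.log(app.requires); // [ 'ionic' ]
```

If you accidentally use the two-argument form in a second file, you silently redefine the module and lose anything registered on it earlier, which is a frequent source of "controller not found" errors.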

You also need to modify the MFP default stylesheets so that Ionic will work properly. Edit common/css/main.css and add this rule:

body { position: initial; }

If you do not add this line to the mentioned CSS file, you may see unexpected results in the application. You can read more about the issue here.

Now copy the html files from the pages directory of the attached zip file into your project's pages directory.

Conclusion

In this post, you have successfully created a hybrid app on the IBM MobileFirst Platform using AngularJS and the Ionic framework. You can easily take the attached project as your base project and start building exciting apps for your customers using Ionic and Angular.

IBM WebSphere Application Server is a robust, enterprise class application server that provides a core set of components, resources, and services that developers can utilize in their applications. In this blog, I am going to talk about Application Edition Management and Automatic Deployment through scripting.

What is Application Edition Management?

Many business applications require constant availability. The standard for application availability asserts that applications are deployed on application server clusters. The redundancy of a cluster is essential to provide continuous availability. Interruption-free application upgrade refers to the ability to upgrade while maintaining application continuous availability. In other words, users of the application experience no loss of service during the application upgrade.

When you perform a rollout to an edition, you replace an active edition with a new edition. To provide interruption-free application upgrades, performing a rollout to an edition includes the following items:

Fencing a server from receiving new requests.

Quiescing requests for the application in a particular server.

Stopping the currently active edition.

Starting the new edition.

Resuming the flow of requests to the edition.

Group Rollout

To perform a rollout to a target cluster, you can divide the cluster into groups, and define a group size, which specifies the number of nodes to process at a time. Performing a rollout to a group results in the servers in each group being upgraded to the new edition at the same time. Each server in the group is quiesced, stopped, and reset. A rollout can be performed on only one group at a time in the administrative console. When any member in the new edition becomes available, all requests are routed to the new edition.

As you perform a rollout to the edition, some servers in the cluster move from the previous edition to the new edition, some servers are in the process of making the transition, and other servers have not started the transition. All application requests are sent to any server that has an active, running instance of the latest edition of the requested application. For example, when you perform a rollout from edition 1.0 to 2.0, all application requests are served by edition 2.0 when edition 2.0 becomes available on a server. Any servers that are still running edition 1.0 do not serve requests until this server is updated to edition 2.0.

Atomic Rollout

Performing an atomic rollout to an edition replaces an edition on half of the cluster at a time to serve all user requests with a consistent edition of the application. All user requests are served by either the previous or the new edition; user requests are never served by both editions.

An atomic rollout ensures that all application requests are served by a consistent edition, for example, either edition 1.0 or 2.0, but not by both. The currently available edition is taken offline from half of the servers that comprise the cluster. In those servers, the new edition is activated and started, but those servers are held offline until the next step completes. The next step is to take the currently active edition offline in the remaining servers. At this point, no server has an instance of either edition available to serve application requests. The ODR temporarily queues any request that arrives for this application. After the application is fully offline, the first half of the cluster is brought back online. The second half of the cluster transitions from the previous edition to the next edition and is brought back online.
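The atomic guarantee described above can be sketched as a toy simulation. This is hypothetical code, not WebSphere's implementation: it only checks the invariant that at every point in the rollout the online servers run a single edition, and that requests arriving while the application is fully offline are queued rather than served by a mix of editions.

```javascript
// Toy simulation of an atomic rollout across a four-server cluster.
// Invariant: the set of online servers never spans two editions at once.
const cluster = Array.from({ length: 4 }, () => ({ edition: '1.0', online: true }));
const queued = [];

function servingEditions() {
  return new Set(cluster.filter(s => s.online).map(s => s.edition));
}

function route(request) {
  const online = cluster.filter(s => s.online);
  if (online.length === 0) {
    queued.push(request);        // the ODR queues requests while nothing is online
    return null;
  }
  return online[0].edition;      // served by the single active edition
}

function checkInvariant(label) {
  if (servingEditions().size > 1) throw new Error('mixed editions at: ' + label);
}

const firstHalf = cluster.slice(0, 2);
const secondHalf = cluster.slice(2);

// Step 1: take half the cluster offline and stage the new edition there.
firstHalf.forEach(s => { s.online = false; s.edition = '2.0'; });
checkInvariant('first half staged');       // remaining half still serves 1.0

// Step 2: take the rest offline; the application is now fully offline.
secondHalf.forEach(s => { s.online = false; });
route('request-during-switch');            // queued, never served by 1.0
checkInvariant('all offline');

// Step 3: bring the staged half online, then upgrade and restore the rest.
firstHalf.forEach(s => { s.online = true; });
checkInvariant('new edition live');
secondHalf.forEach(s => { s.edition = '2.0'; s.online = true; });
checkInvariant('rollout complete');

console.log(route('request-after'));       // '2.0'
console.log(queued.length);                // 1
```

The one queued request is the price of atomicity: a group rollout avoids that offline window but, as described above, briefly serves from a shrinking pool instead.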

Automatic Deployment with Ant Script

I am assuming by now that you are comfortable with the edition management concept of WebSphere Application Server. In this section I will explain how you can achieve this in your organization. I am assuming that the target deployable artifact (EAR, WAR, etc.) has already been prepared by you. The automated script will perform the following tasks for you:

Install the deployable artifact on server

Add the shared library reference to it

Rollout the current edition

If you don't have any shared library reference in your application, you can modify the script to create your own version. You can use this script in both clustered and non-clustered environments with little modification to the code. I will explain both processes here. So let's get started.

addSharedLibrary : This command will add shared library to the installed application

rolloutApplication : This command rolls out an edition to server or cluster

startDeploymentOnCluster : This command depends upon installAppOnCluster, addSharedLibrary and rolloutApplication. This is the target you need to run in order to start the deployment process on Cluster.

startDeploymentOnServer : This command depends upon installAppOnServer, addSharedLibrary and rolloutApplication. This is the target you need to run in order to start the deployment process on a server.
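The dependency chain between these targets might look like the following build.xml sketch. The target bodies here are placeholders (the real script drives wsadmin and the addSharedLibraryScript.py Jython script), and the property value is illustrative; only the target names and the depends wiring mirror the description above.

```xml
<project name="was-edition-deploy" default="startDeploymentOnServer">
  <!-- Placeholder value; set this to the folder holding your EAR/WAR -->
  <property name="server.home.dir" value="/opt/deployables"/>

  <target name="installAppOnServer">
    <!-- wsadmin call to install the artifact as a new application edition -->
  </target>

  <target name="addSharedLibrary">
    <!-- wsadmin call running addSharedLibraryScript.py -->
  </target>

  <target name="rolloutApplication">
    <!-- wsadmin call to roll out the newly installed edition -->
  </target>

  <!-- Running this target executes the three steps above in order -->
  <target name="startDeploymentOnServer"
          depends="installAppOnServer,addSharedLibrary,rolloutApplication"/>
</project>
```

A startDeploymentOnCluster target would have the same shape, depending on installAppOnCluster instead of installAppOnServer.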

You need to download addSharedLibraryScript.py and save it to <WAS_HOME>/bin folder of your WAS installation. You don't need to update anything in this file.

Once the above step is completed, place your deployable artifact at the path specified against the "server.home.dir" property in the build.xml file.

Once that is done, make sure that the deployment manager and the server (at least one server from the cluster, if you are deploying the app on a cluster) on which the artifact is to be installed are running.

Depending upon where you want to install the app, run either startDeploymentOnCluster or startDeploymentOnServer target as below:

Navigate to <WAS_HOME>/bin folder

For Windows: ws_ant.bat -f <path_to_build.xml_file> "<target_name>" (target name has to be in quotes)

For Linux: ./ws_ant.sh -f <path_to_build.xml_file> "<target_name>" (target name has to be in quotes)

That is it. If everything goes well, the new version of the application will be installed and you will be able to roll it out.

Conclusion

In addition to providing interruption-free delivery of your application to your customers, this script also saves you valuable manual deployment time.

Every line of code you create comes with a complexity cost. How can you tame this complexity for your large source base? One way is to streamline your delivery turnaround time for enhancements and fixes by visualizing your projects' source code—after all, "a picture is worth…” Some major financial systems are driven by industry standard visual representations that allow those projects to meet agile DevOps demands. Join this session to learn about productivity gains and improve your continuous delivery of software.

Roger brings twenty-five years of software product innovation and consultative engagement experience across several industries.

Roger is an OMG Certified UML Professional, IBM and Open Group Certified Specialist and has been a featured global speaker on Cloud Software, DevOps for Mobile, SOA, Design, and Rational software topics.

Roger is also a volunteer for the American Youth Soccer Organization that promotes a fun, family soccer experience for 1,000 kids in the Eastern Panhandle of West Virginia.

***Dial in codes will be sent a few minutes before the webcast and posted in the online meeting.

By registering for this webcast you are allowing the GRUC to provide your information to IBM and/or webcast sponsors for direct contact regarding IBM products and promotions. You will also receive a complimentary membership to the Global Rational User Community.

This blog will talk about the "Graphical Data Mapping Editor", which was introduced in WebSphere Message Broker Version 8.0 (hereafter called IBM Integration Bus). It will serve as a quick reference for creating new maps, customizing a message map to include headers, and editing the properties of transform elements.

The final step is to select the domain, which completes the creation of a new map.

Mapping Editor:

The newly created map can be edited with the Mapping Editor.

Mapping Input Source to Output Source:

Customizing to add Headers:

Editing the property of Transform Element:

Edit the property of any individual transform element to add a condition or assign a value as illustrated:

Mapping Editor Navigation:

Disclaimer: Each posting on this site is the view of its author and does not necessarily represent IBM's positions, strategies, or opinions. I do not guarantee the correctness of the opinions, content, or sample code presented here. Use it at your own risk.

As one of the core developers on the DB2 Connect CLI team, I got an opportunity to work on supporting the generic special registers feature. The idea behind this blog is to describe some of its benefits and usage, to help the application development community understand the feature better and leverage it.

Though the focus of this blog remains CLI-centric, similar concepts exist in other client drivers such as the IBM DB2 .NET provider and the IBM JDBC driver (aka JCC).

The IBM Data Server Driver configuration file (named db2dsdriver.cfg by default) is gaining popularity among customers due to its ability to hold different DSNs and database properties in a central repository. In addition, being in XML format, it takes little effort for any user to get used to. In DB2 Connect V10.1 Fix Pack 2, CLI added a new capability to db2dsdriver.cfg that allows users to set special registers generically.

Before I go deep into the feature, let me begin by answering a few basic questions:

What are special registers?

A special register is a storage area that is defined for an application process by the database manager. It is used to store information that can be referenced in SQL statements.

To know more about special registers with examples, refer to the following link:

What is the existing method of setting special registers from client applications?

There is a set of special registers which can be set (are updateable) by client applications. An application can modify such special registers programmatically using "SET" SQL statements. There are a few special registers for which DB2 CLI provides connection-level keywords; applications can set these keywords via either the db2dsdriver.cfg or db2cli.ini configuration file.

Limitations using existing method of setting special registers:

Setting special registers programmatically requires modifying the application source code and recompiling each time a special register needs to be added, removed, or modified. This also has to be done in every impacted application program.

Using the special registers that can be set as CLI keywords is a better approach than the former, but with only a limited list of such keywords, applications do not get a complete solution. CLI could be enhanced to support each requested special register as a keyword; however, with the data server introducing new special registers in each release, this would remain an ongoing effort, and it would force users to upgrade their client drivers to get support for newer special registers as keywords.

What is the newer mechanism CLI provides to address above situation?

To overcome the drawbacks of both approaches above, a more generic solution was desired. As a result, CLI has introduced a dedicated special registers section, <specialregisters>, in the configuration file db2dsdriver.cfg. This section allows users to specify a list of special registers that they would like to configure. Based on need, the <specialregisters> section can be added at the DSN level, the database level, or even globally.

During each connection to a given DSN or database, CLI reads db2dsdriver.cfg and processes the <specialregisters> section in the following manner:

- read each special register name and its value from the <specialregisters> section of the given DSN or database

- without scanning or interpreting them, form a chain of special registers to be sent to the connected data server

- upon the first SQL statement of the connection, flow the chained special registers to the server

- the server processes each special register in the chain (along with the first SQL statement of the connection) and sets it appropriately on the server side

As we can see from the above flow, with this feature CLI has no need to know or validate the special registers. It simply flows the entries from the <specialregisters> section to the server and lets the server do the necessary validation. Another benefit is that, because the special registers are chained together with the first SQL statement of the connection, the network trips otherwise needed to set the special registers are saved.

When a server upgrade occurs and a user application wants to set newly supported special registers, all the user needs to do with this new CLI feature is add those special registers to the <specialregisters> section! No driver upgrade is needed in order to use newer special registers.

Illustrating usage of <specialregisters>

Having given some background, I can now proceed to the working of this feature. Let's begin by adding a <specialregisters> section to an existing or new db2dsdriver.cfg configuration file.

1. Special registers applicable across all DSNs/databases (residing under the global <parameters> section)

CURRENT DEFAULT TRANSFORM GROUP = 'MYSTRUCT2'

CURRENT LOCALE LC_MESSAGES = 'en_CA'

2. Special Registers applicable for DSN = sample

CURRENT SCHEMA = 'MYSCHEMA'

CURRENT DEGREE = 'ANY'

CURRENT DEFAULT TRANSFORM GROUP = 'MYSTRUCT2'

CURRENT LOCALE LC_MESSAGES = 'en_CA'

3. Special Registers applicable for database = sample2

CURRENT SCHEMA = 'MYSCHEMA1'

CURRENT DEGREE = 'ANY'

CURRENT DEFAULT TRANSFORM GROUP = 'MYSTRUCT2'

CURRENT LOCALE LC_MESSAGES = 'en_CA'
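In db2dsdriver.cfg itself, these settings live inside <specialregisters> elements at the chosen scope. The sketch below shows the general shape; the host and port values are placeholders, and the exact child element syntax should be verified against the driver documentation for your fix pack level.

```xml
<configuration>
  <dsncollection>
    <!-- Scope 2: special registers for DSN = sample -->
    <dsn alias="sample" name="sample" host="myhost.example.com" port="50000">
      <specialregisters>
        <parameter name="CURRENT SCHEMA" value="'MYSCHEMA'"/>
        <parameter name="CURRENT DEGREE" value="'ANY'"/>
      </specialregisters>
    </dsn>
  </dsncollection>
  <databases>
    <!-- Scope 3: special registers for database = sample2 -->
    <database name="sample2" host="myhost.example.com" port="50000">
      <specialregisters>
        <parameter name="CURRENT SCHEMA" value="'MYSCHEMA1'"/>
        <parameter name="CURRENT DEGREE" value="'ANY'"/>
      </specialregisters>
    </database>
  </databases>
  <!-- Scope 1: global special registers go under the <parameters> section -->
</configuration>
```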

The special registers configured above for the relevant DSNs/databases come into effect with the first SQL statement issued after the connection. It is at this point that the special register settings are applied at the server.

In the above application logic, "INSERT" is the first SQL statement after the connection. Along with this SQL statement, the effective special register list (as configured in db2dsdriver.cfg) is formed, and these special registers get set at the server. If setting any special register at the server results in a warning or an error, it is chained to the response of the first SQL statement. The application can call the SQLGetDiagRec() API to retrieve the warning or error details and diagnose the problem.

Where can I not use this new feature?

To set client info properties, it is not recommended to use the <specialregisters> section. The existing mechanisms, via either CLI keywords or environment/connection-level attributes, should be used instead.

If the application logic needs to set special registers during the life of the connection (not at its initial phase), or to change special registers in between, then setting the special registers programmatically is the only way. The new feature is useful only for setting the initial value of a special register for the connection.

In summary, as an application user, one gets the following benefits with the new feature:

1. Savings in time and network utilization by reduction in network flows

Network round trips between client and server are reduced, since the most optimal DRDA protocol is used while flowing the special register set information to the server.

Moreover, chaining the set of special registers along with the first SQL statement of the connection saves another network round trip by using a piggyback mechanism.

2. Less maintenance and upgrade of the driver:

The new approach avoids the need for a driver-level upgrade just to exploit a new server special register. All users need to do is add the new special register entry to the <specialregisters> section of the existing driver's db2dsdriver.cfg file (the minimum driver level is V10.1 Fix Pack 2). Given that many big organizations have thousands of client drivers installed across workstations, this saving brings them a lot of relief.

3. Centralized maintenance:

Using a central configuration method for db2dsdriver.cfg, users now have a much more controlled way to add, remove, or edit the special registers for their applications. Also, with the flexibility of using <specialregisters> at the DSN, database, or global level, users can tune the configuration to their needs quite easily.

Managing software and product lifecycle integration has always been a challenge and with the rate of the new demands on the enterprise the challenges are increasing. Leaders from different standards organizations and industry will lead interactive discussions on the importance of open technologies to help enterprises manage the lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards and the importance of this work to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.

The Summit is free to attend for all those attending IBM Innovate. Join us for an exciting session and refreshments to start your attendance at Innovate 2013. For more information and to RSVP visit http://ibm.co/16jTusU

Having worked for some time on LDAP for DB2, I thought it would be good to highlight how, and in which areas, LDAP can help when working with DB2. So, with a platform like that of developerette, here is a brief introduction to what LDAP is and how it best fits while working with DB2.

What is LDAP?

Today, people and businesses rely heavily on networked computer systems to support distributed applications whose key information is stored in a central repository. Such information is often collected into a special database that is sometimes called a directory. The Lightweight Directory Access Protocol (LDAP) is an industry-standard access method for such directories.
LDAP defines a standard method for accessing and updating information in a directory over the TCP/IP protocol. LDAP has gained wide acceptance as the directory access method of the Internet and is therefore also becoming strategic within corporate intranets. It is a centralized storage system for an organization's data (just like email).

Usability scenarios where LDAP fits best with DB2

The figure represents a typical LDAP topology where the client connects to the "payroll" database using the cataloged information present in the LDAP server.

Recently, there has been a lot of buzz around Web APIs and the developer platforms of popular social networking companies like Twitter and Facebook. There was a time when the industry was obsessed with SOA and you had to be doing SOA to do it right. Reusability and loose coupling were the much-needed qualities. While this is still the case, the use of Web Services to implement service-oriented architecture is giving way to REST-based services, which are much faster to implement as well as to understand.

Also, more and more individuals are interested in creating their own mobile apps with the advent of platforms like the Android SDK, IBM Worklight, PhoneGap, and trigger.io.

REST stands for Representational State Transfer. It is an architectural style built on HTTP in which every resource on the internet is addressed by a unique identifier (a URI), and clients exchange representations of a resource's state.

So, for example, your Facebook profile picture is a resource on the internet which is uniquely identified. You can GET it using a URL, you can POST a new picture of yourself, PUT (edit) the existing picture, or DELETE the picture.

The catch is that you are not doing all this using the Facebook page, but using URLs (or more specifically, the Facebook Graph API); the advantage is that you can plug this functionality into a custom app you write.

So how does this all fit together? Let's put together the pieces of this jigsaw puzzle and look at the primary actors/stakeholders in this space.

The Provider Story

The providers are the companies (or individuals) who have built some custom functionality which is unique to them. They now want to make it open for anyone to use. So they come up with their APIs (Application Programming Interfaces), which are on the web and hence called Web APIs. Simple, isn't it?

The Consumer Story

The simplest to understand: consumers are, simply put, the people who use the Web APIs. So this is you and me. The consumers can also be enterprises. We need the APIs; how else would we know how to get that much-desired resource which is lying in some corner of the Internet?

As already discussed, you and I can plug the functionality provided by someone else into our own app, something like a mashup.

A very common example is companies using Facebook/Gmail credentials to let you log in to their websites and then posting content on your timeline. You can use the Facebook Graph API for this purpose.

The primary tenet is reusability: why build something which someone has already built and is making publicly available?

Web APIs, or REST-based services, open the gateway for cross-company collaboration.

The Middle Manager Story

You need a cab in a city which is new to you; what do you do? Yellow Pages...

The middle managers are the yellow pages (intermediaries) which collect information about multiple providers and make it available to the consumers.

Not all companies have a wide social appeal; most are more specific and domain-oriented. The middle managers build the bridge between producers and consumers who would otherwise not know each other at all.

In addition to acting like a directory of provider Web APIs, middle managers may choose to provide certain value-adds to the providers, like API analytics, which help the provider improve their services, learn who their primary customer base is, when the API was accessed the most, and in which geography it is most popular. This is indeed an upcoming trend. This is where IBM Cast Iron Live for Web APIs comes into the picture.

With the advent of the cloud, there was an immediate worry: what happens to the existing enterprise software and hardware? What happens to my existing apps? Do I need to re-architect everything for the cloud?

The truth is that most enterprises are taking only baby steps towards cloud computing, e.g. email, Salesforce, web apps/sites, office apps, etc. The main reasons for this are the initial skepticism about a new buzzword as well as a lack of enterprise readiness to adopt a cloud strategy head-on. The slow acceptance is mainly because enterprises are just not aware of the business challenge they want to tackle with the cloud. The most appropriate first step is to understand your problem and seek the question to which the answer is the cloud.

On the other hand, enterprises have already worked on modernization and rationalization of most of their legacy apps. They have also implemented a reusable, modular architecture in their enterprise through the use of SOA (Service-Oriented Architecture) principles. SOA governance, whose need is now being understood by many enterprises, is an added winning advantage, because an existing SOA governance framework can be extended to govern cloud services.

So is Cloud the end of enterprise software? Do enterprises really not need to buy hardware any more?

The answer to this question largely depends on the business problem of the enterprise. For a large telecom company or a bank, doing away with all enterprise software or hardware makes little sense.

However, for college graduates looking to establish their own startups, the need to spend a lot of money on hardware is virtually eliminated. They can rent an IaaS solution (the most common being Amazon EC2 <Elastic Compute Cloud> instances) and get to work. In cases like startups or pilot projects, cloud solutions are actually a boon.

What are the big companies doing about cloud? Oracle, IBM and Microsoft?

Disclaimer: The following thoughts are my own analysis, and much of it is drawn from the content of Jason Bloomberg's conference. They do not reflect the views or opinions of my company whatsoever.

Oracle has launched a suite of products around IaaS, PaaS, and SaaS. Oracle offers Sun hardware for its IaaS solutions and essentially hosts Oracle middleware on the cloud. It calls its SaaS solutions Oracle Fusion Apps.

IBM's cloud strategy is not aimed at the end-consumers. Their target is the big enterprises and large telecom providers who actually provide enterprises with the infrastructure to host their cloud solutions. So these are more like the ISPs (Internet Service Providers). IBM's solutions are mainly around PaaS.

Microsoft brands its cloud solutions under the Windows Azure name. The Windows Azure Platform and Windows Azure Platform Appliance are its PaaS offerings, whereas its major SaaS offering is the Office 365 apps.

So how do I get out of this mess? To use or not to use the Cloud?

The answer to this, and to the enormous range of cloud solutions, is two words: Architecture and Governance.

It is essential to identify the business problems the cloud best addresses, and to see where the cloud fits into the overall IT strategy of the enterprise: what are the pros and cons of the cloud versus the alternatives, and how does the cloud fit into the overall governance framework?

The important thing to remember is not to grab a solution just because your favourite vendor has launched it, but to analyse objectively whether your enterprise really fits into that ready-made suit.

I am a product developer. Do I have to redesign my products for the Cloud?

The essential thing to understand is that unless the applications are re-architected to take advantage of Cloud benefits like Elasticity and Fault Tolerance, there is little sense in using a Cloud Solution at all. As the phased strategy to Cloud migration suggests, it is very important to take incremental steps to architect your solutions for the Cloud.

When you design with the aim of leveraging the Elasticity and Fault Tolerance benefits of the Cloud, you will end up with a better-architected app. You don't know ahead of time how many Cloud instances your app will be running on, so it makes perfect sense to spend a little time up front and design your app FOR the Cloud.

Can I ensure my ACID transactions in the Cloud?

We have grown up reading about databases and the magic word ACID (Atomic, Consistent, Isolated, Durable), and so we believe that all database transactions should necessarily be ACID, for several reasons. However, with transactions in the Cloud, it is no longer possible to have immediate consistency of data across all instances. What the Cloud assures is Eventual Consistency, i.e., data will become consistent after a set amount of time has passed since an update.

ACID is gradually giving way to BASE in the Cloud Context.

Basic Availability - The Cloud supports partial failures without leading to a total system failure. (Cloud environments are inherently partition tolerant.)

Soft State - Any change in state must be maintained through periodic refreshment.

Eventual Consistency - It's okay for some data to be stale some of the time.

The BASE nature of transactions in the Cloud also suggests that for companies where real-time data and accuracy are of prime importance, the Cloud might not be such a good solution. This is a clear example of where the Cloud cannot be a good medicine for all ills. Examples include real-time inventory management for product availability, and banks.

(Banks may not want to adopt the Cloud for reasons other than BASE: security and government regulations may be major challenges.)
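The three BASE properties above can be illustrated with a toy in-memory model. The sketch below is purely hypothetical Python (it is not any real cloud store's API): a write is acknowledged after reaching a single replica, lagging replicas may serve stale reads, and a periodic propagation pass makes all replicas converge.

```python
class Replica:
    """One copy of the data; holds a plain key-value dictionary."""
    def __init__(self):
        self.data = {}

class EventuallyConsistentStore:
    """Toy BASE-style store: writes are acknowledged after reaching one
    replica, and other replicas catch up only when propagate() runs."""
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]
        self.pending = []  # updates not yet pushed to all replicas

    def write(self, key, value):
        # Basic Availability: acknowledge after writing a single replica.
        self.replicas[0].data[key] = value
        self.pending.append((key, value))

    def read(self, replica_index, key):
        # Soft State: a lagging replica may return stale (or no) data.
        return self.replicas[replica_index].data.get(key)

    def propagate(self):
        # Eventual Consistency: push pending updates to every replica,
        # modelling the periodic anti-entropy/refresh pass.
        for key, value in self.pending:
            for replica in self.replicas:
                replica.data[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.write("stock", 42)
stale = store.read(2, "stock")   # replica 2 has not seen the write yet
store.propagate()
fresh = store.read(2, "stock")   # all replicas have now converged
```

A real-time inventory system built on such a store could briefly sell an item that is out of stock, which is exactly why the post notes that the Cloud's BASE guarantees can be a poor fit where accuracy is paramount.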

Conclusion

The main takeaway from this module is that the adoption of the Cloud depends on what your unique problem is. For the Cloud, one size does not fit all. An enterprise needs to carefully weigh its app's requirements for scalability and elasticity, and then decide which Cloud deployment option is right for it.

Prof. Madhuri has a unique way of telling her stories, making them interesting and inspiring at the same time. She is a proud professional, daughter, book lover and a guide to her students. She's always ready to try something new and finds motivation in the simplest of sources. Her career as a professor is a conscious choice to nurture and interact with young talented minds. Needless to say, her bond with her students is unbreakable!

Thank you Ma'am, for taking the time to share a part of your life with developerette. We hope to read more from you in the coming days.

Read about Prof Madhuri's unique style and her advice to follow your heart and keep a clear conscience while choosing your different roles as career women, wives and daughters.

1. Please share some details about your professional background, family and interests.

I am from a family of five daughters. Being the 2nd eldest, I always had a very responsible role to play. I also played football during my school days. I always enjoyed shouldering responsibilities, be it in school, at home or on the football field. With all five of us going to the same school, seeing the same teachers and knowing the same rules, our growing up was an experience unique in every way. Dad and Mom could not have given us a brighter childhood and upbringing than the way they have nurtured us. We also made a record at Sacred Heart Convent School, Jamshedpur, as the 2nd and last family to have 5 daughters schooled in the same place, and as a family with an association of over 2 decades. Well, details of each of my family members may be shared only on special request, in order not to bore away the majority of readers!! :)

Reading is my passion. Nowadays, programs on History Channel, Animal Planet and Discovery Channel also fascinate me....

After finishing Engineering at B.I.E.T., Odisha, I pursued my Masters in Technology from Bharath University, Chennai. Economics is a subject that has always fascinated me, and I therefore studied a second postgraduate course, M.A. (Economics), from the University of Madras in distance mode. I am now pursuing a Ph.D. from Biju Pattnaik University of Technology, here at Bhubaneswar. I started my career at Slash Support as an Application Engineer in the year 2005 and then moved on to higher education.

2. What made you choose a career in IT education?

I always wanted to teach. I have been doing it since childhood. Helping my younger sisters with their studies was something I always enjoyed. Being close to books always made me happy, and reading and exploring new things gives me tremendous pleasure. I wanted a career where I could read something new all the time, and I slowly managed to walk into imparting IT education. Most of my friends and classmates are big shots in IT companies, etc., but I never desired it that way. I want to see myself as a Ph.D., and I desire to work further towards a PDF as well. I am quite an orator and find delivering lectures and interacting with youthful minds and souls very satisfying.

3. How did you guide your career to be where you are today?

By listening to my heart and conscience, and by staying sound and awake to the choices I am making in life!

4. How have you balanced your professional and personal demands simultaneously?

Well, my upbringing is such that I was always taught to listen patiently to my conscience, and conscience fails to demarcate between such things; it sees no boundaries. I guess it's all about doing right or wrong, not about personal versus professional things. As a teacher you can never say that you have professional hours. You go home and you prepare for your next lecture. You cannot be annoyed, even if you want to be, when a concerned parent reaches you at hours other than 10 to 5. I stay balanced by being happy, and I do things that make me happy. My parents guide me through my strenuous hours. Staying only 500 km away from them also gives me an opportunity to meet them as often as I desire. I also have a wonderful group of friends, especially Vinod Sir and Laxmishree Ma'am, who share their moments of research, lectures and family life with me, and so do I. My HOD, Dr Pattnaik, and Associate Dean Dr Alok are very motivating people who themselves practice punctuality, righteousness and simplicity. Our School of Computer Science celebrates all festivals and organizes picnics and other cultural events as well. I must say that I am blessed with a competitive yet caring and enthusiastic work environment. Well, on another note, I also try other things like swimming and yoga.

5. What are some of the challenges you face in your role and how do you deal with stressful days?

The major challenge that I see is accepting one's responsibility as a teacher. There are some moral guidelines, and these differ from teacher to teacher; you cannot have those moral values engraved literally. Teaching undergraduate students who will soon be taxpayers in society means a lot. We have the responsibility of making upright citizens, but how far we succeed cannot always be measured. We measure the success of an institute by the number of placements it secures. Well, teaching cannot be standardized. Newer technology, new methods, new concepts and a generation already ahead of us are what we have to face. Sometimes I even have to watch cricket matches, which I dislike, just to be well informed enough to face these kids.

Moments of stress come in everyone's life. When I am forced to do what I don't like, I find myself irritated and stressed. I sometimes reason it out, and mostly I share it with my Dad and Mom, depending on the kind of concern I have. Sometimes I share it with my sisters, and when I realize that nobody can help... I leave it to GOD.

6. The girls in your college are aspiring women in technology. What is your advice to students to stay focused?

Life is a stage that these young ladies are about to step onto; it is about making decisions and choices, choices and decisions that stay with you for the rest of your life. While making these decisions you should realize what is involved: can you make a living out of this choice for the next 20 years? A career of 20 years is what you should be looking at. As women, we have responsibilities towards parents, sisters, in-laws and that very special person, while at the same time we also have another world of professional entities. There is no common mantra or strategy for excelling in both of these dimensions. Each one of us has a different story to narrate, and each one of us is great in her own way. I believe in what Shakespeare said: "Some are born great, some achieve greatness, and some have greatness thrust upon them." My advice to you young ladies is to follow your heart and make the right choices. You will have a story to talk about in the future.