The purpose of this blog is to provide information on best practices and to discuss practical, feasible architectural solutions, especially for digital architecture. It will also cover emerging technologies. Feel free to ask questions or share your comments.

Friday, December 22, 2017

Challenges & opportunities for cloud-native applications

The current technology landscape for cloud-native apps is evolving, and Platform-as-a-Service (PaaS) solutions are constantly changing to meet the demands of such architecture. As customers need flexibility and openness in choosing a PaaS solution (without vendor or technology lock-in), a key need is to provide a neutral view in terms of:

Which PaaS solution is the right fit for my enterprise (considering unique opportunities and challenges in each enterprise)?

How does it help in realizing the vision of migrating toward cloud-native apps (with microservices as an architectural style of business services)?

The overarching problem of choosing the right technology solution and partner for enterprise-level cloud-native apps creates an opportunity for all of us to innovate, build, and implement the right-fit solution in a progressive manner.

Capabilities of Platform as a service for cloud-native applications

A reference architecture outlined below denotes the key set of capabilities required for building cloud-native apps and helps in choosing the right platform offering these capabilities:

Key platform as a service (PaaS) solution options

Cloud Foundry and Kubernetes are great initiatives towards standardization of PaaS solutions, but consensus on a standardized approach to PaaS is still at an early stage.

Considering the current market landscape, with serverless and other emerging trends, current PaaS solutions can be visualized as below (different vendors offer many variations of the options shown here):

Conclusion

Cloud-native apps development is an evolving field, and considering the novelty of the solutions, industry players and standards bodies (such as IEEE) are still firming up standards and best practices. There is no silver bullet in choosing a solution: enterprise context, governance, and required capabilities drive the recommended platform-as-a-service (PaaS) solution for supporting cloud-native applications.

Disclaimer: All data and information provided on this site is for informational purposes only. This site makes no representations as to accuracy, completeness, correctness, suitability, or validity of any information on this site and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis. This is a personal weblog. The opinions expressed here represent my own and not those of my employer or any other organization.

Wednesday, September 20, 2017

NGINX recently (Sep 6-8) organized NGINX Conference 2017 in Portland, OR, and I would like to highlight some of the key announcements made by NGINX experts in this blog. In recent years NGINX has made a paradigm shift toward offering a comprehensive application platform for cloud-native microservices, not just a high-performance web server, and that message was delivered consistently throughout the conference.

NGINX CTO Igor Sysoev announced an aggressive strategy to move beyond the web server and offer a unique application platform for cloud-native microservices with the same level of high performance. The platform is also supported across almost all major cloud environments in the market today - AWS, Azure, OpenShift, GCP, etc.

As per W3Techs figures, NGINX is now the #1 web server among the 1 million busiest sites in the world. NGINX achieved this in only 5 years, which is a great achievement for the open-source community.

The conference also highlighted how a lightweight technology like NGINX supports the challenging demands of microservices (such as monitoring, high performance, hot deployment, lightweight containers, and low/zero maintenance), and a similar approach will be extended to the new application platform.

Saturday, March 18, 2017

Technology helps in diminishing physical boundaries and watching Google Cloud Next 2017 keynote session on my TV through YouTube was a great example to prove this theory.

Cloud has been at the top of CxOs' priority lists, and technology events such as AWS re:Invent and Microsoft Build 2016 re-emphasized cloud's momentum. Google Cloud Next 2017 continues the trend, and here are the key takeaways from the Day 1 keynote:

#1 - G-Suite adoption by key industry players in quick time

Google highlighted the adoption of G-Suite (Gmail, Docs, Drive, Calendar) by business powerhouses such as Colgate-Palmolive and Verizon, and was excited to point out that with good planning, it can be achieved in a timeframe of 3-6 months.

"Google challenges old notions and showcases that it is possible to move large-scale infrastructure in months as against multi-year approaches."

Google highlighted its cloud capabilities with examples such as customer Planet Labs, which takes pictures of the Earth every 3 hours and stores them in Google Cloud.

With increased security, 99.999% up-time, scalability, faster time-to-market, competitive pricing, and flexibility, Google Cloud has a promising future and will put itself in the elite category very soon.

Fei-Fei Li (Chief Scientist at Google Cloud AI) talked about many such experiences (such as the scale of thousands of people having their own self-driving cars, using AI to improve the quality of human life).

The demo of the Cloud Vision API was awesome; it extends capabilities from image to video analysis using the Video Intelligence API.

#4 - Key industry players put faith in Google Cloud

Google invited its esteemed clients to talk about their experience with Google, and their passion was contagious.

eBay - R.J. Pitman talked about a Google Home example (along with a live demo) of integration with the eBay product catalog to find product information using voice commands.

#5 - Better uses of money... Eric Schmidt

And finally, Eric Schmidt pointed out that Google has put $30B into infrastructure, that we should not try to replicate it, and that we should put our money to better use. In his own words:

"We put thirty billion dollars - and I know because I approved it. So it's real, right, into this platform. Please do not attempt to duplicate it. You have better uses of your money." - Eric Schmidt

To conclude, exciting times are ahead in the cloud marketplace, with Google launching differentiating GCP capabilities in competition with Amazon's AWS and Microsoft's Azure. Google's cloud roadmap looks very promising, and its brand in terms of scalability, reliability, and availability makes Google Cloud Platform a very compelling option. Personally, I believe Google's offerings in Big Data, Machine Learning, and AI-related services are its core competence in a highly competitive cloud marketplace.

Tuesday, November 22, 2016

Microservices is the new architecture paradigm everyone is talking about, but it comes with its own set of complexities. As our industry matures in using microservices architecture, one of the key learnings is:

The outer architectural building blocks play a far more important role than the inner architecture of each microservice itself.

I will be sharing my perspective on microservices' outer architectural building blocks (in-memory computing & caching, API Gateway, containers, DevOps & cloud computing) in a series of upcoming blog posts. This post focuses on in-memory computing technology, which plays an important role in microservices architecture.

In-memory computing, in itself, is a vast technology domain, but in the context of microservices architecture, I would like to keep it focused on the following solution offerings:

In-memory database - a database system primarily residing in memory (SQL or NoSQL).

An in-memory database plays a collaborative role in microservices architecture, as it can usually be deployed independently and provides highly efficient storage support to microservices. Note that to preserve data stored in an in-memory database across deployments, a backup approach with file-system based storage is required.
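The backup approach described above can be sketched in a few lines of Python. This is a toy illustration, not a real product API: an in-memory key-value store that snapshots to a file so data survives a redeploy. The class and method names are my own.

```python
import json
import os
import tempfile

class InMemoryStore:
    """Toy in-memory key-value store with a file-system snapshot,
    illustrating the backup approach described above (illustrative
    names, not a real product API)."""

    def __init__(self, snapshot_path):
        self.snapshot_path = snapshot_path
        self.data = {}
        if os.path.exists(snapshot_path):
            # Restore state after a redeploy from the last snapshot.
            with open(snapshot_path) as f:
                self.data = json.load(f)

    def put(self, key, value):
        self.data[key] = value

    def get(self, key, default=None):
        return self.data.get(key, default)

    def snapshot(self):
        # Write atomically so a crash mid-write cannot corrupt the backup.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.snapshot_path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.snapshot_path)
```

A new instance pointed at the same snapshot file picks up the saved data, which is the behavior a real in-memory database (e.g. Redis with RDB snapshots) provides across deployments.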

The following key design patterns can be applied in the context of an in-memory database:

Pattern A - IMDB per microservice, where each microservice has its own storage

Applicability: Greenfield development where you have the opportunity to do microservices-based architecture first
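Pattern A can be sketched as follows, assuming two hypothetical services ("orders" and "inventory"); the point is that each service owns a private store and the stores share nothing:

```python
class Microservice:
    """Each service owns a private in-memory store (Pattern A);
    other services can only reach the data through its public API."""

    def __init__(self, name):
        self.name = name
        self._db = {}          # private per-service in-memory database

    def save(self, key, record):
        self._db[key] = record

    def fetch(self, key):
        return self._db.get(key)

orders = Microservice("orders")
inventory = Microservice("inventory")

orders.save("o-1", {"sku": "ABC", "qty": 2})
inventory.save("ABC", {"stock": 40})

# The stores are fully isolated: the orders service does not
# see inventory's keys, and vice versa.
assert orders.fetch("ABC") is None
```

In a real deployment each `_db` would be a separate in-memory database instance deployed alongside its service, but the isolation property is the same.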

An in-memory data grid relies on distributed computing, using clustering as the underlying approach, and provides a shared component (the grid) for CRUD operations in memory.

As microservices architecture recommends independently deployable units with minimal shared data, an in-memory data grid is not completely aligned with this architectural approach. However, in-memory grid products can provide independent cache nodes to each microservice, with a variation of the distributed cluster used to store backup data in the grid.

In a nutshell, these are conceptually different approaches:

Microservices architecture - promotes independence and fewer dependencies for linear scalability (a cluster of instances of a microservice, but clustering at the layer level is not recommended).

In-memory data grid - relies on a cluster of nodes (the data layer) for horizontal scalability (nodes can be added or removed on an as-needed basis).
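The grid's node-clustering idea can be sketched as below. This is a deliberately minimal model (no replication, no rebalancing): keys are spread across named nodes by hashing, which is how a grid scales horizontally at the data layer. All names are illustrative.

```python
import hashlib

class DataGrid:
    """Minimal sketch of an in-memory data grid: keys are spread
    across a cluster of nodes by hashing (illustrative only -
    no replication or rebalancing)."""

    def __init__(self, nodes):
        # One in-memory map per cluster node.
        self.nodes = {n: {} for n in nodes}

    def _owner(self, key):
        # Deterministically map a key to one node of the cluster.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        names = sorted(self.nodes)
        return names[h % len(names)]

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

grid = DataGrid(["node-1", "node-2", "node-3"])
grid.put("customer:42", {"name": "Ada"})
assert grid.get("customer:42") == {"name": "Ada"}
```

Note that with plain modulo hashing, adding a node reshuffles most keys; production grids use consistent hashing or fixed partition counts to limit that movement.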

For this reason, a vendor like Oracle has started its cloud roadmap with less emphasis on Oracle Coherence for microservices architecture and more on a cloud-based Cache-as-a-Service offering, which can be easily integrated with microservices architecture. This is similar to a common grid solution, with the variation that the cloud now plays the role of the common grid. The in-memory grid solution is still relevant and plays its role behind the scenes (supporting the cloud's cache-as-a-service).
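From a microservice's point of view, integrating with such a cache-as-a-service typically follows the cache-aside pattern: check the cache, and on a miss load from the system of record and populate the cache. A sketch, with a plain dict standing in for the hosted cache and all names invented for illustration:

```python
class CacheAsideClient:
    """Sketch of the cache-aside pattern against a shared
    cache-as-a-service endpoint (the dict stands in for a
    hosted cache; names are illustrative)."""

    def __init__(self, cache, load_fn):
        self.cache = cache      # shared cache service
        self.load_fn = load_fn  # fallback to the system of record

    def get(self, key):
        if key in self.cache:
            return self.cache[key]        # cache hit
        value = self.load_fn(key)         # cache miss: load...
        self.cache[key] = value           # ...then populate the cache
        return value

calls = []
def load_from_db(key):
    calls.append(key)          # record each trip to the database
    return {"id": key}

shared_cache = {}
client = CacheAsideClient(shared_cache, load_from_db)
client.get("p-1")
client.get("p-1")
assert calls == ["p-1"]        # second read served from the cache
```

The service keeps its independence: the cache is an external dependency it talks to over an API, not a cluster it must join.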

From an applicability perspective, the following patterns can be leveraged for microservices architecture along with an in-memory data grid solution:

Pattern A - In-memory data grid as event/message store

Applicability: Primarily for inter-microservice communication.
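The event/message-store pattern above can be sketched as a toy in-memory topic store: producers append events, and consumers poll them independently. Class and topic names here are invented for illustration; a real grid product would back the topics with its clustered memory.

```python
from collections import defaultdict, deque

class EventStore:
    """Toy in-memory event store for inter-microservice
    communication: producers publish to a topic, consumers
    poll events off it (illustrative names)."""

    def __init__(self):
        self.topics = defaultdict(deque)

    def publish(self, topic, event):
        self.topics[topic].append(event)

    def poll(self, topic):
        # Return the oldest unconsumed event, or None if empty.
        queue = self.topics[topic]
        return queue.popleft() if queue else None

bus = EventStore()
# The orders service publishes; the shipping service polls.
bus.publish("order-created", {"order_id": "o-1"})
event = bus.poll("order-created")
```

The two services never call each other directly; they only share the event store, which keeps the coupling at the data/message level.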

Applicability: To store data in the grid for each microservice and use partitioning approaches to separate data per microservice.
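The partitioning approach can be sketched as one shared grid where each microservice reads and writes only its own partition, preserving data ownership. This illustrative model omits access control, which a real grid product would enforce:

```python
class PartitionedGrid:
    """Sketch of per-service partitioning in a shared grid:
    each microservice gets its own namespace, so services
    share the grid without sharing data (illustrative only)."""

    def __init__(self):
        self.partitions = {}

    def for_service(self, service):
        # Each service sees only its own partition of the grid.
        return self.partitions.setdefault(service, {})

grid = PartitionedGrid()

orders_part = grid.for_service("orders")
orders_part["o-1"] = {"total": 99.0}

billing_part = grid.for_service("billing")
# billing's partition does not contain orders' keys
assert "o-1" not in billing_part
```

This is a compromise between grid and microservice principles: the infrastructure is shared, but each service's data remains logically private.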

To conclude, in-memory computing plays a significant role in microservices architecture as an underlying storage mechanism for each microservice. In-memory grid computing offers an alternative solution that respects the microservice principle of independence to a certain extent and provides an out-of-the-box answer to the inter-communication and storage challenges of microservices architecture.

Thursday, March 24, 2016

A software developer in any project plays a very critical role in realizing architecture and design. The modern architecture world needs modern, smart developers with pragmatic skills. Being aware, being social, and being smart is what the world is looking for.

Smart Developer is the need of the hour

So, how do we define a smart developer? Here are 5 key areas that can make a developer a smart developer:

Ability to be focused & goal oriented

It starts with introspection and planning for your career. You can think of an approach like the one you take for your code:

Keep it modular - Personal and professional aspects both need to be well thought out, and your TODOs (like those in code) need to be taken care of on a regular basis.

Keep it clean & comply with rules - As we follow coding compliance rules, set some rules for yourself and keep your objectives clean and measurable.

Keep it loosely coupled - Like your code, don't couple many objectives together; keep them simple and flexible so that they can vary independently.

Keep it measurable - Like your code's performance SLA, keep your objectives SLA-based and measure them fortnightly/monthly/quarterly, as frequently as possible.

Ability to market & sell your idea

This is the most ignored aspect and the most difficult part. As you grow, your ideas need to be told and executed, and to do that, you first need to sell your ideas to people.

Storytelling is a well-known technique to convey your thoughts in a way anybody can understand.

SapientNitro has redefined storytelling as Storyscaping, a new way to tell powerful stories with connected experiences (used in marketing). This can be applied in everyday storytelling as well.

Ability to increase your productivity

A constant effort and thought process to come up with new and improved ways to do things. A simple example: a JMeter script to do unit-level performance testing and reduce the cost of quality by detecting issues earlier.

Share your knowledge with the team (in the form of blogs or webinars). It helps improve the productivity of the entire team and is also an opportunity to get feedback from others.
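The unit-level performance-test idea above (illustrated with JMeter) can also be approximated in plain Python with the standard `timeit` module; this sketch is a quick guard against regressions, not a substitute for a full JMeter test plan, and the function and threshold are purely illustrative:

```python
import timeit

def lookup(data, key):
    # Function under test: a simple dictionary lookup.
    return data.get(key)

data = {f"k{i}": i for i in range(10_000)}

# Run the operation many times and fail fast if it regresses
# beyond a chosen budget (the 5-second threshold is illustrative).
elapsed = timeit.timeit(lambda: lookup(data, "k42"), number=100_000)
assert elapsed < 5.0, f"perf regression: {elapsed:.2f}s for 100k lookups"
```

Dropping such a check into the unit-test suite catches performance issues at the same early stage as functional bugs, which is exactly the cost-of-quality argument made above.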
