Unlock the full potential of cloud with an AWS Well-Architected Review

In the cloud realm, we're moving at an unprecedented speed. To keep up with the pace, AWS provides the building blocks for your cloud infrastructure. However, working with AWS can feel like walking into a hardware store: you might find everything you need to manage your architecture, but it's a struggle to put the pieces together into an effective implementation. You must also ensure that your business decisions stand the test of time. To unpack these concerns, let's take the following questions into account.

Do you realize the full value of your Cloud environment?

AWS appeals to many organizations because of features such as a pay-per-use model, the ability to scale based on usage, self-service, and high resiliency. All these benefits are expected to deliver much lower IT costs, better service quality, and faster time to market. However, traditional enterprises run into significant issues when creating a roadmap for their cloud ecosystem. Business leaders need to understand that their existing applications were created using age-old IT paradigms. Consequently, these applications are monolithic and configured only for fixed capacity. Without proper architecture in place, applications fail to take advantage of the dynamic features of cloud.

Is the traditional “castles and moats” approach still working?

Technology teams are well versed in building applications using the legacy IT framework. However, they often lack the knowledge of AWS best practices and the right expertise for their cloud-native environment. Most IT environments have adopted a perimeter-based “castles and moats” model to ensure security. But cloud environments need to be more like modern hotels, where you get keycard access only to specific floors and rooms. Cybersecurity will take a major hit if applications continue to be deployed using the castles-and-moats approach.

Analysts project that by 2020, cloud spending will grow at six times the rate of general IT spend. Large organizations may have adopted a cloud-first strategy, but many are still struggling with high costs. Also, over the years, cloud architectures have become more complex, cumbersome, and costly. The absence of a holistic cloud-first strategy, the lack of an automated operating model, and a reluctance to leverage new cloud capabilities have all contributed to rising costs.

When organizations get started with AWS, they often worry about the performance, cost, and security implications of their cloud decisions. That's why AWS launched the Well-Architected Framework. A review of your existing cloud infrastructure can help you compare how you manage workloads against AWS best practices. You can also acquire the much-needed guidance to develop scalable and flexible systems that deliver value on your investment and allow you to focus on your customer.

It’s all about the customer

The idea is to maintain a relentless focus on customers. The value may lie in seamlessly remediating pressing issues with performance, costs, operations, and security, or in understanding how new AWS services fit into your environment while optimizing workflows and cloud costs.

With rising AWS adoption, modern infrastructure skills remain scarce, and some architectural concerns may need immediate attention. A Well-Architected Framework review helps improve cloud usage, thus raising customer satisfaction. But the critical question to explore is: How can modern architecture give customers an advantage in the marketplace, including better experience, scalability, or faster idea-to-cash?

Build and deploy faster by reducing firefighting and capacity management. Through automation, you can experiment and release value to the business more often. The first thing you should consider is right-sizing resources. Automation helps you decommission resources that you don't need and pause resources that are temporarily not required. Right-sizing is an iterative process that involves selecting the cheapest instance type and size that still meets your performance requirements.
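To make this concrete, here is a minimal boto3 sketch that flags running instances whose daily average CPU stayed below 10 percent over the past two weeks as right-sizing candidates. The region, window, and threshold are illustrative assumptions, not recommendations:

    # A minimal sketch: flag running EC2 instances whose daily average CPU
    # stayed below 10% for two weeks as right-sizing candidates.
    from datetime import datetime, timedelta, timezone
    import boto3

    REGION = "us-east-1"
    ec2 = boto3.client("ec2", region_name=REGION)
    cloudwatch = boto3.client("cloudwatch", region_name=REGION)
    now = datetime.now(timezone.utc)

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                StartTime=now - timedelta(days=14),
                EndTime=now,
                Period=86400,            # one datapoint per day
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            if points and max(p["Average"] for p in points) < 10:
                print("Right-sizing candidate:",
                      instance["InstanceId"], instance["InstanceType"])

From there, flagged instances can be resized, scheduled, or decommissioned as part of the iterative loop described above.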

Lower and mitigate risks by understanding the uncertainties in your architecture and addressing them before they distract your team and impact your business. The focus should be on recovery from infrastructure or service disruptions and acquiring the right computing resources. A review of your architecture helps test recovery procedures and leverage automation to recreate scenarios that led to previous failures. By monitoring systems for KPIs, you can trigger automation when the threshold is breached. You can monitor demand and system utilization and automate the addition or removal of resources to maintain the optimal level.
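One classic building block for this kind of threshold-driven automation is a CloudWatch alarm on the EC2 system status check that triggers the built-in recover action. Here is a hedged boto3 sketch; the instance ID and SNS topic ARN are placeholders:

    # A sketch of threshold-driven automation: recover an instance when its
    # system status check fails. Instance ID and SNS topic ARN are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="auto-recover-web-01",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,             # two consecutive failed checks
        Threshold=1.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[
            "arn:aws:automate:us-east-1:ec2:recover",   # built-in recover action
            "arn:aws:sns:us-east-1:123456789012:ops-alerts",
        ],
    )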

Make more informed business decisions by ensuring you've made deliberate architectural decisions and understand how they might impact business outcomes. Establish clear goals and metrics that your organization can use to measure its progress. The metrics should focus not only on cost but also on the business value derived from your systems. When you start measuring and monitoring your users and applications, and combine that with the data you retrieve from AWS platform monitoring, you can significantly improve system utilization and ensure that you rely on cost-effective measures.

The review is also a great mechanism to learn AWS best practices and make sure your teams are aware of them; these practices have been learned by reviewing thousands of customer architectures on AWS. Your teams should be able to leverage the elasticity of the cloud in response to ever-changing customer demand.

Lead by Example

Consider how one of our clients achieved significant business outcomes through the Well-Architected Review process. Recently, NetEnrich executed the AWS Well-Architected Framework Review for a leading Colorado-based bank. The client was looking to implement processes and procedures to improve the efficiency and performance of their AWS workloads. Furthermore, they were unsure whether their AWS services aligned with current AWS best practices and design principles.

We proposed an evaluation of the client's AWS environment against the AWS Well-Architected Framework. The client identified workloads for the review, and our cloud experts asked the appropriate questions to gauge the stability, scalability, and performance of the existing architecture and propose remediation.

Some of the significant outcomes realized by the client were:

A dramatic reduction in security risk by implementing multi-factor authentication (MFA) on all local accounts.

Improved system resiliency and efficiency by distributing their workloads across multiple Availability Zones for high availability.

Minimized costs across the board by decommissioning unused and untracked resources spread across multiple regions.

NetEnrich is helping partners and enterprise customers conduct AWS Well-Architected Reviews to create opportunities that align with technical and business outcomes. Talk to our experts to learn how you can scale cloud infrastructure and make the most of it.

About Author

Tanuj Mitra
Analyst – Marketing, NetEnrich

Tanuj works in content marketing communications and strives to create content that's in sync, smartly worded, and clutter-breaking.

Modern IT sure is getting more complex, and mid-market enterprise clients are demanding more of everything from their service providers. They want a best-in-class service experience, which means they expect the best in innovation, insights, efficiency, and cost savings. Managed service providers must change their priorities to address these requirements and achieve elite customer satisfaction.

To help your clients transform their IT operations and position your managed services business for future growth, you need to answer some tough questions. Have your service delivery operations been optimized to ensure the highest quality? Do you have an efficient and effective way to introduce new service offerings around emerging technologies? Are your service operations configured to enable workforce optimization and cost-effective delivery of service workloads? How can you achieve elite customer satisfaction and revenue growth?

With so many different considerations, successful service providers are redefining the meaning of modern service operations by delivering consistent high-value services on demand while handling increasingly complex environments in the form of on-premises, hybrid and cloud infrastructure.

Here are 4 tips to help optimize your managed services operations:

Increase Service Operations Efficiency:

Most service providers are finding that they have immature service operations that aren't optimized for scale. Have all the tools and technologies in your environment been deployed efficiently? To become a successful service provider, you need next-gen monitoring capabilities to manage upcoming technologies effectively, and tools that can scale globally to serve your SMB and enterprise customers.

Provide the Ultimate Client Experience:

To provide a high-quality and consistent client experience, you need true single-pane-of-glass visibility across all your customers' infrastructures. Using best-of-breed technologies and service level management ensures high service quality and improved customer satisfaction and retention.

Don’t Lose Talented Staff:

To become a next-gen service provider, you need to quit wasting your talented and highly skilled personnel on low-end technical work. High-quality technical resources are expensive and very hard to find. Today's networking technology, including virtualization, data center migrations, and cloud deployment and management, requires specific expertise and refined skills that are in high demand across the board.

Beat Competition:

If you don’t transform service delivery operations, your competitors will. How can you beat the competition and stay ahead of the game? To help your customers, you need to fully leverage automation and innovation on managed service offerings. Reactive and inefficient service delivery processes lead to wasted cycles and fire-fighting, which provide poor customer experiences.

How can you provide uninterrupted service delivery to clients, introduce new service offerings around emerging technologies, increase efficiency for cost-effective delivery of service workloads, and grow at best-in-class levels?

Without updating legacy service delivery operations and methodologies, MSPs will be unable to scale into new technologies and larger client segments, increase optimization and efficiencies, offer greater breadth and depth in technology solutions, and grow revenues and profits.

If you’re looking to accelerate your customers’ digital transformation and grow your business, your fastest route is with a proven partner. Contact us now to get started right away.

NetEnrich's channel expert, Justin Crotty, shares his tips and techniques to optimize service delivery operations and set your business on the path to success. Watch the on-demand webinar today.

About Author

Abishek Allapanda
Manager – Marketing, NetEnrich

Abishek has more than 7 years of experience in managing online and social media marketing campaigns. He works to develop, execute, and optimize account-based marketing programs and processes.

When you launch an instance into a private subnet in an Amazon Virtual Private Cloud (VPC), it cannot, by default, communicate with the internet through an Internet Gateway (IGW). This becomes an issue when instances in private subnets require direct internet access from the Amazon VPC to update application software, download patches, or apply security updates.

AWS provides two options to solve this problem: NAT instances and NAT gateways. Both allow instances deployed in private subnets to gain internet access.

NAT Instance

A NAT instance is an Amazon Linux Amazon Machine Image (AMI) designed specifically to accept traffic from instances in a private subnet, translate the source IP address to the public IP address of the NAT instance, and then forward the traffic to the Internet Gateway.

Here’s what you must do to allow instances internet access through the IGW via NAT Instances.

Lab Infra Introduction

In my lab, I created two subnets under a Test VPC (IPv4 CIDR 10.0.0.0/16). One is a public subnet directly connected to the internet via the Internet Gateway; the other is a private subnet with no internet access.

Public Subnet: 10.0.1.0/24

Route Table (Connected to Internet Gateway)

Private Subnet: 10.0.2.0/24

Route table:

I deployed two Amazon Linux Instances here. One is on the public subnet and the other is on the private subnet.

Instance properties deployed on the public subnet

I then connected to this instance and was able to update all installed packages using yum (which connects to the public repository via the internet).

Properties of the instance deployed on the private subnet

When I tried to update packages using yum, the error below occurred because the instance was unable to connect to the public repository.

Deploying NAT instance:

Navigate to EC2 -> Instances -> Launch Instance -> Community AMIs, search for a NAT AMI, and select the first NAT instance image listed.

Configure the remaining options per your requirements. Make sure you select the public subnet under your VPC.

Also, make sure the required ports are open in the security group attached to the NAT instance. In this example, I allowed HTTPS/HTTP so instances can pull patches from the repository.

Once the NAT instance is deployed successfully, go to the route table associated with your private subnet. Here, the private subnet is associated with the route table below.

I created a route in that route table with the NAT instance as the target. It passes traffic from the private subnet to the outside via the NAT instance.
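If you prefer to script this step, a minimal boto3 sketch looks like this (the route table ID and NAT instance ID are placeholders):

    # Add a default route through the NAT instance; IDs are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",   # route table of the private subnet
        DestinationCidrBlock="0.0.0.0/0",       # all internet-bound traffic
        InstanceId="i-0123456789abcdef0",       # the NAT instance
    )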

Before checking connectivity from the private subnet, we must disable the source/destination check, because each EC2 instance performs source/destination checks by default. This means the instance must be the source or destination of any traffic it sends or receives. A NAT instance, however, must be able to send and receive traffic when the source or destination is not the instance itself. Therefore, you must disable source/destination checks on the NAT instance.

How to disable Source/Destination Checks?

E2Select the NATed instance and then navigate to ActionsNetworkingChange source/destination check. Click on Yes to disable the button.

Now if I try to install or update the package using yum, the process is successful.

NAT Gateway

A NAT gateway is designed to do the same job as a NAT instance. However, it is simpler to run, thanks to its ease of management and built-in high availability within an Availability Zone. Here's how to deploy a NAT gateway.

Deploying NAT Gateway:

Navigate to VPC -> NAT Gateways -> Create NAT Gateway

Click on Create NAT Gateway and you will see the widget below. Select the public subnet.

Click 'Create New EIP', which allocates an Elastic IP automatically.

Make sure that the Gateway is active before you modify the routing.

It takes approximately 2-3 minutes.

Now go to the route table associated with your private subnet. Edit it and create a new route targeting the NAT gateway, after which you will be able to connect to the external environment.
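For reference, the whole gateway deployment can be scripted end to end. A boto3 sketch, with placeholder subnet and route table IDs:

    # Allocate an EIP, create the NAT gateway in the PUBLIC subnet, wait for it
    # to become available, then route the private subnet through it.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0aaaaaaaaaaaaaaa0",    # the public subnet
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]

    # Creation takes a few minutes; block until the gateway is active.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    ec2.create_route(
        RouteTableId="rtb-0bbbbbbbbbbbbbbb0",   # route table of the private subnet
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )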

To learn more about the differences between NAT gateways and NAT instances, refer to the AWS VPC documentation.

In traditional environments, architectural experimentation is done manually at the start of a project. Monolithic architectures are difficult to manage, and it becomes hard to even think about making a change. This means you cannot make informed decisions.

In AWS these constraints have been removed. It promises complete scalability, agility, and freedom with smart design principles for your systems. Let’s explore in detail how the five fundamental pillars of the AWS Well-Architected Framework can lead to significant benefits.

1. Operational Excellence

You can gauge the operational excellence of a workload by its reliability, agility, and performance. It includes the ability to run systems and gain insights into their operations. The best way forward is to manage and automate changes, respond to events, and run sustained operations.

Six design principles to help drive operational excellence:

Perform operations as code: In AWS you can apply the same engineering discipline that you use for application code. You can define your entire workload as code, script operations, and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events.

Annotate documentation: In an on-premises environment, documentation is usually created manually, and it's hard to keep up with the pace of change. In AWS you can automate the creation of annotated documentation after every build.

Make frequent, small, reversible changes: Make changes in small increments that can be reversed if they fail, to aid the identification and resolution of issues introduced in your environment. This increases the flow of beneficial changes to your workload.

Refine operations procedures frequently: As you use your operations procedures, look for opportunities to improve them and evolve them with the workload.

Anticipate failure: Find out potential sources of failure, so that they can be removed or mitigated. Test for responses to unexpected events to understand the impact. Set up regular game days to test your workloads and team responses to simulated events.

Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures; this helps keep operations procedures current.

2. Security

Security is a critical aspect of your cloud infrastructure. Key topics include the confidentiality and integrity of data, identifying and managing privileges, protecting systems, and establishing controls to detect security events.

To protect your system from critical threats, AWS suggests the following design principles:

Implement a strong identity foundation: Apply the principle of least privilege and enforce separation of duties with the appropriate authorization for each interaction with AWS resources. With centralized privilege management, you can reduce or even eliminate reliance on long-term credentials.

Enable traceability: Monitor, alert, and audit access and changes to your environment in real time. Integrate logs and metrics with systems to automatically respond and take action.

Apply security at all layers: Rather than focusing on the protection of a single layer, apply a defense-in-depth approach with multiple security controls at every level.

Automate security best practices: Automated security mechanisms improve your ability to scale cost-effectively. You can create secure architectures including the implementation of controls that are well defined and managed.

Always protect data: Use encryption, tokenization, and access control where appropriate. Create fundamental mechanisms and tools to reduce the need for manual processing of data. This reduces the risk of loss and human error while handling sensitive data.

Prepare for security events: Establish an effective incident management program that aligns with your organizational requirements. Run incident response simulations and use tools with automation to increase your speed of detection, investigation, and recovery.

3. Reliability

This pillar focuses on the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources, and mitigate disruptions. Key topics include foundational elements around setup, cross-project requirements, recovery planning, and change handling.

Below are five design principles for reliability:

Test recovery procedures: You can test how your systems fail and leverage automation to simulate different failures or re-create scenarios that led to failures previously.

Automatically recover from failure: By monitoring systems for KPIs, you can trigger automation when the threshold is breached.

Scale horizontally to increase aggregate system availability: You can replace one large resource with numerous small resources to minimize the impact of a single failure.

Stop guessing capacity: You can monitor demand and system utilization and automate the addition or removal of resources to maintain the optimal level to satisfy demand.

Manage change in automation: Make changes to your infrastructure using automation, so that the changes themselves can be tracked and reviewed.

4. Performance efficiency

It focuses on using computing resources to meet requirements and maintain efficiency as demand changes and technologies evolve. The key topics include selecting resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs change.

Democratize advanced technologies: Technologies that are challenging to implement become easier to consume when that complexity is pushed into the cloud. Rather than having your IT team learn to host and run new technologies, they can simply consume them as a service. In AWS, these technologies become services for your team to consume, eliminating resource provisioning and management.

Go global in minutes: You can quickly deploy your systems across multiple AWS regions around the world with just a few clicks. It will lower latency and provide a better experience for your customers at minimal costs.

Use serverless architectures: In AWS, you don't need to maintain servers to carry out traditional computing activities. This also lowers transactional costs, because managed services operate at cloud scale.

Experiment more often: Try comparative testing of different configurations to discover what performs better.

5. Cost Optimization

The focus is more on eliminating unused or sub-optimal resources. You should consider matching supply with demand while using cost-effective resources and being aware of the expenditure.

It can be achieved by the following design principles:

Adopt a consumption model: You pay only for your computing resources and can increase or decrease your usage depending on your business requirements.

Measure overall efficiency: Measure the business output of your systems and the costs associated with delivering it.

Eliminate data center operations costs: AWS does all the heavy lifting so that you can focus on business projects and your customers.

Analyze and attribute expenses: The cloud makes it easier to accurately identify the usage and cost of systems, allowing transparent attribution of IT costs to individual business owners. This helps measure ROI and enables system owners to optimize their resources and reduce costs.

Use managed services to reduce the cost of ownership: Managed services in the cloud remove the operational burden of maintaining servers for tasks like sending email or managing databases. Because managed services operate at cloud scale, they can offer lower per-transaction and per-server costs.

As organizations look to leverage the full potential of AWS cloud, it is essential to align with architecture best practices and ensure compliance with design principles of the five fundamental pillars.

NetEnrich is already helping partners and enterprise customers adhere to these five pillars through our AWS Well-Architected Framework Review services. Our cloud experts use a data-driven approach to deliver a high-performance architecture that includes cost optimization measures, automated security practices, and high reliability with holistic compute resource optimization. Talk to our experts to learn how we can help you manage AWS cloud better and faster.

About Author

Tanuj Mitra
Analyst – Marketing, NetEnrich

Tanuj works in content marketing communications and strives to create content that's in sync, smartly worded, and clutter-breaking.

Why Analyze Logs?

In the modern world, business applications continue to evolve, the log data they generate becomes huge and complex, and the files that store those logs keep growing. Log analytics tools help extract meaningful data from these large chunks of generated data. The analysis also helps derive metrics about an application and its performance over a period of time.

Log analysis is used to collect, index, and store massive amounts of data from any source deployed in the cloud. Each log file includes audit information, and dashboards let us analyze the collected log data and compare results against specific business needs.

Further, log analytics tools can help in identifying the root cause of an issue and consequently give the admins a chance to prevent such issues from occurring in the future. When a problem occurs, the critical concerns are:

Identifying the log file which contains the issue

Locating the server

Searching for the data (e.g., timestamp, version, etc.)

AWS services leveraged for log analytics and visualization:

Amazon Simple Storage Service (S3) is a storage service that can be used to store and retrieve any amount of data

Amazon Athena is a query service that makes it easy to analyze data directly from files stored in S3 using standard SQL statements

Amazon QuickSight helps build interactive visualizations, perform ad-hoc analysis, and get useful business insights from various data sources hosted on the AWS infrastructure

How to build a Serverless Architecture for log analysis?

The following are the steps for building the solution for log analytics on AWS.

Step 1: Upload your log files to S3

The generated logs are uploaded to S3 for further processing. Create an S3 bucket in your AWS account.
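A quick boto3 sketch of this step, with placeholder bucket and key names; a partition-style key layout (year=/month=) will pay off later in Athena:

    # Create the bucket and upload a log file; names and paths are placeholders.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="my-log-analytics-bucket")
    s3.upload_file(
        "/var/log/app/access.log",               # local log file
        "my-log-analytics-bucket",
        "logs/year=2019/month=05/access.log",    # partition-style key for Athena
    )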

Step 2: Create tables in Athena

Athena is used to analyze the data by querying the source datasets.

Open the AWS Management Console and type 'Athena' in the AWS Services search box. Once you find Athena, click 'Get Started'.

Using the Query Editor, run the command CREATE DATABASE to create a new database. You can save the command by clicking the ‘Save as’ option for future use.

Once the query is executed, the new database will appear in the drop-down menu on the left side of your screen. Now select the database that you created.

Create a new Table for the files in S3 as below:

Once you create the table, verify it by browsing for the table on the left-side panel.

To load all partitions of the table, run the command MSCK REPAIR TABLE followed by your table name. After creating the table, you can run various queries to investigate your logs, e.g., SELECT * FROM your table.
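Pulling Step 2 together, here is a hedged sketch that drives Athena through boto3, assuming the database created earlier is named logdb; the table, columns, and bucket names are illustrative:

    # Drive Athena from boto3: create a table over the S3 logs, load partitions,
    # and run a query. Database, table, columns, and buckets are illustrative.
    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    def run(sql):
        qid = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "logdb"},
            ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
        )["QueryExecutionId"]
        while True:                          # queries are async; poll until done
            state = athena.get_query_execution(QueryExecutionId=qid)[
                "QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                return state
            time.sleep(1)

    run("""
    CREATE EXTERNAL TABLE IF NOT EXISTS access_logs (
      request_time string,
      status int,
      uri string
    )
    PARTITIONED BY (year string, month string)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    LOCATION 's3://my-log-analytics-bucket/logs/'
    """)
    run("MSCK REPAIR TABLE access_logs")     # load all partitions
    run("SELECT status, count(*) FROM access_logs GROUP BY status")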

After receiving confirmation on data access via Athena, the next step is to visualize the data using QuickSight.

Step 3: Visualizing Data in QuickSight

Select 'QuickSight' from the AWS search bar.

Select the relevant QuickSight edition based on your requirements. In this example, we will go with the Standard edition for the demo.

After selecting the QuickSight edition, click 'Continue'. You will then be directed to the page shown below. Fill in the necessary details and click 'Finish'.

After creating the QuickSight account, click 'Manage data' on the QuickSight home page.

Select the ‘New data set’ option as below.

Now select the ‘Athena’ option from Data Sets.

For the data source name, enter the same name as the ‘Athena’ database and click ‘Validate’ to connect QuickSight to Athena. After validation, click ‘Create data source’.

Select the database and table from the following window. Click on ‘Edit/preview data’.

Here you can change the following variables as below:

‘Data type‘ of the data field

‘Rename‘ the data field

‘Exclude‘ a data field if you don’t need it

After completing the changes, click ‘Save and Visualize’. You can now view the QuickSight dashboard as depicted in the below diagram. Here, you can create your dashboard by adding visuals.

Choose Add on the application bar, and then choose Add visual. Select the fields to use from the Fields list pane at left, then create a visual by choosing a visual type.

You can also customize the visuals per your requirements: creating and renaming visuals, changing fields, and changing the visual layout.

Conclusion: This is how we can leverage AWS services to process, analyze, and visualize the logs generated from different log data sources.

AWS Backup and Restore as a Service

As data grows, protecting it for availability and security becomes challenging. Many AWS services have their own service-level backups (e.g., snapshots for EBS), but when it comes to backup management across all services, tracking progress and monitoring the backup processes becomes difficult.

Earlier this year, AWS announced the availability of AWS Backup as a “fully-managed centralized backup service.” AWS Backup achieves automated backups across a company’s various assets stored in AWS cloud, as well as on-premises. It also provides a centralized AWS Management Console via which organizations can manage their backup strategies.

The following are benefits of the cloud-native backup solution.

Centralized management of backups

The AWS Backup solution provides centralized backup management for all supported services. The backups for each function can be planned, tracked, and restored from a single pane of glass.

Policy-based backups

Backups often need to run outside of business hours. AWS Backup allows you to define policies with a set of rules to manage backup schedules. Backup plans can then be created and resources assigned accordingly. Within the rules you can define backup schedules, frequency, and even lifecycle. The service also lets you take backups on demand.

Automate the backup process

Once the policies are defined with rules, backups are performed automatically, freeing you from maintaining custom scripts or other solutions. The policies can be applied to resources simply by tagging the related resources, making the backup strategy easier.
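As a sketch of what such a policy looks like in code, the following boto3 snippet creates a daily plan with 35-day retention and assigns every resource tagged backup=daily to it. The names, schedule, and IAM role ARN are placeholders:

    # A daily backup plan with 35-day retention, applied to tagged resources.
    import boto3

    backup = boto3.client("backup", region_name="us-east-1")

    plan = backup.create_backup_plan(BackupPlan={
        "BackupPlanName": "daily-35d",
        "Rules": [{
            "RuleName": "daily-3am",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",   # 03:00 UTC every day
            "StartWindowMinutes": 60,
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    })

    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "tagged-resources",
            "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
            "ListOfTags": [{                 # back up everything tagged backup=daily
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }],
        },
    )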

AWS Backup also supports hybrid backups, which let users enable backups for on-premises data centers as well, using AWS Storage Gateway.

How it works

The process is shown in the following figure:
1. Create a backup plan
2. Assign resources to the plan
3. Monitor the backup process
4. Restore the backup

Below is a step by step guide to enable cloud-native backups:

Create a backup plan:

1. Log in to the AWS Console and navigate to the Backup console. Click Create backup plan to get started.

2. On the Create Backup plan page, you can select either of the options as shown in the below image. Let’s start by choosing to Build a new plan.

3. On the same page, navigate to Backup rule configuration where you must define the backup frequency and retention policies as shown below:

As shown above, you can set the frequency, backup window and also manage the lifecycle of the backup by setting the transition and expiration. The default backup vault can be used, or you can create a new vault which will store data that is backed up.

4. The recovery points can be tagged as below, and the backup plan itself can be tagged.

Once the backup plan is created, it will be shown under Backup Plans. The resources can only be added once the backup plan is designed.

5. On the above page, navigate to Resource assignments and click Assign resources to add resources to the backup plan. Resources can be combined either by using tags or by Resource ID. For the demo, let's go with the Resource ID for EBS as shown in the figure. To use tags instead, tag every resource that needs to be part of the backup plan and specify the same tag here, so that the backup runs on the resources matching that tag.

6. Once the resource is assigned, it takes some time for the Job to get started. Navigate back to the dashboard to check the active Jobs.

Create On-Demand Backup:

Start by choosing to Create an on-demand backup that provides the options shown below.

This creates a backup on demand, and the related job can be tracked under Jobs.

When you click the Job, it shows the Restore point ARN as below which can be used to restore data.

The dashboard shows the overall job status, i.e., backups, and you can restore them as shown below:

Restore a backup:

1. On the AWS Backup console, choose to Restore the backup to start the restoration of backed up EBS volume.

2. Click on the Resource ID to see the recovery points available as shown below:

3. Select the recovery point and click the Restore button which will start the restore process. The restore job can be tracked under the Restore jobs tab of the AWS Backup Jobs.

About Author

Bhaskar Desharaju
Cloud Engineer, NetEnrich

Bhaskar has more than 5 years of experience in the IT industry with cloud and automation technologies. He has been working with AWS and MS Azure for the last 4 years and holds certifications in both.
Certifications: MS Azure Solution Architect (535), AWS CSA – Associate

Are you Well-Architected?

More often than not, it's hard to answer that question with a simple yes or no. Cloud implementation is complex, and it's not easy to be confident about the decisions you or your team have made, or whether they'll stand the test of time. AWS has made this easier by providing the AWS Well-Architected Framework (WAF). This framework provides a structured approach to compare your cloud environment against Amazon's best practices, with pointers on how to improve over time.

If your organization is just getting started on AWS, the WAF helps kick-off the planning process to build a sturdy foundation for a scalable and flexible cloud infrastructure that meets technical and business needs. For businesses already on cloud, a complete WAF review can help gain critical feedback and measurement on their cloud journey.

AWS Well-Architected Review

Once you’ve started using the framework, an AWS Well-Architected Review encourages you to follow a consistent and structured approach. If you’re looking to rapidly benchmark applications, infrastructure, and cloud operations, the review enables you to architect, build, migrate, and optimize architectures that follow AWS best practices and guidelines.

Design Principles: For each pillar, there are design principles that must be considered and addressed while designing cloud-based implementations.

Questions: The review is driven by experts asking questions about your AWS services. Functional and non-functional topics are explored to better understand the current state of your workload’s architecture and implementation.

Remediation: Based on the analysis, you are provided recommendations with insights into what’s working and what’s not. These results are not a scorecard; they are a measurement. The framework will change over time, so it provides critical feedback on your current approach.

Remediation of items identified during the review helps optimize your current workloads and cloud consumption costs. Need more reasons to do it? AWS is also sweetening the pot by providing billing credits for customers who execute a Well-Architected Review and begin the remediation process.

Why You Need A Well-Architected Review

Build and deploy faster: Reduce time spent on capacity management and use automation to experiment and realize value more often.

Lower or mitigate risks: Understand risks in your architecture and address them before they impact your business and distract your team.

Make informed decisions: Ensure you’ve made active architectural decisions that highlight the impact they might have on your business outcomes.

Learn AWS Best Practices: Make sure your teams are aware of best practices which are refined by reviewing thousands of customer architectures on AWS.

Initially, only AWS Solution Architects could conduct Well-Architected Reviews; this changed recently when AWS opened the program to a select group of partners. As a launch partner for the Well-Architected Review Program, NetEnrich can perform comprehensive evaluations of your on-premises or AWS workloads to help you leverage the full potential of AWS Cloud.

To better understand how the AWS Well-Architected Framework can impact your business, watch the on-demand webinar by Taylor Gaffney from NetEnrich. Taylor reviews AWS best practices and explores open-ended questions to measure your architecture against the five pillars that make up the Well-Architected Framework, as well as the design principles within each of the pillars.

Introduction:
Cloud-native has become one of the operative words of modern application delivery, although it’s a somewhat ambiguous term. If you’re looking to understand what cloud-native means in practice, it’s useful to consider what it looks like to build an application delivery pipeline that is fully cloud-native.

Let’s do that in this article by examining Azure cloud-native services and application development.

What is cloud-native?

In a nutshell, cloud-native refers to applications or services that are designed first and foremost for the cloud.

It’s important to note that this doesn’t mean that a cloud-native app can’t also be deployed on-premises, or in a hybrid infrastructure. In the real world, many organizations have yet to move to an exclusively cloud-based model (and many never will, for good reason).

You can still be cloud-native without switching 100 percent to the cloud. That’s because cloud-native is different from earlier application delivery strategies — not by eliminating other forms of infrastructure from the picture, but instead by making the cloud the primary infrastructure for development and deployment and treating other forms of infrastructure (like on-premises bare-metal or virtual servers) as secondary components.

A containerized application delivery pipeline is a good example of a strategy that is cloud-native, because containers can easily be deployed in any public or private cloud. However, you could also run containers on-premises if you wanted — or you could even deploy an application as a set of containerized microservices, some of which run in a public cloud and some on-premises.

Azure cloud-native development

At a high level, cloud-native application delivery is compatible with any type of cloud, whether a public one like AWS, Azure, or Google Cloud, or a private cloud built with a platform such as OpenStack.

That said, not all clouds are made equal when it comes to supporting cloud-native application deployment. In many respects, Azure cloud-native services stand out for offering a rich set of features and enhancing cloud-native application delivery.

Azure App Service

First and foremost among these (in my view, at least) is the Azure App Service, which makes it easy to take an application written in virtually any major programming language and deploy it to fully managed cloud infrastructure.

In addition to being compatible with most applications, Azure App Service also supports a range of deployment scenarios using Windows or Docker containers, Web apps or mobile apps. App Service is also handy if you simply want to build an API, not an entire app.

Last but not least, Azure App Service provides dozens of pre-built apps that you can use as the basis for building and deploying your own app.

In short, whether you’ve already written your app or are building a new one from scratch, Azure App Service simplifies the task of deploying it in a flexible way on Azure cloud-native development infrastructure.

Azure Cloud Services

Azure Cloud Services provide similar functionality to the App Service, but with even less management required on the part of developers. With Cloud Services, you launch an app in the cloud, and Azure handles availability management, scaling and cost optimization for you.

Cloud Services also provides staging environments where you can test an app before deploying it to the cloud. In this way, the service comes in handy when building a complete CI/CD pipeline.

Azure Container Registry

If you are taking advantage of containers to build applications in such a way that they can be easily deployed across different types of environments, Azure’s Container Registry service makes your containers even more cloud-friendly. The Container Registry lets you host container images in a secure location in the cloud, then use those images in conjunction with a variety of other cloud-based services.

Azure Automation

One frequent challenge posed by cloud-native computing is that cloud-native environments simply entail more parts, and therefore, more complexity to manage. Whereas a traditional on-premises infrastructure might consist of just a few servers, databases and applications, cloud-native infrastructure could include a variety of physical and virtual machines, distributed applications and storage systems, APIs and more.

Deploying and managing all of these components is a challenge, to say the least, if you try to do it manually. However, Azure’s Automation service streamlines the task considerably by automating most of the work required to deploy cloud-based services, even on heterogeneous infrastructure.

Azure App Monitoring

Being able to monitor applications is important in any context. However, when you want to go cloud-native, you face the risk that the monitoring and analytics tools you use for on-premises infrastructure won’t work in the cloud.

Azure Monitor addresses this challenge by providing an easy, native way to monitor cloud-based resources. What’s more, Azure Monitor lets you define data sources from on-premises infrastructure, too. That means Azure Monitor provides a holistic way of monitoring your cloud-native stack, even if it includes non-cloud components.

Conclusion: Using Azure to go cloud-native
Cloud-native is the future; it’s very unlikely that we’re going to return to an age where the cloud is not at the center of most software stacks and infrastructure. If you’re struggling to adapt to this future, consider all of the services Azure offers for making cloud-native computing painless.

Are you struggling to create multi-account AWS environments from scratch? Would you like to quickly create new accounts, which are secure and built with AWS Best practices, monitoring, and governance in less than 30 minutes?

If that sounds impossible, it really isn’t.

AWS Landing Zone helps you automate the creation of pre-configured, secure, multi-account cloud environments based on AWS best practices. It’s how you can scale AWS to your enterprise efficiently: in a repeatable manner with central control and monitoring.

Typically, the creation of new accounts involves answering some key questions. Do you need a shared services account along with a master billing account? How can you get log data out of other accounts and into your logging account? How do you set up user accounts, permissions, and cross-account permissions? How do you integrate with Active Directory? How do you ensure all of this follows AWS best practices and the Well-Architected Framework?

With so many different considerations, teams usually create accounts with their own unique setup, which takes a long time to get working.
AWS Landing Zone Solution provides:

1. Multi-account Approach

AWS Landing Zone helps customers move quickly to set up a secure, multi-account AWS environment based on AWS best practices. You can save time by automating the setup of an environment for running secure and scalable workloads while implementing an initial security baseline through the creation of core accounts and resources.

2. Integrated DevOps

AWS Landing Zone can be integrated with your internal GitLab to continuously push changes into Dev and then promote them to production environments. We can also set up Slack alerts and notifications for the pipeline process while automating security and governance for account creation.

3. Automated Account Provisioning

A quick setup of new AWS accounts that contain AWS best practices, security, monitoring, and governance, is made easy with Landing Zone. Without it, completing various configurations for new accounts would take weeks to set up and validate.

4. Governance

How do we govern a multi-account environment with automation? With AWS Config, we can automatically enable and configure rules and aggregate dashboards that highlight compliant and non-compliant resources.

AWS Landing Zone is automatically configured to receive alerts on non-compliant resources. We can also go one step further and remediate non-compliant resources automatically using AWS CloudWatch Events and Lambda.
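As a small illustration, the boto3 call below enables one AWS-managed Config rule that flags unencrypted EBS volumes; a Lambda remediation could then be subscribed to its compliance events. The rule name is our own choice:

    # Enable an AWS-managed Config rule that flags unencrypted EBS volumes.
    # ENCRYPTED_VOLUMES is an AWS-managed rule identifier.
    import boto3

    config = boto3.client("config", region_name="us-east-1")
    config.put_config_rule(ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
    })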

If you need help in streamlining accounts, enhancing transparency and manageability of deployments, contact us today.

Here’s what you should know

When transferring large amounts of data between on-premises infrastructure and the cloud with existing tools and methods, one needs to maintain the following:

Data encryption

Data integrity

Quick movement of data

Data monitoring

Maintaining file metadata and ACLs at the destination is difficult and makes copying data successfully take longer, in either direction.

The typical use cases of moving large amounts of data are:

Moving to cloud or data migration

Protecting data, i.e., moving it to the cloud as secondary storage for backup

Moving data to cloud for further analysis using AWS Services

AWS introduced the managed DataSync service to overcome these problems and fulfill the above use cases. DataSync is an online data transfer service that automates the movement of data between on-premises and cloud storage such as S3 or EFS; you pay only for the data you transfer. It automatically takes care of common data transfer tasks such as data integrity verification, encryption in transit, preserving security information, and fast transfer through network bandwidth optimization. The DataSync service also enables one-time or recurring data transfers, replication, and backup and recovery.

DataSync is an agent-based solution that works with your existing storage and file systems over the Network File System (NFS) protocol.

The benefits of DataSync service are:

Simplifying the data transfer securely along with data integrity

Faster movement of data via multi-part, parallel uploads

Reducing operational costs of data transfer

How AWS DataSync works:

DataSync requires you to install an agent in your on-premises infrastructure that connects to existing on-premises storage over NFS.

Here are the steps for enabling data transfer from on-premises to the cloud.

1. Log in to the AWS Console, navigate to DataSync and get started with the data transfer tasks.

2. As part of this, a DataSync agent needs to be installed for your use case, whether on-premises to AWS or AWS to on-premises.

3. Create an agent and use the activation key to register it with the DataSync service after installing it on-premises.

4. Once the agent is enabled, create a task for the data transfer. For the source, select the on-premises location and the storage system.

5. Once the source location is selected, AWS destination needs to be selected such as S3 or EFS. For S3, the destination bucket needs to be selected along with the IAM Role for the S3 access.

6. The DataSync service lets you select options, such as copying permissions and metadata and capping the network bandwidth used, that control the data transfer. Once the options are selected, create the task to start the copy.

7. Once the task is created, start the task that initiates the data transfer.

8. The following is the life cycle for the task that transfers the data.

9. The task status can be monitored from the history dashboard.
10. DataSync can be integrated with CloudWatch, which monitors the data transfers and reports their status.
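Steps 4-7 can also be driven through the API. A hedged boto3 sketch, assuming the source and destination locations were already created (the ARNs and options are placeholders):

    # Create and start a DataSync task between two previously created locations.
    import boto3

    datasync = boto3.client("datasync", region_name="us-east-1")

    task = datasync.create_task(
        SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-src",
        DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-dst",
        Name="onprem-nfs-to-s3",
        Options={
            "VerifyMode": "POINT_IN_TIME_CONSISTENT",   # integrity check after copy
            "PreserveDeletedFiles": "PRESERVE",
        },
    )
    datasync.start_task_execution(TaskArn=task["TaskArn"])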

Conclusion:

If you're looking to move data up to 10 times faster than with open-source tools, AWS DataSync is a strong option. Getting started is simple: deploy the DataSync agent on-premises, connect it to a file system, select your AWS storage, and initiate the data transfer.

About Author

Bhaskar Desharaju
Cloud Engineer, NetEnrich

Bhaskar has more than 5 years of experience in the IT industry with cloud and automation technologies. He has been working with AWS and MS Azure for the last 4 years and holds certifications in both.
Certifications: MS Azure Solution Architect (535), AWS CSA – Associate

Exam Tips and Tricks

If you're like me and planning to add to your ever-growing AWS certifications even after completing the five major ones, these tips will help. Of the three specialty certifications (Networking, Security, and Big Data), I decided to pursue the Security Specialty exam because, working with AWS daily, security has become the number one thing clients talk about. Not to scare anyone away from taking the exam, but of all the ones I've taken, this exam was harder than either of the Professional exams.

Before I get started, I advise you to

Get a good night’s sleep

Have a healthy breakfast

Limit your caffeine consumption

Here's a quick run-down of specific items I would focus on; please review the following important exam prep.

The Exam is 170 minutes long, so manage your time wisely.

Flag any question you are unsure of and move on.

Typically, there are 1-2 blatantly incorrect answers, one very right answer and two that could work.

I used the extra sheet of paper and put what number question I was on and then A, B, C, D to match the number of answers available. Then I crossed off the ones that were incorrect, circled the correct one and if I still couldn’t figure out the answer, I flagged it.

Sometimes the answer is provided in another question within the exam.

Key Areas To Put Your Focus On

KMS – Focus on all the different KMS options

API commands (Encrypt, Decrypt, ReEncrypt)

CMK – AWS created vs Imported

How to enforce annual rotation of keys (see the sketch after this list)

AWS Config

The types of rules that can be set up and how to automatically remediate non-compliant resources using Lambda

Know the difference between CloudTrail and CloudWatch

SSL communication from on-premises to EC2, including how legacy applications communicate when changing from an ELB to an ALB

S3 access

I didn't have any questions on bucket ACLs, but know the difference between an ACL and a policy

Cross-Account Access (S3)

How to regain access to an EC2 or change the key pair if they’ve been compromised
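On the key-rotation item above: for a customer-managed CMK with AWS-generated key material, annual rotation is a single API call, sketched below with a placeholder key ID. Note that CMKs with imported key material cannot use automatic rotation and must be rotated manually.

    # Turn on automatic (annual) rotation for a customer-managed CMK.
    import boto3

    kms = boto3.client("kms", region_name="us-east-1")
    key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder key ID
    kms.enable_key_rotation(KeyId=key_id)
    print(kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"])  # True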

Understanding the role of Azure in DevOps:

It’s easy to understand the why of DevOps — that is, to explain which benefits DevOps provides.

What’s much harder to wrap one’s head around is the how of DevOps. Making a plan for actually achieving the goals of DevOps (such as continuous delivery, maximum automation and continuous visibility) is very challenging.

The challenges are due in part to the fact that doing DevOps is not as simple as adopting a specific tool or practice; instead, there are so many possible routes to DevOps. And with so many DevOps tools and resources to choose from, it can be hard to know which ones will lead your organization most effectively to DevOps implementation.

Extending the DevOps model will enhance Agile operations and collaborations so that IT infrastructure, application development and operations work as one.

Reported benefits include up to a 75 percent reduction in time to market.

In this article, we recommend one particular set of tools that offer many benefits for doing DevOps: the Azure cloud. While Azure is not the only way to achieve DevOps, Microsoft’s cloud offers a variety of services that greatly simplify the work required to move toward a comprehensive Azure DevOps practice.

Let’s look at those Azure DevOps features and services.

1. Azure Boards

Azure Boards is to Trello what Azure Repos is to GitHub: a cloud-native approach to managing tasks and workflows. Like Trello, Azure Boards enables you to create clear visual interfaces for tracking who is responsible for which tasks within a DevOps organization (or various other kinds of projects).

In addition, Azure DevOps Boards provides native integrations with a variety of other tools and services, from Slack to GitHub. It also includes an analytics feature to help track the health of projects.

2. Azure Repos

Being able to communicate and collaborate across teams is also critical for DevOps. To do this, you need not only scalable communication tools like Slack, but also a way to provide easy access to the code that your various teams are working on.

Azure Repos, which lets you build private Git repositories hosted in the cloud, provides a solution on this front. By hosting your code in Azure Repos, you ensure that everyone on your team can access, track and contribute to it — if they have the proper level of access, of course.

Here, you’re probably thinking, “Why wouldn’t I just use GitHub?”

You could use GitHub to set up private repositories, but Azure Repos is more dynamic. Do you know why?

Because it offers some unique features, like the ability to integrate repositories with Webhooks and APIs. These features make it easy to integrate Azure Repos repositories effectively into a large, scalable CI/CD pipeline.

3. Azure Pipelines

A continuous delivery pipeline is a must-have for DevOps. To be effective, such a pipeline must automate most of the tasks required to deliver software, from development to deployment to production monitoring. The pipeline should also facilitate clear communication between the various teams that manage these tasks.

Azure Pipelines provides an easy way to set up such a pipeline: a fully hosted, cloud-based environment for building, testing, and deploying software.

I can hear you thinking: “But the catch is that Pipelines can only deploy to Azure, right?” Well, no. Azure Pipelines can be used to deliver software for any mainstream environment or platform — even competing clouds like AWS.

In addition, Azure Pipelines works with any language. Flexibility is another key component of DevOps, and Pipelines helps deliver it.

4. Azure DevTest Labs

Being able to scale software delivery and automate time-consuming tasks like infrastructure provisioning, all without compromising software quality, is also important for doing DevOps well.

Azure DevTest Labs helps you achieve these goals by automating much of the work required to set up testing environments for software. Instead of wasting time provisioning test environments by hand, or skimping on testing because setup takes too much time, you can use DevTest Labs, which supports Windows as well as Linux environments, to integrate thorough testing into the rest of your CI/CD pipeline.

Conclusion

Again, there is no single way to do DevOps. A variety of tools and strategies will help you achieve the goals associated with DevOps.

But not all tools and strategies are created equal, or are equally easy to implement. If you want a fast, scalable, cloud-native on-ramp to DevOps, consider the Azure cloud, which provides a range of services that cater specifically to the needs of DevOps teams.