
Debugging is usually considered a developer's job -- finding and fixing problems in code before release. But AWS debugging is equally useful to infrastructure engineers.

Post-hoc analysis can help improve performance, stability and efficiency. Because enterprises pay for AWS infrastructure by the hour, those attributes are acutely important. AWS debugging can save money both when AWS serves as the platform for software development and testing and as a routine optimization and monitoring exercise. Whether you're a developer whose goal is more efficient, less expensive application debugging or an IT administrator charged with limiting AWS spending while improving performance, AWS provides a wealth of tools for optimizing code and infrastructure environments.

AWS as a test-and-dev platform

Cloud services commonly infiltrate the enterprise through infrastructure for application development and testing. The ease, speed and low cost of spinning up a few Elastic Compute Cloud (EC2) instances have put AWS at the top of cloud providers within the development community. Initially, use tends to be ad hoc: teams treat AWS like a big, rentable virtual server farm and reuse the same tools and processes built for internal VMware or Hyper-V environments. But such a simplistic implementation just trades one set of servers for another. Using AWS' developer-specific services is a better way to reduce overhead.

AWS has a set of services designed specifically for code development, deployment and management. These services address the full application lifecycle and can significantly improve developer productivity.

AWS CodeCommit is a source-code control system that provides private repositories; it's fully compatible with Git tools. CodeCommit stores code, binaries and metadata while taking advantage of AWS' built-in redundancy and high availability. Most developers are already familiar with GitHub, so the learning curve and the changes required to existing toolchains are minimal. Administrators can also integrate with AWS Identity and Access Management (IAM) to use existing user and group definitions to build project- or role-based access controls.
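As a rough illustration of that IAM integration, a policy like the following could be attached to a developer group to grant pull and push access to a single repository. The account ID, Region and repository name here are placeholders, not values from this article:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull",
        "codecommit:GitPush"
      ],
      "Resource": "arn:aws:codecommit:us-east-1:111111111111:example-repo"
    }
  ]
}
```

Scoping the `Resource` element per repository is what enables the project-based access controls described above; a role-based variant would attach similar policies to IAM roles instead of groups.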


AWS CodeDeploy automates application deployments to EC2 and on-premises instances. It pulls code from CodeCommit or other Git repositories, rolls back unsuccessful changes and works with third-party configuration management software, like Chef, Puppet and Ansible, along with continuous integration software, like Jenkins. CodeDeploy users can scale to thousands of instances using the AWS Management Console, and the service automatically performs rolling updates to minimize downtime.
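CodeDeploy drives each deployment from an AppSpec file (`appspec.yml`) at the root of the application bundle, which maps files to destinations and attaches lifecycle hook scripts. A minimal sketch for an EC2 or on-premises deployment might look like the following; the paths and script names are hypothetical:

```yaml
version: 0.0
os: linux
files:
  # Copy the bundle's contents to the web root on each target instance
  - source: /
    destination: /var/www/example-app
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/validate.sh
      timeout: 60
```

A failing `ValidateService` hook is one way CodeDeploy detects an unsuccessful change during a rolling update, triggering the rollback behavior mentioned above.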

AWS CodePipeline pulls the entire application delivery chain into a single service. CodePipeline automates code builds, tests and deployment whenever there is a committed code change or triggered release event. Each stage of the pipeline has a set of actions that, when completed, trigger a transition to the next stage. For example, updating source code might trigger a build, which, upon completion, would invoke a deployment to test systems. In this sense, CodePipeline is conceptually similar to the extract, transform, load (ETL) processes that data integration developers use.
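The stage-and-action model is visible in a pipeline's JSON declaration. The sketch below, with hypothetical names and account details, wires a CodeCommit source stage to a CodeDeploy deployment stage; a commit to `main` produces a source artifact that flows into the deploy action:

```json
{
  "pipeline": {
    "name": "example-pipeline",
    "roleArn": "arn:aws:iam::111111111111:role/example-pipeline-role",
    "artifactStore": { "type": "S3", "location": "example-artifact-bucket" },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "FetchSource",
            "actionTypeId": { "category": "Source", "owner": "AWS", "provider": "CodeCommit", "version": "1" },
            "configuration": { "RepositoryName": "example-repo", "BranchName": "main" },
            "outputArtifacts": [ { "name": "SourceOutput" } ]
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "name": "DeployToTest",
            "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
            "configuration": { "ApplicationName": "example-app", "DeploymentGroupName": "test-fleet" },
            "inputArtifacts": [ { "name": "SourceOutput" } ]
          }
        ]
      }
    ]
  }
}
```

A build or test stage would slot between these two in the same pattern, with its output artifact becoming the deploy stage's input.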

AWS initially built the CodePipeline service for internal use to speed deployments, but the company discovered that automating the pipeline also improved reliability.

AWS also has a full complement of software development kits, integrated development environment toolkits and command-line interface tools for developers building cloud applications. Although the three DevOps services above are primarily intended for AWS applications, the CodeCommit Git repository and the CodeDeploy automation service can also be used for other platforms, including on-premises deployments.

AWS debugging for financial gain

AWS provides several services and features that allow IT administrators to monitor usage, control and track costs, as well as flag unusual or suspect configurations.

Amazon CloudWatch is the core AWS monitoring service that tracks resource usage for EC2 and Relational Database Service instances, Elastic Block Store volumes, DynamoDB tables, and any metrics or log files that custom applications and services generate. It provides real-time statistics, charts and alarms, and can trigger actions when usage crosses defined limits -- such as invoking Auto Scaling, sending messages over Simple Notification Service or Simple Queue Service, activating Lambda functions or shutting down unused zombie instances.
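As an illustrative sketch, the following parameters, passed to CloudWatch's `PutMetricAlarm` API, would raise an alarm when an EC2 instance averages more than 80% CPU across two consecutive five-minute periods and notify an SNS topic. The instance ID, topic ARN and thresholds are placeholders:

```json
{
  "AlarmName": "example-high-cpu",
  "Namespace": "AWS/EC2",
  "MetricName": "CPUUtilization",
  "Dimensions": [ { "Name": "InstanceId", "Value": "i-0123456789abcdef0" } ],
  "Statistic": "Average",
  "Period": 300,
  "EvaluationPeriods": 2,
  "Threshold": 80,
  "ComparisonOperator": "GreaterThanThreshold",
  "AlarmActions": [ "arn:aws:sns:us-east-1:111111111111:ops-alerts" ]
}
```

The inverse pattern -- a `LessThanThreshold` alarm on low CPU that triggers a stop action -- is one way to implement the zombie-instance shutdown mentioned above.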

AWS Trusted Advisor is an automated wizard that scans an entire AWS environment looking for anomalous service and security configurations, compliance with AWS-defined best practices, performance problems and opportunities to optimize costs. A recent update added checks for Simple Storage Service, Redshift and Reserved Instances, as well as the ability to set service limits based on user and group policies set in IAM.

AWS Cost Explorer is a consolidated billing console for all AWS tools in an account. It includes standard dashboard views, such as monthly spend by service, spend by linked account -- if using a master account covering individual accounts for different departments or workgroups -- and daily spending in near real time. Admins can build custom reports using filters and tags to examine usage over different time windows, individual services and individual projects or departments. Tags are a good way to allocate costs back to internal budgets.
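The same tag-based breakdown is available programmatically through Cost Explorer's `GetCostAndUsage` API. A request body like this sketch -- the tag key `project` is a hypothetical example of a cost-allocation tag -- would return daily unblended costs for January grouped by project:

```json
{
  "TimePeriod": { "Start": "2024-01-01", "End": "2024-02-01" },
  "Granularity": "DAILY",
  "Metrics": ["UnblendedCost"],
  "GroupBy": [ { "Type": "TAG", "Key": "project" } ]
}
```

Swapping the `GroupBy` type to `DIMENSION` with a key such as `SERVICE` or `LINKED_ACCOUNT` reproduces the dashboard's per-service and per-account views.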

AWS Budgets and Forecasts track spending against a financial plan and send alerts when spending exceeds specific budgetary targets. Budgets can be as granular as tracking spending by availability zone, service type, linked account or tag. The Forecasts feature processes historical usage and spending data with AWS algorithms to create cost estimates three months into the future. The feature's statistical estimates come with 80% and 95% confidence intervals.
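A budget definition is itself a small JSON document. The sketch below, with a placeholder limit and an EC2-compute cost filter, caps tracked monthly spend at $5,000; notification thresholds and subscribers are configured separately when the budget is created:

```json
{
  "BudgetName": "example-monthly-budget",
  "BudgetType": "COST",
  "TimeUnit": "MONTHLY",
  "BudgetLimit": { "Amount": "5000", "Unit": "USD" },
  "CostFilters": { "Service": ["Amazon Elastic Compute Cloud - Compute"] }
}
```

Narrowing `CostFilters` to a linked account or a cost-allocation tag is how a budget gets scoped to a single department or project, matching the granularity described above.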

Join the conversation


We haven't experienced any significant efficiency problems yet. One thing that took some getting used to was working with provisioned throughput in DynamoDB, so that our queries made the best use of provisioned read and write units.