As the market leader in public cloud solutions, Amazon Web Services offers customers a formidable range of EC2 instances. You’re practically spoilt for choice, with more than 50 on-demand instance types, from small burstable instances for low-traffic websites to supersized machines for high-end web servers, large-scale number crunching and big data analysis.

Specifications vary from 500MiB of RAM to a mammoth 1.95TiB, onboard storage from 4GB to 48TB and attached EBS storage volumes from 1GB to 16TB. They also include an extensive range of compute levels and specific processor capabilities. And hourly rates range from $0.0065 up to $14.40 (for the p2.16xlarge instance at N. Virginia prices) for Amazon’s on-demand Linux instances.

But, with so much variation in pricing and capabilities, selecting the right instance for your enterprise IT workloads is no easy challenge. So, in this introductory guide, we’ll take you through the main points you should consider to get the best balance between performance and cost from your Amazon cloud deployments.

What Types of Instance Are Available?
Amazon makes it easy to find the instance you need by breaking them down into different combinations of vCPU, memory, storage and networking capacity to suit different use cases. Each combination is known as an instance type, where each instance type is a range of different-sized instances that share common characteristics and features:

General Purpose
Instance types: T2, M3 and M4
Amazon’s general-purpose range provides you with a balance of compute, memory and network capability, which makes them suitable for mainstream enterprise applications, such as small to medium-sized databases, web servers and cluster computing.

The T2 family offers a number of smaller entry-level instances that are ideal for low-traffic websites, development environments and nascent enterprise applications. It also offers bursting capabilities, based on a credit system, where you accrue credits when your vCPUs are idle and consume credits when they’re active.
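
To make the credit arithmetic concrete, here’s a minimal Python sketch of how a burstable balance rises and falls. The accrual rate and cap below match Amazon’s published figures for a t2.micro (6 credits earned per hour, where one credit equals one vCPU running at 100% for one minute), but the model is deliberately simplified and ignores details such as launch credits:

```python
# Simplified model of the T2 CPU-credit system for a single-vCPU instance.
# Rates are those published for a t2.micro; treat the output as illustrative.

CREDITS_EARNED_PER_HOUR = 6.0   # t2.micro accrual rate
MAX_CREDIT_BALANCE = 144.0      # t2.micro credit cap (24 hours of accrual)

def simulate(hourly_cpu_utilization, balance=0.0):
    """Track the credit balance hour by hour."""
    for hour, utilization in enumerate(hourly_cpu_utilization):
        # An hour at `utilization` consumes utilization * 60 vCPU-minutes,
        # i.e. utilization * 60 credits; idle hours accrue more than they spend.
        balance += CREDITS_EARNED_PER_HOUR - utilization * 60.0
        balance = min(balance, MAX_CREDIT_BALANCE)
        if balance < 0:
            print(f"hour {hour}: credits exhausted, CPU throttled to baseline")
            balance = 0.0
        else:
            print(f"hour {hour}: balance {balance:.1f} credits")
    return balance

# Eight near-idle hours bank credits; two hours bursting at 80% drain them.
simulate([0.02] * 8 + [0.80, 0.80])
```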

The M3 family offers a slightly higher memory-to-vCPU ratio and is characterized by onboard SSD storage for high disk I/O workloads. The M4 is the latest generation of general-purpose instances, which offers EBS optimization at no extra cost.

Compute Optimized
Instance types: C3 and C4
The compute-optimized C3 and C4 families give you a higher ratio of vCPU to memory and the best price per vCPU across the entire AWS instance range. They’re geared towards compute-intensive applications such as high-traffic websites, online gaming and high computational scientific workloads.

The main difference between the two families is that C3 instances are backed by instance store, with up to 640GB SSD directly attached storage, whereas C4 machines are EBS-backed instances with dedicated EBS bandwidth and new-generation Intel processors optimized specifically for EC2.

Memory Optimized
Instance types: X1 and R3
The memory-optimized instance types, X1 and R3, are designed for memory-intensive applications, such as performance-sensitive databases and larger deployments of SAP and Microsoft SharePoint. Both instances are backed by instance store and, in terms of memory, offer the best value for money in the EC2 range.

The R3 comes in five different sizes, with SSD storage up to 640GB, and is often used for its clustering capabilities in distributed big data analytics. The X1 is available in two flavors, the largest of which is a 128-vCPU supersized machine with a colossal 1,952GiB of memory and 3.84TB of local storage. Though the X1 is one of the most expensive EC2 families, it offers the lowest cost per GiB of RAM of all instance types. The X1 is heavily geared towards SAP HANA, but also offers potential for rehosting large-scale legacy applications.

GPU Optimized
Instance types: G2, P2
The G2 and P2 families take advantage of the parallel processing capabilities of GPUs, supplementing general-purpose vCPUs with high-performance NVIDIA processors.

Powered by GPUs hosting 1,536 CUDA cores and 4GB of video memory, G2 instances are suitable for 3D application streaming, machine learning, video transcoding and a multitude of GPU-accelerated scientific and engineering applications.
The P2 family is Amazon’s latest generation of GPU-optimized instances. It offers more memory per vCPU, 2,496 parallel processing cores and 12GiB of memory per GPU, and larger overall instance sizes of up to 16 GPUs. It is suitable for high-performance databases, large-scale machine learning, computational finance workloads and other server-side GPU compute workloads.

Storage Optimized
Instance types: I2 and D2
Amazon’s I2 and D2 instances offer low-cost directly attached storage and are designed for storing, processing and analyzing large amounts of data in the cloud.

The I2 family uses high I/O SSD ephemeral storage, which makes it a good fit for performance-sensitive NoSQL databases such as MongoDB and Cassandra. By contrast, the D2 family uses HDD, offers the best-value throughput performance on EC2 and is suitable for high-end MPP data warehousing, log processing and MapReduce jobs.

Each instance type generally comes with a choice of Windows and three Linux distributions—Red Hat Enterprise Linux, SUSE Linux Enterprise Server and a free version based on Red Hat and CentOS. Windows and license-based Linux instances are up to twice the price of their basic Linux counterparts. But you can also bring your own license (BYOL), in which case you’ll pay the same rate as for a basic Linux instance.

What Instance Size Do We Need?
One of the first things you’ll notice when comparing instances is that AWS uses two different units of measure for processing power. Amazon’s own compute unit, the ECU, is a benchmark measurement designed to give users a consistent and predictable amount of CPU capacity. The vCPU, on the other hand, follows a traditional approach to defining processing power, which is familiar to users of other virtualized environments.

As AWS uses different processors for different instance types, two instances with the same number of vCPUs may not necessarily offer the same compute capability. Therefore you should only use vCPU as a rough guide to the processing power you need.

For more detailed specifications, you can refer to Amazon’s list of EC2 instance types. And it goes without saying that you should track your usage history, and monitor the performance and cost of your instances, bearing in mind you can resize or migrate to another instance any time.

Do We Need Instance Store or an EBS-Backed Instance?
Whereas virtual machines backed by instance store offer the speed benefits of directly attached storage, EBS-backed instances provide far more functionality.

Unlike their instance store counterparts, EBS-backed instances can be stopped and started, so you can reduce costs whenever your applications aren’t needed. You can resize them dynamically without having to migrate your applications. You can create point-in-time snapshots of both your running instances and your mountable EBS data volumes. They also have built-in redundancy and boot more quickly. And you can use instance snapshots to create template machine images.
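
Here’s what that stop, resize and snapshot workflow looks like in practice, sketched with boto3 (the AWS SDK for Python). The instance and volume IDs, region and target instance type are placeholders for illustration:

```python
# Sketch of common EBS-backed instance operations with boto3.
# All resource IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

# Stop the instance while it isn't needed (no compute charges while stopped).
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Resize without migrating: change the instance type in place.
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m4.large"})

# Take a point-in-time snapshot of an attached EBS data volume.
ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                    Description="pre-restart backup")

# Create a template machine image (AMI) from the instance, then restart it.
ec2.create_image(InstanceId=instance_id, Name="my-template-image")
ec2.start_instances(InstanceIds=[instance_id])
```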

Most EBS-backed instances now offer dedicated bandwidth, known as EBS Optimization, which reduces contention between EBS I/O and your instance’s other network traffic. But, despite the clear advantages of EBS, instances based on instance store are useful for applications that benefit from fast read and write speeds. However, instance store is both ephemeral and less durable. So you may need to build fault tolerance into your storage architecture or consider backing up your data to EBS or S3.

What Pricing Model Should We Choose?
As well as on-demand pricing, AWS offers alternative pricing models such as Reserved Instances (RIs) in which you commit to reserved capacity in exchange for a discounted hourly rate. If you’re flexible about when your applications run then you could also consider Spot Instances, where you bid on spare EC2 capacity.
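
A quick back-of-the-envelope calculation shows why utilization drives the decision. The hourly rates below are invented placeholders, not current AWS prices; the point is the break-even logic:

```python
# Break-even sketch: on-demand vs. a Reserved Instance.
# Rates are illustrative placeholders, not actual AWS prices.

ON_DEMAND_HOURLY = 0.120     # $/hour, pay-as-you-go
RI_EFFECTIVE_HOURLY = 0.075  # $/hour equivalent after the RI discount
HOURS_PER_MONTH = 730

def monthly_costs(utilization):
    """Compare costs for one instance running `utilization` of the month."""
    on_demand = ON_DEMAND_HOURLY * HOURS_PER_MONTH * utilization
    # A standard RI is billed for every hour in the term, used or not.
    reserved = RI_EFFECTIVE_HOURLY * HOURS_PER_MONTH
    return on_demand, reserved

for utilization in (0.30, 0.65, 0.90):
    od, ri = monthly_costs(utilization)
    winner = "RI" if ri < od else "on-demand"
    print(f"{utilization:5.1%}: on-demand ${od:6.2f} vs RI ${ri:6.2f} -> {winner}")
```

With these example rates the break-even point sits at 62.5% utilization: below it, on-demand is cheaper; above it, the RI discount wins.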

However, even though it’s important to be aware of the cost-saving potential of Reserved and Spot Instances, it makes sense to start with the on-demand model until you have a better idea of your utilization. Once you’ve established a clear pattern of your on-demand resource usage you’ll be able to make informed decisions on the likely RI capacity you’ll need and have a clearer picture of the potential savings.

The Choice Will Only Get Bigger
When estimating the cost of your AWS deployments, don’t forget to include other services you might need, such as load balancing and Amazon’s CloudFront content delivery network. And beware of hidden costs such as data transfer and unused Elastic IP addresses.

But also remember your choice of instances will only get bigger, as AWS continues to enhance its offering with new instance families for different computing challenges. So keep your eyes peeled for new opportunities to improve your application performance and increase cloud cost efficiency.

And who knows? AWS may one day offer a fully flexible build-your-own virtual machine service with a mix-and-match inventory of vCPU, memory, storage and network resources. But, as far as cost monitoring and optimization are concerned, that could be the biggest challenge of all.

AWS re:Invent 2016, the world’s largest annual gathering of Amazon cloud professionals, is now just a matter of days away and kicks off in Las Vegas on 28 November. The 5-day cloud spectacular will be hosting more than 400 bootcamps, hands-on labs, certification workshops and breakout sessions, where you can enhance your skills and knowledge in practically any aspect of AWS and the cloud.

As one of the fundamental considerations in every enterprise cloud deployment, cost and performance optimization remains a common thread in many of the sessions at this year’s event. But with such a wide choice of offerings, how do you know which one is right for you?

So, in this post, we’ve cherry-picked five of the best cloud optimization sessions for the enterprise cloud professional—ranging from introductory level, for people who are new to a topic, right through to highly technical deep dives aimed at experts in their field.

1. ARC310 and ARC310-R – Cost Optimizing Your Architecture: Practical Design Steps for Big Savings
Session Level: Advanced
This session will take you through some of the key elements of cost-aware cloud architecture, with a focus on breaking down and distributing your application components to make efficient use of the AWS services on offer.

You’ll learn how to use Amazon’s infrastructure-as-code (IaC) service, CloudFormation, to create and maintain cost-efficient architectural patterns in conjunction with AWS services such as EC2, EC2 Container Service, Lambda, RDS and S3.

Through use of practical examples, architects and developers will come away better equipped to address cost in their system designs as well as performance, function and scale.

2. ENT305 – Setting the Stage for Instant Success: Getting the Most out of Your AWS Deployment
Session Level: Advanced
An opportunity to see first-hand how Cloudyn’s solutions help clients to optimize their cloud resource consumption through a real-life enterprise case study.

Global sports and entertainment ticket seller Ticketmaster shares its experiences of using Cloudyn to manage large, complex and dynamic workloads deployed to thousands of EC2 instances across a multitude of regions and availability zones.

You’ll learn how the software gives the ticket-sales giant actionable insights into its AWS deployments, helping them to get the most out of their cloud investments and improve the bottom line.

3. CMP307 and CMP307-R – Save up to 90% and Run Production Workloads on Spot (Featuring IFTTT and Mapbox)
Session Level: Advanced
Spot Instances are excess AWS compute capacity available at significantly lower prices than standard on-demand EC2 machines. But exactly how can your organization tap into the huge cost-saving potential of Spot Instances without compromising availability?

In this session, you’ll see how high-profile AWS customers, such as IFTTT and Mapbox, have achieved huge cost savings while maintaining service continuity through scalable, cloud-friendly architectures that leverage new Spot features.

4. ENT401 – Unlocking the Four Seasons of Migrations and Operations: Enterprise Grade, Cloud Assured with Infosys and AWS
Session Level: Expert
The cloud has brought about a revolution in enterprise IT, where agility and innovation have become two key components to business success. But transforming IT at enterprise scale is a complex undertaking, presenting significant challenges in today’s hybrid landscape of on-premise legacy systems and modern public cloud deployments.

This session looks at four key challenges to building a cost-efficient and responsive hybrid cloud environment—workload migration, enterprise IT integration, security and operations management.

5. ENT206 – Lift and Evolve: Saving Money in the Cloud Is Easy, Making Money Takes Help
Session Level: Introductory
This introductory session takes cloud migration beyond the basic lift-and-shift approach by exploring vanguard AWS technologies, such as Lambda, DynamoDB and CodeDeploy, which can not only help enterprises reduce IT costs but also drive business growth.

It tackles the issues of bringing agility to complex enterprise-scale deployments, through a number of case studies, which include how the Coca-Cola Company uses Puppet in conjunction with a variety of emerging AWS products to efficiently manage hundreds of workloads across the globe.

Start Planning Your Schedule
Now’s the time to start planning your own activity schedule, so you get the most out of a busy few days.

You can follow @Cloudyn_buzz and #reinvent on Twitter to get the latest updates on the event. You can check out this year’s session catalog or view the full event agenda for a round-up of all activities over the five days. Best of all, you no longer need to worry about missing your favorite event—as this year you can reserve your seat for breakout sessions in advance.

And if cloud optimization is one of your most pressing concerns then don’t forget we’ll be exhibiting in the AWS re:Invent Central Expo Hall. Come and see us at booth #2225 or email us to schedule a meeting with one of our cloud optimization experts.

What Next at AWS re:Invent? A 5-Year Timeline of Keynotes and Product Launches

AWS re:Invent 2016, the cloud vendor’s annual customer and partner conference, is just around the corner. Now in its fifth year, it will take place in Las Vegas from 28 November to 2 December 2016 and is expected to play host to more than 27,000 cloud professionals.

The world’s largest gathering of the global AWS community has traditionally served as one of Amazon’s most important platforms for announcing new products and services. And this year should prove no exception.

In this post, we run through some of the major service announcements and keynote messages the company has made at previous re:Invent conferences, and finish off by looking at what we might expect from the market leader at this year’s upcoming event.

A History of Innovation
2012–2014: Building the Product
2012
In 2012, AWS CEO Andy Jassy denounced private cloud and blamed the traditional vendors for redefining the term “cloud”.

“Every business we run [fits that model] and most of the old guard don’t like those types of business, which is why they’re pushing private cloud because it doesn’t disrupt their businesses,” said Jassy at his 2012 re:Invent keynote.

However, the highlight of the first ever re:Invent was the announcement of Amazon Redshift, a new, fully managed, cloud-based data warehouse service. Designed as a scalable, more economical and easy-to-use alternative to traditional on-premise data warehousing, the company launched Redshift to meet the continuing demand for storing and querying large volumes of clean, structured data.

Amazon also announced two new supersized instances with big data applications in mind. The cr1.8xlarge instance was optimized for memory-intensive applications, such as performance-sensitive databases, while the hs1.8xlarge was built for high storage and throughput performance. Both cr1 and hs1 machines have since been superseded by instances in the R3 and D2 families respectively.

2013
In his keynote, Amazon CTO Werner Vogels revealed that during that year, the company released 243 updates and features, which he described as a “record pace of innovation even for our own standard”.

Also at the event, AWS announced WorkSpaces—a cloud-based desktop offering. The service set out to compete with traditional on-premise virtual desktop infrastructure (VDI) and also to support mobile and remote working practices.

Amazon also addressed the increasing demand for high-performance computing with the launch of a new generation of compute-optimized C3 instances. The C3 family is geared towards compute-intensive applications, such as high-traffic websites and rapid mathematical calculations, as well as distributed analytics and online gaming.

2014
Every year AWS doubled the number of services on offer. This amazing growth was underlined at the 2014 re:Invent when Andy Jassy proclaimed that “Cloud is the new normal”.

The 2014 conference also saw a strong emphasis on distributed systems and microservices, with the launch of the highly anticipated Amazon EC2 Container Service and the event-driven, serverless compute service AWS Lambda.

What’s more, the event marked the arrival of Amazon Aurora. The new database as a service (DBaaS) aimed to lure customers away from the likes of Oracle and Microsoft with the promise of lower costs, higher performance and improved scaling capabilities. Other key announcements included the pending launch of the C4 family of EBS-backed instances and also improved EBS performance across the board.

Enterprise Focus
2015
In 2015 AWS switched gears and addressed the enterprise and public sector market, with its GovCloud offering a primary focus. Jassy’s keynote included customer presentations by Capital One and GE about data center consolidation and moving significant parts of their infrastructure to AWS. In addition, in contrast with its previous public-cloud-only pitch, AWS also acknowledged the clear need for hybrid clouds amongst enterprises.

This year also marked the introduction of two new EC2 instances at opposite ends of the price spectrum. The new burstable t2.nano is an economical option for dev/test environments, microservices and low traffic websites. The new supersized, memory-optimized X1 instance is geared towards SAP HANA and rehosting large-scale legacy applications.

Amazon also reaffirmed its ambitions to attract more enterprise customers with the launch of a new Database Migration Service, which minimizes disruption to applications that rely on a database during its migration. It also released a free Schema Conversion Tool to help customers preserve the relationships between their data. And the vendor announced a new 50TB physical data transfer appliance, known as Snowball, which enterprises can use to move data in and out of the platform.
What Next for 2016?
Following a new partnership with VMware we can expect more emphasis on hybrid cloud and enterprise migration. In addition, we expect to see more announcements of tools and products that can support Amazon’s main strategic partners, such as telecommunications and network providers Ericsson and Telstra.

As well as beefing up its enterprise offering we can expect more moves in pursuit of the public sector. This would come hot on the heels of Amazon’s recent announcement that its GovCloud offering had earned a major seal of approval from the U.S. Department of Defense (DoD).

Cloudyn at re:Invent 2016
AWS re:Invent offers so much more than product announcements and networking opportunities. It’s also a great place to discover new technologies, ways to improve your productivity and solutions to your own cloud problems.

We’ll be holding live sessions and demonstrations, where you can learn how to maximize your cloud potential through enhanced visibility into your cloud deployments, integrated showback and chargeback capabilities and optimal compute capacity planning.

Cloudyn is Now Available on the AWS SaaS Marketplace

Cloudyn is an enterprise-grade SaaS solution that helps companies to manage and optimize their multi-platform, hybrid cloud environments in order to fully realize their cloud potential.

At Cloudyn, we are focused on simplifying the complex cloud environment by providing our customers with visibility into their usage, performance and cost, coupled with insights and actionable recommendations for smart optimization and cloud governance. Cloudyn also enables accountability through accurate chargeback and hierarchical cost allocation management.

We are pleased to announce that Cloudyn is now available as an offering on the AWS SaaS Marketplace. Customers can sign up for Cloudyn directly on the AWS Marketplace and pay for the service through their monthly AWS bill. Having Cloudyn on the AWS SaaS Marketplace offers both Cloudyn and AWS customers a number of benefits:

Receiving one integrated bill: Customers will now be able to pay for Cloudyn through a consolidated AWS bill, thereby simplifying the payment process, with one bill for all cloud-related services.

AWS re:Invent – Cloudyn Gets Set for Cloud Event of the Year

Have you ever wanted to see for yourself how our cloud performance monitoring and cost optimization solution works in practice? Then now’s the time to book your place at AWS re:Invent 2016, which takes place in Las Vegas from November 28 to December 2, 2016.

We’ll be holding live sessions and demonstrations, where you can learn how to maximize your cloud potential through enhanced visibility into your cloud deployments, integrated showback and chargeback capabilities and optimal compute capacity planning.

As a platinum sponsor of the event, Cloudyn will be there with a team of cloud experts to answer any questions you may have. We’ll be meeting AWS users and existing Cloudyn customers to find out more about their cloud deployments and the individual challenges they face, so we can continue to help them manage their deployments and continuously monitor and optimize performance and cost.

What Is re:Invent?
Now in its fifth year, re:Invent is AWS’s global customer and partner conference, where users gather each year to meet the experts, learn about new products and technologies, improve their productivity, find solutions to their cloud problems and network with fellow industry professionals.

Most of the activity takes place in the Sands Expo, part of the world-renowned Venetian Resort Hotel Casino, which will host hundreds of training sessions, including full-day boot camps, hands-on labs and breakout sessions, covering all aspects of AWS and the cloud from introductory to expert level.

In addition to mainstream cloud themes, such as content delivery, system architecture, migration and security, other sessions tackle hot topics such as big data and analytics, containers and serverless computing. Attendees can also gain industry recognition by taking AWS Certification training and exams during the event.

More than 19,000 cloud professionals attended re:Invent in 2015. But this year is going to be even bigger and better—with two new venues, twice as many breakout sessions and an expected attendance of around 24,000.

Cloudyn’s Preparations
We’re currently planning a host of activities for our exhibition booth in the AWS re:Invent Central Expo Hall. These will include live sessions on cloud best practices, with real use cases and presentations by Vittaly Tavor, Co-Founder and VP Products at Cloudyn, Brent Eubanks, VP Technology Optimization at Ticketmaster, and Rich Sutton, VP of Engineering, Digital Risk and Social at email security solutions provider Proofpoint.

Vittaly and Brent are also preparing a 60-minute breakout session, where you’ll be able to see first-hand how the online ticket retailer monitors and optimizes its AWS deployments using Cloudyn’s cloud management solution. Titled “Setting the stage for instant success: Getting the most out of your AWS deployment”, the session will take place on Wednesday, 30 Nov at 14:30. To find out more, check out the AWS re:Invent 2016 session catalog.

re:Invent isn’t just about serious matters such as cloud cost optimization. It’s also the biggest cloud party of the year. So we’ll be getting into full swing with fun giveaways such as selfie sticks, Star Wars LEGO and other exclusive gifts. Not only that, but we’ve also lined up a cool game at the booth to get the fun going.

See You There!

Don’t forget to look out for us at the event. We’ll be exhibiting at booth #2225 in the AWS re:Invent Central Expo Hall D. You can also email us to arrange a demo or schedule a one-on-one meeting with one of our cloud management experts.

In the run-up to the event we’ll be publishing blog post updates with more details and session recommendations. So don’t miss a thing! Subscribe to our blog by RSS or follow us on Twitter for event news and the latest cloud optimization insights.

Over the last 10 years the pace of innovation at AWS has been nothing short of extraordinary. And that trend looks set to continue after the $10 billion global cloud business announced an exciting range of new services and features at its annual Chicago summit in April.

Many of the new offerings have financial implications for the AWS enterprise user. These include new services that can improve performance, efficiency and compliance, but at the same time introduce potential new costs to your monthly cloud bills. What’s more, the company also announced the imminent release of another new offering, AWS Application Discovery Service, which will help organizations plan their migration to the cloud.

In this post, we’ve narrowed down the key messages that came out of the conference into six announcements most likely to be of interest to enterprise IaaS users and adopters.

So let’s get started …

1. AWS Application Discovery Service

In a move to support adoption of its cloud platform, AWS is set to launch its brand new Application Discovery Service. According to the company, the tool will facilitate the migration process by helping systems integrators determine the interdependencies between applications running on their on-premise infrastructure. By analyzing usage and configuration data from servers, networking and storage equipment, it will automatically identify applications that run independently of one another and those that should be migrated as a group.

The tool will also collect operational data, such as host CPU, memory, disk use and network latency, which will allow you to compare subsequent performance in the cloud with your on-premise baseline. The service should help reduce the time and cost involved in planning a migration. However, Amazon has yet to confirm a release date.

2. Two New Types of EBS Magnetic Volume

The leading cloud vendor has also bolstered its range of low-cost EBS volumes, with two new types of magnetic storage specifically designed for high throughput to support large databases and big data workloads.

Throughput Optimized HDD (st1) strikes a balance between cost and performance: a 1TB volume delivers a baseline throughput of 40 MB/s and can burst to 250 MB/s, with larger volumes scaling up to a maximum of 500 MB/s. It’s already available in all US regions and costs $0.045 per provisioned GB-month in the US East region.

Cold HDD (sc1) is designed for similar workloads, but provides a lower-cost alternative for data that’s accessed less frequently: a 1TB volume delivers a baseline throughput of 12 MB/s and can burst to 80 MB/s, with larger volumes scaling up to a maximum of 250 MB/s. Likewise, it’s available in all US regions and costs $0.025 per provisioned GB-month in US East.

For both volume types, I/O is included in the price. And with the introduction of these volume types, Amazon has actually reduced prices compared with the previous-generation Magnetic EBS volumes, which cost $0.05 per GB-month.

The Amazon EBS Pricing page offers more detailed information about all the vendor’s storage rates, with examples showing how charges are calculated.
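
For completeness, here’s a hedged boto3 sketch of provisioning one of the new volume types; the availability zone and size are placeholders:

```python
# Create a Throughput Optimized HDD (st1) volume with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=1024,                      # GiB; throughput scales with volume size
    VolumeType="st1",               # use "sc1" for the colder, cheaper tier
)
print("created volume:", volume["VolumeId"])
```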

3. S3 Transfer Acceleration

A new paid feature that’s designed to speed up bulk transfers of data over long distances into and out of S3 buckets. It works by routing your transfer through the AWS CloudFront (Amazon’s CDN) edge location with the lowest latency. It’s easy to use, with no gateway servers, firewalls or special protocols to worry about. You simply tick a checkbox in the AWS Management Console to activate the service.
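
If you prefer automation to checkboxes, the same switch can be flipped programmatically. A minimal boto3 sketch, with a hypothetical bucket name:

```python
# Enable S3 Transfer Acceleration on a bucket with boto3.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

s3.put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Accelerated transfers then use the bucket's s3-accelerate endpoint,
# e.g. my-example-bucket.s3-accelerate.amazonaws.com
status = s3.get_bucket_accelerate_configuration(Bucket=bucket)
print("acceleration status:", status.get("Status"))
```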

S3 Transfer Acceleration is charged in addition to data transfer. So if you’re moving large amounts of data between regions, it’s important you’re aware of the full costs involved. However, the service does offer you a risk-free price guarantee, where you’re only charged if acceleration is likely to make a difference in transfer time.

The cost is $0.04 per GB for all transfers via the Internet into and out of S3 and applies to transfers accelerated by edge locations in the US, Europe or Japan.

4. New 80 TB Version of Snowball Appliance

Snowball is a relatively new introduction (announced at re:Invent 2015) that allows enterprises to move huge amounts of data in and out of S3 using an industrial-strength storage device, which AWS ships out to customers on request. The first incarnation was available as a 50TB appliance and was charged at $200 per job plus shipping.

But the Chicago summit saw the launch of a second, higher capacity version, which is able to store up to 80TB of data. It’s available to all US regions and costs $250 per job plus shipping. The fees for both appliances cover 10 days of onsite usage, after which an additional rate of $15 per day applies.
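
The fee structure is easy to model. Here’s a minimal sketch using the figures quoted above (excluding shipping and any S3 storage charges):

```python
# Snowball job cost: $250 per job, 10 days on-site included, $15/day after.

def snowball_job_cost(days_onsite, job_fee=250.0,
                      included_days=10, extra_day_rate=15.0):
    """Total service fee for one 80TB Snowball job, excluding shipping."""
    extra_days = max(0, days_onsite - included_days)
    return job_fee + extra_days * extra_day_rate

print(snowball_job_cost(8))   # 250.0 -- within the included 10-day window
print(snowball_job_cost(14))  # 310.0 -- four extra days at $15/day
```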

5. Amazon Inspector

Amazon officially announced the news that its EC2 security scanning engine has now completed its preview phase and is generally available to all AWS customers. Amazon Inspector automatically assesses applications for potential security issues by identifying vulnerabilities and deviations from security best practices.

It’s available as a 90-day free trial, subject to a maximum of 250 agent-assessments. Prices then start from $0.30 per agent-assessment for the first 250 assessments.

Maintaining healthy operation of your enterprise IT infrastructure has always been central to business productivity. But, as more and more companies move their workloads to the cloud, performance monitoring has shifted towards a new set of challenges and objectives.

In traditional IT, operational metrics were essential to the smooth running of your environment—to ensure a fast and responsive service and avoid loss of productivity as a result of downtime. But the cloud has moved the goalposts. Redundancy and scaling have reduced the risk of service outages to a minimum. And performance issues are no longer just a matter of user experience and continuity of service. They’re now also a matter of everyday operational costs.

Turn this on its head and it’s easy to see how cloud spending behavior can actually be used as an operational metric for identifying performance issues and abnormal or malicious activity. So in this post, we highlight five common causes of sudden high cloud resource consumption that require attention at operational level.

1. Malicious Attacks

One of the most common types of cyber threat is the denial-of-service attack, which is designed to slow down or crash your server by flooding it with illegitimate requests. Typically, this can trigger unexpected auto scaling to handle the sudden workload or send your data transfer costs through the roof. Similarly, a brute force attack ramps up the load on your cloud resources, as it systematically tries different sequences of characters until it decodes an encrypted password. And if a hacker gets hold of your access keys they’ll have the potential to launch hundreds of instances and send your cloud costs skyrocketing.

The very fact malicious attacks are a potential cause of spikes underlines just how important it is to take urgent action whenever your cloud resource consumption suddenly shoots up. Because not only will it help keep the lid on your costs, but it could also prevent potential harm to your business—in the form of website downtime, loss of data or data theft.

2. Slow SQL Queries

Although SQL is a powerful database query language, it’s relatively simple to learn. But learning SQL is one thing. Optimizing it is another. Depending on the nature of the query, some database functions will be more efficient than others. And you can create two different queries that return exactly the same result but with vastly different run times.

Even a simple change to a SQL command, such as introducing a wildcard character (%) at the beginning of a LIKE pattern or retrieving more column data than needed, can suddenly bump up the cost of querying a database.

Another common pitfall in database management is indexing. Column indexes speed up queries in the same way an index makes it easier to find information in a book. An inexperienced user may not use indexes at all, so every query has to scan an entire database table. Alternatively, they may go overboard and index everything, making database updates slower and more resource intensive.
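
You can see both effects for yourself with a few lines of Python and SQLite (table and column names here are invented for the example). Equality on an indexed column lets the engine seek straight to the matching rows, while a leading wildcard forces a full scan:

```python
# Demonstrate how a leading LIKE wildcard defeats an index (SQLite, stdlib).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

def plan(query):
    """Print SQLite's query plan for the given SELECT."""
    detail = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
    print(f"{query}\n  -> {detail}")

# Uses the index: the engine can seek directly to the matching key.
plan("SELECT id FROM users WHERE email = 'a@example.com'")

# Leading wildcard: no index seek is possible, so every row is scanned.
plan("SELECT id FROM users WHERE email LIKE '%@example.com'")
```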

3. Inefficient Coding and Architecture

Just because a system works doesn’t necessarily mean it does it in the most efficient way. So, as with optimizing your SQL queries, you may also need to resolve inefficient coding and system architectures.

For example, your website may benefit from better handling and optimization of images, typically by using browser caching, file compression and CSS image sprites. You should also consider database caching to streamline the process of serving up dynamic web pages. And you may need to check for a new or recently updated CMS plugin, which may be responsible for slower performance and sudden increased operational costs.

And finally, be aware that rehosted applications are more susceptible to operational inefficiencies than those that have been rearchitected to take advantage of modern cloud features.

4. Surges in Website Traffic

Sudden bursts of website traffic may not be a direct operational issue. But how you manage them certainly is. If you use either of the two main cloud platforms, AWS and Microsoft Azure, you should enable auto scaling to handle fluctuations in traffic. Nonetheless, if your enterprise runs an e-commerce or consumer-facing website, you should be aware of seasonal purchasing patterns and forthcoming marketing campaigns, so you can accurately forecast cloud costs and avoid any unnecessary false alarms.
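
As a starting point, a scale-out policy can be as simple as the boto3 sketch below, which uses the classic Auto Scaling API. The group name is hypothetical, and in practice you’d wire the policy to a CloudWatch alarm that fires on high CPU or request counts:

```python
# Attach a simple scale-out policy to an Auto Scaling group with boto3.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",  # hypothetical group name
    PolicyName="scale-out-on-traffic",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,  # add two instances each time the policy fires
    Cooldown=300,         # wait 5 minutes before scaling again
)
print("policy ARN:", response["PolicyARN"])
```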

Not only that, but your marketing teams should also implement measures to regulate their own cloud costs. They should optimize website content and PPC ads for better targeted traffic, so they convert more visits into sales. As a result, they’ll not only increase return on their marketing investment, but they’ll also get better value for money from their cloud infrastructure expenditure.

5. Shadow IT

The advent of on-demand, pay-as-you-go cloud computing has made it easier than ever for individual business units to start their own shadow IT projects. But applications and processes, implemented without the knowledge, approval and support of the central IT department, may not be designed with operational efficiency or their deployment environment in mind.

Another issue with shadow IT is the risk to enterprise security and compliance. So if you identify a shadow IT project as the root cause of a cloud cost spike, you should view it as a potentially more serious problem.
Stay in Control
Given there are other factors, such as traffic surges to your website, cloud cost spikes are not always down to operational issues. Other causes, such as large-scale batch processing and big data analysis, are clearly normal enterprise activity. Nevertheless, you should stay on your toes and continually analyze how well operational tasks are performing. That way, you’ll avoid the more expensive alternative of scaling up your instances unnecessarily.

Also remember that some cloud costs gradually creep up without obvious signals or usage spikes. Charges for idle features, such as orphan snapshots and unused Elastic IP addresses (EIPs), often go unnoticed. And running traditional antivirus software in the cloud will be more expensive than a lightweight cloud solution, which offloads most of its processing to the vendor’s own infrastructure.
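
A periodic sweep for this kind of idle spend is easy to script. Here’s a minimal boto3 sketch that flags unassociated Elastic IPs and aging snapshots; the region and 90-day cutoff are arbitrary choices for the example:

```python
# Flag two common sources of creeping cloud costs: unused EIPs and
# old snapshots. Region and age threshold are illustrative.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

# Elastic IPs that aren't associated with anything still accrue charges.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print("unused EIP:", addr["PublicIp"])

# Snapshots you own that are older than the cutoff are worth reviewing.
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap["StartTime"] < cutoff:
        print("old snapshot:", snap["SnapshotId"], snap["StartTime"].date())
```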

The pay-as-you-go nature of on-demand computing means cloud cost spikes are inevitable. But it’s also important you quickly pinpoint the exact cause. And that’s where cloud cost monitoring and optimization tools can help—by giving you aggregated views over all your deployments and the high level of granularity you need to track those root causes down.

Driving cloud adoption across your enterprise is no easy challenge. But one way to change people’s mindset and steer users towards a quicker, easier, open and more transparent method of ordering IT services is to offer them a cloud services catalog.

In short, a cloud services catalog is a central self-service portal for ordering a range of standardized cloud-based IT offerings. And it offers many benefits to enterprise IT.

It can help guide users away from outdated solutions towards more modern and powerful technologies. It can serve as an effective tool to enforce regulatory requirements and common enterprise standards. It will make it easier for end users to identify and provision the IT resources they need. And, through an integrated pricing structure for chargeback and showback, it will raise awareness of the financial implications of the pay-as-you-go delivery model, encouraging business units to use the cloud responsibly.

But creating a cloud service catalog is far from straightforward. You need to do your homework and identify the services and features that offer your organization the best possible balance of functionality, cost and performance. So, in this post, we take a look at seven points you need to consider when drawing up a list of requirements and offer you a few starter tips to designing your first catalog.

1. Business cloud needs: Determine the business goals across your enterprise and the infrastructure resources required to achieve them. What new IT projects are in the pipeline? Which of these are suitable for the cloud? How are your workloads likely to change in future? And what about plans for modifying legacy systems? Individual business units may also have specialist IT requirements, such as specific public cloud environments for processing data streams or analyzing big data.

2. Service capabilities: The leading cloud vendors, such as AWS and Microsoft Azure, offer a myriad of different services and pricing structures. So you’ll need to undertake thorough and meticulous research of the options available on both your public and private cloud. Then select the services that are best aligned with the business processes they’re intended to support. You should also look to drive innovation and cost efficiencies by shaping your company’s IT needs through the capabilities you offer.

3. Purpose-built clouds: Don’t forget that some of your business applications may have specific requirements that only a purpose-built cloud can deliver. For example, a backup and disaster recovery solution may not perform well on a generic cloud designed for many different purposes. This may call for a cloud dedicated purely to receiving backup jobs and sending recovery data, where availability or performance isn’t affected by other applications running in the same environment.

4. Governance and compliance: When drawing up your list of services, make sure you capture all regulatory and compliance specifications required by each individual business unit across your organization. Then compile a full list of all the different features required to meet them. For each feature, list which regulations require it and which services provide it. Then use this information to classify each of your services according to the features they offer (see the sketch after this list). This will make it easier for business units to identify and select the right cloud services that meet their own compliance obligations.

5. Access roles: To support efficient cloud management and aid security and compliance, you’ll also need to define a set of standardized access roles. These will give you control over the services that are visible and available to each user or group of users in your organization, depending on their access permissions.

6. Geography: You’ll also need to consider the geographical location of each user, as this could have implications on service levels and compliance. To address these issues, you may need to include special requirements for load balancing or, to comply with local regulations, offer different processing and storage services to users based in certain states or countries.

7. Pricing: Your pricing structure should give users full visibility over the cost allocation to each of the services and options in your catalog. And, wherever possible, allow them to make clear financial and performance comparisons between service offerings. And don’t forget to include a backend chargeback or showback system as part of your catalog specifications.
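
To make point 4 concrete, here’s a toy Python sketch of a compliance feature matrix. All regulation, feature and service names are invented for illustration; the idea is simply to classify each service by the regulations it can fully satisfy:

```python
# Toy compliance matrix: which regulations require each feature, and which
# services provide it. All names below are invented placeholders.

FEATURES = {
    "encryption-at-rest": {"regulations": {"HIPAA", "PCI-DSS"},
                           "services": {"object-storage", "managed-db"}},
    "audit-logging":      {"regulations": {"SOX", "HIPAA"},
                           "services": {"managed-db", "compute"}},
    "eu-data-residency":  {"regulations": {"EU-DPD"},
                           "services": {"object-storage"}},
}

ALL_REGULATIONS = {r for f in FEATURES.values() for r in f["regulations"]}

def regulations_covered(service):
    """Regulations for which `service` provides every required feature."""
    covered = set()
    for regulation in ALL_REGULATIONS:
        required = [name for name, f in FEATURES.items()
                    if regulation in f["regulations"]]
        if all(service in FEATURES[name]["services"] for name in required):
            covered.add(regulation)
    return covered

for service in ("object-storage", "managed-db", "compute"):
    print(service, "->", sorted(regulations_covered(service)))
```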

How to Design Your Catalog

To ensure you’re well prepared, it’s important you have a good idea of how to design your catalog at a very early stage.

Start by outlining the building blocks or basic templates for the various infrastructure requirements, such as web services, compute services and storage. Then add in your value-added services, such as backup, load balancers, firewalls and security. Next expand your templates by adding more detail, such as choices of different CPU, RAM and storage specifications. Once you’ve finalized the services on offer, move on to creating your workflows—in other words, the processes your users will need to follow to select and provision their services.

But remember: First and foremost, your cloud service catalog should offer capabilities that are strongly aligned to your enterprise business needs. It should also be easy to measure and manage—to ensure optimum availability and performance. It should be flexible and adaptable, so it can accommodate service changes and additions, and also offer vertical and horizontal scaling of resources.

The cloud service catalog will support your role as the organization’s IT broker by allowing business units to authorize and provision their own applications and resources through a set of simple automated processes. That way, you’ll embrace each unit’s IT autonomy while maintaining control over enterprise IT operations as a whole.

The Next Generation of Service Catalog

Traditionally, CIOs have viewed the ITIL framework as the ultimate standard for modeling their service catalog. But, as enterprises gradually transform the way they consume their IT, many are now beginning to question its role. Not only that, but the future of the IT service catalog is likely to follow in the footsteps of app stores and public cloud services, as users increasingly expect the same type of experience with their internal IT.

The next generation of service catalog will also need to address the financial challenges of the pay-as-you-go cloud model. Instead of being purely transactional, where users simply select and provision resources, it should also provide them with the ability to monitor usage and identify unused or underutilized resources—so they can reduce chargeback on their IT consumption.

But why put off this level of functionality until another day? For some enterprises this isn’t just the future. It’s already happening right now.

No More IT vs. Business: 5 Ways to Become an IT Business Leader Today

In the past, an IT leader’s main role was to serve the business and focus on the latest and greatest technology. They supervised an IT department that ran a company’s servers and helped find the best applications or software. While this helped achieve practical corporate objectives, IT leaders weren’t involved in the overall business plan of the enterprise, nor did they realize a real impact for the organization.

Today, as technology becomes integral to every organizational department from marketing to purchasing to sales, IT leaders are becoming important participants in nearly every business decision. They have to act as change management leaders and drive innovation to support new business models, revenue growth for both their company and their clients, and customer retention.

5 Ways to Become an Innovative IT Leader

So how can CIOs and other IT leaders incorporate these strategies to become more relevant to an enterprise’s overall business goals? Below we discuss five tips on how to have a more lasting impact on your business and become an innovative, customer-driven IT business leader today.

1. Find Out How Your Customers Drive Revenue

CIOs today not only have to understand their business, they have to understand their customers’ business. This represents a big change for IT leaders, whose role has evolved dramatically from a custodian of applications and systems to serving their customers. Roger Gurnani, Chief Information and Technology Architect of Verizon, for example, is involved in leading one of the biggest Internet of Things (IOT) implementations in the world, bridging strategy and business technology together. This includes being tasked with managing wireless network and telecom network architectures, along with the IT technologies that drive business processes. Verizon’s IOT network, called ThingSpace, relies heavily on data analytics and machine learning algorithms to automate all of Verizon’s networks, as well as the application ecosystem, mobile devices and sensors connected to it.

This automated system enables Verizon to better serve its customers – for example by directing technicians to people’s homes with the appropriate skills and spare parts to correct a problem. By doing so, Verizon ensures that they constantly innovate and scale their infrastructure to provide a high-quality experience for their customers.

2. Know and Highlight Your Differentiators

To continue to innovate and provide a high level of customer service, you have to produce differentiators. For example, Intel’s CIO, Kim Stevenson, developed a big data analytics platform that is expected to produce $1 billion in value for the company. With a probability model based on machine learning, the platform describes what a sales win looks like, and notifies salespeople of which resellers to call and in which order. The platform has helped change the way Intel sells and markets its products, and has become essential to how they run their business.

Another example of a differentiator is the ability to create an innovation team. Contrary to popular opinion, the increasing use of the cloud and ‘as a service’ solutions does not necessarily mean the need to reduce IT staff, but rather provides an opportunity to expand your staff by creating a highly-skilled innovation team that can add value, scalability, and agility to your business.

3. Generate Productivity Improvements

Demonstrating your ability to generate productivity improvements will also keep you in line with the overall goals of your enterprise. Improving the skills and methodologies with which your staff works can help do so, and agile methodology is a big part of this. Although it was created for software development, agile can be used in IT to emphasize speed and communication between departments. As an IT leader, you know for example which people in your organization are open to new ideas, and you can identify these resources as ones that will invest effort in training, experiment with new software or applications, and build bridges between other departments. As is the case with agile software development, traditional silos between departments are thus removed, and valuable customer and transaction data can be shared across teams and disciplines.

In addition to modern processes and methodologies, CIOs should also look for modern technologies such as machine learning – based on algorithms that can learn from data without relying on rules-based programming. IT can become more intelligent by supporting the business with machine learning solutions, and also tracking, analyzing and revealing issues – especially in the world of distributed systems performance. The more you train your applications to understand your data, the better your predictive analytics capabilities will be. CIOs can take a call center app or their corporate Intranet, for example, and build a data set from it to teach machines to solve a problem.

4. Focus on the Financials

As IT becomes more central to businesses, CIOs are more and more focused on projects that create revenue. According to recent Gartner research, business and revenue growth will be defined by how well companies leverage their technology and digital transformation. Business leaders today see revenue growth as a top priority, and they are increasingly willing to invest in digital initiatives to help drive corporate expansion.

Jim Fowler, GE’s CIO, is one example of how to make this work. At GE Capital, Fowler used analytics and automated tools to drive $400 million into GE Capital over two years. That revenue came via a number of IT-related projects, including a sophisticated and innovative fleet analytics tool.

5. Move from IT-Focused to Customer-Focused

The days of IT teams being custodians of technology have changed. Now there is a need for IT to be leaner and more innovative and evolve into a more customer-focused part of the business. Linking your IT with your customers’ IT is one thing to consider, as it can help produce increased efficiency and innovation within your own organization. For example, integrating your technology to both your suppliers and customers can help produce more efficient order processing and financial administration for your business.

For IT leaders today, this kind of customer-focused mindset seems to be taking hold. According to a recent Gartner study, between 2013 and 2015, the percentage of CIOs giving priority to customer growth and innovation increased from 32 percent to 38 percent, while the percentage focused on the traditional priorities of effectiveness and efficiency actually decreased from 51 percent to 42 percent.

The Cloud is Fundamental

Taking the above five recommendations into consideration, traditional enterprise CIOs must recognize the cloud as a crucial element in order to make these recommendations happen. Running agile and DevOps, automating processes and employing machine learning and big data solutions — all of these need a cloud infrastructure in order for the organization to fully utilize their benefits. If you haven’t done so yet, make sure to experiment with using cloud services for your business through a public cloud provider such as AWS or Azure. These solutions let you efficiently launch resources or services, while rapidly evolving and improving to align with the organization’s needs. The flexibility they offer enables improved deployment, cost savings, and reduced risk – enabling a new emphasis on innovation and speed to market, and the ability for IT to become more business-focused than ever before.

From the very early days of the public cloud, low upfront costs and drastically reduced lead times have made pay-as-you-go computing a particularly attractive choice for start-up businesses. Then, as vendor propositions came of age, a wave of large-scale enterprises began to see the huge business benefits of modern, elastic, on-demand computing environments and started moving their workloads to the cloud.

But recently, a small number of high-profile companies, such as file-sharing service Dropbox and leading inbound marketing and SEO company Moz, have moved in the opposite direction by taking sovereignty of their data and building their own in-house private clouds.

They follow in the footsteps of social gaming platform Zynga, which moved its service out of AWS in 2009. But, for Zynga, the decision to cut loose from Amazon proved a catastrophic failure. Not only did it struggle with the sheer complexity of building its own large-scale IT infrastructure but it also made the move just as sales revenue imploded.

So why would such a company choose to ditch public cloud in the first place? And why have other prolific cloud users done the same? In this post, we examine the key drivers behind such decisions and the implications they could have for other enterprise-level public cloud consumers.

1. Operational Costs

The last 20 years have seen the emergence of a whole new generation of companies that build their entire business model exclusively around the provision of digital services. And the bigger their customer base grows, the more compute resources they consume. If such a business is built in the cloud, even with highly efficient cost monitoring and usage control, it will inevitably see its operational costs escalate as it rapidly grows.

At some point, it may outgrow the cloud and reach a scale where it can afford to start building its own purpose-built data centers. It’s important to remember that, however tight their operational margins, vendors such as AWS and Azure still ultimately exist to make a profit. So for some businesses, such as Dropbox and Moz, it may make more economic sense to host their data on their own infrastructure.

2. Vendor Lock-In

Vendor lock-in is a problem in which a company cannot easily move its on-premise or cloud-based IT to a competing vendor’s infrastructure. When a company hosts its IT on a specific cloud vendor platform, it has to adopt the provider’s own API and other proprietary technologies. These may be incompatible with those of its competitors, making migration to an alternative provider a complex and expensive process. Not only that, but vendors charge for moving data out of their platforms and may also tie customers into lengthy, restrictive contracts.

For Dropbox, the transition to in-house data centers proved an epic feat of IT engineering. What’s more, it was a race against time, as its main contracts with Amazon were coming to an end. So, with the amount of data stored in its cloud ballooning by the day, Dropbox came to the conclusion it was the right time to make a move.

3. Digital Sharecropping

Closely tied to vendor lock-in, digital sharecropping is a new phenomenon that was created by the Internet. The concept comes from the term sharecropping, which refers to the problems southern-state farmers faced as a result of planting their crops on someone else’s land.

With vendor lock-in, you at least have a choice where you host your data, even though it could be costly to move to another provider. By contrast, with digital sharecropping, you have no choice at all, as your entire business model is dependent on one particular vendor.

For example, companies that rely on organic traffic from Google have been ruined overnight as a result of changes to the search engine’s algorithms. Businesses built around Facebook were badly hit when the social network significantly reduced the organic reach of their posts. And, similarly, third-party cloud solutions companies that sell only products and services based around one cloud provider are at the mercy of that provider and could be subject to changes that have the potential to put them out of business.

There’s no evidence to suggest any company has ever moved their data off the cloud through fear of digital sharecropping. And that includes Dropbox, who are known to have cited raw economics as the basis for their decision. All the same, the three main cloud vendors, Amazon, Microsoft and Google, are now beginning to challenge Dropbox with their own file-sharing products — Amazon Zocalo, Microsoft OneDrive and Google Drive. If Dropbox had remained on AWS, it’s not clear what problems it might’ve eventually encountered by hosting on a platform owned by a direct competitor.

4. The Wrong Fit

Sometimes a company might simply find the cloud no longer fits its individual needs. Cloud vendors may lack certain enterprise-grade features for specific products or the right environment to ensure software stability and compliance. In such cases, the only satisfactory option is a purpose-built on-premise system.

This was one of the major factors in the decision by Moz to build its own customized infrastructure. Similarly, when Dropbox moved to its own data centers, it didn’t just build a like-for-like replacement for S3. Instead, it took the opportunity to build a custom system tailored to its own specific technical challenges.

The Trend towards Hybrid Cloud Models

A crucial facet of all three of the high-profile cases we’ve highlighted is the fact that none of the companies involved has completely abandoned the public cloud. Zynga was forced to migrate most of its data back to AWS. And, despite making a successful switch to in-house infrastructure, both Dropbox and Moz continue to use AWS for some of their workloads.

These examples go to show that, in the case of the cloud, it’s not just a simple matter of being all in or all out. Moreover, they underline the current trend towards the adoption of hybrid cloud models, where enterprises use their in-house resources for predictable, everyday workloads and cloud infrastructure services for bursting during peak demand and supporting rapid IT growth.

In other words, hybrid cloud arrangements offer enterprises the best of both worlds. And that’s where the financial balancing act between the benefits of fixed costs and the flexibility of the pay-as-you-go model begins.