vCloud Automation Center – vCAC 5.1 – Amazon EC2 Configuration

Usually most people go straight for connecting vCAC to vCenter, but I have decided to connect to Amazon EC2 first. I’m doing this for a few reasons, but mainly because anyone reading this has access to EC2. All you really need is any computer with a desktop virtualization tool like VMware Workstation and you can test vCAC with Amazon EC2. If you don’t have an Amazon AWS account, go to http://aws.amazon.com and sign up.

Signing up for Amazon AWS is free, and what’s even better is you can also provision “Micro Instances” for free for an entire year as long as you stay within these guidelines. The basics are:

750 Hours of Linux/Windows Micro Instance usage per month (613MB memory). This is enough to run a single micro instance for the whole month.

750 Hours of Elastic Load Balancing plus 15GB of data processing

30GB of Elastic Block Storage

5GB of S3 Storage with 20,000 Get requests and 2,000 Put requests

And some other goodies…

You can run more than one micro instance at a time as long as the cumulative run time of your machines doesn’t go over 750 hours a month. Once you provision an instance it automatically counts as 15 minutes used. I don’t bother trying to calculate by the 15 minutes, so the way I look at it is that I can perform 750 provisioning tests per month if each test is less than an hour.
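To put that math in perspective, here’s a quick back-of-the-napkin sketch in Python. The one-hour-per-test budgeting is just my conservative rounding from above; AWS actually meters at a finer granularity, so treat this as planning math, not a billing calculator:

```python
# Free-tier budgeting sketch: budget one whole hour per provisioning test,
# which is deliberately conservative (a launched instance immediately
# counts a small minimum, and partial hours round up).
FREE_TIER_HOURS_PER_MONTH = 750

def max_tests_per_month(hours_per_test=1):
    """Return how many provisioning tests fit inside the free-tier allowance."""
    return FREE_TIER_HOURS_PER_MONTH // hours_per_test

print(max_tests_per_month())   # 750 one-hour tests
print(max_tests_per_month(2))  # 375 two-hour tests
```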


Background information

Before we begin the configuration there are a few things we need in place. If you don’t already have vCAC installed and the foundation laid check out these posts to get going:

What we’re going to configure

In order to configure EC2 integration we are going to set up some additional components of vCAC, as outlined below:

Credentials – Credentials will be used by our endpoints to authenticate us to the infrastructure element managers that we are going to communicate with.

Endpoint – Endpoints are how we manage connections from vCAC to other infrastructure elements in the environment. There are endpoints that allow us to communicate with EC2, vCenter, vCloud Director, vCenter Orchestrator, Hyper-V, and NetApp filers, as well as physical servers through HP iLO, Dell iDRAC, and Cisco UCS.

Enterprise Group – Although we already created an Enterprise Group, we are going to add Compute Resources to the group in this exercise. For more information on what Enterprise Groups are see my earlier article “vCloud Automation Center – Laying the foundation“.

Reservations – A resource reservation is how we provide available resources to our provisioning groups. Resource reservations are a one-to-one mapping to provisioning groups. A resource reservation gets created for each type of resource you want to make available to your groups. We will discuss these in more detail in another article.

Global Blueprints – A Blueprint is really a service definition that details what the consumer can request and all the policies and configuration of that service. We will create an Amazon EC2 Blueprint that a consumer can request through the service catalog in this example. I will cover Blueprints in greater detail in another article.

Configuring vCAC to provision to Amazon EC2

Creating Credentials

1.) The first thing we need to do is log into the vCAC console at “http://[host]/dcac“, then go to the “vCAC Administrator” menu on the “Left” and select “Credentials“.
2.) On the “Credentials” page select “New Credentials” in the “Upper Right” corner.

3.) Give your “Credential” a “Name” and “Description“. We then need to get your Amazon AWS “Access Key ID” and “Secret Access Key” which are covered in the following steps. The “Access Key ID” will be your “Username” and the “Secret Access Key” will be used as the “Password“.

Getting your AWS Access Key ID and Secret Access Key

4.) Log into your Amazon AWS account at “http://aws.amazon.com“. At the top “Right” corner “Hover” over “My Account/Console” and then select “Security Credentials”

5.) Scroll down the page until you get to the section labeled “Access Credentials” and you will see your “Access Key ID” displayed. Copy and paste this into the “Credentials” “Username” field.

6.) Next “Click” “Show” to display your “Secret Access Key“. Copy and paste this into the “Credentials” “Password” field.

7.) Once you have input your “Username” and “Password” click the “Green” check on the “Left” hand side.

Creating an Endpoint

8.) Next go to the “vCAC Administrator” menu and “Click” “Endpoints“. Once the “Endpoints” page displays, “Hover” over “New Endpoint” and select “Amazon EC2“.

9.) Give your “Endpoint” a “Name” and then “click” the selection box next to “Credentials“. Select the “Amazon EC2” “Credentials” you just created and “Click” “Ok“, then “Click” “Ok” on the “New Endpoint” screen.

10.) You will now see your newly created Endpoint listed on the Endpoints screen. At this point vCAC executes a workflow that connects to Amazon AWS and validates your Credentials. If your credentials are validated, the workflow will proceed to do a Data Discovery. The discovery will detect the Amazon EC2 resources available for use. Once the discovery is finished the Amazon EC2 resources will become available within the “Enterprise Group” for selection.
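vCAC’s validation workflow is internal to the product, but if you are curious what the equivalent raw API call looks like, here is a rough sketch using the boto3 SDK. To be clear, this is not what vCAC runs under the hood, and the function names are my own:

```python
def region_names(describe_regions_response):
    """Pull a sorted list of region names out of a DescribeRegions response."""
    return sorted(r["RegionName"] for r in describe_regions_response["Regions"])

def check_aws_credentials(access_key_id, secret_access_key):
    """Fail with an auth error if the key pair is rejected by AWS;
    otherwise return the EC2 regions visible to the account."""
    import boto3  # imported lazily: only needed when actually calling AWS
    ec2 = boto3.client(
        "ec2",
        region_name="us-east-1",
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key,
    )
    return region_names(ec2.describe_regions())
```

A successful DescribeRegions call is a cheap way to prove the Access Key ID / Secret Access Key pair works, which is essentially what the endpoint validation accomplishes before discovery runs.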

Adding Compute Resources to an Enterprise Group

11.) Next let’s go to the “vCAC Administrators” menu and select “Enterprise Groups“. Once on the “Enterprise Groups” page, “Hover” over the “Enterprise Group” we created and select “Edit”.

12.) In the “Enterprise Group” we now see the “Amazon Regions” that are available. Select the “Amazon Region” that you would like to use and “Click” “Ok“.

13.) Next if you go to the “Enterprise Administrators” Menu on the left and select “Compute Resources” you will see a “Compute Resource” for each “Amazon Region” you selected. Once the “Compute Resource” is available we can create a “Resource Reservation” to assign to our “Provisioning Group“.

Creating a Reservation

14.) On the “Enterprise Administrators” menu select “Reservations“, then “Hover” over “New Reservation” in the upper right corner and select “Cloud”.

16.) vCAC will “auto-generate” a “Name” for the “Reservation“; however, you can change the name if you like. Then select the “Drop Down” dialog next to “Provisioning Group” and “Select” the “Provisioning Group” we created.

17.) Next, if you like, you can set a “Machine Quota” to limit the number of machines that can be provisioned onto this “Amazon AWS Reservation“. You must set a “Priority” for the “Reservation“, which is used to assist in making placement decisions if you have multiple reservations. I will talk more about this in another post. Once you have set your “Priority“, “click” the “Resources” tab above.

18.) “Amazon AWS” utilizes “Key Pairs” for enhanced security of machine management tasks. You have a few options within vCAC. You can let vCAC “Auto-generate a key pair per Provisioning Group“, “Auto-Generate a key pair per Machine“, or you can use a “Specific key pair” that you have already created through the “Amazon AWS” console. I’m going to use the “Auto-Generated per Provisioning Group” option in this example.
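If you want to understand what vCAC is doing on your behalf with the auto-generate options, here is a boto3 sketch of creating a key pair and saving its private key, done by hand. The naming convention is purely hypothetical, not vCAC’s actual scheme:

```python
def key_pair_name(provisioning_group):
    """Derive a per-group key pair name (a made-up convention for illustration)."""
    return f"vcac-{provisioning_group.lower().replace(' ', '-')}"

def create_and_save_key_pair(region, name):
    """Create an EC2 key pair and write the private key to <name>.pem.
    AWS returns the private key material exactly once, so store it safely."""
    import boto3  # lazy import: only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name=region)
    key = ec2.create_key_pair(KeyName=name)
    pem_path = f"{name}.pem"
    with open(pem_path, "w") as fh:
        fh.write(key["KeyMaterial"])
    return pem_path
```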

19.) Next we need to select the “Locations” within the “Selected AWS Region” that we want to make available for use. I’m going to select them all. Then we need to select the “Security Group” we would like to make our machine part of. The “Security Group” can be looked at as firewall rules for your machine. I’m going to select my “Default” “Security Group“. Optionally you can select a “Load Balancer” to attach the machine to as well. I will cover this in a later article. When you are finished “Click” “Alerts” above.
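Since a Security Group is essentially a firewall rule set, a quick boto3 sketch makes the concept concrete. This creates a group that allows only inbound SSH; it is an illustration of the raw API, not something vCAC requires you to do (the group name and helper are my own):

```python
def ssh_permission(cidr="0.0.0.0/0"):
    """Build the IpPermissions entry for inbound SSH (TCP port 22)."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr}],
    }

def create_ssh_security_group(region, name):
    """Create a security group whose only inbound rule is SSH."""
    import boto3  # lazy import: only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name=region)
    group = ec2.create_security_group(
        GroupName=name, Description="Allow inbound SSH only")
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"], IpPermissions=[ssh_permission()])
    return group["GroupId"]
```

The “Default” group I selected above is more permissive than this; in a real deployment you would lock the CIDR range down to your management network.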

20.) Here you can optionally enable “Alerts” that will send notifications if the “Reservation” is nearing capacity. Set the “Quota Threshold” for your alert, the email addresses to be notified, and the “Reminder Frequency” and click “Ok”

21.) You will now see your newly created “Reservation” listed on the “Reservations” screen. Now select “Global Blueprints” located under the “Enterprise Administrators” menu.

Creating a Blueprint

22.) Once you are on the “Global Blueprints” page “Hover” over “New Blueprint” and select “Cloud”

23.) Once on the “Blueprint Information” tab give your “Blueprint” a “Name“, and optionally change the “Display Icon“. Next assign it to a “Group(s)” and then optionally override the “Prefix” associated with this “Blueprint“. Then you can optionally set the max number of machines a user can request for this blueprint and a daily cost if you wish. Once complete select the “Build Information” tab above.

24.) On the “Build Information” tab change the “Blueprint Type” to “Server”

25.) Then next to “Amazon Machine Image” click the “Selection” box.

26.) Once the dialog box appears you can filter the results at the top to narrow down to the AMI you would like to use. If you selected multiple regions for use, make sure the AMI is in the Region you want to use. Select the “AMI” you would like to use and click “Ok”
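The AMI list vCAC shows here comes from its data discovery; the same search can be reproduced directly against the EC2 API. Here is a hedged boto3 sketch (the filter set and name pattern are examples, not what vCAC queries internally):

```python
def ami_filters(name_pattern, architecture="x86_64"):
    """Build a DescribeImages filter list mirroring a console AMI search."""
    return [
        {"Name": "name", "Values": [name_pattern]},
        {"Name": "architecture", "Values": [architecture]},
        {"Name": "state", "Values": ["available"]},
    ]

def find_amis(region, name_pattern):
    """Return (ImageId, Name) pairs for Amazon-owned AMIs matching the pattern."""
    import boto3  # lazy import: only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_images(
        Owners=["amazon"], Filters=ami_filters(name_pattern))
    return [(img["ImageId"], img["Name"]) for img in resp["Images"]]
```

Note that AMI IDs are region-specific, which is exactly why the step above warns you to check the Region before selecting.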

27.) “Optionally” you can “override” the “Key Pair” setting that we configured in the “Reservation“.

28.) “Optionally” you can “Enable” network options for the “Blueprint“. This will allow the requester to select the “Security Group” they would like to apply to the machine if more than one was selected in the “Reservation“.

29.) Next select the “Instance Types” you would like the requester to be able to choose from.

30.) Then select the “Security” tab above.

Making a Request

31.) “Hover” over the newly created “Blueprint” on the “Global Blueprints” page and select “Request machine” to test our configuration. You can also go to the “Self Service” menu and select “Request Machine”

32.) On the “Confirm Machine Request” page click the “Drop Down” next to “Instance Type” and select the type of “Instance” you would like to request.

33.) Then click the “Drop Down” next to “Provision Into” and select “Non-VPC Location” because we do not have a “VPC” configured.

34.) Next select the “Drop Down” next to “Location” and select a location to provision to.

37.) “Optionally” if you added more than one “Security Group” to your “Reservation” and “Enabled” “Network Options” in the “Blueprint” you can select a different “Security Group” for your machine. Click “Ok” when finished.

38.) Next under the “Self-Service” menu select “My Machines” to track the status of your request.

39.) Your newly “Requested” machine will appear under “My Machines” and the status will show “Requested“. Note: If your machine does not show up, click refresh as it can take a few seconds for it to appear.

40.) If you continue to “Refresh” the page you will see the request’s updated “Status“. The next “Status” your “Request” will go to is “CloudProvisioning“.

41.) After your request goes to “CloudProvisioning“, if you log into your “AWS Console” and go to “AWS Management Console“, then “EC2“, and then “Instances“, you will see your newly provisioned machine in the “Pending” state.

42.) Once finished, the machine state in “vCAC” will go to “MachineProvisioned“, then “Turning On“, and finally “On”.

43.) You will now see your machine “Running” in the “AWS Console“.

44.) In “vCAC“, if you “Hover” over your newly created machine you will see the “Machine Options Menu“; select “Edit”.

45.) On the “Machine Information” tab near the bottom you will see “Admin Password“. Here you can show the “Local Password” for your newly provisioned “Amazon AWS Instance“. Click the “Storage” tab above. Note: It can take Amazon 30+ minutes to make the password available even through the AWS Console. Once it is available from Amazon, it will not be available in vCAC until vCAC performs a data collection.
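Behind the scenes, the Windows admin password comes from the EC2 GetPasswordData API, which returns nothing until Amazon has generated it (hence the 30+ minute wait noted above). Here is a boto3 sketch of polling for it. Keep in mind the returned value is encrypted with the instance’s key pair, so you still need the matching private key to decrypt it (decryption is omitted here):

```python
def password_available(resp):
    """True once EC2 has published the encrypted Administrator password."""
    return bool(resp.get("PasswordData", "").strip())

def fetch_encrypted_password(region, instance_id):
    """Return the base64 encrypted password blob, or None if not ready yet."""
    import boto3  # lazy import: only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.get_password_data(InstanceId=instance_id)
    return resp["PasswordData"] if password_available(resp) else None
```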

46.) On the “Storage” tab you can add “EBS” storage “post-provisioning” if you would like. Click on the “Network” tab above.

47.) On the “Network” tab you can assign an “Elastic IP Address” if you have made them available through “Amazon AWS“. You can also change the “Security Group” and assign the machine to a “Load Balancer“. Click “Ok” when you are done. More on these options soon.
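For reference, attaching an Elastic IP outside of vCAC is a two-step API operation: allocate an address, then associate it with the instance. A boto3 sketch (the helper names are mine; how the address is referenced depends on whether it is a VPC or EC2-Classic address):

```python
def association_kwargs(instance_id, addr):
    """Pick the right AssociateAddress parameters: VPC addresses are
    referenced by AllocationId, EC2-Classic ones by PublicIp."""
    if "AllocationId" in addr:
        return {"InstanceId": instance_id, "AllocationId": addr["AllocationId"]}
    return {"InstanceId": instance_id, "PublicIp": addr["PublicIp"]}

def associate_new_elastic_ip(region, instance_id):
    """Allocate a fresh Elastic IP and attach it to the given instance."""
    import boto3  # lazy import: only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name=region)
    addr = ec2.allocate_address()
    ec2.associate_address(**association_kwargs(instance_id, addr))
    return addr["PublicIp"]
```

Remember the note below: an address allocated this way will not show up in vCAC until the next inventory data collection runs.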

There are a few important things to note. If you add additional services such as Elastic IP Addresses, Elastic Block Storage, Elastic Load Balancers, Security Groups, etc. through the Amazon AWS Console, they will not appear as available in vCAC until after the next Inventory Data Collection. You can perform a manual data collection as well as change the data collection frequency by doing the following:

Go to “Enterprise Administrator” menu and select “Compute Resources“

Hover over the “Compute Resource” and select “Data Collection“

Under the “Inventory” section you can set the “Frequency” in hours as well as manually “Request” a “Data Collection“.

If you “Request” a “Data Collection” you can select “Refresh” at the bottom of the page to get the status of the collection.

Sid Smith, founder of DailyHypervisor, is considered a cloud expert in the IT field with over 10 years of experience in virtualization, automation, and cloud technologies. Sid started in the industry designing and implementing large-scale enterprise server and desktop virtualization environments for Fortune 100 and 500 companies. He later went on to become a key employee at DynamicOps, the well-known creators of Cloud Automation Center. In July 2012 DynamicOps was acquired by VMware, which has adopted Cloud Automation Center as a centerpiece of its vCloud Suite of products.
Sid has helped dozens of Fortune 100 and 500 enterprises successfully adopt both private and public cloud strategies as part of their IT offerings, resulting in large operational and capital savings for his customers. Sid continues to help large enterprise customers reach their hybrid cloud strategies at VMware. On DailyHypervisor you will find exclusive content that will help you learn how to adopt a successful cloud strategy through the use of VMware Cloud Automation Center, OpenStack, and other industry-recognized cloud solutions.


Without lifecycle traceability and visibility across the tool chain, stakeholders from Planning-to-Ops have limited insight and answers to who, what, when, why and how across the DevOps lifecycle. This impacts the ability to deliver high quality software at the needed velocity to drive positive business outcomes. In his session at @DevOpsSummit 19th Cloud Expo, Eric Robertson, General Manager at CollabNet, showed how customers are able to achieve a level of transparency that enables everyone fro...

The volume of transactions running through websites and mobile apps make customer-facing applications crucial to online businesses. If these applications perform well for their users, they generate revenue for the business. If they don't, they affect the credibility of the business, which in turn affects the overall revenue. It is therefore imperative that businesses understand how well their revenue-critical applications are behaving for their end users.
From an IT team's point of view, unde...

There are many companies offering network monitoring solutions to small, medium and big companies. The question is: is installing a monitoring software in our IT infrastructure really economically viable? Here we will touch some key points, which are directly affected by network monitoring software.

Businesses have always had to transform to find better and more efficient ways to deliver value faster to their users, customers or consumers. The motivating factors are shorter lead times, automated and streamlined value flow, as well as reduction of overall costs and bound capital, requiring enterprises to transition to a continuous innovation and optimization model. Prominent examples […]

Jumping on the Agile bandwagon might help, but only if done right. What makes a good Agile project and what makes a bad one?
The move to Agile in the last decade has resulted in projects that finish faster, produce better software, and come in under budget. Look up any new, hot tech company and you'll find articles lauding their Agile philosophy. You might think that success is guaranteed if you get your team to commit the Agile Manifesto to memory.
The problem with looking at this in a single...

Home-maintenance repair and services provider ServiceMaster develops applications with a security-minded focus as a DevOps benefit.
To learn how security technology leads to posture maturity and DevOps business benefits, we're joined by Jennifer Cole, Chief Information Security Officer and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee, and Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps. The dis...

What is inner source? I spoke about it during my webinar on Tuesday, Nov. 8, but here's a review.
At its most fundamental level, inner source is about replicating successful work practices of the open-source world to commercial software projects.
There are numerous examples of open source software making big splashes in the commercial space - Linux, Firefox, Apache - and inner source takes many of the lessons learned from these massively successful projects and shows you how you can apply some...

Cloud computing budgets worldwide are reaching into the hundreds of billions of dollars, and no organization can survive long without some sort of cloud migration strategy. Each month brings new announcements, use cases, and success stories.