Despite the recent flood of high-profile network breaches, hacking attempts are hardly new. In 1995, I was attending school in Helsinki when I discovered a password "sniffer" attack on our university network. In response, I wrote a program called the "secure shell" to safeguard information as it traveled from point to point within the network. The new program shielded all of our data and ensured that these kinds of attacks could not jeopardize our logins.

This program, SSH, works by generating an encryption key pair - one key installed on the server and the other held on the user's computer - and encrypting the data transferred between those two endpoints. Today, almost every major network environment - including those of large enterprises, financial institutions and governments - uses a version of SSH to protect data in transit and to let administrators operate systems remotely. Organizations use SSH to encrypt everything from logins to health records, financial data and other personal information.
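In practice the flow is: generate a key pair, install the public half on the server, and use the private half to open an encrypted channel. The sketch below illustrates this with paramiko, a third-party Python SSH library (my choice of tooling for illustration, not something the article prescribes); the host name, user name and file name are placeholders.

```python
import paramiko  # third-party library: pip install paramiko

# Generate a 2048-bit RSA key pair. The public half would be installed on
# the server (typically appended to ~/.ssh/authorized_keys); the private
# half never leaves the user's machine.
key = paramiko.RSAKey.generate(2048)
key.write_private_key_file("id_rsa_demo")
print("public key:", "ssh-rsa " + key.get_base64())

# Open an encrypted channel to the server and run a command over it.
# "server.example.com" and "admin" are placeholder values.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
client.connect("server.example.com", username="admin", key_filename="id_rsa_demo")
_, stdout, _ = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```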

Management of Keys a Low Priority

Despite the fact that SSH keys safeguard extremely sensitive information, companies have been remarkably casual about managing SSH key generation, access and location throughout their network environments. It is as if a home security company made numerous copies of a customer's house keys, scattered them in the streets and never changed the lock. All it takes to pick up one of these keys and use it to access encrypted data is interest, time and a little know-how.

By failing to be more diligent about SSH key management, organizations are leaving themselves open to security breaches and to noncompliance with federal regulations. Many cannot control who creates keys, how many are created, or where keys end up in the network after they are deployed, and those gaps expose them to network-wide attacks.

Swept Under the Rug

The issue has remained concealed within the IT department, shielded by its highly technical nature and by organizational silos. System administrators may not appreciate the full scope of the problem because they typically see only a small piece of their environment. On the other side of the company, even if executives and business managers recognize that there is an issue, they are usually too busy to evaluate its scope or its possible implications.

SSH key mismanagement is as mysterious as it is widespread. Through conversations with prominent governments, financial institutions and enterprises, we have found that companies commonly have anywhere from eight to more than 100 SSH keys in their environments granting access to each Unix/Linux server. Some of these keys also permit high-level root access, leaving servers vulnerable to "high-risk" insiders. These insiders - anyone who has ever been given server access - can use mismanaged SSH keys to gain permanent access to production servers.
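Getting a handle on the problem starts with simply counting the keys. Below is a minimal audit sketch in Python, assuming OpenSSH's default authorized_keys locations (the article names no specific tooling); real environments would need to cover nonstandard paths and many hosts.

```python
import glob
import os

def inventory_keys(home_root="/home"):
    """Count authorized_keys entries per account on one Unix/Linux host."""
    counts = {}
    # OpenSSH default layout: /home/<user>/.ssh/authorized_keys
    for path in glob.glob(os.path.join(home_root, "*", ".ssh", "authorized_keys")):
        user = path.split(os.sep)[-3]  # the <user> path component
        with open(path) as f:
            keys = [ln for ln in f if ln.strip() and not ln.startswith("#")]
        counts[user] = len(keys)
    return counts

if __name__ == "__main__":
    for user, n in sorted(inventory_keys().items()):
        print(f"{user}: {n} authorized key(s)")
    # Don't forget root: entries in /root/.ssh/authorized_keys grant root access.
```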

Mismanaged SSH Keys Give Viruses the Advantage

Each day, the probability of such a breach occurring increases. Attacks are becoming more prevalent and sophisticated, and news stories about network breaches appear daily. Using SSH keys as an attack vector in a virus is very easy, requiring only a few hundred lines of code. Once a virus gains entry, it can use mismanaged SSH keys to spread from server to server throughout the company.

Key-based access relationships are so densely interconnected that a successful attack is extremely likely to travel through all of an organization's servers, especially if the virus also uses additional attack vectors to escalate privileges to "root" after breaching a server. Given the sheer number of keys in circulation, a virus will likely infect nearly all servers within minutes, including the disaster recovery and backup servers that are typically managed with such keys as well.
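One way to see why the spread is so fast is to model key-based trust as a directed graph ("a private key on host A is authorized on host B" becomes an edge A to B) and compute reachability. The sketch below uses made-up hosts and edges purely for illustration:

```python
from collections import deque

# Hypothetical trust edges: a compromise of the key host can reach each target.
trust = {
    "web1": ["app1", "app2"],
    "app1": ["db1", "backup1"],
    "app2": ["db1"],
    "db1": ["backup1"],
    "backup1": [],
}

def reachable(start, edges):
    """Breadth-first search: every host an attacker could reach from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        host = queue.popleft()
        for nxt in edges.get(host, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A single compromised front-end server reaches every host, backups included.
print(reachable("web1", trust))
```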

Industry Regulations Flouted

Organizations lacking proper SSH key management practices are not only vulnerable to security breaches; they are also out of compliance with mandatory security requirements and laws. SOX, FISMA, PCI and HIPAA all require control over server access as well as the ability to terminate that access. Companies may also be violating their own internal security policies (in some cases, policies mandated by customers).

These risks are not created by the SSH protocol or its most commonly used implementations. Rather, they result from faulty processes for handling SSH keys, inadequate time and resources to research the problem and develop solutions, a lack of understanding of the issue's implications, and the hesitancy of auditors to flag problems for which they have no solutions.

Clearly, the improper management of SSH keys cannot be glossed over forever. Without properly auditing, controlling and terminating SSH key-based access to their IT systems and data, most healthcare providers, enterprises and government agencies are easy targets for an attacker.

Steps to Combat the Risks

Before steps can be taken to solve a problem, it must be recognized as a legitimate issue. A remediation project may involve multiple IT teams and will require proper endorsement and support within the company.

There are multiple steps that make up the core of the remediation project:

Automating key setups and key removals, eliminating human error and manual work, and reducing the number of administrators who provision keys from hundreds to almost none.

Controlling what commands can be executed using each key and where each key can be used from (see the sketch after this list).

Enforcing proper processes for establishing keys and for other key operations.

Monitoring the environment to determine which keys are actively in use, and removing keys that are no longer used.
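OpenSSH already supports the "controlling" step natively: an authorized_keys entry can carry from= (restricting which source addresses may use the key) and command= (forcing a single command) options. A minimal sketch that flags entries lacking both restrictions follows; the parsing is deliberately naive (it will not handle options containing quoted spaces), and the file path is supplied by the caller.

```python
import sys

KEY_TYPES = ("ssh-rsa", "ssh-ed25519", "ecdsa-sha2", "ssh-dss")

def unrestricted(line):
    """True if an authorized_keys entry carries no from=/command= options.

    A locked-down entry looks like:
      from="10.0.5.*",command="/usr/local/bin/backup.sh" ssh-ed25519 AAAA... backup
    Naive parsing: assumes no whitespace inside quoted option values.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return False  # blank lines and comments are not keys
    options = line.split()[0]
    if options.startswith(KEY_TYPES):
        return True  # line starts directly with the key type: no options at all
    return not ("from=" in options and "command=" in options)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # path to an authorized_keys file
        for n, line in enumerate(f, 1):
            if unrestricted(line):
                print(f"line {n}: key has no from=/command= restriction")
```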

The Future of Security

SSH remains the gold standard for data-in-transit security, but in today's threat landscape, organizations must also address how SSH network access is managed.

Nearly all of the Fortune 500, along with several prominent government agencies, are inadvertently exposing themselves to major security threats from hackers or rogue employees because they continue to operate out of compliance. This problem cannot be solved overnight; it will take years and thousands of well-trained people to fully address. Doing so must be the responsibility of the entire organization: time must be allotted, and proper management of SSH user keys must become a priority.

Tatu Ylönen is the CEO and founder of SSH Communications Security. While working as a researcher at Helsinki University of Technology, he began working on a solution to combat a password-sniffing attack that targeted the university's networks. The result was the secure shell (SSH), a security technology that would quickly replace the vulnerable rlogin, TELNET and rsh protocols as the gold standard for data-in-transit security.

Tatu has been a key driver in the emergence of security technology, including the SSH and SFTP protocols, and is a co-author of globally recognized IETF standards. He has been with SSH since its inception in 1995, holding various roles including CEO, CTO and board member.

In October 2011 Tatu returned as chief executive officer of SSH Communications Security, bringing his experience as a network security innovator to SSH’s product line. He is charting an exciting new course for the future of the space that he invented.

Tatu holds a Master of Science degree from the Helsinki University of Technology.
