AWS, and more recently Azure, dominate today’s discussions around storage, backup and compute power. A quick glance at technology headlines reveals a common coverage theme that ties these writers together: the ongoing discussion around the benefits of going all-in on the public cloud. In most cases, however, technology journalists are writing about large corporations or big-name installations, which may or may not reflect the actual trends taking place in the marketplace, especially at mid-size companies and organizations experiencing a growth spurt.

As someone who regularly engages with CIOs at mid-size and smaller companies and organizations, I don’t see them going all-in on the cloud right now; rather, some are pulling back from it and either opting for a hybrid cloud solution or going all-in with on-prem backup solutions. In fact, according to a survey published by SMB analyst firm Techaisle LLC, the hybrid cloud is now being used by 32 percent of midmarket (100 to 999 employees) organizations, and that figure is expected to remain relatively flat at 31 percent into next year, in spite of what you read in the press about AWS or Azure penetration.

Multi-level security information systems, better known as MLS systems, have proven their merit in the DoD arena, providing a security net and thwarting threats to data and infrastructure within a unified system. This type of implementation would certainly make sense commercially, but in the fast-moving, ever-changing enterprise space, CTOs have historically been hesitant to adopt the advancements these trusted operating systems deliver on top of optimized hardware.

That doesn’t stop my ever-curious industry colleagues from asking me for advice about the exact level of complexity involved with MLS, and whether or not the potential ROI and added security enhancements are real.

DockerCon sailed through Seattle recently, leaving in its wake a new swath of rapid adopters and a trail of related company and product announcements. Docker itself delivered perhaps the most exciting news of all with the launch of the Docker Store, a searchable marketplace for validated software and tools delivered in the Docker format, plus the release of version 1.12 of its software, currently in public beta.
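For readers curious what consuming one of those validated images actually looks like, here is a minimal sketch using the Docker SDK for Python; the image name and command are hypothetical placeholders for illustration, not anything from the Docker Store announcement.

```python
# Minimal sketch: pull a public image and run a throwaway container.
# Assumes a local Docker daemon and the "docker" Python package
# (pip install docker). The image and command are illustrative only.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Run a short-lived container from a public image and capture its output.
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,  # clean up the container after it exits
)
print(output.decode().strip())
```

The whole lifecycle -- pull, run, capture output, clean up -- fits in a handful of lines, which is much of the format’s appeal.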

But the most important message delivered during the event came from Docker’s CEO, Ben Golub, who stated during his keynote address that upwards of 70 percent of enterprise companies have now implemented containerization.

At the recent Bio-IT World Conference in Boston, I had the privilege of speaking to an audience made up primarily of life sciences and medical researchers. My main message concerned the cloud, specifically the trend of cloudbursting. This audience is extremely important to me personally, as my daughter was diagnosed with autism at the age of two.

She is now seven years old, stands 3’9” and is in the first grade -- the sweetest kid you could ever meet -- and though she is reading at grade level, her life is not without serious challenges. As a parent, it is heartbreaking to watch your child hooked up to machines and sensor probes that monitor her brainwaves following a recent string of epileptic seizures.

The storage landscape has been shifting continually over the last five years, and the next year should prove no different. With the transformation away from proprietary storage systems leading the way, technology advancements that lower latency and increase SSD capacity will only accelerate that shift.

From SMBs to enterprise IT data centers, the vast array of available storage options creates opportunities for growth and success over the next year, but the challenge lies in finding the right approach for your business and use case. It is paramount today that business and technology leaders not only have the capability to securely store massive amounts of data, but also the ability to make sound business decisions based on that data.

The Paris climate talks just concluded, with at least one mainstream media source hailing the resulting agreement as “the world’s greatest diplomatic success.” Back home, aside from those married to the issue, we hardly noticed, especially those of us in the technology industry. Life continued as normal: we drove to work and back home in our SUVs, spent the evening engaged in some form of electronic entertainment and then went to bed in our climate-controlled homes, while miles away our companies’ data centers hummed through the night, powering the Web for millions of night owls.

Heading into next week’s Supercomputing 2015 conference in Austin, the topic on the tip of every attendee’s tongue is simply "supercomputing." Researchers, industry leaders and computational users are gathering by the thousands to investigate how many ways we can explore the power of the microchip in a supercomputing environment.

And it couldn’t happen at a better time. What once was an American stranglehold -- the illustrious title of the country with the most powerful supercomputer -- has slowly slipped from our grasp since the end of the Cold War.

Maybe, as Americans, we devalued the supercomputer’s place in the race to capitalize on the PC, and subsequently lost our focus? Consider two of the most famous supercomputers in the American experience -- WOPR and Watson -- neither of which has commercial, government, military or research value. Watson is merely an advertisement for IBM, best known for its Jeopardy appearances, while WOPR was a fictional machine in the popular 1980s Cold War flick, War Games.

As the director of scientific computing for the Fred Hutchinson Cancer Research Center in Seattle, Dirk Petersen, an industry acquaintance of mine, needs his internal IT organization to store and catalogue large amounts of unstructured and genomic data, all of which is critical to his organization and its many constituents. A data loss caused by server or storage hardware failure would be problematic for his organization and its researchers at best, and catastrophic at worst.

Petersen’s IT team is given a limited budget each year to purchase and maintain its researchers’ storage systems, making it difficult to afford the inherent costs and overhead associated with the classic storage manufacturers. And not just hard costs. These storage platform purchases, in my opinion, can require significant up-front investment, hinder the ability to mix and match solutions along the way and force an organization’s IT team to conduct dreaded forklift upgrades that drain resources.

Last month we explored the pros and cons of open-source OpenStack, a platform I admittedly love, but which is not meant for everyone (for reasons laid out in that post). Today the topic shifts to OpenStack security. Why security? Because security is not only a hot media topic, but also one that automatically forces the CIO/CTO to analyze his or her own security situation within the organization. Is your open-source OpenStack network secure?

OpenStack is a framework with a lofty goal: providing infrastructure resources to consumers (developers, end users, business units) in a rapid, self-service manner. This singular, self-serve aspect is what has made the platform so popular. The need for security with OpenStack, however, wasn’t addressed until much later, after it had become a clear problem internally. It was a classic technology afterthought: safety sacrificed for speed and efficiency.
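To make that self-service model concrete, here is a minimal sketch using the openstacksdk Python client; the cloud profile, image, flavor and network names are placeholders I have assumed for illustration, not details from any particular deployment.

```python
# Minimal sketch of OpenStack self-service provisioning via openstacksdk.
# Assumes a cloud profile named "demo" exists in clouds.yaml; the image,
# flavor and network names below are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="demo")

# Look up the building blocks by name, then request a server on demand --
# the same call path a developer or business unit would use for self-service.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report its status.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

Notice that nothing in that flow asks who is allowed to provision what, or on which network segment the instance should land; those are exactly the questions the security-as-afterthought pattern leaves for later.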

It’s inescapable: Companies and organizations of all sorts are looking to adopt open-source OpenStack. In fact, according to a survey conducted by Linux.com at the 2014 CloudOpen in Chicago, OpenStack is now the most popular open-source cloud project, followed closely by Docker and KVM. The OpenStack Foundation now includes 800 corporations and organizations as members, and 2,000 code contributors.

Yet I’d argue that many of the organizations interested in adopting open-source OpenStack are not in the best position to do so, nor are they weighing the right KPIs when evaluating it. I believe that adopting OpenStack has the potential to save money and increase computing power -- I’m a huge fan, in fact -- but a successful deployment really depends on your provider relationships and on the depth and capabilities of your company’s technical staff.