Yeah, talk about a crappy situation. We have two of their nodes running most of our VMs. I can't even get into the management interface now due to a bug in their software, so I'm scrambling to move the VMs. Sucks, because the product was actually pretty awesome.

Are you headed to Cloud Expo this week? NIMBOXX CTO and founder David Cauthron will be on stage delivering a case study presentation titled “How Software-Defined Data Center Technology is Changing Cloud Computing.”

Learn how a mid-sized manufacturer of global industrial equipment bridged the gap from virtualization to software-defined services, streamlining operations and costs while connecting the infrastructure between its corporate data center and remote partner sites.

I have had the pleasure of serving this year on the National Science Foundation’s Industry/University Cooperative Research Center for Cloud and Autonomic Computing (CAC) and was recently appointed chairman. I am honored to be selected by my peers, and I believe the work being done by this organization and across research universities, in conjunction with private and public industry, is critical to the future of cloud computing.

The CAC is taking an important leadership role in research and in defining the standards and benchmarks that will guide the next generation of cloud adoption and growth. As chairman, I am committed to working closely with the entire Industry Advisory Board to ensure we are laser-focused on the right priorities. Those include fostering public-private partnerships on topics directly related to emerging cloud standards, and creating a trained workforce capable of sustaining the activities and opportunities created through these joint efforts.

There is a lot of work ahead of us as we look to 2015 and beyond, but being part of an organization like CAC gives me confidence that we are focused on the right challenges and opportunities facing our industry.

On Monday we announced our partnership with VDIworks, a leading provider of virtual desktop enablement and management software. We’re genuinely excited about continuing a journey that started more than 10 years ago when many of us were VDI pioneers (at different companies) – and that is to deliver the benefits of a scalable virtual desktop infrastructure without the cost and complexity.

Back in the day, we were running VMware Workstation and Ground Storm X (GSX) Server on blades before Microsoft had any idea what to do about licensing. We were working with Wyse and Neoware on thin clients running Windows XP Embedded operating systems that, with licenses, cost almost as much as laptops. RDP was the leading remote desktop protocol, better suited for remote server management than for providing a robust desktop experience.

Since then we’ve been involved in many VDI advancements, including things like VMware and Windows licensing changes, remote protocols such as PC over IP (PCoIP), and the emergence of connection brokers and full VDI management. While we continue to drive disruptive change in the data center infrastructure, VDIworks has continued to provide leading subject-matter expertise in VDI-specific data center solutions.

Together, we have a yin-yang thing going on that has matured over a decade and allows us to deliver a new standard of end-to-end VDI solution to our customers.

If you’d like to hear more about our VDI offerings or see a demo, please send an email to info@nimboxx.com.

Think that deploying a high-performance virtual infrastructure is an expensive and time-consuming ordeal? Think again. Now you can deploy a virtual environment in a matter of minutes without having to spend big bucks on hypervisor licenses.

Want to know how? Join this webinar with industry analyst Mark Bowker of ESG and our very own Trent Fitz, NIMBOXX CMO, where we’ll discuss how to build a powerful, affordable virtual computing infrastructure.

You’ll walk away knowing:

How virtualization can save your company money

Hyper-convergence basics: what it is, why it’s cool and what options are out there

The state of virtualization market trends and adoption

How to make informed virtualization budget and purchase decisions

What’s next in the evolution of virtualization / hyper-convergence

Bonus: One lucky attendee of this webinar will win a Parrot AR.Drone 2.0 Elite Edition Quadricopter! Register today, and we will see you on November 6th at 1pm CT.

Speakers:

Mark Bowker - Analyst at Enterprise Strategy Group

Senior Analyst Mark Bowker joined ESG in 2006, and he focuses on all things related to virtualization and cloud computing. Mark researches various cloud and virtualization technologies and evaluates the impact that the solutions have (or will have) on IT strategy and the broader marketplace. His other research areas include data center management, application workload deployment in next-generation data centers, and the external influences that drive the adoption of data center technologies.

Prior to joining ESG, Mark ran the IT organization for a business consulting and technology services company. A Microsoft Certified Systems Engineer, Mark is experienced in designing, implementing, and expanding network and system infrastructure for global organizations.

He holds a B.S. in management information systems from American International College and a continuing education certificate from Worcester Polytechnic Institute in UNIX administration and C++ programming. Mark is also a private pilot. On weekends, one can usually find him flying through the New England skies.

Trent Fitz - CMO at NIMBOXX

Trent is responsible for brand strategy, product marketing and demand generation at NIMBOXX. Trent has more than 18 years of marketing, product management and business development experience in cloud computing, virtualization and network security. He was most recently vice president of worldwide marketing at SailPoint Technologies, the leading provider of enterprise and SaaS-based identity management solutions. Prior to that, Trent served as vice president of marketing for Trustwave, which he joined through the acquisition of Mirage Networks. Trent enjoys living an active lifestyle and supports numerous local and global philanthropic organizations that raise awareness and contributions through sporting events and endurance challenges.

If you’re a reseller that’s been searching for an easy and affordable way to help your clients jump-start their virtualization and private cloud initiatives, look no further. I’m thrilled to unveil our new Fusion Partner Program.

Since I started working at NIMBOXX one month ago as director of channels, I’ve watched NIMBOXX receive an outpouring of support from the partner community. Having been part of the VAR community for a long time, I have a deep understanding of the factors that contribute to success for the customer-partner-vendor triad. It’s typical for technology vendors to focus on the channel only after they realize they can’t manage everything through direct sales. NIMBOXX is far ahead of the curve in this respect, having recognized from the beginning that selling through channel partners is key to delivering the NIMBOXX value to customers around the world.

We’re just beginning our journey in transforming the data center as we know it, and I couldn’t be more proud to lead the charge in enabling a worldwide channel program to make the NIMBOXX vision a reality.

We’re very excited to announce that NIMBOXX was just named a finalist in the Best Converged Infrastructure category of TechTarget's Modern Infrastructure Impact Awards. Organizations selected in this category offer compute, network and storage hardware infrastructure that is significantly improved with software.

This nomination validates the impact our ‘atomic unit’ of the software-defined data center is having on the industry, and underscores the value we're delivering to customers and partners of all sizes, shapes and verticals.

Judges determined the finalists and now it’s up to you to decide the winner through a public vote. Voting is open through October 31, 2014.

Like most companies, we receive our fair share of feedback from our customers. In previous jobs, these conversations were as often negative as they were positive. But here at NIMBOXX, talking to our customers is an absolute pleasure. While we’re definitely not perfect, the feedback we're getting about our data center-in-a-box has been one awesome, positive conversation after another -- and something we're extremely proud of across the company.

It’s one thing for us to say our solutions deliver, but it's another thing to hear it directly from end users. Want to see what all the excitement is about? Take two minutes to watch this customer testimonial video and you'll find out.

As is the case with many emerging technologies, there is plenty of confusion around what we call hyper-converged infrastructure. To put some clarity around the situation, we’ve outlined the three most commonly used terms for this kind of data center system.

As you review these terms, keep in mind the following: 1) they’re generally talking about the same kind of thing but 2) there are significant architectural differences between approaches to that thing, regardless of what you call it.

So let’s start by looking at the three terms:

Hyper-converged Infrastructure (HCI) – This term was coined by Steve Chambers (@stevie_chambers), who currently works for Microsoft. He came up with the term in a blog post that is no longer available, but it included a virtualization platform roadmap that laid out the evolution of systems that were Semi Converged, Fully Converged, Super Converged, and Hyper Converged. Although I don’t have Steve’s original definitions, HCI is commonly understood as a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking, virtualization and other resources in a commodity hardware box supported by a single vendor.

Integrated Computing Platform (ICP) – This term was coined by Mark Bowker (@markbowker) at analyst firm Enterprise Strategy Group (ESG). ESG defines an integrated computing platform (ICP) as a virtual computing infrastructure solution that integrates hardware and software components into a single consumable IT system. ICPs combine independent pieces of IT infrastructure that are normally operated separately through policy and common functionality to form simplified computing platforms targeted at virtualized and cloud environments.

Server SAN – According to Stu Miniman (@stu) at Wikibon, a Server SAN is a combined compute and pooled storage resource comprising more than one storage device directly attached to multiple separate servers. Communication between the direct-attached storage (DAS) devices occurs via a high-speed interconnect (such as InfiniBand or low-latency Ethernet), with coherency managed by the solution's software. Server SAN multi-protocol storage can utilize both spinning disk and flash storage. Ideally, a Server SAN is configured in enterprise applications to ensure high availability.

So we can split hairs, and the people who coined the terms may disagree (they’re invested in their own terms), but for the sake of others trying to sort this out, it’s fair to say these three terms effectively mean the same thing.

At NIMBOXX, we chose to use the term hyper-converged infrastructure because (1) the solutions are an evolutionary extension of what is known as converged infrastructure and (2) we feel it best represents the intent of the technology we’ve developed and the direction most enterprises are heading today.

The differences between a hyper-converged system and servers with a bunch of disks are engineering and software. Hyper-converged solutions leverage improvements at the storage controller software layer to allow these systems to scale out. The more appliances you add, the greater the performance and capacity. Instead of scaling up by adding more drives, memory, or CPUs, you scale out by adding more appliance modules.
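To make the scale-out idea concrete, here's a minimal sketch. The per-appliance numbers are hypothetical (the IOPS figure echoes the 180,000 mentioned elsewhere in this post, but none of these are NIMBOXX specifications), and the model assumes roughly linear scaling, ignoring replication and clustering overhead:

```python
# Hypothetical per-appliance resources -- illustrative only, not vendor specs.
PER_NODE = {"iops": 180_000, "capacity_tb": 12, "vcpus": 32}

def cluster_totals(nodes: int) -> dict:
    """Aggregate resources for a scale-out cluster of identical appliances.

    Assumes roughly linear scaling; real clusters lose some capacity and
    performance to replication and coordination overhead.
    """
    return {k: v * nodes for k, v in PER_NODE.items()}

for n in (1, 2, 4):
    t = cluster_totals(n)
    print(f"{n} node(s): {t['iops']:,} IOPS, {t['capacity_tb']} TB, {t['vcpus']} vCPUs")
```

The point of the model: with scale-up you replace or upgrade a single box and eventually hit its ceiling; with scale-out each added module contributes its full share of compute, capacity and I/O.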

In addition to the simplified architecture, there's a simplified administration model. The hyper-converged systems are managed via "a single pane of glass." Instead of having a set of applications and a team to manage your storage array, a team to manage virtualization, and a team to manage the server hardware, one team (or in some environments one person) can manage the complete hyper-converged stack.

While the intent of this post is to help clarify some potentially confusing industry buzzwords, it’s important to remember that not all hyper-converged systems are created equal. Vendors tend to fall into two camps: they either depend on another vendor’s stack and play a limited role (e.g., SDS), or they build the entire stack themselves and provide the platform for a full software-defined data center deployment.

The converged infrastructure market is heating up, and while many of the big hardware vendors work to re-engineer their existing solutions, the real innovation is coming from new entrants such as NIMBOXX, whose solutions have been purpose-built from the ground up to address customers' evolving data center needs.

Ovum is the latest analyst firm to validate how our approach to providing a fully-managed infrastructure stack — from bare metal to virtual machine — not only solves customers' technical challenges but makes deployment and scaling easier than ever.

Curious about what NIMBOXX could mean for your organization? Download the Ovum report today.

The SMB market opportunity is huge. Currently, SMBs represent 44% of total IT spend worldwide, a figure that Gartner predicts will cross the trillion-dollar mark by 2015. That's some serious cash -- yet it's a market that traditional storage and virtualization players have practically ignored. Just because these businesses don’t have 10,000 employees doesn’t mean they don’t have sizable data center needs. In fact, the data centers in these organizations rival what many large enterprises had just a few years ago. That's where we come in. The NIMBOXX data-center-in-a-box is exactly what these businesses are looking for, easing the pain of going virtual without the costly, fragmented, proprietary solutions that have dominated the market until now.

Today we're proud to announce that another innovative company has taken the step toward the software-defined data center, using NIMBOXX. Professional Civil Process (PCP), a leading Texas process server and trusted partner to the legal community for more than 35 years, has deployed NIMBOXX hyper-converged infrastructure technology to virtualize PCP’s data center. They selected NIMBOXX for its high performance, ease of use and a team that's “a pleasure to work with,” not to mention the “. . . icing on the cake that we won’t spend a dime on hypervisors from VMware or Microsoft.” For more info, read the press release here.

PCP turned to the NIMBOXX hyper-converged technology, which includes server, storage, networking and security in a single system. Unlike other hyper-converged technologies, the entire software stack was developed by NIMBOXX, and required no integrations or additional license costs. It also delivered an impressive 180,000 IOPS and provided the failover capabilities required by PCP.

“There are many small businesses like us that have the performance and availability demands of larger enterprises, but don’t have the budget or staff to manage the cost and complexity of traditional solutions,” said Rick Keeney, president and founder of PCP. “NIMBOXX is changing the game. Performance is off the charts, the systems are simple to use, and the team is a pleasure to work with. It’s icing on the cake that we won’t spend a dime on hypervisors from VMware or Microsoft.”

Rarely a day goes by that someone doesn’t ask me about IOPS, so I figured it’s time to put pen to paper and share a few thoughts on the subject.

According to Wikipedia, IOPS (Input/Output Operations Per Second, pronounced eye-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDDs), solid-state drives (SSDs), and storage area networks (SANs). I'll add that an input is a write to a storage device, and an output is a read from it.

At the highest level, IOPS has become one of the most common measures of storage system performance. But is it truly an accurate prediction of real world performance?

In many cases, yes; in others, no. First off, not all storage performance metrics are good predictors. Take the maximum transfer rate of a spinning hard drive: it measures how fast that drive can read sequential data. By sequential, I mean all of your data is perfectly lined up on the disk, so the time to read it (reads are generally faster than writes) is simply how long it takes to spin across all of your data. Which is fast. In the real world, however, data is almost always scattered randomly across the disk. A real-world test would read here, then read there, write here and write there. That kind of test yields results much closer to what you'll actually experience, and those results are much slower than the drive's maximum transfer rate.
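A rough back-of-the-envelope calculation shows how large this gap is. The figures below are typical assumptions for a 7,200 RPM drive, not measurements from any particular product:

```python
# Assumed figures for a generic 7,200 RPM hard drive -- illustration only.
avg_seek_ms = 8.5                          # average seek time
rotational_latency_ms = 60_000 / 7200 / 2  # half a rotation, in milliseconds
transfer_ms_per_4k = 0.03                  # time to actually move one 4 KB block

# Each random 4 KB read pays the full seek + rotational latency penalty:
ms_per_random_io = avg_seek_ms + rotational_latency_ms + transfer_ms_per_4k
random_iops = 1000 / ms_per_random_io

# Sequential reads largely skip that penalty and run at the sustained rate:
sequential_mb_per_s = 150                  # assumed sustained transfer rate
sequential_4k_iops = sequential_mb_per_s * 1024 / 4

print(f"Random 4K IOPS:     ~{random_iops:.0f}")       # on the order of ~80
print(f"Sequential 4K IOPS: ~{sequential_4k_iops:.0f}")  # tens of thousands
```

Under these assumptions the same drive delivers on the order of a hundred random IOPS but tens of thousands of sequential ones, which is exactly why a benchmark's access pattern matters so much.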

So IOPS can be an indicator of real-world performance, but it’s no guarantee.

An IOPS number does not tell us whether the data was sequential or random, and we know that matters. It also does not tell us the size of those reads and writes. A flood of small writes will choke most storage devices. A vendor could legitimately claim crazy-high IOPS by measuring only giant sequential reads, but in almost all real-world use cases that number would provide very little indication of the solution’s performance in your unique infrastructure. When in doubt, ask your vendor. I would be very suspicious of a high I/O number by itself.
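The relationship between IOPS, block size and throughput is simple arithmetic, and it shows why block size can make an IOPS claim look arbitrarily good. A quick sketch (the 500 MB/s bandwidth figure is an assumption for illustration, not a measurement):

```python
def iops_from_bandwidth(bandwidth_mb_s: float, block_kb: float) -> float:
    """IOPS a bandwidth-limited device can sustain at a given block size.

    throughput = IOPS x block size, so IOPS = throughput / block size.
    """
    return bandwidth_mb_s * 1024 / block_kb

bandwidth = 500  # MB/s -- assumed sequential bandwidth for illustration

for block_kb in (4, 64, 1024):
    print(f"{block_kb:>5} KB blocks -> "
          f"{iops_from_bandwidth(bandwidth, block_kb):>9,.0f} IOPS")
```

The same 500 MB/s pipe yields 128,000 IOPS at 4 KB blocks but only 500 IOPS at 1 MB blocks, so an IOPS figure quoted without a block size (and without saying random vs. sequential) tells you very little.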

IOmeter is one of the most common, freely available benchmarking tools. Because of the workload-specific variation in real-world I/O behavior, IOmeter has a series of tests that can be run. Some are read-only and some write-only, some are random and some are sequential, different block sizes, different queue depths, etc. If you know the characteristics of I/O in your environment, you can approximate performance by selecting the appropriate benchmarking characteristics. IOmeter also lets you select an “all-in-one” test, which runs every variation and provides the results. If you can’t accurately approximate your real-world I/O, the next best thing is getting the overall picture by running an all-in-one test so you can see where the highs and lows are and at least be aware of those as you’re making decisions. While most vendors configure the benchmarking tests to suit their own strengths, any I/O numbers you see published by NIMBOXX are numbers that were attained in the all-in-one test.

In short, IOPS is an easy-to-understand metric that can be used to compare storage device performance and “possibly” predict real-world results. I say possibly because there are many different ways to calculate IOPS. Some produce numbers that accurately predict real-world performance; others are just fluff from the marketing department.

About NIMBOXX

NIMBOXX lets companies of all sizes realize the benefits of the proven hyper-scale data center model developed by the largest public cloud providers. NIMBOXX delivers a virtualization platform that converges servers, storage, networking and security. The solution is simple, powerful and affordable. We’ve created the building block of a software-defined data center, blurring the line between public cloud computing and private IT infrastructure.