Recently I’ve been living a double life. By day, a mild-mannered functionary for Forrester Research, helping I&O professionals cope with the hurly-burly of our fast-paced world. By night, I have been equipping myself with an iPhone and an iPad and trying out any other mobile devices I can get my hands on, including the Dell Streak, Android phones, and the incredibly appealing new Apple MacBook Air. While my colleague Ted Schadler has been writing about these devices from a more strategic perspective, I wanted to see what the daily experience felt like and, at the same time, get a perspective from our I&O customers about their experiences.

So, the first question: is the mobile phenomenon real? The answer is absolutely yes. While the rise of mobile devices is a staple of every vendor’s strategic pitch, it is also a genuine trend. In conversations with I&O groups, I have been polling them about mobile devices in their companies, and the feedback has been largely the same – employees are buying their own consumer devices and using them for work, forcing I&O, security, and email/collaboration application owners to support them, often well outside of plan. Why can’t IT groups “just say no”? Because IT in rational companies is fundamentally in the business of enabling the business, and the value and productivity unlocked by these devices are too great to pass up.

There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.

The pressure began with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalated with Cisco’s entrance into the server market. IBM responded by continuing to invest in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy: IBM’s plans were now heavily dependent on a vendor with whom it had an excellent but non-exclusive relationship, and whose acquisition by another player could severely compromise its product plans. Hence the acquisition. Now that it owns BNT, IBM can build on that excellent edge network technology to advance its converged infrastructure strategy without hesitation.

I recently spent a day with IBM’s x86 team, primarily to get back up to speed on its entire x86 product line, and partly to realign my views after spending almost five years as a direct competitor. All in all, it was time well spent, with some key takeaways:

IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the signals were easy to misread. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.

Increased investment – IBM appears to have significantly ramped up its investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, reversed significant deficits versus equivalent HP products. Other advances followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.

Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while representing a small part of its total volume, gave the company immediate visibility as an innovator in the rapidly growing niche of hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.

Look for the new “Community” tab on the Forrester site. This is your access to a community of like-minded peers. You can use the community to start and participate in discussions, share ideas and experiences, and help guide Forrester Research for your role. The success or failure of this effort depends largely on you. The analysts will participate, but in this forum they carry less weight than you, the Forrester I&O user. So help us, help your peers, and help yourself by making this an active and thriving online community. Some thoughts to get you going: Have you had any particularly good or bad experiences with products, solutions, or technology? What key enablers are you looking at as you transform your data centers and operations? What does “cloud” mean to you? Any thoughts on vendor management and negotiations? That is just a stream-of-consciousness sampling. Make the community yours by adding your own topics.

I attended the Intel Developer Forum (IDF) in San Francisco last week, one of the premier events for anyone interested in microprocessors, system technology, and, of course, Intel itself. Among the many wonders on display, including high-end servers, desktops and laptops, and presentations on everything cloud, my attention was caught by a pair of small wonders – very compact, low-power servers paradoxically targeted at some of the largest hyperscale web-facing workloads. Despite being nominally aimed at an overlapping set of users and workloads, the two servers, the Dell “Viking” and the SeaMicro SM10000, represent a study in opposite design philosophies for scaling infrastructure to high-throughput web workloads. At one end of the spectrum sits adherence to an emerging standardized design, using Intel’s reference architectures as a starting point; at the other, a complete refactoring of the constituent parts of a server to maximize performance per watt and physical density.

It was reported that sometime over the past weekend the volume of tweets and blogs about VMworld exceeded Plankk’s limit (postulated by blogger Marvin Plankk, now confined to an obscure institution in an unidentified state with more moose than people) and quietly coalesced into an undifferentiated blob of digital entropy, the result of too many semantically identical postings online at the same time. This leaves the field clear for me to write the first VMworld post of the new cycle.

This year was my first time at VMworld, and it left a profound impression – while the energy and activity among the 17,000 attendees, exhibitors, and VMware itself would have been impressive in any context, the underlying evidence of a fundamental transformation of the IT landscape was even more so. The theme this year was “clouds,” but to some extent I think the themes of major shows like this are largely irrelevant. The theme serves as an organizing principle for the communications and promotion of the show, but the technology content, particularly as embodied by its exhibitors and attendees, is based on what is actually being done in the real world. If the technology were not already there, the show would have to find another label. Keeping the cart firmly behind the horse, this activity is being driven by real IT problems, real investments in solutions, and real technology being brought to market. So to me the revelation of the show was not that VMware called it “cloud,” but that the world is really thinking “cloud.”

I recently had an opportunity to spend a day in three separate meetings with infrastructure & operations professionals from three of the top six financial services firms in the country, discussing topics ranging from long-term business and infrastructure strategy to specific likes and dislikes regarding their Tier-1 vendors and their challengers. The day’s meetings were neither classic consulting nor classic briefings, but rather free-form discussions, guided only loosely by an agenda and, despite possible Federal regulations to the contrary, completely devoid of PowerPoint presentations. As in the past, these in-depth meetings provided a wealth of food for thought, with interesting and sometimes contradictory indicators from the three groups. There was a lot of material to ponder, but I’ll try to summarize some of the high-level takeaways in this post.

Servers and Vendors

Between them, these companies own in the neighborhood of 180,000 servers and probably purchase 30,000 to 50,000 servers per year across various procurement cycles – enough to turn over the installed base roughly every four to six years. In short, these are heavyweight users. One thing that struck me in the course of the conversations was their Machiavellian view of their Tier-1 server vendors. While they regard these vendors as key partners, the majority of the group also devoted a substantial amount of time to keeping them at arm’s length through aggressive vendor management techniques, such as deliberately splitting procurements between competitors. They understand their suppliers’ margins and cost structures well, and are committed to driving hardware supplier margins to “as close to zero as we can,” in the words of one participant.

There has been turmoil and angst in the open source community of late over Oracle’s decision to cancel OpenSolaris. Since this community can be expected to react violently anytime something is taken out of open source, the real question is whether the decision has any impact on real-world IT and operations professionals. The short answer is no.

Enterprise Solaris users, be they small, medium, or large, are using it to run critical applications; and as far as we can tell, uptake of OpenSolaris, as opposed to Solaris supplied and sold by Sun, was very low in commercial accounts, aside from a possible surge in test and dev environments. The decision to take Solaris into the open source arena was, in my opinion, fundamentally flawed, and Oracle’s subsequent decision to reverse it is eminently rational. Oracle’s customers are almost certainly not going to run their companies on an OS built and maintained by an open source community (even the vast majority of corporate Linux use is via a distribution supported by a major vendor under a paid subscription model), and Oracle cannot continue to develop Solaris unless it has absolute control over it, just as is the case with every other enterprise OS. In the same vein, unless Oracle can expect to be compensated for its investments in future Solaris development, it has little motivation to continue investing heavily in Solaris.

Historically, Dell’s positioning against its two major competitors for high-value enterprise business, particularly business involving complex services and the ability to deliver deeply integrated infrastructure and management stacks, has been that of an also-ran. Competitors looked at Dell as a price spoiler and a channel for standard storage and networking offerings from its partners, not as a potential threat to the high ground of delivering complex, integrated infrastructure solutions.

This comforting image of Dell as a glorified box pusher appears to be coming to an end. When my colleague Andrew Reichman recently wrote about Dell’s attempted acquisition of 3Par, it prompted me to take another look at Dell’s recent pattern of investments and the series of announcements it has made around delivering integrated infrastructure, with a message and solution offering that looks aimed squarely at HP’s Virtual Connect and IBM’s Virtual Fabric franchises.