Although it’s a shadow of the R&D powerhouse that brought us the transistor, laser, fiber optics and evidence of the Big Bang, Bell Labs is still around and planning for a future beyond the horizon of today’s technology. The latest evidence of the organization’s forward thinking comes courtesy of a new book by Marcus Weldon, the President of Bell Labs and CTO of parent company Alcatel-Lucent. Despite its cryptic title, The Future X Network isn’t about a top-secret project on the paranormal, but a look at where exponential trends in technology development, personal and device connectivity, and data collection and analysis might take business, the global economy and society at large.

Weldon, who is in charge of Alcatel-Lucent’s technical strategy and product direction, said his motivation for writing the book was a perception that the ICT industry was too focused on technology silos and not fully seeing or acknowledging the scale of changes wrought by a nexus of global, high-speed connectivity, billions of connected devices (IoT), cloud services and non-stop data streaming, collection and big data analytics. It’s a confluence of changes he contends will lead to a “new technological revolution” analogous to the agricultural or industrial epochs of prior centuries. He says our looming omni-connectivity is sufficiently profound to be disruptive on multiple levels, from the way people live to how businesses operate.

It’s conventional wisdom that the Internet and mobile devices have transformed society; however, the changes so far are merely the precursor to bigger things ahead. Weldon sees several elements fueling a new technological era:

Instrumenting our world via connected, intelligent objects, i.e. IoT, is just beginning, meaning that the number of devices networks must accommodate will be orders of magnitude higher than today.

This translates to thousands of people and millions of machines per wireless cell area. Add in streaming video and rich data feeds and the network must massively scale both bandwidth and connection handling.

The growth will happen very fast. Within five years network operators and cloud service providers must be ready for an explosion of clients and data.

Source: Marcus Weldon, The Future X Network

The book identifies three elements of what Weldon admits could turn out to be an inflection point in already exponential traffic growth.

The foundation for change will be a “new cloud-integrated network” to support instrumenting every aspect of our world.

The catalyst for change is IoT and “the vast array of networked machines that will digitize our world in ways that have previously only been imagined.”

The final, but necessary, piece is the technology to make sense out of a cacophony of data. As Weldon sees it, digitization produces value in the form of efficiency only through “the ability to capture and analyze the data streaming from these devices and effectively turn big data into the smallest amounts of actionable knowledge, what we refer to as augmented intelligence.”

It’s not surprising that someone from the communications industry sees the key change agent as connectivity, but like any good scientist, Weldon has data to support the thesis. In Kurzweilian fashion (cf. The Singularity Is Near), Weldon’s book is full of graphs illustrating exponential technological growth, whether in the speed of network connections, CPU performance or total data being analyzed, and like any good futurist, he sees no reason the party won’t continue. Indeed, Weldon says we will need, and find ways to achieve, 100 times today’s network capacity by 2020 in order to achieve the vision of constantly connected, automated and assistive intelligent devices.

Source: Marcus Weldon, The Future X Network

Most of the book is devoted to detailing the technological and architectural advancements Weldon believes are necessary, and is optimistic will happen, over the next decade. For example, he says the IoT-fueled growth in connected clients and data requires a fundamentally different network design that can scale not just capacity but, more importantly, the control (signaling) plane. Such massive scaling requires a highly distributed, virtualized and extremely low latency, i.e. millisecond-level, control layer that’s much faster than LTE (which has signal latency in the tens of milliseconds).

The transient nature of connected devices like cars or drones, and of the data itself (think video surveillance, security incidents, streaming entertainment), will require a dynamic network management backplane with distributed processing capability. Weldon calls the computational design an “edge cloud,” in which real-time digital automation is placed close to end users or devices, “providing local delivery of services while maintaining the critical aspect of global reach.”

Source: Marcus Weldon, The Future X Network

Like any work of crystal ball gazing, The Future X Network is less about offering prescriptive guidelines and more an invitation to reflective, spirited debate. The narrowly targeted white papers that are a research lab’s bread and butter cover only slivers of the host of related but interdependent projects Weldon sees as necessary to deliver the services a world of connected people, machines and objects will require. Indeed, Weldon says his motivation was to describe a comprehensive technological agenda as a way to ignite industry discussion. By this measure, his book is successful and should be background reading for every technology leader and researcher looking to ride, and better yet exploit, a tsunami of technology-fueled change that Weldon believes will reshape business and society by “enabling the augmented intelligent connection of everything and everyone.”

The Dell-EMC amalgam makes a lot of sense through the close-focus lens of short-term synergies, protecting established enterprise customers and horizontally filling out a legacy product portfolio. Strategically, however, it looks more like an act of reactive weakness, if not desperation. While the combined entity will be a powerhouse of traditional enterprise IT products, it does nothing to address the growing disruptive threat from public cloud services like AWS. Indeed, the deal is a typical example of established market leaders focusing on narrower and narrower slices of high-end customers as a disruptive technology erodes their entry-level and mainstream base.

As I write in this column, the deal happens against the backdrop of a slowing server market, where most of the growth is happening among second-tier vendors and ODMs, and of sales declines for large enterprise storage arrays even as overall storage capacity continues to skyrocket. Both trends can be traced to the increasing popularity of cloud services. Considered in this light, the Dell-EMC deal looks like a classic reaction to disruptive technology.

There is near universal public cloud acceptance and swelling adoption by large enterprises, as evidenced by GE’s statement at re:Invent that it would eliminate 90% of its data centers and move 9,000 applications to AWS. As a reminder, Clayton Christensen’s theory of disruptive innovation “describes a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.” The key element of disruptive versus sustaining innovation is the rate of technological change: not only is it much greater, but disruptive improvements occur faster than the rate at which customers can absorb new technology. Thus, products that are initially only suitable for low-end markets rapidly mature and add features that make them suitable for mainstream, and eventually even high-end needs. This process is incredibly destabilizing to established vendors.

Source: Clayton Christensen

As I detail in the full column, the Dell-EMC combination smacks of two enervated companies merging to capture a greater share of a stagnating market: a classic case of horizontal integration. While I agree with the early consensus that the combination will be fruitful for both companies (ex-VMware) in the short term, it doesn’t address the long-term threat that cloud services pose to incumbent IT vendors. Read on to understand why.

Box has always tried to differentiate itself from the plethora of cloud file sync-and-share startups by focusing on enterprise customers and their requirements. This led the company down the path of providing greater management control over users and content. At the company’s recent Boxworks developers conference, Box made clear its larger ambitions of becoming the content management platform in the cloud for all comers: enterprises, third-party app developers and their customers. In this column, I analyze the strategy and its implications.

Most of the major announcements at Boxworks, like the IBM and app integrations and the better content management and security features, make more sense when seen in light of the larger platform strategy. Although not as visible as some of the UI and content discovery features Constellation bemoaned, on a platform such end-user features will often be handled by apps plugging into the Box infrastructure. As fully envisioned by Jeetu Patel (Box’s new CSO and head of Platform), Box slips into the background for many users: they don’t necessarily log in to a Box account and aren’t directly using a Box app.

Source: Constellation Research

Boxworks demonstrated a company completely focused on building a platform, and that means focusing on backend infrastructure and service APIs. Analogies abound, including Stripe for payment workflows, Twilio for communication services, and even sharing-economy favorites like Airbnb and Uber that provide a common infrastructure to coordinate thousands of individual service providers.

The beauty of a content management and sharing platform is that users increasingly share content across different apps. For example, someone at one firm may create and share a Word document that business partners or customers consume through apps that may render it in different ways: as an annotatable PDF, form or raw text. Box sits in the background transparently applying user and device security policies and rights management and supplying content discovery and categorization features, while the end user interacts with a familiar application that’s part of their normal workflow.

Source: author

I have previously criticized Box and its bigger pure-play competitor, Dropbox, as being one-trick ponies that deliver a feature (cloud-based file sync and share), not a product. I later noted that Box wisely used its IPO cash to buy time for a new strategy; however, at the time I still didn’t see the competitive moat. Instead of a moat, I should have been thinking of bridges. Box’s evolution into a content management platform clearly demonstrates that it recognized the problem: cloud file sharing is a baseline commodity feature built into every major online platform. As the column discusses, Box’s platform strategy is both compelling (plenty of upside) and risky (it’s essentially betting the company on the strategy’s successful execution). Read the full column for details.

I recently had an accident that entailed a lot of pain and emergency surgery. Indeed, it was the most significant medical trauma I have ever had to endure and required a lengthy, five-night hospital stay. Unfortunately, my closest family and primary support group are my parents, whose ability to get around is quite limited; still, they managed to come for the surgery. After returning to my room I sent them home to get some personal items, but after the second trip it was clear that the combination of hospital-area traffic and long walks through the hospital complex from the parking lot to my room was extremely difficult for them. We agreed it made more sense for them to stay home and just talk a couple of times a day on the phone. In sum, I was left to the company of doctors, nurses, therapists, and the Net, since my iPhone and iPad with keyboard were among the items brought from home. Little did I realize how much comfort, sympathy and support these would bring. There’s a larger lesson here for companies about how to maximize internal knowledge sharing and collaboration between groups and individuals, but first the backstory.

Source: Pixabay

Despite the seeming isolation, I never felt alone. Within minutes of posting a picture of my hospital room to Facebook and explaining the context, I had a dozen responses, and they kept piling up for several days. Most were the “get well soon” greeting card variety, but the offers of support and empathy were extremely comforting. What turned out to be the most meaningful came from an unlikely source: a former work colleague whom I hadn’t seen in years. We were once fairly close, but that was over a decade ago. Still, he offered to drop by one evening after work to chat and we quickly re-bonded. Long story short, I probably would have been discharged to a physical rehab facility instead of home if not for his help: driving me home, making the furniture walker-friendly, bringing over an older chair of his that was much more suitable for someone with my physical limitations, and setting up some bathroom railings, all the things I couldn’t do on my own. I had several other friends from afar offering whatever support they could provide, sending pick-me-up videos and generally making the whole sorry event much less isolating and depressing than it could have been.

SoMoClo Turns Even a Hospital Room Into an Office

The nexus of social, mobile and cloud technologies, what some term SoMoClo, also allowed me to handle many other work and personal issues: getting projects and meetings rescheduled, responding to inquiries and questions, ordering necessary supplies for home delivery via Amazon, getting doctor contacts and follow-up visits entered in my phonebook and calendar, coordinating transportation in real time with my friend via messaging, and more. Indeed, the technology allowed me to stay connected and perform any kind of knowledge work without missing a beat, even as my body was broken and my physical mobility badly crippled.

So far, this is just another story of how easily we take for granted the life-altering benefits of these SoMoClo technologies, whose effects will only magnify over time. Yet there’s a lesson here for business: you’re almost certainly not taking full advantage of mobile technology and social collaboration, meaning you’re not operating at maximum efficiency and are wasting a lot of your internal intellectual capital.

Internalizing Team Collaboration, Knowledge Sharing

Enterprise social collaboration is nothing new, but due to personal and organizational inertia it has primarily been internalized by small companies or individual departments: mass adoption across large organizations is much less common. In most large companies, email and file shares remain the primary tools for information distribution; indeed, many employees treat their mailbox as a giant file archive cum to-do list. Products like Asana, Glip and Slack have been around for years, applying the best of social communication paradigms (news feeds, comment threads, linked documents, app integrations) to an enterprise context. Likewise, mobile technology is old hat, but here too individuals, not enterprises, have led the way in changing habits and adopting new tools. Although a CDW survey of IT decision makers found that about half of respondents have one or two custom apps, a Mobiquity poll of actual employees showed that more than 40% reported low satisfaction with corporate apps and 58% abandon them altogether.

Source: CDW

This isn’t the place for yet another review of team collaboration software nor an encomium to the wonders of mobile tech. However, both enable fundamental changes to the way people get work done and there are some lessons from my hospital experience on how remodeling your work processes with new tools can yield significant, measurable improvements.

For example, by aggregating work communications, comment streams, calendars, documents and even relevant external apps into a single shared news feed, organizations can reduce email tag, inbox and file share clutter and scheduling headaches, with a unified view of project or workgroup information that allows new team members to quickly come up to speed. A 451 Research report on Glip puts it this way, “Glip pulls work conversations into a manageable whole, or rather, it focuses on the conversations that get work done, ideally replacing reliance on the inbox.”

Source: Mobiquity

As my hospital experience illustrates, social collaboration also facilitates knowledge discovery and sharing within an organization. In my case, it was someone in my network volunteering to help; in a business context, it can easily expose knowledge, expertise and information needs to a broad audience, most of whom you may not actually know. Whether the project is working with a new vendor, developing some code or writing content, social collaboration, when internalized into an organization’s work culture, can identify others who have solved the same or similar problems before and prevent reinventing the wheel. Indeed, as I found out, you often have no idea where help can come from.

The nexus of SoMoClo technologies continues to dramatically change people’s behavior, in both personal and consumer contexts, and to disrupt established companies and business models. Yet the application of mobile and social technology in business has been driven by employees, not IT or executives (see consumerization and BYOD), and has been slow to transform business processes and cultural mores. Doing so requires leadership from the top and incentives for employees to embrace better alternatives to old habits, but companies that systemically move on from PCs, email and Office docs will gain a competitive advantage through higher efficiency, agility and flexibility.

The transition of corporate infrastructure into software-defined private clouds has transformed IT automation from an aspirational goal into an existential imperative. The move from treating servers, storage and network gear as carefully managed individual units to interchangeable resources (the familiar ‘pets to cattle’ metaphor) has spawned the DevOps movement and a host of related automation tools that allow infrastructure to be managed as code, not unique physical instances. Indeed, public cloud services like AWS, Azure and Google, which run millions of systems, would be impossible without extreme automation. Yet using the same set of tools across several sources of infrastructure is challenging, since each cloud platform has its own management console and interfaces, and most organizations now use both internal systems and public IaaS.

Amazon encapsulates and exposes its automation tools as APIs, and although it offers several management and application orchestration services like [CloudFormation] and [OpsWorks], these only work within AWS. One of the most popular DevOps automation packages is Chef, the open source project that has gone commercial. Over [half the Fortune 500] use the supported version of Chef on their internal infrastructure and the company has 750 customers in all. Many of these organizations also use AWS, so they need a way to integrate infrastructure automation between the two. Since it is also available on the major IaaS platforms, Chef is an ideal solution.

Chef Deployment Decision: On-Premises or In-Cloud

A Chef deployment encompasses three elements with an optional fourth:

Chef server (control hub for one or more application environments)

Workstations (used to develop configuration recipes)

Nodes (the systems running a particular application)

Chef Analytics (optional: monitoring and reporting system that logs, audits and reports upon Chef server activity)

Organizations that have already made the DevOps transition to infrastructure as code will most likely have all four Chef elements installed on-premise. For them, the goal is adding AWS nodes to an existing workload pool. In contrast, those just starting out with infrastructure automation will need a Chef server. There are three options:

self-managed using a [pre-packaged download] and private server

self-managed on AWS using either an [AMI from the AWS Marketplace] or a manual install of open source Chef onto EC2

SaaS using the [hosted Chef service]

We’ll focus on the pure AWS solution, where all elements (Chef server, analytics, workstations, nodes) are EC2 instances, although the developer workstations could just as easily be standalone PCs. In the basic workflow for controlling cloud resources, developers use pre-packaged cookbooks and custom code to build configuration recipes that are uploaded to the Chef server, which then directs the Chef client to deploy and configure cloud-resident nodes.
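Recipes themselves are short Ruby files written in Chef’s resource DSL. As a minimal, illustrative sketch (the package, template and service names are hypothetical examples, not from the column), a recipe a workstation might upload to the Chef server could look like:

```ruby
# recipes/default.rb -- minimal illustrative Chef recipe (hypothetical example).
# Declares the desired state; chef-client converges each node to it.

# Install the nginx package via the platform's package manager
package 'nginx'

# Render the main config from a template shipped in the same cookbook
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner  'root'
  group  'root'
  mode   '0644'
  # Reload the service whenever the rendered file changes
  notifies :reload, 'service[nginx]'
end

# Ensure the service starts now and on boot
service 'nginx' do
  action [:enable, :start]
end
```

Because recipes declare state rather than steps, the same file can be applied repeatedly and across many nodes, which is what makes the "infrastructure as code" model scale.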

Connectivity

Organizations with an existing Chef deployment can get access to and control EC2 nodes in a couple of ways. The best option, particularly for those with multiple cloud workloads, perhaps spread across different availability zones, and a commensurately deeper understanding of AWS, is a VPC: a private network within AWS reached over an encrypted connection from your own data center. Using a VPC, the EC2 nodes sit on a private subnet, so the Chef server can access them just like any other internal server.

Another option is to access EC2 instances over SSH using the Chef Knife CLI. Knife can manage nodes, cookbooks and recipes, user roles, Chef client installations [and much more]. Controlling EC2 instances requires installing the Knife EC2 plugin on Chef workstations and opening an SSH port in your AWS configuration ([step-by-step details here]). Once configured, developers can start, stop and list EC2 instances, configure and run new instances as Chef nodes, and apply Chef recipes to one or more nodes.
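As a rough sketch of that cycle, the commands below use the knife-ec2 plugin’s documented verbs, but every AMI ID, key pair, instance ID and run-list value is a placeholder, and the exact flags may vary by plugin version:

```shell
# Install the EC2 plugin on the workstation (ships as the knife-ec2 gem)
chef gem install knife-ec2

# List the EC2 instances visible to your AWS credentials
knife ec2 server list

# Launch an instance, bootstrap it as a Chef node and apply a run list
# (AMI, flavor, key file and recipe name are all placeholders)
knife ec2 server create \
  --image ami-0123456789abcdef0 \
  --flavor t2.micro \
  --ssh-user ubuntu \
  --identity-file ~/.ssh/my-key.pem \
  --run-list 'recipe[nginx]'

# Terminate the instance and purge it from the Chef server
knife ec2 server delete i-0123456789abcdef0 --purge
```

The `--purge` flag matters: without it the terminated instance lingers as a stale node object on the Chef server.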

Running Chef Server on AWS

Cloud natives that only want to control AWS workloads can stop right here, since they probably don’t even need a Chef server. AWS includes OpsWorks, an application management service based on Chef and fully compatible with Chef recipes, as a standard feature, meaning you can apply Chef recipes to any EC2 instance. However, it doesn’t provide the flexibility of hosted or self-managed Chef to control resources across clouds; for that you need to run the Chef server itself.

The most convenient option is a [pre-packaged AMI] from the AWS Marketplace that takes care of the porting and installation details and comes as a fully supported service. Of course, convenience and support come at a price, in this case about a 25% markup to the base EC2 rate for the Chef server instance. Alternatively, you can [download] open source Chef and install it on your choice of Ubuntu or Red Hat servers (of course, you’ll need to install these as well, but it’s easy to [import an existing VM image] using the AWS CLI).
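For the self-managed route, the VM-image import mentioned above can be done with the AWS CLI. This sketch assumes an S3 bucket you control and the prerequisite `vmimport` IAM service role already created (both the bucket and file names are placeholders):

```shell
# Upload the existing VM disk image (e.g. exported from VMware) to S3
aws s3 cp chef-server.vmdk s3://my-import-bucket/vms/chef-server.vmdk

# Ask EC2 to convert the disk image into an AMI; requires the
# 'vmimport' IAM service role to be set up beforehand
aws ec2 import-image \
  --description "Chef server VM" \
  --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-import-bucket,S3Key=vms/chef-server.vmdk}"

# The import runs asynchronously; poll its progress with the
# task ID returned by the previous command (placeholder shown)
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0abc123def456789a
```

Once the import task completes, the resulting AMI can be launched like any other EC2 instance, with Chef installed from the [download] above.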

Wrap up

Integrating Chef with AWS is relatively easy and extends Chef’s powerful capabilities into the cloud. Of course, Chef isn’t the only configuration management alternative, so organizations embarking on an infrastructure automation strategy are wise to evaluate other options like [Ansible], [Puppet] and [SaltStack]. Each works with all the major IaaS vendors and can provide a common platform for consistent application/system configuration, deployment and lifecycle management.

Now that both Amazon and Apple have made streaming an included perk of their primary products, I’m wondering how it changes the market. Will enough people see the combo as good enough, ditching NFLX?
Apple's Streaming Strategy Is The Ultimate Magic Trick
buff.ly/2O2QmNM