Thursday, July 7, 2016

Veriflow, a start-up based in San Jose, announced $8.2 million in Series A funding for its work in network breach and outage prevention.

Veriflow said it uses formal mathematical network verification to eliminate change-induced network outages and breaches. The technique was created by a team of computer science professors and Ph.D. students at the University of Illinois at Urbana-Champaign.

The funding round was led by Menlo Ventures and included current investor New Enterprise Associates (NEA).

“The feedback from customers and analysts indicates the market is ready for a new approach to network breach and outage prevention. Our use of mathematical network verification, grounded in data-plane information, gives customers a proactive approach to identifying vulnerabilities before they are exposed to catastrophic problems,” said James Brear, president and CEO of Veriflow. “Veriflow provides a comprehensive view of the network that gives administrators the confidence to make changes without fear of damaging critical services and layers of defense. We’ve spent several years developing our innovative technology, and this funding will enable us to hire key talent, bring our product to market more quickly and expand into new markets.”

Veriflow’s automated approach predicts whether and how network policies will be violated before an incident occurs.

Veriflow exited stealth mode in April 2016 with $2.9 million in initial investor funding from New Enterprise Associates (NEA), the National Science Foundation and the U.S. Department of Defense.

Veriflow is led by James Brear, who was previously CEO of Procera until its successful acquisition in August 2015, along with the company’s founders, who include Fulbright and Alfred P. Sloan fellows and an ACM SIGCOMM Rising Star awardee.

Newly released version 1.3 of Kubernetes brings support for 2,000-node clusters. The new release also improves end-to-end pod startup time, keeping the latency of API calls within a one-second Service Level Objective (SLO).

One new feature is Kubemark, a performance testing tool that detects performance and scalability regressions.

If you think these little snippets of Linux source code might have limited revenue-bearing potential, given that anyone can use them on an open source basis, then consider DockerCon 2016, which was held June 19-20 at the Washington State Convention Center in Seattle. DockerCon is the annual technology conference of Docker Inc., the much-touted San Francisco-based start-up that developed and popularized Docker runtime Linux containers, which are no longer proprietary but are hosted as an open source project under the Linux Foundation. Docker Inc. (the company) is among the rarefied “unicorns” of Silicon Valley: start-ups with valuations exceeding $1 billion based on a really hot idea, but with nascent business models and perhaps limited revenue streams at this stage of their development.

Even with a conference ticket price of $990, DockerCon 2016
in Seattle was completely sold out. Over
4,000 attendees showed up and there was a substantial waiting list. For
comparison, last year, DockerCon in San Francisco had about 2,000 people. The
inaugural DockerCon event in 2014 was attended by about 500 people. The
conference featured company keynotes, technology demonstrations, customer
testimonials, and an exhibition area with dozens of vendors rushing into this
space. Big companies exhibiting at DockerCon included Microsoft, IBM, AWS,
Cisco, NetApp, HPE and EMC.

Punching well above its weight, Docker rented Seattle's Space Needle and EMP Museum complex to feed and entertain all 4,000+ guests on the evening of the summer solstice.

Clearly, Docker’s investors are making a big bet that the company can grow into something more than the inventor of an open source standard.

Why should the
networking and telecom community care about a new virtualization format at the
OS level?

There is a game plan afoot to put Docker at the crossroads of application virtualization, cyber security, service orchestration, and cloud connectivity. Docker enables applications to be packed into a standard shipping container, so that the software inside runs the same regardless of the underlying infrastructure. Compared with virtual machines (VMs), containers launch more quickly. The container includes the application and all of its dependencies. Furthermore, containers make better use of the underlying servers because they share the kernel with other containers, running as isolated processes in user space on the host operating system. The vision is to allow these shipping containers to move easily between servers or between private and public clouds. By controlling the movement of containers, you essentially control the movement of workloads locally and across the wide area network. The applications running within containers need to remain securely connected to data and processing resources wherever the container may be located. Thus, software-defined networking becomes part of the containerization paradigm. Not surprisingly, we are seeing a lot of Silicon Valley’s networking talent move from the established hardware vendors in San Jose to the new generation of software start-ups in San Francisco, as exemplified by Docker Inc.
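The build-ship-run portability described above boils down to a handful of Docker CLI commands. As a minimal sketch (the image tag and registry hostname here are illustrative, not from any specific deployment):

```shell
# Package an application and its dependencies into an image,
# using the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Run it as an isolated process that shares the host kernel;
# the same image runs unchanged on a laptop, a server, or a cloud VM.
docker run -d --name myapp myapp:1.0

# Move the "shipping container" between hosts via a registry:
# tag it for a registry (hostname is hypothetical) and push.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```

Any other Docker host can then pull and run the identical image, which is the portability property that makes container movement, and hence workload movement, a networking concern.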

The Timeline of Significant Events for Docker

Docker was started by Solomon Hykes as an internal project at dotCloud, a platform-as-a-service company based in France and founded around 2011. The initial Docker work appears to have started around 2012-2013, and the project soon grew to become the major focus of the company, which adopted the Docker name. The official launch of Docker occurred on March 13, 2013 in a presentation by Solomon Hykes entitled “The Future of Linux Containers” at the PyCon developer conference. Soon after, the Docker whale icon was posted and a developer community began to form.

In May 2013, dotCloud hired Ben Golub as CEO with the goal of pivoting from the PaaS business to the huge opportunity it now saw in building and orchestrating cloud containers. Previously, Golub was CEO of Gluster, another open source software company, which focused on scale-out storage. Gluster offered an open source, software-based network-attached filesystem that could be installed on commodity hardware. The Silicon Valley company successfully raised venture funding, grew its customer base quickly, and was acquired by Red Hat in 2011.

Within three months of joining Docker, Golub established an alliance with Red Hat. A second round of venture funding, led by Greylock Partners, brought in $15 million, and headquarters were moved to San Francisco. In June 2014, Docker 1.0 was officially released, marking an important milestone for the project.

In August 2014, Docker sold off its original dotCloud (PaaS) business to Berlin-based cloudControl; the operation was shut down earlier this year after a two-year struggle. Other dotCloud engineers credited with work on the initial project include Andrea Luzzardi and Francois-Xavier Bourlet. A month later, in September 2014, Docker secured $40 million in a Series C funding round that was led by Sequoia Capital and included existing investors Benchmark, Greylock Partners, Insight Ventures, Trinity Ventures, and Jerry Yang.

In October 2014, Microsoft announced integration of the Docker engine into its upcoming Windows Server release, along with native support for the Docker client on Windows. In December 2014, IBM announced a strategic partnership with Docker to integrate the container paradigm into the IBM Cloud. Six months later, in June 2015, IBM's Bluemix platform-as-a-service began supporting Docker containers. IBM Bluemix also supports Cloud Foundry and OpenStack as key tools for designing portable distributed applications. Additionally, IBM claims the industry's best performance of Java on Docker: IBM Java is optimized to run twice as fast and occupy half the memory when used with the IBM Containers Service. Moreover, as a Docker-based service, IBM Containers include open features and interfaces such as the new Docker Compose orchestration services.

In March 2015, Docker acquired SocketPlane, a start-up focused on Docker-native software-defined networking. SocketPlane had been founded only a few months earlier by Madhu Venugopal, who previously worked on SDN and OpenDaylight at Cisco Systems before joining Red Hat as a Senior Principal Software Engineer. These SDN capabilities are now being integrated into Docker.

In April 2015, Docker raised $95 million in a Series D round
of funding led by Insight Venture Partners with new contributions from Coatue,
Goldman Sachs and Northern Trust. Existing investors Benchmark, Greylock
Partners, Sequoia Capital, Trinity Ventures and Jerry Yang’s AME Cloud Ventures
also participated in the round.

In October 2015, Docker acquired Tutum, a start-up based in
New York City. Tutum developed a cloud service that helps IT teams to automate
their workflows when building, shipping or running distributed applications.
Tutum launched its service in October 2013.

In November 2015, Docker extended its Series D funding round by adding $18 million in new investment, bringing Docker's total funding to $180 million.

In January 2016, Docker acquired Unikernel Systems, a
start-up focused on unikernel development, for an undisclosed sum. Unikernel
Systems, which was based in Cambridge, UK, was founded by pioneers from Xen,
the open-source virtualization platform. Unikernels are defined by the company
as specialized, single-address-space machine images constructed by using
library operating systems. The idea is to reduce complexity by compiling source
code into a custom operating system that includes only the functionality
required by the application logic. The unikernel technology, including
orchestration and networking, is expected to be integrated with the Docker
runtime, enabling users to choose how they ‘containerize’ and manage their applications, from the data center to the cloud to the Internet of Things.

Finally, at this year’s DockerCon conference, Docker announced that it will add built-in orchestration capabilities to its Docker Engine. This will enable IT managers to form a self-organizing, self-healing pool of machines on which to run multi-container distributed applications, both traditional apps and microservices, at scale in production. Specifically, Docker 1.12 will offer an optional “Swarm mode” feature that users can select to turn on built-in orchestration, or they can elect to use their own custom tooling or third-party orchestrators that run on Docker Engine. The upcoming Docker 1.12 release simplifies the process of creating groups of Docker Engines, also known as swarms, which are now backed by automated service discovery and a built-in distributed datastore. The company said that, unlike other systems, the swarm itself has no single point of failure: the state of all services is replicated in real time across a group of managers, so containers can be rescheduled after any node failure. Docker orchestration includes a unique in-memory caching layer that maintains the state of the entire swarm, providing a non-blocking architecture that assures scheduling performance even during peak times. The company positions these new orchestration capabilities as going above and beyond Kubernetes.
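The Swarm-mode workflow Docker described can be sketched with the Docker 1.12 CLI. The IP address, join token placeholder, and service name below are illustrative; the commands themselves are the ones the 1.12 release introduces:

```shell
# Initialize a swarm on the first node, which becomes a manager;
# this prints a join token for adding further nodes.
docker swarm init --advertise-addr 192.168.1.10

# On each additional machine, join the pool as a worker
# (substitute the real token printed by "swarm init").
docker swarm join --token <worker-token> 192.168.1.10:2377

# Deploy a service across the swarm with three replicas;
# the built-in orchestrator schedules the containers and
# reschedules them if a node fails.
docker service create --name web --replicas 3 -p 80:80 nginx

# Inspect placement, then scale up without downtime.
docker service ps web
docker service scale web=5
```

Because service state is replicated across the managers, killing a worker node simply causes its replicas to be rescheduled elsewhere, which is the self-healing behavior the announcement emphasizes.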

San Jose is the city of innovation, says Rose Herrera, Vice Mayor, as it looks to technology to deliver better services with greater efficiency. The opportunities for women in engineering are limitless, she says, as companies here struggle to fill vacant engineering positions.

Ixia’s ControlTower solution, a key component of Ixia’s IxVision Architecture, provides visibility within physical, virtual, and software-defined networks (SDN). The distributed architecture provides network administrators access to monitoring and diagnostic tools from any point in the network.

Cisco Nexus switches use a common programmatic interface.

At Cisco Live 2016, Ixia will demonstrate how Ixia ControlTower enables network administrators to dynamically repartition Cisco switch ports between production switching and visibility enablement.

"We have expanded the functionality of ControlTower to now provide a single view over both Ixia supplied network packet brokers and Cisco Nexus 3000 switches, acting as an aggregation layer in large network visibility deployments,” said Dennis Cox, Ixia’s Chief Product Officer. “Ixia is the only visibility vendor to provide an integrated solution using our own equipment combined with Cisco Nexus 3000 switches.”

The goal is to provide application-layer and system-level security and policy controls needed to extend the trust boundary from a system-level root-of-trust to the network edge. Skyport said its interoperability with Cisco ACI also mobilizes security policies, enabling them to follow workloads throughout their lifecycles, and lets users deploy and maintain secure administrative workstations, jump hosts and multi-zone DMZ architectures as an integral part of an overall security framework.

Skyport's SkySecure converged system brings together zero trust compute, virtualization and a full stack of security technologies. It logs all traffic at a forensically auditable level, enabling users to see where traffic originates, where it is headed, whether it was allowed or not, what policy allowed or blocked it, and when and who put that policy into action. Remote management capability allows users to easily secure branch infrastructure without firewalls, proxies, MPLS or other security measures.

“SkySecure interoperability with Cisco ACI extends policy inward to the root of trust, providing truly end-to-end application-layer security for all assets, no matter where they are deployed,” said Art Gilliland, CEO of Skyport Systems. “This builds on the work we’ve done to secure Microsoft Active Directory and other highly valuable resources, and furthers our mission to help organizations ensure security of their most critical IT assets.”

The first implementation of Coriant CloudWave Optics in Telia Carrier's hiT 7300-based pan-European backbone network will include a 400G-enabled fiber route between Copenhagen, Denmark, and Frankfurt, Germany.

"As our customers in Europe experience increased demand for capacity to support their business-critical applications, we are committed to investing in best-in-class innovation to stay at the forefront of service excellence," said Mattias Fridström, Chief Technology Officer, Telia Carrier. "The Coriant solution enables us to significantly improve utilization of our existing DWDM infrastructure and rapidly provision new services to meet our customers' dynamic connectivity requirements."

Deutsche Telekom has appointed Srini Gopalan as the new Board member for Europe. He will assume the duties of Claudia Nemat for the Europe segment, effective January 1st, 2017.

Gopalan is currently Consumer Director India at Bharti Airtel Limited, where he is responsible for the consumer business in 23 different regions of India, covering broadband connections and satellite TV in addition to mobile communications. He previously worked in the UK for over ten years, at first in a number of roles at Capital One, an American financial services provider, which he left as Managing Director UK in 2009. He then worked as Chief Marketing Officer at T-Mobile UK, where he was responsible for marketing and sales. He was part of the management team that led T-Mobile UK into the joint venture with Orange, Everything Everywhere. After this, he served as Director of the Consumer Business Unit at Vodafone UK for three years.

Claudia Nemat will head the new Technology and Innovation Board department.