Sunday, January 24, 2016

Over the past 25 years there have been dramatic shifts in how companies deliver websites and applications. The pervasiveness of globally distributed cloud computing providers like AWS and Digital Ocean, along with the rise of Infrastructure as a Service (IaaS) and deployment automation, have dramatically reduced the costs and complexities of deploying applications. Users today can deploy servers in different parts of the world in minutes and leverage a multitude of software frameworks, databases and automation tools that all work to decentralize environments and improve uptime and performance.

The result is one of the more fundamental changes in the recent history of computing: today’s applications are distributed by default.

Unique Traffic Management Challenges for Modern Applications

While we’ve seen significant progress toward distributing applications on the infrastructure and application side, the tools website operators have at their disposal to effectively route traffic to their newly distributed applications haven’t kept pace. Your app is distributed, but how do you get your users to the right points of presence (POPs)?

Today, traffic management is typically accomplished through prohibitively complex and expensive networking techniques like BGP anycasting, capex-heavy hardware appliances with global load balancing add-ons, or by leveraging a third party Managed DNS platform.

As the ingress point to nearly every application and website on the Internet, DNS is a great place to enact traffic management policies. However, the capabilities of most Managed DNS platforms are severely limited because they were not designed with today’s applications in mind. For instance, most managed DNS platforms are built using off-the-shelf software like BIND or PowerDNS, onto which features like monitoring and geo-IP databases are grafted.

Until recently, a state-of-the-art DNS platform could do only two things with regard to traffic management: first, it wouldn’t send users to a server that was down, and second, it would try to return the IP address of the server closest to the end user making the request.

This is a bit like using a GPS unit from 1999 to get to a gas station: it can give you the location of one that’s close by and maybe open according to its Yellow Pages listing, but that’s about it. Maybe there is roadwork or congestion on the one route you can take to get there. Maybe the gas station is out of diesel, or perhaps they’re open but backed up with lines stretching down the block. Perhaps a gas station that’s a bit farther away would have been a better choice?

High-performing Internet properties face similar challenges in digital form, and they go far beyond proximity and a binary notion of “up/down.” Does the datacenter have excess capacity? What’s traffic like getting there - is there a fiber cut or congestion to a particular ISP we should route around? Are there any data privacy or protection protocols we need to take into account?

Intelligent DNS

Today’s data-driven application delivery models require a new way of managing DNS traffic. Next-gen DNS platforms have been built from the ground up with traffic management at their core, bringing to market exciting capabilities and innovative new tools that allow businesses to enact traffic management in ways that were previously impossible.

Here are five best practices to consider when implementing an advanced, intelligent traffic management platform:

Intelligent routing: Look for solutions that route users based on their ISP, ASN, IP prefix or geographic location. Geofencing can ensure users in the EU are serviced only by EU datacenters, for instance, while ASN fencing can make sure all users on China Telecom are served by ChinaCache. IP fencing can ensure local-printer.company.com automatically returns the IP of the local printer, regardless of which office an employee is visiting.
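To make geofencing and ASN fencing concrete, here is a minimal Python sketch of the decision logic. The endpoint records, country codes and ASNs are invented for illustration; this is not any vendor's real API, just the shape of the policy.

```python
# Hypothetical geofencing / ASN-fencing sketch. All data is made up.
EU_COUNTRIES = {"DE", "FR", "IE", "NL"}  # abbreviated for the example

ENDPOINTS = [
    {"ip": "192.0.2.10",    "region": "EU",   "asn_fence": None},
    {"ip": "198.51.100.20", "region": "US",   "asn_fence": None},
    {"ip": "203.0.113.30",  "region": "APAC", "asn_fence": 4134},  # fenced to AS4134
]

def answer_for(client_country: str, client_asn: int):
    """Return the endpoints this resolver is allowed to see."""
    # ASN fencing takes priority: if an endpoint is fenced to the
    # client's network, serve only that endpoint.
    fenced = [e for e in ENDPOINTS if e["asn_fence"] == client_asn]
    if fenced:
        return fenced
    # Geofencing: EU clients are only ever answered with EU endpoints.
    if client_country in EU_COUNTRIES:
        return [e for e in ENDPOINTS if e["region"] == "EU"]
    # Everyone else sees the unfenced pool.
    return [e for e in ENDPOINTS if e["asn_fence"] is None]
```

A German client would receive only the EU endpoint, while a client on AS4134 would receive only the endpoint fenced to that network.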

Leverage load shedding to prevent meltdowns: Automatically adjusting the flow of traffic to network endpoints, in real time, based on telemetry coming from endpoints or applications, can help prevent overloading a datacenter without taking it offline entirely, while seamlessly routing users to the next nearest datacenter with excess capacity.
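One way to picture load shedding is as a probabilistic overflow: above a watermark, a datacenter keeps a shrinking share of new requests and spills the rest to the next nearest site, rather than going dark. The following Python sketch assumes each endpoint reports a load fraction via telemetry; the threshold and site names are invented.

```python
# Hypothetical load-shedding sketch: thresholds and sites are invented.
import random

def pick_endpoint(ordered_by_proximity, shed_at=0.8, rng=random.random):
    """ordered_by_proximity: list of (name, load_fraction), nearest first."""
    for name, load in ordered_by_proximity:
        if load < shed_at:
            return name  # healthy: nearest site with headroom wins
        # Above the watermark: keep only a fraction of new requests,
        # proportional to remaining headroom, and shed the rest
        # downstream so the site stays online without melting down.
        keep_probability = max(0.0, (1.0 - load) / (1.0 - shed_at))
        if rng() < keep_probability:
            return name
    # Every site is hot: fall back to the nearest anyway.
    return ordered_by_proximity[0][0]
```

At 80% load the site keeps all new traffic; at 95% it keeps roughly a quarter and overflows the rest to the next site with capacity.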

Enact business rules: Meet your applications’ needs with business rules: filters that apply weights, priorities and even stickiness. Distribute traffic in accordance with commits and capacity. Combine weighted load balancing with sticky sessions (i.e., session affinity) to adjust the ratio of traffic distributed among a group of servers while ensuring that returning users continue to be directed to the same endpoint.
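Weighted balancing and stickiness can coexist if the weight lottery is driven by a stable hash of the client rather than a random draw: across many clients traffic still splits in the weighted ratio, but any single client always lands in the same bucket. A minimal sketch, with hypothetical endpoint names and a 3:1 weight split:

```python
# Sticky weighted selection sketch; endpoints and weights are placeholders.
import hashlib

ENDPOINTS = [("a.example.net", 3), ("b.example.net", 1)]  # 3:1 traffic split

def sticky_weighted(client_id: str) -> str:
    """Map a client to an endpoint: weighted in aggregate, stable per client."""
    total = sum(w for _, w in ENDPOINTS)
    # Hash the client id into [0, total) so the same client always
    # falls into the same weight bucket (session affinity).
    h = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % total
    for name, weight in ENDPOINTS:
        if h < weight:
            return name
        h -= weight
    return ENDPOINTS[-1][0]
```

Because the hash is deterministic, a returning user keeps hitting the endpoint they were first assigned, while new users spread out roughly 3:1.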

Route around problems: Identify solutions that constantly monitor endpoints from the vantage point of the end user, and then send users coming from each network to the endpoint that will service them best.
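In practice this often means per-network latency measurements (e.g. from real-user monitoring beacons) feeding the routing decision, so a nearby-but-congested path loses to a slightly farther one that is currently faster. A hypothetical sketch; the ASNs, site codes and RTT figures are all invented:

```python
# RUM-style routing sketch: measured median RTT (ms) from each client
# network (keyed by ASN) to each candidate endpoint. Data is invented.
RTT = {
    7922: {"iad": 18.0, "ord": 31.0},   # east-coast site is fastest here
    3320: {"fra": 12.0, "iad": 95.0},   # European eyeballs stay in Europe
}

def best_endpoint(client_asn: int, default: str = "iad") -> str:
    """Pick the endpoint with the lowest measured RTT for this network."""
    measurements = RTT.get(client_asn)
    if not measurements:
        return default  # no telemetry for this network: fall back
    return min(measurements, key=measurements.get)
```

If a fiber cut suddenly pushed "iad" to 200 ms for AS7922, fresh measurements would flip that network's answer to "ord" automatically.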

Cloud burst: Leverage ready-to-scale infrastructure to handle planned or unplanned traffic spikes. If your primary colocation environment is becoming overloaded, make sure you’re able to dynamically send new traffic to another environment according to your business rules, whether it’s AWS, the next nearest facility or a DR/failover site.
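The cloud-burst rule itself can be as simple as an overflow check against committed capacity: stay on the primary colo until it is full, then spill new traffic to the burst target. A minimal sketch with invented names and numbers:

```python
# Cloud-burst sketch: capacities and environment names are made up.
def route(current_rps: float, capacity_rps: float,
          primary: str = "colo-nyc", burst: str = "aws-us-east-1") -> str:
    """Send new traffic to the burst environment once the primary is full."""
    if current_rps < capacity_rps:
        return primary   # under commit: keep traffic on the colo
    return burst         # over commit: spill to the cloud / DR site
```

The same shape works for failover: swap the burst target for a DR site and the capacity check for a health check.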

Businesses that need to deliver Internet-scale performance and reliability for high-volume, mission-critical applications must rethink their current DNS and traffic management capabilities. Traditional DNS technologies are fractured and rudimentary, leaving the industry ripe for disruption to accommodate today’s demanding applications.

Tomorrow’s modern distributed application delivery will be supported by converging dynamic, intelligent and responsive routing technologies. Whether you’re building the next big thing or you’ve already made it to the Fortune 500, best practices suggest that it’s time to evaluate current DNS and traffic management platforms with an eye on solving previously intractable problems and improving performance for webscale applications.

About the author: Shannon Weyrick is the director of engineering for NS1 and has been working in Internet infrastructure since 1996, when he got started at an ISP in upstate New York. He’s been programming, however, since time immemorial, and loves it to this day. Shannon can find his way around any full backend stack, but he’s focused on software development, and has created or contributed to many open source projects throughout the years. Shannon previously worked at Internap and F5 Networks architecting and developing distributed platforms for a variety of infrastructure projects.

Got an idea for a Blueprint column? We welcome your ideas on next gen network architecture. See our guidelines.

Facebook has selected Ireland as the location for its second data center in Europe. Ireland has been home to Facebook's international headquarters since 2009.

The new facility, which will be located in Clonee, County Meath, will run entirely on the renewable energy resources (wind) already available in Ireland and will be cooled by outside air. The data center will employ the latest designs from the Open Compute Project. The facility is expected to come online in late 2017 or early 2018.

The Facebook Engineering team is building the ability for people to share live video on Facebook, potentially to very large numbers of viewers. In a blog post, engineers described "the thundering herd" problem: public figures with millions of followers on Facebook may suddenly start a live broadcast. Facebook needs to be able to handle the potential of more than a million people watching the broadcast at the same time, as happened recently with...

Facebook confirmed that work is already underway on Wedge 100, a 32x100G switch for its hyperscale data centers. Facebook is also adapting Wedge to a much bigger aggregation switch called 6-pack, which uses Wedge as its foundation and stacks 12 of these Wedges in a modular and nonblocking aggregation switch. FBOSS will be used as a software stack across the growing platform of network switches: Wedge, 6-pack, and now Wedge 100. In an engineering...

Facebook has kicked off construction of its fifth mega data center in Fort Worth, Texas. The new site will join the other major Facebook data centers located in Prineville (Oregon), Forest City (North Carolina), Lulea (Sweden), and Altoona (Iowa). The new Fort Worth data center will feature the latest Open Compute Project hardware designs — including Yosemite, Wedge, and 6-pack — making it one of the most advanced data centers in the world. Notably,...

Telensa, a start-up based in Cambridge, UK, raised US$18 million in venture funding for its smart city solutions incorporating low power wide area (LPWA) wireless technology.

Telensa makes wireless smart city control applications, including a smart streetlight solution that the company says has been deployed by over 50 city and regional networks in 8 countries, for a total footprint of over 1 million streetlights. The streetlights use an LPWA Ultra Narrow Band (UNB) radio system, which combines low cost, long range, long battery life and 2-way communication for massive numbers of devices.

Telensa cites strong demand growth in three areas:

The worldwide rollout of energy-efficient LED street lights. Connecting these lights is becoming mandatory as it unlocks further energy savings, reduces maintenance costs and enables flexible lifetime control of local lighting levels.

Streetlights are increasingly being seen as the ideal low-cost hub for smart city sensor applications, such as weather and pollution monitoring.

New Smart City applications that connect to the city’s UNB network. The company’s smart parking solution, for example, already includes some of the world’s largest deployments, such as those in Moscow and Shenzhen.

The funding includes equity investment from Environmental Technologies Fund and debt funding from Silicon Valley Bank.

“The smart city controls market is awash with pilot applications looking for a business case, LPWA networks waiting for a critical mass of devices, and vendors hoping for a path to profit,” said Will Gibson, CEO of Telensa. “We’re different. Our networks are proven at commercial scale and our applications are sold on a sustainable business case. This investment is recognition of Telensa’s success and enables us to expand to meet the growing demand for our solutions.”

MariaDB announced $9 million in venture funding to support its open-source relational database solutions. The company also named Michael Howard as its CEO and Michael “Monty” Widenius as chief technology officer.

MariaDB, which has offices in Finland and Menlo Park, California, offers an open-source database for SaaS, cloud, and on-premises applications. MariaDB was built by the founder and core engineering team behind MySQL. The database powers millions of users on sites like Booking.com and Wikipedia. Moreover, MariaDB is the “M” in LAMP, having displaced MySQL as the default database in the Red Hat and SUSE Linux distributions. MariaDB is also included in Pivotal Cloud Foundry, Rackspace and other cloud stacks, and it is the database of choice for IBM POWER8. The company claims over 550 customers in more than 45 countries.

Michael Howard most recently was CEO at C9, which he transformed into one of the leading predictive analytics companies in the CRM space, ultimately leading to its acquisition by InsideSales. Previously, Howard was CMO at Greenplum (now Pivotal), the Big Data division of EMC. He was CEO at Ingrian Networks and Outerbay, and VP of the Internet Division at Veritas and of Data Warehousing at Oracle.

Monty Widenius, the creator of both MySQL and MariaDB, has joined the company as CTO. Monty has been advocating open source throughout the industry as well as serving on the boards of MariaDB Corporation and the MariaDB Foundation, the non-profit organization charged with promoting, protecting and advancing the MariaDB codebase, community, and ecosystem. Monty developed MySQL, the most widely adopted open source relational database, which was acquired by Oracle as part of the Sun Microsystems purchase in 2010. Monty remains Founder and Open Source Advocate at the MariaDB Foundation.

Intel Capital and California Technology Ventures were among the investors in the $9 million equity financing.

by Roger Levy, VP of Product at MariaDB

In 2015, CIOs focused on DevOps and similar technologies such as containers as a way to improve time to value. During 2016, greater attention will be focused on data analytics and data management as a way to improve the timeliness and quality of business decisions. How best to store and manage data is on the minds of most CIOs as they kick off the New Year. It’s exciting to see that databases, which unde