The growth of hyperscale is a huge opportunity for colocation providers. Companies that are new to the West create a volume requirement for which colocation is a perfect fit, and they make ideal anchor tenants, bringing a track record of growth and financial stability. On the surface, it looks like a perfect storm is gathering. As my friends in London say, it’s all gravy.

Speed of Delivery for New Data Center Capacity is Critical to Success

However, speed of delivery is a critical success factor which pervades the supply chain. The hyperscale customer wants to quickly get established in the market and connected to the end customer without delay; the colocation company needs revenue to start flowing quickly. It’s in everyone’s interest to ensure that all processes flow smoothly, from the issue of an RFQ to the delivery of technical space.

Starting at the beginning, one of the biggest impacts on timing is that colocation providers no longer have 12 months or more to make decisions about adding capacity. Typically, there’s a period of no more than 5 months between the initial contact with vendors and the deployment of the data center solution.

To meet the exacting timeframes involved, professionals in colocation data centers responding to growth requirements should consider the following four areas:

1. Risk Management
The complexity of data center design, and of integrating equipment from different vendors, inevitably leads to lost time and increased cost. Today’s vendors can offer a full scope of added value, starting with planning services and initial design, where tools such as reference designs and configurators can be used. The same applies to delivery and installation, where pre-engineered, factory-tested systems (such as prefabricated data center infrastructure modules) can deliver capacity more rapidly and more resiliently.

2. Vendor Management
Careful data center supplier selection can reduce the risk of miscommunication in a complex delivery ecosystem, and can increase installation precision through familiarity with standardized components. Consider the international reputation of the brands, and whether, for example, equipment can be supplied and supported throughout all regions in which the customer’s business operates. Lastly, look for a vendor that helps with overall project management – consultancy services can offer transparency throughout the project, end to end.

3. CAPEX Management
In addition to helping increase onsite efficiency, working with fewer vendors typically increases the client company’s CAPEX savings and reduces the custom engineering needed to make different systems work together seamlessly. Additionally, look for scalable infrastructure that houses the power, cooling and technical space modules for an IT facility; for instance, we discussed in a previous blog a pod-based design that is typically 20% faster to deploy.

4. OPEX Management
Negotiating a service management contract with a single supplier can not only reduce the cost of maintenance, but also increase the reliability of the data center solution. This is important because the majority of outages are still caused by human error, and professionals who are more familiar with the layout of the site and the equipment it houses are less likely to make mistakes. Today, cloud-based, AI-enhanced management software is helping facility managers ensure the reliability of data center services by anticipating the need for maintenance and upgrades of infrastructure. This not only helps reduce instances of unforeseen downtime, but also helps ensure the correct parts are available for service routines, so that a potential emergency does not become a disaster.
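As a minimal illustration of the kind of anticipation described above (a hypothetical sketch of my own, not Schneider Electric’s actual algorithms), a management tool might fit a trend line to periodic UPS battery-capacity readings and estimate when a unit will need service:

```python
# Hypothetical sketch: fit a linear trend to battery-capacity telemetry and
# forecast when capacity will degrade to a service threshold. Illustrative
# only -- real DCIM/DMaaS platforms use far richer models.

def fit_trend(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def days_until_threshold(days, capacity_pct, threshold=80.0):
    """Estimate days from the last reading until capacity hits the threshold."""
    slope, intercept = fit_trend(days, capacity_pct)
    if slope >= 0:
        return None  # no degradation trend detected
    crossing_day = (threshold - intercept) / slope
    return max(0.0, crossing_day - days[-1])

# Monthly readings: (day number, % of rated capacity)
days = [0, 30, 60, 90, 120]
capacity = [100.0, 98.5, 97.1, 95.4, 94.0]
print(round(days_until_threshold(days, capacity), 1))
```

The principle is the same as in the platforms described above: trend the telemetry, forecast the threshold crossing, and schedule parts and service before a failure occurs.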

How Customized Racks Help Deliver Advanced Redundancy

When you have nearly a thousand colocation customers who require hyperscale deployments, standard data center racks won’t do. As a provider, we also require speed to market, so customized racks would take too much time or cost too much to keep in stock. Therefore, we worked with Schneider Electric to help overcome this dilemma and deliver advanced redundancy.

Switch’s large enterprise customers call for high-density spaces, so we need racks tailored for such capacity — racks that can be implemented quickly. Special design and manufacturing, however, would normally cause delays.

Our data centers operate using a Schneider Electric power infrastructure. Each of our facilities has three separate N+2 power systems, with its own generators, transformers, transfer switches, UPS battery back-up systems and PDUs.

Switch’s patented hot and cold aisle containment system along with the custom rack solution allows the racks to be filled with as much gear as possible. The final product allows us to fit 50 to 60 times more clients into our facilities — faster. You can see some of our designs here.

Schneider Electric’s Global Supply Chain Delivers, Fast

For that speed, having a supply of racks on hand is necessary. This ready inventory is enabled through Schneider Electric’s global supply chain and distribution model, allowing us to forecast production based on Switch’s construction schedule and pay for the racks accordingly.

Thanks to Schneider Electric, we have more advanced redundancy than our customers can get anywhere else, so our clients typically choose Switch for their most mission critical deployments.

Read more about how we partnered with Schneider Electric to design a custom rack solution specific to our high-density requirements, while keeping a supply on hand without upfront capital outlay.

About the Author: Missy Young is the CIO (Chief Information Officer) at Switch. She drives the company’s solutions architecture to create a fundamental and sustainable change in the way clients ultimately design and implement intelligent data strategies. As a technologist, Missy has played an integral role in the company’s evolution from the most innovative data center company in the world into what is today the world’s only hyperscale retail colocation ecosystem.

Since joining the company in 2005, Missy has also held roles with leadership responsibilities for all sales operations and solutions engineering with respect to potential clients and win-win contract negotiations. Her expertise in technology trends, forecasting client needs and connecting those trends with results has been a great asset to Switch and its success. Prior to becoming a partner at Switch, Missy was the Director of Sales Engineering and VoIP services for Mpower Communications. Her portfolio of experience also includes senior sales engineering positions at ICG Communications, InteleNet Data Centers, and FirstWorld Communications. She entered the industry back in the dot-com days of the mid-90s, as a network engineer through her Cisco, Microsoft, and Novell certifications.

Missy serves on the FIRST® Nevada Robotics Board, the Foundation Board of the College of Southern Nevada, the Foundation Board of Opportunity Village, the Kenny Guinn Center for Policy Research Board, the Desert Research Institute Foundation Board, and the National Council of Juvenile and Family Court Judges Board of Directors. Additionally, Missy is a strong advocate for the certification path of education for young students who desire to enter the technology field.

Insights on Telco Edge, Edge, and Hyperconverged Architectures – Panelist Discussion at Edge Congress
https://blog.schneider-electric.com/co-location/2019/03/01/insights-telco-edge-hyperconverged-architectures-edge-congress/
Fri, 01 Mar 2019

I have a confession to make. I have come to realize that I am size-ist as it relates to data centers. My history in the Cloud & Service Provider space began on the power side of the world, so to me, hyperscale has always been the data center market. The edge opportunity, however, is opening my eyes to my own prejudice, and the panel that I had the chance to see at Edge Congress only enhanced my excitement.

Panelists talked about the dramatic spike in data that will need to be handled at the edge, vertical specific edge requirements, and major implications for telco edge. Lastly, they covered the critical success factors in deploying an edge architecture which included a nod to a hyperconverged solution.

Edge Congress Panel – A Discussion on Opportunities and Challenges at the Edge

The panel featured four experts who are on the front line helping customers implement edge computing solutions every day. They included Paul Morgan, Global Sales for Manufacturing, Automotive & IoT at HP Enterprise (HPE), and Aad Dekkers, VP at Scale Computing, which makes infrastructure specifically for micro and nano data centers. The panel also included two of my colleagues from Schneider Electric: Cyril Perducat, EVP IoT & Digital Offers, and Joris Verdickt, Segment VP, Enterprise End Users in the Secure Power Division.

Of the key highlights from the panel, the following two quotes from Paul Morgan really capture the opportunity:

“While today only 10% of all data is handled at the edge, analysts expect in 3 years between 50% and 75% of all data to be produced and processed at the edge,” said Morgan. “Gartner puts the figure at 75%,” he said, noting it will come from applications such as video solutions producing petabytes of data per day while others pointed to autonomous cars.

“Edge and edge computing are going to be key moving forward, because of the huge amount of data being produced,” Morgan said. “Bandwidth is going to be absolutely key.”

With that as a backdrop, the following are my takeaways from the panel discussion around the challenges and opportunities at the edge.

Edge Computing Requirements Vary by Vertical

Industrial companies such as manufacturers typically have more complex edge requirements, because they must fit in with lots of legacy infrastructure. The retail sector is typically less complex but puts a premium on availability and driving cutting-edge technology to brick and mortar retailers to combat their online competitors.

These kinds of demands are all impacting retail IT and the edge environment. While 27% of IT managers in retail say they’re prepared for what’s coming, 20% are holding off on new applications because the IT infrastructure isn’t yet where it needs to be, Verdickt said.

Available Infrastructure for Telco Edge is Promising

Telecom companies have some of the most demanding mobile edge computing requirements, and also some of the most mature edge infrastructure. Having already spent billions on infrastructure, both in full-blown data centers and in smaller points of presence located practically everywhere, telcos are already handling the loads that bandwidth-intensive applications such as video present today. But the job is about to get more challenging for telco edge.

Consider connected and autonomous cars, which Morgan said produce up to 40 petabytes of data per day. “That will all need to be offloaded and telcos are positioned for that,” he said. What’s more, it all has to happen in real time, noted the panel moderator, Perducat. This is no small task, but telcos’ delivery of reliable solutions and connectivity is a marker that they can succeed in powering critical, time-sensitive applications.
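To put that figure in context, a quick back-of-the-envelope calculation (mine, not the panel’s) converts a daily data volume into the sustained bandwidth needed to offload it:

```python
def sustained_bandwidth_tbps(petabytes_per_day):
    """Average bandwidth (terabits per second) needed to offload a daily data volume."""
    bits_per_day = petabytes_per_day * 1e15 * 8   # decimal petabytes -> bits
    seconds_per_day = 24 * 60 * 60
    return bits_per_day / seconds_per_day / 1e12  # bits/s -> Tbps

# Offloading 40 PB/day continuously requires roughly 3.7 Tbps of sustained capacity.
print(round(sustained_bandwidth_tbps(40), 1))
```

Even spread perfectly across the day, that is terabits per second of sustained capacity, which helps explain the panel’s emphasis on bandwidth.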

5 Pillars for Success at the Edge

No matter the sector, Verdickt said he sees five pillars for success at the edge:

Availability of the network

A clear strategy for capturing data, such as use of Bluetooth in a retail environment

A centralized model for IT staff with a broad-based management platform for controlling edge environments

Attention to security, both physical and cyber

Use of converged infrastructure

Use of Hyperconverged Architectures at the Edge

Panelists generally agreed that it was important to keep edge architectures simple, given there’s typically little to no IT staff on site at edge locations. One way to do that is to use hyperconverged infrastructure, where all compute, storage, and network components are delivered in a single enclosure, complete with software to operate and manage it all remotely.

Such a strategy can also help keep costs down, said Dekkers. And it’s important that the infrastructure be able to not only scale up, but down, he noted. “There are certain parts of the edge that require less in terms of number of virtual machines or number of applications. But still they need to be up,” he said. The idea is to deliver the same data center-type functionality, but in a much smaller footprint.

Planning for the (Edge) Future

With the anticipated, dramatic growth for edge computing, panelists addressed how to plan for it.

“We will need different ways to compute and process [data] moving forward,” Morgan said, which will require continued innovation to develop new compute platforms, and in a way that takes the burden off IT in managing the environment. He envisioned a service model developing around edge data centers – “edge as a service,” as he put it.

Perducat agreed, noting the market needs a combination of the right hardware, software, and services at the edge to not only create new experiences for customers but to do so in a simple, efficient way. He suggested we should look at the edge in a new context and create different innovative use cases that use the cloud as a complementary resource.

Check Out the Edge Congress Panel Discussion

Those are my takeaways from the edge panel discussion, but view it yourself for additional insights. We will have many more opportunities to continue the dialogue, including coverage of the recent Mobile World Congress event in Barcelona – look for our next blog to come soon!

How Lenders and Financial Institutions Fund Colocation Data Center Expansion
https://blog.schneider-electric.com/co-location/2019/02/26/how-lenders-financial-institutions-fund-colocation-data-center-expansion/
Tue, 26 Feb 2019

As technology trends such as cloud and edge computing become more influential in the way global companies conduct their business, the role of the colocation data center becomes even more critically important. Today, most blue-chip businesses turn to colocation data centers to help manage much of their cloud and edge computing data processing needs.

451 Research projects that global colocation and wholesale market revenue will top $48bn by 2021. This remarkable growth is resulting from the convergence of three major influencers:

A data hungry world where governments, businesses, and consumers show an increasing appetite for gathering, processing, and storing data of all kinds.

Data center operators who provide facilities, sometimes on a hyperscale level, that house and manage all of the technology needed to sustain the global flow of data.

Financing institutions who lend the funds to the data center operators in order for them to both establish and grow their businesses.

John Wilson, Director of Data Center & TMT Financing at SMBC Leasing and Finance, has worked hand-in-hand with colocation data center operators for over a decade. He recognizes colocation data centers as a unique asset class. “A great proportion of what constitutes a data center is not really about real estate, it’s more mechanical and engineering equipment, and all the associated costs of creating the data center environment,” he said.

“As an asset class, the colocation data center business model is characterized by a strong recurring revenue stream from customers, and the operators have relatively fixed OpEx costs. Therefore, such a cashflow scenario is attractive to lenders with expertise in this space. If you can gain a firm understanding of the colocation provider’s performance risk, (i.e. are they capable of offering the service that customers demand under their SLAs) then you should have a fundable business,” he said.

Unique Requirements and Qualities of the Colocation Borrower

Wilson emphasizes that the financing of colocation businesses is not a “one size fits all” approach. Over the years, he has funded disparate data centers of various sizes, and every transaction has been a little different. “You learn from your previous experiences and you apply those lessons to the new opportunity or challenge you have in front of you – that experience can be the key between successfully closing a transaction for a client and not,” he said.

From a funding perspective, the challenge for colocation operators is that most are fast growing businesses. They require high levels of capital expense funding because the barriers to entry or continued expansion to meet investor aspirations can be very high. Access to sufficient equity to unlock competitively priced bank debt is important for operators since banks are rarely able to provide 100% financing. This can be especially relevant if earnings are slowing and therefore not generating sufficient free cashflow alone to provide the necessary contribution that lenders typically expect.

According to Wilson, there are two overriding factors that determine whether a colocation operator will succeed in obtaining the required funding from a lending institution: the quality of the firm’s economics and the quality of the operator’s business value-add.

“The quality of the economics – which includes cash flows, the nature of the demand that drives those cash flows, the long-term contracts in place, the quality of the tenants – provides a sense for how critical the data center is perceived by the underlying customers. This sense of mutual dependence establishes the degree of importance that those customers place in pursuing their relationship with the operator. When we evaluate the quality of the operator’s value-add capabilities, we look at the quality of the internal teams, the experience they’ve had in running these types of operations, and the location and the quality of the data centers they’ve built, among other determinants,” said Wilson.

Flexibility is Critical to the Success of the Lender

Lenders who work with colocation operators have to be flexible in how they structure transactions and in the types of financing instruments being used, since no two deals are the same. Operators no longer remain in one geography; they are increasingly moving across borders to address the needs of their existing customers, who in turn are looking to place their data in the most logical location depending on, for example, latency requirements.

“We don’t just make a deal and walk away,” said Wilson. “It’s a dynamic market so you never fully fund on day one what you might need over the next two or three years. We have an ongoing relationship with our clients. Often, if a five or ten year loan is initiated, rarely will that funding run full-term as an acquisition or re-financing will very likely occur before that time. That’s the exciting and dynamic nature of the industry,” he said.

Telco Central Offices are Being Transformed into the Edge to Power the Next Generation of Telco
https://blog.schneider-electric.com/co-location/2019/02/14/telco-central-offices-transformed-into-edge-power-next-generation-telco/
Thu, 14 Feb 2019

Next week I am attending Mobile World Congress in Barcelona to talk with customers about the next generation of telco – telco edge. Believe it or not, back in the day when you used a telephone, there was a direct connection (a wire) from your telephone all the way to the telephone you were calling. The technology is now affectionately called Plain Old Telephone Service (POTS) and was the standard service offering from telephone companies from 1876 until 1988. The way it worked was simple: the wires ran from your home or business phone to a central office, where the call was switched onto a longer-range wire to another central office, and then onto another wire to the other phone. It was powered by a direct current (DC) voltage of -48V supplied by a power conversion system in the central office – this is why your house phone continued to work even when there was a power outage at your house.

These central office buildings are very close to users, located in every city and even every neighborhood. In the US, for example, there are 25,000 of them. They are usually nondescript grey, windowless concrete buildings. The central offices vary in size: the ones in densely populated areas tend to be smaller buildings similar to a Starbucks, while the more remote facilities can be much larger similar to the size of a Home Depot.

Many of the original buildings are still around, often located in densely populated areas. The placement of these central offices makes them ideal to enable the telco edge.

The Early Dilemma with Central Office Transformation – How to Repurpose

There was an initial transformation of these facilities starting in the 90s, when telco technology started going digital – ISDN, packet switching, PBX, etc. – enabling the digital transmission of voice and data over ordinary copper telephone wires. The effect on telco central offices was that this new digital equipment took up much less space than the legacy analog equipment. This prompted many providers (who now had empty space in their central offices) to search for new revenue streams in these ideally located pieces of real estate. However, the lack of windows and the buildings’ overall unattractiveness did not lead to many of them turning into 5-star restaurants. Some of the owners repurposed the available space into colocation data centers. Internet Giant tenants could reduce latency and improve the overall experience for their clients – like Microsoft hosting Office 365, for example. But the majority of these facilities remained only partially filled with equipment.

Transforming the Telco Edge: NGCOs

Today we are in the early stages of what are being called Next Generation Central Offices (NGCOs). These represent the natural evolution of the existing central office, as the new technology leverages virtualization and a cloud-based computing architecture. An NGCO is an edge cloud data center that can support both fixed and mobile traffic and serve an average of 35,000 subscribers per central office, compared to around 5,000 for today’s central office. Located between the Radio Access Network (RAN) and the central core cloud, the NGCO functions as either a local edge core cloud data center or a regional/metro edge core data center, with a smaller area and power footprint than the core cloud data center.

Cloud and virtualization technologies are needed to keep vastly increased amounts of data flowing and forge a path towards 5G. The technological enablers that bring cloud agility to central office facilities include: network functions virtualization (NFV), software-defined networking (SDN), software defined routing (SDR), open source software, C-RAN, as well as standardized white-box hardware. Most operators are slowly migrating their current central office architecture towards NGCO. While some NGCOs will be built as greenfield deployments, the majority of central offices will evolve, requiring the coexistence of SDN/NFV equipment along with traditional hardware and software.

Telco Edge Technology: A Cheat Sheet

It may seem complicated, but here’s what the functions do:

NFV: optimizes the network services themselves by implementing network functions (e.g. a router or firewall) as software packages that are deployed and interconnected in the cloud.

SDN: separates the network’s control plane (the brains) from its forwarding plane (the muscle); routers then forward packets according to centrally established rules.

White-box: uses network devices, such as switches and routers based on “generic” networking chipsets available for anyone to buy, as opposed to proprietary silicon chips designed by and for a single networking vendor.

SDR: implements “radio signal processing” in software that interfaces only with the radio front end; it can also be seen as a network function (software package) under NFV. Taken together, NFV manages and orchestrates these network functions, while SDN is the overall technology used to interconnect them.

C-RAN: implements radio access network functions in software and deploys them in the cloud; it is simply a special case of NFV where the network functions are the radio access network functions.
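The relationship between these pieces can be made concrete with a toy sketch (purely illustrative Python, not any vendor’s orchestration API): NFV treats each network function as a software stage, and a service chain simply composes them.

```python
# Toy model of NFV service chaining: each virtual network function (VNF) is a
# software stage, and the "orchestrator" composes them into a pipeline.
# Entirely illustrative -- real NFV deployments use MANO frameworks.

def firewall(packet):
    """Drop packets to a blocked port; pass everything else."""
    if packet.get("dst_port") == 23:  # e.g. block telnet
        return None
    return packet

def router(packet):
    """Tag the packet with a next hop based on a trivial rule table."""
    packet["next_hop"] = "core" if packet["dst_port"] == 443 else "access"
    return packet

def service_chain(packet, vnfs):
    """Pass a packet through a chain of VNFs; None means it was dropped."""
    for vnf in vnfs:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, router]
print(service_chain({"dst_port": 443}, chain))  # routed toward the core
print(service_chain({"dst_port": 23}, chain))   # dropped by the firewall
```

Because each stage is just software, functions can be added, removed, or relocated (to the cloud or to an NGCO) without touching proprietary hardware, which is the agility the cheat sheet describes.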

At present, there are a handful of industry projects focused on offering solutions for central office migration towards NGCO. These include CORD (Central Office Re-architected as a Data Center) and OPNFV’s Virtual Central Office (VCO). These initiatives offer the benefits of better network intelligence, flexibility and scalability, as well as both OPEX and CAPEX savings.

CORD: brings data center economies and cloud agility to service providers for their residential, enterprise, and mobile customers. The reference implementation of CORD combines white-box servers and switches and open source software, built on an extensible service delivery platform.

OPNFV’s Virtual Central Office: produces a reference architecture that, when combined with other functional elements (such as NFV and C-RAN), supports the delivery of residential, business and mobile services.

Think of Schneider When It Comes to Managing Your Telco Edge

Schneider Electric is the world leader in data center physical infrastructure, specializing in cloud computing solutions from hyperscale down to micro data centers for the local edge. Our cloud-based EcoStruxure management solutions, with IoT-powered analytics that provide useful diagnostics and recommendations, can be used to optimize the performance and availability of NGCOs. If you’re attending Mobile World Congress, stop by and see us in the HPE Partner Pavilion. Let’s talk about your next project, and if you are undertaking a central office transformation, let’s discuss what Schneider can offer every step of the way.

Webinar: Exploring Open Compute Project and Why Colocation Providers Should be OCP-ready
https://blog.schneider-electric.com/co-location/2019/02/05/webinar-open-compute-project-colocation-providers-ocp-ready/
Tue, 05 Feb 2019

If you still think of the Open Compute Project (OCP) as some new effort that only Internet Giants need to be concerned with, not your colocation company, you’d do well to think again. It’s been 8 years since OCP was launched, and the project has grown to include dozens of companies while demonstrating impressive success.

Study Shows Growth in OCP Architecture for Energy Efficiency

A 2018 study of the European data center market by IHS Markit found 22 percent of respondents were already using OCP equipment in an effort to reduce energy consumption in their data centers, while another 44 percent were planning to investigate it. Use of OCP hardware was one of the top improvements companies made to data center energy efficiency, cited by 20 percent of respondents. Other improvements respondents had made to their data centers included free cooling, containment panel installation, and increased server inlet temperatures.

Another telling stat from the study: nearly two-thirds of respondents (65 percent) are considering alternative vendors to their current supply chain.

The IHS Markit study also showed that 2017 revenue from OCP gear reached $1.2 billion – from companies that are not on the OCP board. (Board member companies are Facebook, Rackspace, Microsoft, Goldman Sachs and Intel, so you know the actual figure is far higher.) Even more impressive, the study predicts OCP sales among non-board-member companies will surpass $6 billion by 2021. That translates to a 5-year compound annual growth rate (CAGR) of 59% for OCP equipment, while total market growth will be only in the low single digits, IHS Markit predicts.
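For readers who want to sanity-check growth figures like these, CAGR is simple to compute. Note that applying the generic formula naively to the rounded endpoints quoted here ($1.2B in 2017, $6B in 2021) gives roughly 50% over four years or 38% over five; the study’s 59% figure presumably rests on its own baseline year and unrounded data.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate, returned as a fraction (0.50 == 50%/yr)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# $1.2B growing to $6B, using the article's rounded figures:
print(round(cagr(1.2, 6.0, 4) * 100, 1))  # over four years
print(round(cagr(1.2, 6.0, 5) * 100, 1))  # over five years
```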

Why OCP is a Great Fit for Colocation Providers

OCP has always been about efficiency, scalability and openness. OCP designs promote simplicity, both in the hardware itself and in rack configurations, as well as repeatability, which is crucial for scalability. Openness, of course, means more choice.

All of this should be welcome news to colocation providers. But it’s clear that many have questions about OCP and how it applies to their data centers.

A ‘Must Attend’ Webinar – OCP-ready for Colocation Providers

The webinar will feature two experts from OCP:

Bill Carter, Chief Technology Officer at OCP. Bill spent 33 years as a systems architect at Intel and was the company’s OCP liaison before joining the organization himself to help further OCP’s efforts on operational efficiency and collaboration among technology providers and end users.

Steve Helvie, VP of Channel for the OCP. Steve helps educate organizations on the benefits of open hardware designs and the value of “community-driven” engineering for the data center. He works closely with solution providers and manufacturers to help organizations adopt OCP infrastructure.

We’ll talk about the successes OCP has delivered to date, including 15 percent better use of volume, improved airflow, and reduced deployment time and maintenance costs. We’ll address how OCP rack architectures present a great opportunity for colocation providers to deliver a superior compute experience while realizing significant data center efficiencies.

You’ll leave knowing the drivers behind OCP, how it helps colocation providers mitigate risk and control costs, and how you can become a recognized OCP Ready colocation data center – fostering future business growth.

Could 2019 Finally be the Banner Year for Data Center Management as a Service?
https://blog.schneider-electric.com/datacenter/2019/01/29/could-2019-finally-year-data-center-management-as-a-service/
Tue, 29 Jan 2019

If the questions I heard at our 2018 Innovation Summit are any indication, cloud and service providers may finally be embracing data center management as a service (DMaaS) platforms to...

At a time when speed-to-market, scale and security are paramount, colocation providers know they need to best ensure the agility, availability and operational efficiency of their data centers’ physical infrastructures.

Data Center Management as a Service: an effective solution for speed and cost

There’s never really been a doubt about using analytics to help run data centers optimally. But the concept is catching on now because everyone is being challenged to do better. For instance, if colocation providers normally deliver a data center in six months, now they’re being asked to do it in five.

The same can be said of bringing down costs. Over time, the low-hanging fruit has been driven out of data center design, so it’s increasingly difficult to squeeze out more efficiency. Beyond energy, people are one of the biggest data center operating costs. Moreover, the majority of errors and catastrophic outages are caused by people who are usually trying to do the right thing but with the wrong information. In addition, the data center workforce is getting close to retirement, creating a knowledge gap.

All these factors are putting more pressure than ever on data center operators. That’s why they’re realizing it’s time to leverage technology to operate more efficiently and decrease risk. Plus, platforms are now simpler to use and easier to apply, whereas they were previously perceived as requiring complex engineering to architect.

Green Energy Trends in 2019 and Other Technology Adoptions for Colocation

Just as the industry is growing in its acceptance of DMaaS platforms, we’re also seeing higher interest in microgrids and the integration of renewables in data centers. The industry has embraced green, through the purchase of renewable credits, for example. But as renewables become even more cost-effective, they’re a more appealing option for the industry.

Furthermore, lithium ion battery adoption is also on the rise in data centers. And, an increase in the use of prefabricated modular solutions will also help drive down lead times in 2019, which is looking to be a year of cloud-based, IoT offerings like our EcoStruxure platform.

The Evolution of the Colocation Data Center to Meet Sustainability and Energy Efficiency Demands
(January 18, 2019)

There was a time when ideas like efficiency, sustainability and resiliency were thought to be mutually exclusive. This was a problem for everyone involved in data center operations, from colocation service providers to enterprise facilities, where the primary objective was to ensure service availability. Inevitably that demanded complex 2N electrical designs that were energy intensive. Colocation facilities have evolved tremendously and are looking at a future ripe with even more environmentally friendly options.

Reap the Benefits of Sustainability

Today, the majority of people and businesses involved in both IT and colocation data centers have come to realize that sustainable energy supplies bring a lot of benefits. Besides guarding the health of the environment, sustainable practices can reduce operating costs by decreasing energy usage.

One approach to capture energy efficiencies is to operate in cooler climates and more remote locations. This means companies that operate responsibly as global citizens don’t have to compete for space and power in urban spaces where the demand for housing puts real estate at a premium. In fact, a cool temperate climate, coupled with a plentiful supply of renewable energy sources, has made the Nordic countries a destination of choice for customers looking to colocate IT loads in Europe.

A recently published report commissioned by the Nordic Council of Ministers indicates sharp growth for the Nordic data center market by 2025, with expected annual construction investments in the order of $2-4.5bn (equating to an installed capacity of 280-580 MW per year). It’s a massive investment as the Nordics position themselves as Europe’s edge location for non-critical applications.
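Taken at face value, the report’s figures imply a fairly consistent construction cost per megawatt. A quick back-of-the-envelope check (pairing the low investment figure with the low capacity figure is my assumption, not stated in the report):

```python
def cost_per_mw(investment_usd_bn, capacity_mw):
    """Implied construction cost in $M per MW of installed capacity."""
    return investment_usd_bn * 1000 / capacity_mw

low = cost_per_mw(2.0, 280)    # low end of both ranges
high = cost_per_mw(4.5, 580)   # high end of both ranges
print(f"${low:.1f}M to ${high:.1f}M per MW")  # → $7.1M to $7.8M per MW
```

Both ends of the range land in the same $7-8M per MW band, which is a useful sanity check on the forecast.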

The Green Mountain Story: Fjord Meets Colocation Data Center

Speaking at a recent Datacloud Europe event, Svein Atle Hagaseth, CSO at Norway’s Green Mountain AS, a leading supplier of colocation data center services, confirmed that customers are increasingly talking about changing climate requirements, energy efficiency and sustainability. “With near 100 percent renewable energy, being in Norway is a good place to be when you have a sustainability agenda,” he said.


Hagaseth said, “The nature of Norway is fantastic; the power is green, hydro-based. You can leverage the cold wet Norwegian climate to create very energy efficient solutions inside the data center… we’re actually using the fjord outside one of our data centers to provide cooling. It takes around 3kW of power to generate the equivalent of about 1000kW of cooling.”
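Hagaseth’s figures imply a remarkable coefficient of performance (COP) for the fjord loop, far beyond a conventional chiller plant. A minimal sketch of the arithmetic (the chiller figure is an illustrative assumption, not from the interview):

```python
def cooling_cop(cooling_kw, power_kw):
    """Coefficient of performance: cooling delivered per unit of electrical input."""
    return cooling_kw / power_kw

fjord_cop = cooling_cop(1000, 3)     # fjord water cooling, per Hagaseth's figures
chiller_cop = cooling_cop(1000, 250) # a typical chiller plant (assumed, for contrast)

print(f"Fjord cooling COP: {fjord_cop:.0f}")       # → Fjord cooling COP: 333
print(f"Conventional chiller COP: {chiller_cop:.0f}")  # → Conventional chiller COP: 4
```

A COP two orders of magnitude above a mechanical chiller is what makes free cooling from a cold water source so compelling for the energy bill.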

Data Center Sustainability Isn’t Just for Cool Climates

Sustainability is not only for data centers located in cool areas. Other climates have good access to wind and solar energy for example. Hagaseth emphasized that innovation is key. “You need to use technology to take advantage of beneficial climatic conditions – you need technology and nature to create sustainable solutions. It’s important to not simply do things as they have always been done,” he said. It is important to design a strategy, deliver energy efficiency, and sustain results for your enterprise.

Technology choices need to be continuously reviewed. The climate is changing and it’s imperative to ensure that data center tech choices remain relevant today, tomorrow, and for at least 5 or 10 years into the future to reflect customers’ lifecycle requirements.

Colocation Market Demanding Energy Efficiency

Environmentally friendly technologies aren’t just a nice-to-have for colocation providers; customers are starting to expect them. They are looking for providers that embrace sustainable practices in line with their own company’s strategy and values.

Investors are looking at colocation provider environmental scorecards too. “Sustainability is becoming a highly relevant metric for the data center industry and it is essential that the growth in the data center space is accompanied by energy efficiency innovation and new models where investors are engaged in expanding the supply of renewable energy,” Steen Hommel, Director of Invest in Denmark, recently told industry publication Data Economy. Also, with environmental regulations continuously evolving, it makes good sense to actively seek out sustainable approaches and technology today to prepare for the future.

Why the 5G Future and Beyond Depends on the Telco Edge
(January 9, 2019)

It’s difficult to say whether it’s the expectations of 5G or the implications of 5G creating the greatest buzz in the data center market right now. Ultimately, the network will transform the world. The first step in this transformation is dependent upon telcos adapting to support the skyrocketing demand for connectivity and capacity. In fact, 5G will fail without this edge connectivity, and not-yet-imagined applications won’t come to be.

The Call for Connectivity

Almost all major carriers have announced 5G rollouts or trials. The GSMA estimates that 5G could account for as many as 1.2 billion connections by 2025 — a profound impact on both the mobile industry and its customers.

Even before that, Cisco says that by 2020 video could account for 90 percent of internet traffic. This insatiable demand for content streaming is driving investments in fiber to increase bandwidth. Cloud adoption, mobile communication, and big data analytics are also creating the need for high capacity and low latency.

What’s more, investments in cloud-based RAN (C-RAN) are increasing as wireless network operators continue to virtualize. C-RAN offers many advantages such as efficiency, scale, and higher utilization of assets while allowing for more flexibility in resiliency and redundancy of the infrastructure.

Emerging Telco Edge

In response to these changes and more, the telco edge is emerging as the new frontier pushing technology and digital innovation forward. Carriers delivering 5G service will require new software and more infrastructure, including small cells and multiple-input and multiple-output (MIMO) sites. 5G will also bring a wide range of devices that will have new radio capabilities.

The edge will play a key role in unlocking 5G’s speed and low latency in a wide variety of applications. Autonomous vehicles, for instance, will require proximate processing power. They will be generating data, which will also come from external sensors monitoring traffic, road conditions and weather in real-time. In this case, the edge may be as close as every 1,000 feet and/or be the car or device itself.
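Propagation delay alone explains the pull toward proximity: even at the speed of light in fiber, distance sets a hard floor on round-trip time, before any switching or queuing is added. A small illustration (the distances are hypothetical):

```python
SPEED_IN_FIBER_KM_S = 200_000  # light travels at roughly 2/3 c in optical fiber

def round_trip_ms(distance_km):
    """Best-case propagation delay alone, ignoring switching and queuing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

for name, km in [("roadside edge node (~0.3 km)", 0.3),
                 ("metro edge data center (~50 km)", 50),
                 ("distant cloud region (~1500 km)", 1500)]:
    print(f"{name}: {round_trip_ms(km):.3f} ms")  # distant region → 15.000 ms
```

With 5G use cases targeting single-digit-millisecond latency, a distant cloud region can burn the entire budget on the speed of light; only the edge can meet it.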

Think about the changing dynamics of the cloud-client relationship. The fat cloud, thin client model renders a cell phone in airplane mode fairly useless. But edge applications like autonomous vehicles will need to be supported by a thin cloud, fat client model in order to fully function 100 percent of the time, regardless of whether they are connected to the cloud or not. A car simply cannot behave like a smartphone if it’s going to be autonomous.

Beyond 5G for the Data Center Market

Telco operators that are driving edge computing are on the forefront of the next IT revolution, but they must also look beyond 5G and continuously evolve their business models. It’s critical to plan, engineer and anticipate future change.

Leading companies are already doing this by converting their open central office space into regional edge data centers. This new business model will provide new revenue streams and deliver better service by reducing latency and lowering transmission costs.

To reach their full potential, many emerging technologies will require an advanced network infrastructure that can support complex data processing, storage and transport. Those providing it are helping to transform the way we live and work.

3 Lessons from Habitat for Humanity for Building Data Centers
(December 21, 2018)

Habitat for Humanity has a vision for the world “where everyone has a decent place to live.” My team and I recently had the privilege of helping realize this goal for one family in Nashville, Tennessee. A bunch of data center professionals may not be experts in building a home, but we were happy to be part of an organization that fosters independence and offers new life chances for deserving families. As I reflected on my Habitat experience, it became clear how the process of building a home is similar to a basic data center design – there are key lessons that can be applied to both scenarios.

Schneider Electric has supported Habitat for Humanity since 2000, donating over $38 million in product and funding $8.3 million to help cover the cost of land, infrastructure, and building materials for projects. Through an automated registration process, the company also makes it easy for us as employees to volunteer. This has resulted in thousands of hours given to countless home sites over the years.

Lesson 1: Start with a Unified Vision

A Habitat build brings together groups of essentially unskilled workers from different organizations, with diverse backgrounds who meet for the first time to work on a project. They have little in common other than the task at hand. What unifies and drives them is a shared vision.

The objective for us wasn’t simply to make sure four walls stood up and lights turned on and off. For true success, we needed to realize the impact of our undertaking. The crew was introduced to the future homeowner and understanding her story gave us the foundation for our common goal.

Just like it takes a community of people to build a Habitat home, members of the value chain must come together to build a data center. It takes electrical and general contractors, engineers, consultants, equipment manufacturers and more. Everyone must have stated roles and responsibilities, overseen by the right leader.

While data centers are usually built with the end user in mind, it pays to step back and outline objectives. Speed to market might be priority in one situation, whereas hyperscale capacity could be important for another. All stakeholders should be on the same page.

Lesson 2: Simplify with Kitting

Kitting helps Habitat overcome the lack of trade skills in the volunteer base. Supplies are on site either packaged together (with the right number of rafters, for example) or pre-built (such as walls) to make construction easier and faster.

This practice is akin to prefabricated data centers. They are built and tested in a controlled environment and delivered essentially ready to be deployed. Kitting in our industry also eliminates waste, which reduces cost; preserves skilled resources, helping overcome a growing labor shortage; and saves time, improving speed to market.

Lesson 3: Collaborate

With a collective purpose and clear strategy, the missing ingredient for success is collaboration. The first two steps help establish a level of trust among (mostly) strangers. Then a strong leader solidifies the group. Habitat assigns a foreman and other experts on site to coordinate and drive a joint effort.

In data centers, we see this cooperation on the rise throughout the value chain. The hyperscale era has changed a previously disconnected dynamic between suppliers, contractors, manufacturers and end users, who now work side by side from the start of a build. Everyone is at the table on day one.

Building Standardization Ensures Success – for a Home or Data Center

The foreman of our Habitat build shared a compelling statistic with us. He said tradesmen often question how the organization can build quality homes with volunteer labor. In fact, Habitat homes are often better constructed than those of a general home builder. That’s what high standards and teamwork will get you. The same goes for data centers. If the right process is in place, you have a foundation for success no matter who the players are.

One bonus lesson: getting out of the office, and out of our industry, made us think differently. Learning across segments helps generate new ideas. Innovation can come from anywhere. For more on how to connect these lessons to your next data center build, check out our guide on practical data center planning and design.

Addressing Colocation Industry Questions from the Trenches About Dealing with Rapid Growth
(December 19, 2018)

I recently co-hosted a Schneider Electric International Colocation Club webinar intended to help companies plan for the future and deal with the rapid growth in the colocation industry. I was glad we left about 20 minutes for questions at the end because it gave us a chance to hear what’s on the minds of those who are in the trenches, and discuss some of the trends, challenges and best practices we’re seeing in the colocation space globally.

My co-host was Steve Wallage, Managing Director of BroadGroup Consulting, who has more than 25 years of experience in IT consulting, the last 12 focused on global data centers. Read more about his bio in a previous blog post.

Colocation Industry Predictions and Trends

Many of the questions focused on the trends we’re seeing and our thoughts on how things may play out.

One colocation provider asked whether scalability is as important in second-tier markets as it is in the major colocation centers. The answer was a definitive “yes.” We’re seeing more demand from cloud players in second-tier markets and they certainly need high levels of scalability. “Once you start to attract hyperscale cloud players, they tend to build big,” as Steve put it.

He also pointed out many second-tier markets are currently under-served in terms of data center capacity, but have tremendous room for growth. Nigeria, for example, has a population of about 190 million people, but is predicted to grow to around 400 million by 2040 – enough that it may soon become a regional hub.

Another provider asked which customers had the best growth potential. An appropriate question given we tend to think so much about the small handful of Internet Giants. But Steve said it’s wise to consider the second-tier players that are growing rapidly. “Uber, Dropbox and others that are going to be very large can be great customers,” he noted.

Another interesting question was about the effect cryptocurrency players are having on data centers. This prompted a discussion about how colocation companies need to decide which vertical markets they want to focus on. Crypto players need highly specialized data centers and focus on scalability, efficiency and reducing power costs. If you want to serve the crypto market, be mindful that other companies may not want to be in your data centers because they fear the crypto firms draw too much power.

Challenges in the Colocation Industry

Power management was another topic that came up a number of times, which is no surprise. As Steve noted, 3 or 4 years ago colocation companies used to ask whether they should think in terms of selling racks and space, or simply power. “Today, absolutely, no doubt, power is what you are selling,” Steve said. “What your customers think, the way they’re looking to use that data center, it’s all about power.”

I agree that power is a critical issue for colocation companies who need to understand how it’s being used and what quality they’re getting. When you have a power issue, you also need to make sure you have the tools that enable you to get to the root cause of the issue and quickly resolve it. But there’s also an opportunity here – more on that in a minute.

Another challenge one colocation provider raised is pressure on price, especially from hyperscale customers. Dealing with that issue gets back to deciding which vertical markets you want to serve and build accordingly, Steve said. If you want to optimize to serve the hyperscale market, you can build at lower cost, but that data center won’t likely be well-suited to serving other enterprise companies. “It’s becoming increasingly difficult to have these generic data centers that fit the needs of every customer, including hyperscale,” Steve said.

Best Practices in Colocation Data Center Operations

Broad questions around data center operations, including power, were predominantly discussed, as I just mentioned. In my opinion, power management is both a challenge and an opportunity on a couple of fronts.

For one, using tools such as the Schneider Electric EcoStruxure framework, and specifically the Power Advisor, it’s now possible to get far more predictive about power issues. It uses advanced algorithms to analyze data coming from power meters and other devices to identify issues and help you proactively address them. That enables colocation providers to lower their maintenance costs while increasing the lifespan of their infrastructure.
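Power Advisor’s actual algorithms are proprietary, but the general idea of getting predictive about power data can be sketched simply: flag any meter reading that deviates sharply from its own recent history. A toy stand-in (real platforms analyze much richer signals such as harmonics and power factor):

```python
from statistics import mean, stdev

def flag_anomalies(readings_kw, window=12, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard deviations
    from the trailing window of meter samples. Illustrative only; not the
    actual Power Advisor algorithm."""
    flagged = []
    for i in range(window, len(readings_kw)):
        base = readings_kw[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(readings_kw[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A steady ~500 kW feed with one sudden 580 kW excursion at index 24
readings = [500 + (i % 3) for i in range(24)] + [580] + [500 + (i % 3) for i in range(12)]
print(flag_anomalies(readings))  # → [24], the index of the spike
```

Catching such excursions before they trip breakers or stress equipment is what lets providers lower maintenance costs and extend infrastructure lifespan.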

When asked about using UPSs to help reduce peak demand charges, I noted that this is something we’re starting to see at the enterprise level, with Lithium-ion batteries making the approach more feasible.
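The economics of peak shaving are straightforward: demand charges bill the single highest interval of the month, so discharging the UPS battery during that interval directly lowers the billed figure. A sketch with illustrative numbers (the load profile and $15/kW rate are assumptions, not a real tariff):

```python
def billed_demand(load_profile_kw, ups_shave_kw):
    """Peak-shaving sketch: the UPS battery supplies up to ups_shave_kw
    during the highest-demand interval, reducing the billed peak."""
    return max(load_profile_kw) - ups_shave_kw

profile = [1400, 1500, 1850, 2000, 1900, 1600]  # interval demand in kW (illustrative)
rate = 15.0                                      # $/kW-month demand charge (illustrative)

without = max(profile) * rate
with_shave = billed_demand(profile, 200) * rate
print(f"Savings: ${without - with_shave:,.0f}/month")  # → Savings: $3,000/month
```

Lithium-ion makes this practical because the batteries tolerate frequent cycling far better than VRLA, so the reserve can be tapped for economics without sacrificing its backup role.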

Powerful solutions keep F12.net data center safe, secure, and always on
(December 7, 2018)

With over 25 years servicing the IT industry, Edmonton’s F12.net has witnessed a sea of evolving technology and constantly changing trends. But one thing has remained constant for the solutions provider: ensuring customers excel at their business by entrusting their IT needs to experts. F12 offers comprehensive managed IT programs that include IT strategy, cloud services, disaster recovery planning, simplified employee onboarding, and next-gen cybersecurity.

Launching a geo-redundant data center

F12 determined that the ability to host client data and applications within data centers in Canada was not enough, as more and more clients seek hybrid cloud options with the ability to selectively host services. In early 2017, F12 decided to complement its existing Alberta-based data center with a new, fully hosted facility across the country in the Greater Toronto Area. To avoid disruptive outages for Canadian businesses, F12 set out to launch a geo-redundant data center facility with reliable, cost-effective and clean energy.

A more energy efficient and visible solution

“Part of our go-to-market strategy was to bring our cloud offerings internally,” says Engen, Director of IT at F12. “Unlike the typical cloud or a colocated center, our centers are staffed, allowing us to react to any issues in real-time. A physical facility also demystifies the cloud for many of our customers because they know exactly where their data is stored. This really helps to foster trust.”

Top of mind for the new center was energy reliability – to deliver clean, reliable power to its IT infrastructure so that customer data is accessible 24/7 without any unexpected downtime. F12 was also adamant the facility have a high degree of visibility so that staff is able to see the different thresholds within the center and be able to respond in real time if there is any deviation outside of those thresholds.

F12 turned to Schneider Electric Canada to power all its data center needs. Engen worked closely with the Schneider Electric team, exploring the company’s many options to find the perfect match for F12’s specific requirements.

Connected power increases efficiency and lowers OPEX

They decided on a full solution that included Power Usage Effectiveness (PUE) monitoring, the EcoStruxure-ready Galaxy VM UPS, the EcoStruxure IT on-premise data infrastructure management solution, and StruxureWare, a management software suite for collecting and managing data throughout the data center lifecycle. Together, this solution forms a key step toward operating a data center that is reliable, efficient, productive and green.
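PUE itself is a simple ratio, total facility power divided by IT power, which is what makes it such a practical metric to monitor continuously. A worked example with illustrative numbers:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 is the theoretical ideal (all power reaches IT equipment)."""
    return total_facility_kw / it_load_kw

# Illustrative: 1,300 kW drawn at the utility meter, 1,000 kW reaching the racks
print(f"PUE = {pue(1300, 1000):.2f}")  # → PUE = 1.30
```

The 300 kW gap in the example is the overhead of cooling, power conversion and lighting; tracking the ratio over time shows whether efficiency measures are actually working.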

Another advantage offered by Schneider Electric’s solutions is the ability to scale easily, thanks to their modular capabilities. “Scalability was a key requirement for us in addition to the basic efficiencies we wanted to gain,” says Engen. “Growth is a vital part of most of our customers’ strategies. We wanted something more modular so we can expand much more easily.”

The journey continues

One unexpected bonus in the implementation was the addition of Schneider Electric’s new Lithium-Ion (Li-ion) battery solution, to improve critical backup storage in data centers. The original thought was to use a standard valve-regulated lead-acid (VRLA) battery, as in the Edmonton facility, but when Engen learned of the new solution he was impressed by its many advantages over VRLA, such as up to 60% smaller footprint, less weight, faster recharge time and longer life, to name a few.

With the important decisions made, deployment of the data center solution began in the summer of 2017, with the facility’s grand opening taking place in September. As customer data begins its migration into the center, Engen is highly optimistic, both for the facility’s capabilities and its role in F12’s continuing success.

Go deeper into F12.net’s journey by accessing the full case study.