New Year’s Eve is notorious for being amateur celebrity night. Millions of people around the world descend upon restaurants, clubs, and bars, willing to pay highly inflated prices for one of the most popular nights of decadence and celebration all year. Money is bandied about left, right, and center, with premiums placed on nearly everything: New Year’s Eve menus, New Year’s Eve entry charges, New Year’s Eve cocktails, New Year’s Eve parties, even New Year’s Eve transportation.

It’s that last one – transportation – that some credit with the inception of what we now call the on-demand economy. After he and some friends parted with $800 for a taxi on New Year’s Eve, the wheels started turning for one particular entrepreneur in San Francisco. Rather than pay exorbitant prices for black car services, Garrett Camp started to brainstorm ways in which riders could pay a lower price by sharing the ride with multiple people.

And thus, Uber was born.

Since Uber was founded in 2009, the number of on-demand companies has exploded. This new type of economy goes by many names – gig economy, sharing economy, on-demand economy, peer economy, platform economy – but the idea is the same: offer products and services delivered incredibly fast and at a low price, all with just a few taps on a mobile screen. Some of the more notable on-demand companies include TaskRabbit, “an online and mobile marketplace that matches freelance labor with local demand, allowing consumers to find immediate help with everyday tasks”; Wag, an on-demand dog walking and dog sitting marketplace; Airbnb, “an online marketplace and hospitality service, enabling people to lease or rent short-term lodging including vacation rentals, apartment rentals, homestays, hostel beds, or hotel rooms”; Instacart, an online grocery delivery service; and Postmates for on-demand local delivery.

It’s apparent that the on-demand economy can be hugely profitable for businesses, but what are the common elements that lead to success? And can the gig economy be applied to all businesses, or are there some products and/or services that could actually benefit from delayed gratification?

What does work mean to you?

The on-demand economy has already changed the face of the U.S. workforce. There were approximately 55 million freelancers in the U.S. in 2016, making up over a third of the workforce. In a country that has in recent history heralded the 9 to 5 schedule as the gold standard in workdays, the rise in freelancers signals not only a shift in workforce statistics, but also in the collective mindset. Sara Horowitz writes for the Monthly Labor Review, U.S. Bureau of Labor Statistics, “Online work platforms, such as Uber, Airbnb, Etsy, and Elance, that connect workers directly to consumers and clients are completely reimagining the work relationship.”

If we were to use the chicken-and-egg argument, it could be hard to pinpoint which came first, on-demand companies or the rise in freelancers. Either way, the concurrent increase of each shows that not only are there more opportunities for freelancers in the modern economy, but there is also a surplus of workers who prefer this type of employment.

Global investment experiences recent boost

The number of investors allocating capital to on-demand companies took a dive in the first half of 2017, but with 87 deals in Q2 alone, the on-demand marketplace is experiencing a resurgence, according to CB Insights. This most recent boom was led by Chinese ride-hailing startup Didi Chuxing, which received $5.5 billion in investment. Other companies that led the funding rise include GO-JEK, Ele.me, and US ride-hailing company and Uber rival Lyft.

Investments in new companies, however, have started to decline. CB Insights reports, “Looking at on-demand global deal share by quarter, there is a clear decline in seed and angel deals to the space, falling from 45% of all deals in 2016 to only 39% in H1’17. This is indicative of a maturing industry, in which later-stage companies are increasingly receiving more investor attention and dollars.” Startups in the on-demand space will have a harder time getting investment, and will face significant competition from companies that have been around for some time. Companies starting afresh may find it difficult to gain traction in a market that is becoming saturated, especially in certain industries.

Leading the charge

Certain types of on-demand businesses are more popular in the on-demand market, forcing recent startups to become more innovative with their concepts. The Harvard Business Review writes that the popularity of different types of on-demand businesses is reflected in consumer spending, the majority of which goes to online marketplaces, which on average total $35.5 billion per year in the U.S. This is followed by transportation with $5.6 billion, food/grocery delivery at $4.6 billion, and the remainder of the on-demand economy bringing in $8.1 billion.

These numbers should be a clear signal to entrepreneurs who have the twinkle of an on-demand business in their eyes. Unless backed by a large amount of funding or a corporate partner, it will be very difficult to infiltrate the parts of the on-demand economy in which consumer spending is already concentrated. Instead, it would be better to focus on new ideas that are still in their infancy, as these stand a better chance at becoming profitable. James Paine writes for Inc that the B2B space has been an interesting place to watch, with a few startups coming to market to fill a variety of the needs of businesses, including Spiffy, an on-demand company that will wash your car while you’re at work, and ezCater, a catering company with a network of 55,000 restaurants that can handle anything from two people to thousands. Whatever the value proposition, a unique idea will be paramount to starting a business in the on-demand world.

How to build an effective on-demand business

Looking at the bumpy roads of some on-demand startups, a few things rise to the surface as necessary ingredients for a winning on-demand business. First, your business must offer as many options for the product or service as possible, without exception. The reason is that, as mentioned earlier, the on-demand market has started to become saturated, and if consumers cannot fulfill all their needs with your company, they will quickly search for another one where they can. Establish brand loyalty in the early stages of the customer journey by giving customers everything they require and then some – give them what they want before they know what they want, and leave no stone unturned.

Second, when building your software, make sure to prepare for future scalability by layering your back end. This will allow for quick enhancements and build-outs to occur if your company starts to expand quickly. Whenever opportunity comes knocking, have your digital infrastructure ready to answer.

Next, research your competition and make sure you’re doing everything better. The on-demand world has already had a few years to find its feet and it’s running at a pace now, so there is a good chance that there’s another company offering something very similar to what you are. Find them, get a clear picture of what they’re doing, and then do all of it better.

When considering the ways in which customers will interact with your business, it is paramount to be excruciatingly precise and consistent with every step, and to make everything happen as fast as humanly possible. Again, with a saturated market, if customers don’t receive your product or service as quickly as they expect to, they will go to a competitor. Don’t give them time to consider another company – give them exactly what they want, every single time, and do it faster than they realized was possible. On-demand means now.

Taking this need for speed a step further, make your payment system fast and painless. There are lots of companies out there that offer excellent payment systems, from pocket-sized credit card readers to online wallets. Find which type of system works best for you, then integrate it into your business so that this step happens in the blink of an eye.

As you continue to grow your customer base, keep a constant eye on your analytics trends and implement changes as necessary. It will take a while to aggregate enough data to discern patterns, but you should do this as soon as you can in order to start refining the way your business works. Data is the name of the game in digitally focused businesses, so put it to use to continually improve your business model.

On-demand does not work for everything

One thing upon which online marketplace Etsy has capitalized is the recent demand for handmade goods. Etsy found that despite the fact that you can get almost anything in the blink of an eye, there are certain things that consumers are willing to wait for. The allure of having something made by a person rather than a machine is a recent trend, and although these goods are offered via an online marketplace and thus technically part of the on-demand world, many of them also require time to produce before shipment.

For instance, say you wanted to give someone a handmade quilt. Part of the value in this product is that it is custom made, so no one else will have the same quilt. The other part of the value is that it is made by a person, not a machine, and that the craftsperson invested one of today’s most valuable assets into making this gift: time. Not all products and services are going to benefit from an on-demand model, in some cases simply because it is not feasible to produce them quickly enough. Some businesses clearly do not belong in this sphere; marketing them around the time and personal care they invest can offset the inability to turn their products around in a short timeframe.

—

The on-demand economy has taken the business world by storm, and can prove very lucrative when it’s done right. However, there are right and wrong ways to go about it, and it’s certainly not for everyone. Consider what your business has to offer that differentiates it from other on-demand businesses, and if the time is right, make the on-demand economy work for you.

Technology and business goals collide at the intersection of page load time and conversion rate.

Marketing wants a fully featured page, with lots of images and services to track user behavior.

Page load time has a huge effect on conversion rate and customer happiness:

Half of all customers expect a webpage to load in under 2 seconds. If it doesn’t, they lose trust in and patience with the site, and click the back button to navigate to the next search result on Google.

User expectations have increased big time. SOASTA conducted an experiment between 2014 and 2015, looking at conversion rates based on page load time. In 2015 they saw conversion rates peak for sites loading in 2.4 seconds. This was 30% faster than the peak conversion load time in 2014.

Cedexis found that decreasing load time by just 1 second improves conversion rate by an average of 27.3%, where a conversion is defined as a purchase, download or sign-up.

The necessity of keeping page load time low for a good customer experience means that tech teams need to exercise every option available to them for performance. Effective caching techniques can bring improvements to even the leanest of websites.

Why optimize caching?

Caching is the process of storing elements so that clients can retrieve resources from memory without needing to put strain on the main server. Utilizing caches has three main benefits.

First of all, caching can make web pages load faster, especially if the user has visited before. If you utilize caching to distribute content globally, visitors will see a reduction in latency, which is the time it takes for a request to physically travel from their browser through the network to the server and back again. If your page is cached locally on the user’s browser, they don’t need to download every resource from your server, every time.

Secondly, caching reduces the amount of bandwidth needed. Instead of the server being responsible for delivering resources for every request, it only needs to deliver new content. Everything else can be returned from a cache along the network.

Finally, caching increases how robust your site is. A client can retrieve resources from a cache even if your server is down or experiencing high traffic. Any plan for handling volume spikes should include a caching strategy.

Levels of caching

Caching can happen at lots of different checkpoints along the network, right from the browser to the server itself. Every checkpoint has different benefits and challenges associated with it.

Let’s start with the caching options closest to the end user, then move up the chain to the server where the resource being retrieved originates from.

Browser caching – imagine scrolling through the search results on an online shop. You click on an image link to load the product page, decide it’s not quite right, and hit the back button. If your browser had to request the entire search page again, you’d have to wait for all the images to be downloaded to your browser for a second time. Fortunately, browsers use memory to store a version of sites they’ve already visited. Instead of going all the way to the server and back again, your browser just pulls up the version it’s already stored for you. It will also do this for constant pieces of your site, like your logo, for example.

Proxy cache (Web Server Accelerator) – caches can also be shared between many users. ISPs use caches to reduce bandwidth requirements by sharing resources. That way, if one user has already requested a static resource (like an image or file) the ISP doesn’t need to request it again from the server – it can provide it instantly.

Content Delivery Network (CDN) – remember how distance between user and server affects load time? CDNs are caches designed to reduce latency by distributing copies of cached files to local servers all over the world. When a user requests a resource, they are connected to their local CDN. Companies with international users should consider using a CDN to reduce latency.

Server-side caching / reverse proxy – if most of your content is static, you can cache it for yourself, so customers won’t need to hit your server to load static content. There are several tools that do this for you – Redis, Varnish, and php-fpm are all popular options.

Database caching – database servers often run separately from the application server. This means that when your server receives a request from a user, it needs to request something extra from the database. If a frequent request always returns the same result, you can cache this in a database cache. This prevents the database from crunching the same request over and over again, resulting in better performance, even during busy periods. Search servers for ecommerce sites also return cacheable queries.

When should you optimize caching?

“I’m not lazy, I’m just efficient” – ever heard that before? Well, think of your servers as the absolute laziest pieces of hardware you own. Never ask them to do something time-consuming twice if there’s a way for them to hold onto results in a cache down the line.

For example, you sell jewelry online and one of your top link destinations is a list featuring the 20 most popular items. If you didn’t utilize caching, every time a visitor clicked on that link, they’d need to send a new request through their ISP to your server, which would ask the database to calculate the top 20 items and then send back each of the corresponding images and prices. But realistically, you don’t need to compute this full page every time it’s requested. The top 20 items don’t change often enough to require real-time results. Instead, cache the page in a reverse proxy – located in the same country as the customer – and deliver it much faster.

When you start optimizing your caching strategy a good place to begin is by identifying the most popular and largest representations first. You’ll get the biggest benefit from focusing on caching improvements for pages that are resource heavy and requested often. Looking at the waterfall diagrams on the Network tab of your browser can help identify resource intensive modules on the page.

Time To First Byte (TTFB) is a good way to measure the responsiveness of your web server. Improving your caching strategy through reverse proxies, CDNs and compression will help customers experience shorter TTFB, and help your website feel snappier.

However, don’t forget that most customers will have a poorer experience than that seen in testing. They might, for example, be located on the opposite side of the world using a mobile device or an older computer. By utilizing caching best practices, you’ll ensure customers have a great experience, no matter where they are.

When you need to refresh your data

Because we work in a world where everything is frequently updated, it’s important to understand the different methods we have of forcing a cache reset. There are a few ways we can force browsers and other caches to retrieve a fresh copy of data straight from the server.

Set expiration date – when the site doesn’t need to stay perfectly up to date in real time, but does need to stay reasonably fresh. If you set an expiration date in your header, the browser will dump the cache after that time. If the resource is requested again, a fresh copy will be retrieved.
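As a minimal sketch, the expiration can be expressed with the standard `Cache-Control: max-age` and `Expires` headers. The helper below just builds the header values; how you attach them depends on your framework, and the hour-long lifetime used in the example is arbitrary.

```python
import time
from email.utils import formatdate

def freshness_headers(max_age_seconds):
    """Build freshness headers for a response that is allowed to go stale."""
    return {
        # Relative lifetime, in seconds, from the moment of the response.
        "Cache-Control": f"max-age={max_age_seconds}",
        # Absolute HTTP-date after which caches must revalidate.
        "Expires": formatdate(time.time() + max_age_seconds, usegmt=True),
    }
```

For example, `freshness_headers(3600)` tells browsers and intermediate caches they may reuse the stored copy for an hour before asking the server again.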

Use If-Modified-Since – the client asks the server to send the resource only if it has been updated since the date in the If-Modified-Since header. If nothing has changed, instead of sending everything again, the server can send back a short 304 Not Modified response without a body, thus saving bandwidth and time.
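The server side of that exchange can be sketched in a few lines. `handle_get` here is a hypothetical handler, not any particular framework’s API; it just shows the decision between a full 200 response and an empty 304.

```python
from email.utils import parsedate_to_datetime

def handle_get(resource_modified, if_modified_since_header):
    """Serve a resource conditionally.

    resource_modified: timezone-aware datetime the resource last changed.
    if_modified_since_header: value of the If-Modified-Since header, or None.
    """
    if if_modified_since_header:
        cached_at = parsedate_to_datetime(if_modified_since_header)
        if resource_modified <= cached_at:
            return 304, b""                  # not modified: empty body
    return 200, b"...full resource body..."  # changed (or no header): send it
```

The 304 path is what saves the bandwidth: the status line and headers travel, but the body does not.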

Clear specific module – you don’t need to refresh your entire blog cache just to display a new comment. Segmenting a page into different modules can help with cache refreshes.

Fingerprinting – caches work by storing and retrieving a specific file when requested. If you change the name of the file, the cache won’t be able to find the file, and the new copy will be downloaded. This is how fingerprinting works to keep assets up to date. By attaching a unique series of characters to the filename, each asset is considered a new file and requested from the server. Because the content is updated every time, you can set an expiration date years in the future and never worry about a stale cache. Many compilers will automatically fingerprint assets for you, so you can keep the same base filename.
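A sketch of the renaming step, using a short hash of the file’s contents; the `name-<hash>.ext` scheme is just one common convention, and many build tools do this automatically.

```python
import hashlib

def fingerprint(filename, content: bytes):
    """Derive a content-addressed filename so any change yields a new URL."""
    digest = hashlib.md5(content).hexdigest()[:8]  # short content hash
    base, dot, ext = filename.rpartition(".")
    return f"{base}-{digest}.{ext}" if dot else f"{filename}-{digest}"
```

Because the fingerprint changes whenever the bytes change, an edited stylesheet gets a brand-new name that no cache has seen, while an unchanged file keeps its old, already-cached name.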

Don’t forget that a cache is not long term storage! If you decide to cache something for later, you might find that it’s been invalidated and you need to retrieve the resource again.

Making caching work for you

Determining the perfect solution for your site can be difficult. Rely too much on caching and you might find users have outdated sites, or memory troubles in their browser. Ignore caching entirely and you’ll see page loading times increase and user experience suffer.

By understanding your users’ needs, you can create a great experience from the beginning. If caching is important, it’s worth using a framework that provides out-of-the-box caching optimization. If caching is less relevant because accuracy matters more than speed, you can plan accordingly.

Caching strategy is a problem to be solved uniquely for each app. Determining where you can utilize caching to save bandwidth is an ongoing learning experience. Keep making incremental improvements and keep it light for your customers.

Does your technology stack help your business thrive? Can a better server infrastructure enable improved business decisions? As AI and big data continue to find their way into our businesses, the technology driving our strategies needs to keep up. Companies that embrace AI capabilities will have a huge advantage over firms unable to take advantage of them.

In this post we take a look at Facebook’s latest upgrade, Big Basin, to understand how some of the biggest tech giants are preparing for the onslaught of AI and big data. By preparing our server infrastructure to handle the need for more processing power and better storage, we can make sure our organizations stay in the lead.

Facebook’s New Server Upgrade

Earlier this year Facebook introduced its latest server hardware upgrade, Big Basin. This GPU-powered hardware system replaces Big Sur, Facebook’s first system dedicated to machine learning and AI, introduced in 2015. Big Basin is designed to train neural models that are 30% larger so Facebook can experiment faster and more efficiently. This is achieved through greater arithmetic throughput and a memory increase from 12GB to 16GB.

One major feature of Big Basin is the modularity of each component. This allows new technologies to be added without a complete redesign. Each component can be scaled independently depending on the needs of the business. This modularity also makes servicing and repairs more efficient, requiring less downtime overall.

Why does Facebook continue to invest in fast multi-GPU servers? Because it understands that the business depends on it. Without top-of-the-line hardware, Facebook can’t continue to lead the market in AI and Big Data. Let’s dive into each of these areas separately to see how they apply to your business.

Artificial Intelligence

Facebook’s Big Basin server was designed with AI in mind. It makes complete sense when you look at its AI-first business strategy. Translations, image searching and recommendation engines all rely on AI technology to enhance the user experience. But you don’t have to be Facebook to see the benefit of using AI for business.

Companies are turning to AI to assist data scientists in identifying trends and recommending strategies for the company to focus on. Technology like Idiomatic can crunch through a huge number of unsorted customer conversations to pull out useful quantitative data. Unlocking the knowledge that lives in unstructured conversations with customers can empower the Voice of the Customer team to make strong product decisions. PwC uses AI to model complex financial situations and identify future opportunities for each customer. It can look at current customer behavior and determine how each segment feels about using insurance and investment products, and how that changes over time. Amazon Web Services uses machine learning to predict future capacity needs. In 2015, a study suggested that 25% of companies were already using AI, or would in the next year, to enable better business decision making.

But all of this relies on the technological ability to enable AI in your organization. What does that mean in practice? Essentially, top-of-the-line GPUs. For simulations that require the same data or algorithm run over and over again, GPUs far exceed the capabilities of CPU computing. While CPUs handle the majority of the code, sending any code that requires parallel computation to GPUs massively improves speed. AI requires computers to run simulations many, many times over, similar to password-cracking algorithms. Because the simulations are very similar, you can tweak each variable slightly and take advantage of the GPU shared memory to run many more simulations much faster. This is why Big Basin is a GPU-based hardware system – it’s designed to crunch enormous amounts of data to power Facebook’s AI systems.

Processing speed is especially important for deep learning and AI because of the need for iteration. As engineers see the results of experiments, they make adjustments and learn from mistakes. If the processing is too slow, a deep-learning approach can become disheartening. Improvement is slow, a return on investment seems far away and engineers don’t gain practical experience as quickly, all of which can drastically impact business strategy. Say you have a few hypotheses that you want to test when building your neural network. If you aren’t using top quality GPUs, you’ll have to wait a long time between testing each hypothesis, which can draw out development for weeks or months. It’s worth the investment in fast GPUs.

Big Data

The ability to make the growing influx of data work for you depends on your server infrastructure. Even if you’re collecting massive amounts of data, it’s not worth anything if you can’t analyze it, and quickly. This is where big data relies on technology. Facebook uses big data to drive its leading ad-tech platform, making advertisements hyper targeted.

As our data storage needs expand to handle Big Data, we need to keep two things in mind: accessibility and compatibility. Without a strong strategy, data can become fragmented across multiple servers, regions and formats. This makes it incredibly difficult to form any conclusive analysis.

Just as AI relies on high GPU computing power to run neural network processing, Big Data relies on quick storage and transport systems to retrieve and analyze data. Modular systems tend to scale well and also allow devops teams to work on each component separately, leading to more flexibility. Because so much data has to be shuttled back and forth, investing in secure 10 gigabit connections will make sure your operation has the power and security to last. These needs map onto the classic “3 Vs” of big data: volume (storage capacity), velocity (rapid retrieval and transfer), and variety (the range of data formats to analyze).

Big data and AI work together to superpower your strategy teams. But to function well, your data needs to be accessible and your servers need to be flexible enough to handle AI improvements as fast as they come. Which, it turns out, is pretty quickly.

What This Means For Your Business

Poor server infrastructure should never be the reason your team doesn’t jump on opportunities that come their way. If Facebook’s AI team wasn’t able to “move fast and break things” because their tools couldn’t keep up with neural network processing demands, they wouldn’t be where they are today.

As AI and Big Data continue to dominate the business landscape, server infrastructure needs to stay flexible and scalable. We have to adopt new technology quickly, and need to be able to scale existing components to keep up with ever increasing data collection requirements. Clayton Christensen recently tweeted, “Any strategy is (at best) only temporarily correct.” When strategy changes on a dime, your technology stack had better keep up.

Facebook open sources all of its hardware design specifications, so head on over and check it out if you’re looking for ways to stay flexible and ready for the next big business advantage.

Server rooms have been an integral part of IT departments for decades. These restricted-access rooms are usually hidden away in the bowels of a building, pulsing to the rhythm of spinning hard drives and air conditioning systems.

It’s a measure of the internet’s impact on computer networks and website hosting that cloud servers are becoming the norm rather than the exception. Databases and directories are hosted by a third party organization in a dedicated data center – effectively a giant offsite server room. Rather than each company requiring its own cluster of RAID disks and security/fire protection infrastructure, multiple clients can be serviced from one location to achieve huge economies of scale.

Even though 100TB is renowned for the quality of our cloud server hosting services, we recognize that this option isn’t for everyone. In this article, we look at the pros and cons of cloud servers, offering you a guide to determine whether it represents the optimal choice for your business. After all, those server rooms haven’t been rendered completely obsolete yet…

What is cloud server hosting?

Before we explore the advantages and disadvantages of this model, let’s take a moment to consider how it actually works. As an example, the servers powering 100TB’s infrastructure are based in 26 data centers around the world. Having a local center minimizes the time information takes to travel between a server and a user in that country or region, since every node and relay fractionally adds to the transfer time. Delays of 50 milliseconds might not be significant for a bulletin board, but they could be critical for a new streaming service. Irrespective of data request volumes, web pages and other hosted content should be instantly – and constantly – accessible.

There are two types of cloud hosting, whose merits and drawbacks are considered below:

Managed cloud. As the name suggests, managed hosting includes maintenance and technical support. Servers can be shared between several clients with modest technical requirements to reduce costs, with tech support always on hand.

Unmanaged cloud. A third party provides hardware infrastructure like disks and bandwidth, while the client supervises software updates and security issues. It’s basically the online equivalent of having a new server room, filled with empty hardware.

The advantages of cloud server hosting

The first advantage of using the cloud, and perhaps the most significant, is being able to delegate technical responsibility to a qualified third party. Even by the standards of the IT sector, networks are laced with technical terminology and require regular maintenance to protect them against evolving security flaws. Outsourcing web hosting and database management liberates you from jargon-busting, allowing you to concentrate on core competencies such as developing new products and services. You effectively acquire a freelance IT department, operating discreetly behind the scenes.

Cloud computing is ideal for website hosting, where traffic may originate on any continent with audiences expecting near-instant response times. The majority of consumers will abandon a web page if it takes more than three seconds to load, so having high-speed servers with impressive connectivity around the world will ensure end user connection speeds are the only real barrier to rapid display times. Also, don’t forget that page loading speeds have become a key metric in search engine ranking results.

Price and performance

Cost is another benefit, as the requisite scalable resources ensure that clients only pay for the services they need. If you prefer to manage your own LAMP stacks and install your own security patches, unmanaged hosting is surprisingly affordable. A single-website small business will typically require a modest amount of bandwidth, with resources hosted on a shared server for cost-effectiveness. Yet any spikes in traffic can be instantly met, without requiring permanent allocation of additional hardware. And more resources can be made available as the company grows – including a dedicated server.

As anyone familiar with peer-to-peer file sharing will appreciate, transferring data from one platform to another can be frustratingly slow. Cloud computing often deploys multiple servers to minimize transfer times, with additional devices sharing the bandwidth and taking up any slack. This is particularly important for clients whose data is being accessed internationally.

Earlier on, we outlined the differences between managed and unmanaged hosting. Their merits also vary:

Unmanaged hosting is similar to having your own server, since patches and installs are your own responsibility. For companies with qualified IT staff already on hand, that might seem more appealing than outsourcing it altogether. With full administrative access via cPanel and the freedom to choose your own OS and software stacks, an unmanaged account is ideal for those who want complete control over their network and software. This is also the cheaper option.

By contrast, managed cloud hosting places you in the hands of experienced IT professionals. This is great if you don’t know your HTTP from your HTML. Technical support is on hand at any time of day or night, though there probably won’t be many issues to concern you. Data centers are staffed and managed by networking experts who preemptively identify security threats while ensuring every server and bandwidth connection is performing optimally.

The drawbacks of cloud server hosting

Although we’re big fans of cloud hosting, we do recognize it’s not suitable for every company. These are some of the drawbacks to hosting your networks and servers in the cloud:

Firstly, some IT managers like the reassurance of physically owning and supervising their servers, in the same way traditionalists still favor installing software from a CD over cloud-hosted alternatives. Many computing professionals are comfortably familiar with the intricacies of bare metal servers, and prefer to have everything under one roof. If you already own a well-stocked server room, cloud hosting may not be cost effective or even necessary.

Entrusting key service delivery to a third party means your reputation is only as good as their performance. Some cloud hosting companies cap monthly bandwidth and apply substantial excess-use charges. Others struggle with downtime – service outages and reboots that take your websites or files offline, sometimes without warning. Even blue-chip cloud services like Dropbox and iCloud have historically suffered lengthy outages. Clients won’t be impressed if you’re forced to blame unavailable services on a partner organization, as their contract is ultimately with you.
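To make the excess-use point concrete, here is a minimal sketch of how such charges add up. The 10 TB allowance and $5-per-terabyte rate below are hypothetical figures for illustration, not any provider’s actual pricing.

```python
def overage_cost(used_tb: float, included_tb: float, rate_per_tb: float) -> float:
    """Charge for bandwidth consumed beyond the plan's monthly allowance."""
    excess_tb = max(0.0, used_tb - included_tb)
    return excess_tb * rate_per_tb


# Hypothetical plan: 10 TB included, $5 per additional TB
print(overage_cost(14.0, 10.0, 5.0))  # → 20.0 (4 TB over at $5/TB)
```

A modest-looking per-terabyte rate can become significant after a single viral traffic spike, which is why it pays to read the bandwidth terms before signing up.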

Less scrupulous hosting partners might stealthily increase account costs every year, hoping their time-poor clients won’t want the upheaval and uncertainty of migrating systems to a competitor. Moving to a better cloud hosting company can become logistically complex, though we at 100TB will do everything in our power to smooth out any transitional bumps. By contrast, a well-installed and modern RAID system should provide many years of dependable service without making a significant appearance on the end-of-year balance sheet.

Clouds on the horizon

Handing responsibility for your web pages and databases to an external company requires a leap of faith. You’re surrendering control over server upgrades and software patches, allowing a team of strangers to decide what hardware is best placed to service your business. Web hosting companies have large workforces, so reaching a particular person can be far more challenging than calling Bob in your own IT division via the switchboard. Decisions about where your content is hosted will be made by people you’ve never met, and you’ll be informed (but not necessarily consulted) about hardware upgrades and policy changes.

Finally, cloud systems are only as dependable as the internet connection powering them. If you’re using cloud servers to host corporate documents, but your broadband provider is unreliable, it won’t be long before productivity and profitability begin to suffer. Conversely, a network server hosted downstairs can operate across a LAN, even if you’re unable to send and receive email or access the internet.

To cloud host or not?

In fairness, connection outages are likely to become increasingly anachronistic as broadband speeds increase and development of future technologies like Li-Fi continues. We are moving towards an increasingly cloud-based society, from Internet of Things-enabled smart devices to streaming media and social networks. A growing percentage of this content is entirely hosted online, and it’ll become unacceptable for ISPs to provide anything less than high-speed always-on broadband.

Trusting the experts

If you believe cloud hosting might represent a viable option for your business, don’t jump in with both feet. Speak to 100TB for honest and unbiased advice about whether the cloud offers a better alternative than a bare metal server or a self-installed RAID setup. Our friendly experts will also reassure you about the dependability of our premium networks, which come with a 99.999 per cent service level agreement. We even offer up to 1,024 terabytes of bandwidth, as part of our enormous global network capacity.
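For context, an availability figure like 99.999 per cent translates into a surprisingly small downtime allowance. A quick back-of-the-envelope calculation (assuming a 365.25-day year) shows how the nines stack up:

```python
def annual_downtime_minutes(sla_percent: float) -> float:
    """Maximum downtime per year implied by an availability percentage."""
    minutes_per_year = 365.25 * 24 * 60  # 525,960 minutes
    return (1 - sla_percent / 100) * minutes_per_year


print(round(annual_downtime_minutes(99.999), 2))  # → 5.26 minutes per year
print(round(annual_downtime_minutes(99.9), 1))    # → 526.0 minutes (~8.8 hours)
```

In other words, each additional “nine” in an SLA cuts the permitted annual downtime by a factor of ten, which is why five-nines guarantees are a meaningful differentiator.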