
We launched a short promotion for new customers in November, which we have now decided to extend for the rest of the year:

If you create an ElasticHosts account and buy a new server by the end of 2016, we will give you an extra £20 of free prepay credit.

How to participate?

Sign up on the ElasticHosts website and make your first payment for either a flexible virtual machine or an auto-scaling Linux container. Once you finish, drop us an email at sales@elastichosts.com or give us a call and we will apply the £20 credit to your prepay balance.

Terms and Conditions

*The promotion value is 20 GBP, 20 USD or 20 EUR, depending on the zone in which the account was created. No cash value. The minimum payment required to qualify for the promotion is 5 GBP, 5 USD or 5 EUR, depending on the zone. The promotion is only valid for new customers and is limited to one account per zone. The first payment must be made before 23:59 PST on 31st December.

A purpose-built substation provides reliable grid power to the data center, which is further supported by static diesel generators capable of powering the entire data center continuously in the event of a mains failure.

Each piece of equipment is fed from two separate UPS clusters as part of Tsohost's 2N architecture, eliminating single points of failure and providing uninterrupted uptime.


A vulnerability was found in the Linux kernel, which has since been patched. The vulnerability made it possible for unauthorised users to gain root privileges by exploiting a race condition in the way the Linux kernel's memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings, hence the Dirty COW codename.

To learn more about Dirty COW (a.k.a. CVE-2016-5195), check out the dedicated https://dirtycow.ninja/ project site for the vulnerability and watch the video below:
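A quick, non-definitive first check on any Linux box is to look at the running kernel release and compare it against your distribution's advisory for CVE-2016-5195; patched versions differ per distribution, so the version string alone is only a starting point:

```shell
# Print the running kernel release; compare it against your distribution's
# security advisory for CVE-2016-5195 (patched versions differ per distro,
# so this is a starting point, not a definitive test).
uname -r
```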

We have recently rolled out the latest August update to every availability zone. We are happy to announce the release of two new features which you will hopefully find useful.

List of changes

Fixes

Optimisation: Published a few changes that should improve the overall performance and stability of the host machines.

New features:

One-click plan creation to cover burst use: If you have purchased plans for a certain amount of cloud resources but are actually using more, any usage beyond your plans is billed as "burst" usage: metered every 5 minutes and charged to your prepay balance. This is useful for covering a temporary increase in usage. However, if the increase is permanent, you might want to extend your plans (or create one) to cover it, since that will cut your costs by 50%.

To make this simple and fast, we have added links in the usage table (top of the control panel, and on the billing page), that will automatically take you to a plan create/edit form pre-populated with the values that will cover your current burst use.

Add to plan link in the usage table to automatically create/edit a plan to cover the burst usage

Add to plan link on the Billing page to automatically create/edit a plan to cover the burst usage:

Automatic backups: As the name suggests, this feature enables users to set up an automatic backup plan for any storage unit they have by providing how often to create copies - every X days or X weeks - and how many to keep from those copies. Read more about the feature in the automatic backup tutorial.
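The retention rule described above (keep N copies, prune the oldest as new ones arrive) can be sketched in shell terms. This is purely illustrative: the directory, names and N=3 are assumptions, not how ElasticHosts implements the feature internally:

```shell
# Hypothetical sketch of the retention rule: after each new backup, prune
# the oldest copies so only N remain (N, directory and names illustrative).
N=3
backup_dir=$(mktemp -d)
for i in 1 2 3 4 5; do
    touch -t "2016010${i}0000" "$backup_dir/drive(backup 2016-01-0${i})"
done
# Prune: list newest-first, remove everything after the first N entries
ls -1t "$backup_dir" | tail -n +$((N + 1)) | while read -r f; do
    rm -- "$backup_dir/$f"
done
ls -1 "$backup_dir" | wc -l
```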

Search bar: We added a simple search bar to the control panel where you can filter your view for servers, drives, and folders that have any attribute containing the provided characters. You can use partial or full names, VNC/toor passwords and UUIDs to find a specific server or storage unit you are looking for. Log in and give it a try!

That's the content of the latest update. As always, any issues should be reported to our support team.

New to ElasticHosts?

Report a technical issue


With the August 2016 update, we released the automatic backup feature on ElasticHosts. The new feature creates full copies or snapshots of Drives and Folders in regular intervals without shutting down the server or causing downtime in any other way.

Video tutorial

Our development manager, Will Berard, introduces the feature, explains its inner workings and gives useful tips in the following video:

Written tutorial

Since Automatic Backup can be configured on both Drives (Cloud VMs) and Folders (Cloud Containers) in the same way, we will use the term "storage unit" to refer to both.

Getting started

To set up automatic backups for your storage unit, go to the Configuration page by clicking on the cogwheel icon inside the panel representing your storage unit on the control panel.

On the Configuration page, you will see the Automatic Backup section.

The backup section has two settings:

Backup interval: How much time would you like between backups? Enter a value (1-99) and choose a unit (day or week) to set this up.

Number of stored copies: How many backup copies do you want to keep in your account? Storing multiple backup copies is important, but remember that you are billed for the storage space they use. When a new copy is created, the oldest copy is automatically deleted.

If you are satisfied with your settings, click on Save backup settings to activate them. As the tooltip suggests, copies of storage units will be created automatically on the same storage tier (HDD/SSD). After you set up the automatic backup, the first backup will be created in 5 minutes. You will see the time of the last backup in the Automatic Backup section:

Tips

Automatic backup icon

Once you save your backup settings, you will see a new icon in the bottom left corner of your storage unit indicating that it's being automatically backed up.

Backup nomenclature

The backup copies will be named automatically in the following fashion: <name of the origin>(backup <timestamp in YYYY-MM-DD HH:MM:SS format>)
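The naming convention above can be reproduced in one line of shell; the unit name "mydrive" here is illustrative:

```shell
# Sketch: build a backup name following the documented convention,
# <name of the origin>(backup <YYYY-MM-DD HH:MM:SS>)
name="mydrive"
backup_name="${name}(backup $(date '+%Y-%m-%d %H:%M:%S'))"
echo "$backup_name"
```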

Changing the backup time

If you would like to change the time when your backups are created, just go to your live storage unit's automatic backup settings at the preferred time and click on the "backup now" button. This will create a new copy and a new reference time for the backup intervals.

Restoring a server from backup

There are two ways to restore a Cloud Server with a backup copy.

Go to the configuration page of the backup copy you want to use. Find the Automatic Backup section and press Restore to restore the server from this particular copy.

Another method for restoring servers from a backup copy is to create a brand new Cloud Server on the control panel and set it to boot from an existing storage unit. Then choose the preferred backup copy from the wizard.

After using either of these restoring methods, the storage unit will stop acting as a backup copy and will become a "normal" storage unit instead. This means it will not be deleted when newer copies get created. It's a good idea to change its name so you don't mistake it for a backup copy.

How much does it cost?

It's free.

However, the backup copies consume storage space, which you need to cover with your billing plan or account balance. If you don't have enough plan capacity or balance to pay for the additional storage for at least a week, you will receive a warning via email.

We hope you like the new feature and benefit from this tutorial. If you have any questions, feel free to leave a comment below.

New to ElasticHosts?

Report a technical issue


We rolled out the latest July update of our platform last week to every zone. The visible part of the update is fairly slim, but we also made fundamental changes to the platform that customers probably won't notice. Let's look at the visible improvements.

List of changes

Bug fixes:

PDF invoices: We fixed an error that broke the formatting of some documents. Also, the invoices will now correctly display the personal details (address) as they were at the actual time of the transaction in question, not at the point when the invoice is requested.

Billing: We fixed an issue and now the auto top-up monthly limit is displayed correctly.

API: We fixed an issue where calling unsync on the target drive of a live copy caused an error.

Control panel: We fixed an issue where an incorrect "Server Error" message was displayed instead of the accurate "cannot connect to VNC" message for users who tried to gain VNC access to turned off or unavailable virtual machines.

New feature for resellers:

Custom licence subscriptions: Resellers are now able to extend their licence subscription offerings from the Microsoft SPLA catalogue beyond the already available licenses.

That's the content of the latest update. As always, any issues should be reported to our support team.

What is Nextcloud?

Nextcloud is a recent fork of ownCloud that's quickly becoming the newer, better and faster-developed alternative to the self-hosted cloud storage software of old. If you're an ownCloud user and have ever been frustrated by the dual licenses, the paid-vs-free model and, as part of it, the lack of some of the better features, Nextcloud has gone completely FOSS (free and open-source software), following the Red Hat model of charging for enterprise support rather than enterprise features.

Some of the previously enterprise-only features released as part of the standard FOSS Nextcloud installation include FileDrop, an alternative to Dropbox’s “File Requests” and LibreOffice online, an alternative to Google Docs or Office Online. Upcoming release v.10 will bring two-factor authentication, improved federation, and more.

In this guide

After completing this guide we’ll have the following:

A newly installed Nextcloud 9.0.53 server

PHP caching provided by APCu and Redis for a notable speed increase when navigating even the largest thumbnail-heavy folders

Pretty links that remove /index.php from the URL

SSL-enabled with default self-signed certificates and all non-HTTPS traffic redirected

Environment

For this guide Nextcloud will be installed on a virtual machine with the following server spec:

Hardware

Nextcloud doesn’t go into a lot of detail for minimum recommended spec, only advising 512MB of RAM. We’ll provide a bit of a buffer to avoid any possible contention.

1GHz CPU

1GB RAM

20GB HDD

20GB of disk will be enough for this guide, but naturally, the amount chosen should reflect the amount of data to be stored. The disks on the ElasticHosts cloud are in a RAID 1 array which provides high redundancy due to mirroring, but no matter what level of redundancy is set up, it’s not a replacement for a good backup strategy.

Software

Ubuntu server 16.04 LTS with root access

Apache 2.4.18

PHP 7.0

MySQL 5.7.13

Nextcloud 9.0.53

Due to the advanced requirements in this guide, root access to the 16.04 instance is mandatory.

Setting up the environment

For those with a functioning Ubuntu server and required components, please skip this section.

First, we need to spin up a virtual machine (VM). The first few steps will run through the configuration and imaging of the new server.

1. Spin up the virtual server

After logging into the ElasticHosts console, select Add followed by Server under Virtual Machines.

In the new modal, define the server spec to that listed under Hardware above.

2. Assign a static IP

On ElasticHosts, new servers are created with a dynamic IP. As we want to permanently assign a hostname to this server for web access to Nextcloud, we’ll assign a static IP. If one isn’t already assigned to the account, create a new static IP from the Add menu.

Once a static IP is available, enter the newly-created (and still powered-off) server settings by clicking the "cog" icon under the power button. This will open a new page.

Under Network select the static IP from the dropdown menu and click the relevant IP under Allowed IPs.

Click Save and Start to boot up the server for the first time.

At this point, it would be a good idea to create a DNS entry for the server. For this guide, we’ll use nc.bayton.org.

3. Connect to the server

After clicking Start the button will change to Connect. On clicking this, a new window will open with server connection details:

Using an SSH client, SSH to toor@IP (where IP is the IP address) and use the VNC/toor password provided.

Disable the root account

As soon as it’s convenient to do so, disable the root/toor account from logging in over SSH. A quick, simple way to do this in Ubuntu is to disable the account as follows from a different sudo-enabled account (which would need to be created first):

sudo passwd -l root

Furthermore, consider switching from password to key authentication as soon as possible.
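Switching to key authentication can be sketched as follows; the key path, user and host here are placeholders, not values from the original post:

```shell
# Sketch: generate an ed25519 key pair (path illustrative; normally ~/.ssh/)
rm -f /tmp/id_ed25519 /tmp/id_ed25519.pub
ssh-keygen -q -t ed25519 -f /tmp/id_ed25519 -N ''
# Next steps (placeholders, run manually against your own server):
#   ssh-copy-id -i /tmp/id_ed25519.pub user@server
#   then set "PasswordAuthentication no" in /etc/ssh/sshd_config on the
#   server and reload sshd: sudo systemctl reload ssh
```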

4. Update the server & install LAMP, APCu, Redis

As this is a brand new installation based on images that don’t update very often, it’s a good idea to upgrade the server before we begin:

apt update && apt upgrade

When the update has completed, it’ll provide a list of packages to be upgraded. Providing we’re happy with what we see, tap Enter.

With the server updated, a non-root user created with sudo privileges and the root account disabled, we’ll now install the required components for Nextcloud:

sudo apt install lamp-server^

Note: The use of ^ (caret) at the end of the package name is important: it tells apt to install the "lamp-server" task, a group of packages that are usually installed together.

This command will install Apache 2, MySQL 5.7 and PHP 7.0 along with several PHP/Apache modules to ensure seamless collaboration between the packages. Once happy with the package selection to be installed, tap Enter.

MySQL will request a root user password. Ensure this is strong and keep the password safe; losing it can cause all manner of issues.

Once installed, we’ll now install APCu and Redis:

sudo apt install php-apcu redis-server php-redis

Confirm the packages to be installed match expectations and hit Enter.

Finally, we’ll install the minimal set of PHP modules Nextcloud requires to install without errors (more can be enabled later):
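The package list itself did not survive in this copy of the post. A plausible set for Nextcloud 9 on PHP 7.0 might be the following; this is an assumption, not the author's original command, so verify it against the Nextcloud 9 admin manual:

```shell
# Assumed minimal module list for Nextcloud 9 on PHP 7.0 (verify before use)
sudo apt install php-gd php-curl php-mbstring php-intl php-imagick php-xml php-zip
```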

5. Enable SSL

With the server currently running over HTTP port 80, we can now additionally configure SSL to ensure the Nextcloud installation is secure.

We’ll begin by enabling the SSL module for Apache:

sudo a2enmod ssl

Apache sets up self-signed certificates as part of the installation, so for this guide, we’ll use those. They can be replaced at any time with functioning 3rd party certificates by editing the vhost file we’ll create next.

sudo vim /etc/apache2/sites-available/nextcloud.conf

Insert the following (all items in bold can be changed to suit the environment):
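The vhost listing itself did not survive in this copy of the post. A minimal sketch, assuming Ubuntu's default self-signed "snakeoil" certificates (from the ssl-cert package) and the nc.bayton.org hostname used in this guide, might look like this:

```apache
<VirtualHost *:443>
    ServerName nc.bayton.org
    DocumentRoot /var/www/html

    SSLEngine on
    # Ubuntu's default self-signed "snakeoil" certificates; swap these
    # paths for real 3rd-party certificates at any time
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
</VirtualHost>

<VirtualHost *:80>
    ServerName nc.bayton.org
    # Redirect all non-HTTPS traffic to HTTPS
    Redirect permanent / https://nc.bayton.org/
</VirtualHost>
```

Enable the site and reload Apache afterwards with sudo a2ensite nextcloud and sudo systemctl reload apache2.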

As shown above with ls, there's now a nextcloud folder under /var/www/html/, but currently root owns it. We can change that:

sudo chown -R www-data:www-data /var/www/html/nextcloud

Now the Apache account, www-data, will have write-access to the Nextcloud installation directory.

2. Create the Nextcloud database

By default, Nextcloud can create a database and database user when the root user and password are supplied in the Nextcloud web-based installer. The following steps are intended for anyone who either wants to create their own database or doesn't want to supply Nextcloud with the root account credentials.

Before switching to Chrome to run the web-based installer, we’ll first create a database.

We can open a session with MySQL by running the command mysql -u root -p and providing the root password we entered earlier.

Now we’ll create a dedicated database and user for Nextcloud with the following commands:
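The SQL itself did not survive in this copy of the post; a sketch consistent with the database name and credentials used in the installer steps below (nextcloud, ncuser, ncpassword) would be:

```sql
-- Create the database and user supplied to the web installer later on
CREATE DATABASE nextcloud;
CREATE USER 'ncuser'@'localhost' IDENTIFIED BY 'ncpassword';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'ncuser'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```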

3. Install Nextcloud

Open up a browser and navigate to ip-or-hostname/nextcloud. Hopefully, by this point a DNS entry has propagated; we’ll navigate to nc.bayton.org/nextcloud to continue installation.

Success! The Nextcloud installation screen is there and showing no errors. Installation from here is simple:

Provide a username and secure password for the admin account.

Select a location for the data directory.

Provide the database user we configured earlier: ncuser

Provide the database user password: ncpassword

Provide the database name: nextcloud

Confirm the database is on localhost (it is).

When selecting a location for the data directory, keeping it in the webroot is OK provided .htaccess rules work. If they do not, as is the case at this point due to the way Apache is set up by default, the data directory will be publicly visible. We don't want that. If the data directory is situated outside the webroot, ensure the user www-data can write to it in its final location.

Scroll down and click Finish Setup.

Configuration

As it stands currently, Nextcloud isn’t very happy.

1. Enable .htaccess

The .htaccess file doesn't work because we've put Nextcloud in the main /var/www/html webroot controlled by apache2.conf. By default, it is set to disallow .htaccess overrides, and we'll need to change that:
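The edit itself did not survive in this copy of the post. In the stock Ubuntu /etc/apache2/apache2.conf, the /var/www/ directory block ships with AllowOverride None; a sketch of the change is:

```apache
# In /etc/apache2/apache2.conf, change AllowOverride from None to All
# for the webroot so Nextcloud's .htaccess rules take effect
<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
```

Restart Apache afterwards with sudo service apache2 restart.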

2. Enable caching

The difference in speed between a Nextcloud server without cache and one with is huge. Particularly as the file and folder counts increase and more multimedia files make their way onto the server, caching becomes increasingly important for maintaining speed and performance. Having installed both APCu and Redis earlier, we’ll now configure them.

First, open the Redis configuration file at /etc/redis/redis.conf with your preferred text editor (we will use vim):

sudo vim /etc/redis/redis.conf

Now, find and change:

port 6379 to port 0

Then uncomment:

unixsocket /var/run/redis/redis.sock
unixsocketperm 700

changing the socket permissions to 770 at the same time:

unixsocketperm 770

Save and quit, then add the Redis user redis to the www-data group:

sudo usermod -a -G redis www-data

Finally, restart Apache with:

sudo service apache2 restart

And start Redis server with:

sudo service redis-server start

With Redis configured, we can add the caching configuration to the Nextcloud config file:
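The configuration block itself did not survive in this copy of the post. The standard Nextcloud settings for APCu local caching and Redis file locking over the Unix socket enabled above look like this (a sketch; verify against the Nextcloud admin manual):

```php
// Inside the $CONFIG array in /var/www/html/nextcloud/config/config.php
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'filelocking.enabled' => true,
'redis' => array(
    'host' => '/var/run/redis/redis.sock',  // the socket enabled above
    'port' => 0,
),
```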

A reboot may be required before the configuration change takes effect, but before we do that, we'll make sure Redis is enabled to start on boot with:

sudo systemctl enable redis-server

Caching is now configured.

With both of these now resolved, the admin interface is looking a lot healthier:

3. Pretty links

Much like theming, pretty links aren’t mandatory, but they add to the overall aesthetics of the server.

Most of the hard work was already done during the setup of the environment with the enabling of mod_env and mod_rewrite, however, to complete the removal of index.php in every URL, re-open the Nextcloud config file:

sudo vim /var/www/html/nextcloud/config/config.php

Add 'htaccess.RewriteBase' => '/nextcloud', (where nextcloud is the location of the installation) below one of the existing configuration options and finally, from /var/www/html/nextcloud, run:

sudo -u www-data php occ maintenance:update:htaccess

From:

To (don’t simply refresh the page, remove index.php from the URL and load the page again, otherwise it looks like it doesn’t work):

4. Max upload

Until we try to upload files this is easy to miss. By default, PHP ships with a file-upload limit reminiscent of file sizes in the early 2000s: 2MB. As we're installing a personal cloud that may hold files gigabytes in size, we can change the PHP configuration to allow far more flexibility.

Open the php.ini file:

sudo vim /etc/php/7.0/apache2/php.ini

Locate and amend:

upload_max_filesize = 2048M
post_max_size = 2058M

The max size can be tweaked to suit; however, be sure to always give post_max_size a bit more than upload_max_filesize to prevent errors when uploading files at the maximum allowed size.

Restart Apache:

sudo service apache2 restart

Conclusion

So following this guide we now have a new virtual server running Nextcloud 9.0.53 on Ubuntu 16.04 supporting both caching and pretty links.

While this is yet another long-winded guide, as usual, there’s nothing here I would consider to be overly complex which, for a platform that empowers self-hosting data, is a big plus over other solutions.

Want to know more about Nextcloud? Visit nextcloud.com or their thriving support community at help.nextcloud.com.

I hope this guide has been helpful, as always I’m @jasonbayton on Twitter, @bayton.org on Facebook and will also respond to comments below if you have any questions.

If you spot any errors in the above or have suggestions on how to improve this guide, feel free to reach out. This tutorial was originally published on bayton.org.

Background

We know that there have been a number of issues with reliability at our London Maidenhead (lon-b) location over the last few years, many due to the C4L network. We apologise for any inconvenience caused.


We also have good news: we plan to migrate lon-b into our own Slough data centre and network. The planned date is the end of August, and we will also upgrade to new and faster hardware, including SSD-only storage, in the process.

Are your servers ready to move to Slough?

Please check your server's IP addresses. You can see them on the control panel:

Also check what IPs have been assigned as 'Primary' and 'Secondary' connections at the bottom of the control panel.

Do any of your server's static IP addresses ('Primary' and 'Secondary') start with one of the following prefixes?

109.104.101.

84.45.109.

84.45.121.

84.45.72.

84.45.8.

Yes

We can't migrate these IP addresses to Slough. Please follow the guide below to change them.

No

All your IP addresses will be migrated to Slough without a problem.

Guide Overview:

We have already given you an extra 'Duplicate IPs for Reconfiguration' billing plan, with free capacity to add the same number of extra IPs to your account. You need to take the following steps to prepare for the migration:

From an Administrator command prompt, type netsh interface ip show config >> Network-details.txt & notepad Network-details.txt and press Enter. This will save the current network settings to a file called Network-details.txt and open it in Notepad:

Within the Network-details.txt file, check whether DHCP is enabled for the Local Area Connection. If it is, we will now convert this IP into a static value so we can assign multiple static addresses.

Convert to static IP

Keeping the notepad file open for reference, launch the Network and Sharing Center by running the following command from the Administrator DOS prompt:

%SystemRoot%\system32\control.exe ncpa.cpl

Open the context menu (right-click) for the network interface (Local Area Connection) and choose Properties.

Choose Internet Protocol Version 4 (TCP/IPv4), then click on Properties. In the dialog box, choose Use the following IP address, enter the following values (from Network-details.txt):

IP Address

Subnet

DNS servers

When you're done, click OK.

To add additional DNS servers, use the Advanced button, choose the DNS tab and click on Add. Your server retains the same IP address information as before, but now this information is static and not managed by DHCP.

Configure a Secondary Public IP Address for Your Windows VM

Within the Network and Sharing Center open the context menu (right-click) for the network interface (Local Area Connection) and choose Properties.
Choose Internet Protocol Version 4 (TCP/IPv4), Properties, then Advanced and click on Add.

In the TCP/IP Address dialog box, enter the new IP(s) from the ElasticHosts control panel. Use 255.255.255.0 as the subnet mask, and then choose Add. On the same page, add a gateway for the new public IP address; it will be the same as the IP, except the last octet will be .1.

Verify the IP address settings; if everything is okay, click OK twice, and then Close.

To confirm the above changes, at the command prompt, run the command: netsh interface ip show config >> Network-details2.txt & notepad Network-details2.txt

Within the Network-details2.txt file, we should now see the new configuration.

Sometimes you might need to right-click and Disable/Enable the Local Area Connection to refresh the network settings. Test whether you can ping your existing and new IP(s) from your workstation.

Linux Servers

Gain access to your server using SSH. The username might be toor and the password might be visible on the server:

Save the current network settings to a file called Network-details.txt via the below command:

/sbin/ifconfig -a >> Network-details.txt

Output the routing, to the same file, via the below command:

route -n >> Network-details.txt

Check you don't already have an IP on eth0:1 by running the below command:

/sbin/ifconfig -a | grep 'eth0:1'

This should return nothing. If it returns something, keep grepping with eth0:2, eth0:3, etc. until you find a free interface, and use that value instead of eth0:1 in the commands below.

We'll now temporarily add the new IP(s) and route(s) to the server via the below commands; replace 5.152.176.71 with the IP you added earlier:

ip addr add 5.152.176.71/24 dev eth0 label eth0:1

route add -net 5.152.176.0/24 gw 5.152.176.1

In the example commands above, the new IP is 5.152.176.71 and the gateway is the same as the IP, except that it always ends with .1. Test that you can ping your existing and new IPs from your local workstation.
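The "same as the IP but ending in .1" rule for a /24 network can be expressed in one line of shell, using the example address from above:

```shell
# Derive the /24 gateway from an address by replacing the last octet with 1
ip="5.152.176.71"
gateway="${ip%.*}.1"   # strip ".71", append ".1"
echo "$gateway"        # prints 5.152.176.1
```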

Confirm the above addition via the below command:

/sbin/ifconfig -a

4. Check your applications on both the old and new IP addresses

Check your application, for example web server, to see if it responds on the new and old IP address. Your application settings might need altering to communicate on the new IP.

5. Update your domains' DNS records

Update your domains' DNS records, using the new IP, and wait until they have propagated globally. Note, complete DNS resolution may take up to 48 hours. Use an online DNS propagation checker to monitor the progress. Search for 'check dns propagation'.

6. Swap completely onto the new IP(s)

Windows servers

If your server was previously using DHCP (visible in the Network-details.txt file as DHCP enabled: Yes), follow the below steps:

Open the context menu (right-click) for the network interface (Local Area Connection), and choose Properties. Select TCP/IPv4 and then Properties, set Obtain an IP address automatically followed by OK and Close.

If you previously had a number of static IP addresses set (visible in the Network-details.txt file), set the server's new primary and secondary IP(s) using the methods already covered. Remember to remove the 109.104.101.x, 84.45.109.x, 84.45.121.x, 84.45.72.x and 84.45.8.x addresses.

Now, shut down your server.

Linux servers

Check if you had any IP information hard-coded in your configuration files with the below commands:
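The commands themselves did not survive in this copy of the post. A typical check greps the usual configuration locations for the old-range prefixes listed earlier; the file and sample contents below are purely illustrative:

```shell
# Hypothetical sketch: search a config file for hard-coded old-range IPs.
# On a real server you would grep /etc/network/interfaces, /etc/hosts and
# your application configs instead of this demo file.
conf_file="/tmp/interfaces-example"
printf 'address 84.45.109.5\ngateway 84.45.109.1\n' > "$conf_file"  # demo data
grep -n '109\.104\.101\.\|84\.45\.' "$conf_file"
```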

Although we announced a very important feature in April, we haven’t written a platform update post in a while. We break the silence to inform you about two new features:

Cloud Hosting Quotes: we now offer downloadable quotes on any configuration built with our price calculator.

Backup Folders to HDD: In the London Maidenhead and Dallas zones, where the Cloud Storage product is available, customers can now create snapshots of their Folders onto HDD storage, giving an easy-to-use and inexpensive backup solution for Linux Containers.

Scroll below for further details!

Cloud hosting quotes

You can now download quotes for any cloud hosting plan created in our price calculator on the Pricing page. Hopefully, you will find the use of this feature straightforward. Nevertheless, here is how it works:

Go to the ElasticHosts price calculator.
Create your plan in the calculator in the appropriate zone with the desired cloud servers (Linux Containers and VMs) and services.

Once you’re satisfied with the plan, click Sign Up Today.

You will be redirected to the selected zone, where you can review the content of your plan.
Click Download as quote in the top-right corner.

Your browser will download the quote in PDF format.

Tip

Alternatively, you can create the plan with the calculator in the Add Plan section of your account, under the Billing tab. But be aware that it works only for the account’s own zone, e.g. the calculator in a Dallas account will give quotes for the Dallas zone only.

Copy Folders to HDD

In April, we introduced an HDD-backed storage type in London Maidenhead and Dallas, which effectively powers the Cloud Storage unit available in these zones. To add another use for this storage type, users in these zones can now create snapshots of their SSD Folders onto HDD storage.

Next time you want to copy any of your Folders, just open your control panel, click on the Copy button, and select the "Copy to HDD" option.

In case you're wondering: yes, it works the other way too (HDD Folder, aka Cloud Storage, to SSD Folder).

Feedback

We are dedicated to providing the best possible user experience to our customers, with our simple, flexible and cost-effective cloud servers based on open source technologies. We hope many of you will enjoy these new features.

The client's name is a bit confusing, but since it effectively adds the folder as a network drive on your desktop, it does act as an SSHFS client. Once you set it up and connect to the cloud storage, you can simply move files to and from it as you would with any other local storage.

Table of Contents

What is Cloud Storage

Cloud Storage is our new storage product built specifically for the needs of Linux system administrators. It works out of the box with common Linux tools such as rsync and can be accessed via SSH, SSHFS, and WebDAV. It's a folder backed by regular HDD, available in the London Maidenhead (lon-b) and Dallas zones.

SSD storage? Another zone?

If you are looking for cloud storage in another zone or backed by SSD, you will get exactly that from our "Folders". These are the default storage for our Linux Containers, but in fact they can be used by themselves as cloud storage. Folders differ in two areas:
1) They are backed by solid state drives (SSD), giving faster I/O at a slightly higher price;
2) they can be mounted on Containers, whereas Cloud Storage folders can't.

Log into ElasticHosts

The first step is to log into your account in the zone where you intend to create your cloud storage. For the sake of this tutorial, we will use one of the zones where Cloud Storage is available (London Maidenhead or Dallas).

If you don't already have an ElasticHosts account, you can create a trial account here (no credit card needed): 5-day free trial.

Creating a Cloud Storage

Now that you're logged in, you can see your control panel. Here, open the "Add" menu and select Cloud Storage:

A small window will pop up where you can name the storage and finalise its creation.

Note: If you want an SSD-backed solution or use a data location that's not Lon-B or Dallas, create a Folder instead of Cloud Storage.

View Connection Details

To connect to the cloud storage, we first need to look up its login credentials. This is very easy to do on the control panel: find the box representing your cloud storage and click on the 👁 symbol.

This will bring up a popup window with every detail you might need - and much more!

See the "Mounting a folder via SSHFS" heading. It tells us that we will need the SSH hostname together with the username and password to connect to the cloud storage via SSHFS. But we need an SSHFS client first, so click on the SFTP Net Drive link.
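SFTP Net Drive covers Windows; on a Linux or macOS machine, the same connection details work with the stock sshfs client instead. A sketch, with the username, hostname and mount point below as placeholders:

```shell
# Create a mount point and mount the cloud storage over SSHFS
# (replace the user and host with the credentials from the control panel):
mkdir -p ~/cloudstorage
sshfs your-username@storage.example.com:/ ~/cloudstorage

# When you're done, unmount it:
fusermount -u ~/cloudstorage    # on macOS: umount ~/cloudstorage
```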

Download SSHFS client

The link brings you to the basic description of the client on the manufacturer's website. To continue, click "Download and Try!" Once on the download page, click on the SFTP Net Drive Free link to download the installer.

Install SSHFS client

Run the installer once it's downloaded.

Then launch the program.

Connect to Cloud Storage

To set up the Cloud Storage as a network drive in SFTP Net Drive, create a new profile.

In the next step, we will add the host name for the Cloud Storage. Copy the SSH host name from the Connection details we opened before on the ElasticHosts control panel.

In the next step, copy the username and password from the same place.

You have all the credentials needed. Press "Connect".

If you connected to the cloud storage successfully, you should see the following panel. Press "Accept", and then you should see the mounted network drive in the File Explorer.

Using the Cloud Storage

The mounted Cloud Storage acts like any other storage drive. You have full control over the storage: you can create, delete and modify every file and subfolder. To see for yourself, open it in File Explorer.

Let's create an empty text file called Example.txt in the Documents folder on your hard drive. Now try to copy it to the network drive via the usual drag and drop.

Yup, it works like a charm.

Congratulations!

You have your network drive up and ready.

Are you looking for other ways to use Cloud Storage? Check out the list of tutorials!

Whether you're running an e-commerce business, an online app, or even a basic website, it's pretty likely you're running some form of database software. In this post, we'll discuss how to get up and running with Oracle Database on our container stack using CentOS! For the purpose of this tutorial, we'll focus on Oracle Database Express Edition 11gR2 (Oracle Database XE), the free, entry-level edition of Oracle Database.

Tutorial

Install dependencies

Alter the pre-install script

Install Database

1. Install dependencies

First, we should install the RPM's dependencies, along with "@Development Tools", which will help us later on (note: this is similar to running apt-get install build-essential on deb-based OSs):
yum install -y "@Development Tools" bc libaio rpm-build rpmrebuild

Now, if you were simply to download and try to install the RPM, the following output would be quite likely:

This system does not meet the minimum requirements for swap space. Based on
the amount of physical memory available on the system, Oracle Database 11g
Express Edition requires 2048 MB of swap space. This system has 0 MB
of swap space. Configure more swap space on the system and retry the
installation.

2. Alter the pre-installation script

So to get it installed correctly, we will need to alter the pre-installation script, which checks for swap space. The reason we do this is that our Containers are simply operating systems running within a namespace for isolation, similar to Docker or other container technologies. Therefore swap space is not controlled from within the operating system but is instead optimized for the entire stack.

Now that we know we want to alter the RPM, we will use the rpmrebuild command we installed above.

To begin with, run the following command, which will open the pre-installation script in your default text editor:

rpmrebuild --edit-pre -p oracle-xe-11.2.0-1.0.x86_64.edited.rpm

Find the line stating:

# check and disallow install, if swap space is less than Min( 2047, 2 * RAM)

Commenting out this if block stops the pre-install script from running its checks and allows the installation to continue.
Do the same for the rest of the script: the process of loading kernel modules will fail, as your container cannot load kernel modules.
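For orientation, the edit amounts to prefixing every line of the guarded block with a `#`. The snippet below is purely illustrative; the variable names are hypothetical and not the actual contents of Oracle's script:

```shell
# Hypothetical shape of the swap check after editing -- each line of the
# original if block is commented out so it never runs:
#if [ "$swap_mb" -lt "$min_swap_mb" ]; then
#    echo "This system does not meet the minimum requirements for swap space."
#    exit 1
#fi
```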

Exiting out of the text editor will rebuild the rpm with the changes made to the pre-installation script.

3. Install Database

After the RPM is rebuilt, you will be able to install it using the usual RPM command (rpm -ivh oracle..x86_64.rpm). After the RPM is installed, you will still have to mount a temporary filesystem to accommodate the database requirements.

Use your favourite text editor to open /etc/fstab. There you can append the following line:
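The exact entry depends on your setup; a typical line for this purpose (an assumption, not a value prescribed by Oracle; size it to your own requirements) mounts a tmpfs at /dev/shm matching the 2048 MB figure from the installer's error message:

```shell
# /etc/fstab entry (assumed example): give Oracle a 2 GB shared-memory filesystem
tmpfs  /dev/shm  tmpfs  defaults,size=2048m  0  0
```

After saving, you can apply the entry without a reboot using `mount -o remount /dev/shm`, then verify the size with `df -h /dev/shm`.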

https://www.elastichosts.com/blog/elastichosts-celebrates-8th-birthday/4c3f720a-63ca-4297-9ec2-a05cdf40452d (Fri, 15 Apr 2016 10:22:13 GMT)

To celebrate ElasticHosts turning 8 years old this Spring, the UK team went to ClueQuest yesterday to pit their wits against Professor BlackSheep and compete in their live escape game.

I want to say well done to everyone who took part! I hope you all enjoyed it as much as I did (even though my team didn't actually win). Thank you for not forcing me into a team on my own as it would certainly have been impossible - even Richard doesn't have arms long enough for that. Congrats to Richard, Will, Adam and John on escaping with most time left. Congrats to Jenny and Yousof for being excellent teammates, and congrats to the Pauls and Nuno for not getting us banned for destroying the furniture....

After all that excitement we celebrated with a delicious Turkish meal. A good time was had by all.

I also want to say how proud I am to have been working at ElasticHosts for the last 6 years, originally helping Richard and Chris, then building up a solid team, to solve the many challenges, puzzles and mysteries that come part and parcel with running an innovative cloud hosting business. All those adventures prepared us well for foiling Professor Blacksheep's plan for taking over the world...

https://www.elastichosts.com/blog/from-shared-hosting-to-cloud-vps/f3f7ea99-3b13-43f6-82f2-34f8b9e8c3f0 (Tue, 12 Apr 2016 15:00:15 GMT)

Hosting services come in every shape and form, and pricing is not the only difference between them: the most limited shared hosting services can serve only small websites, whilst the most elaborate cloud infrastructures are capable of hosting megacompanies' entire IT operations. Some hosting providers limit OS options or pre-installs, while others (including ElasticHosts) allow customers to use their own OS images.

The different solutions can be split into two main categories: traditional and cloud solutions. Why is the distinction so important?

To cut through the clutter, we created an infographic listing the most common categories of hosting services, from shared hosting to public cloud hosting.

Traditional Hosting

What defines this group of services is that they offer actual physical servers: in groups, one by one, or in tiny "slices". In the context of traditional hosting, a dedicated server means that you really have a physical machine (or more) to yourself. It's a simple concept, since it's so close to the hardware level. For small users who are happy with the limitations, cheap shared hosting can be a viable choice for its simplicity and price, but at larger scale, traditional hosting becomes too restrictive and expensive compared to the cloud.

Shared hosting (web hosting)

If you have ever hosted a website on a custom domain, you have probably used a shared hosting provider along the way. These services run on server machines installed with a LAMP stack, providing the environment to host small websites on the Internet.

What makes it shared is the fact that the same web server hosts many websites at the same time from the same capacity pool. It's a cheap form of hosting, but the way websites share the server machine creates many drawbacks: a busy website with huge traffic might eat up the resources of the entire server and slow down every other website hosted on the machine; running many websites on the same server is also a major security risk.

Shared hosting is still an extremely popular choice for private users and small businesses due to its price and ease of use.

Virtual Private Server (VPS)

Virtual private servers are the next step after shared hosting: they still put many users on the same physical machine, but at least each of them rents a dedicated portion of the capacity. Virtual private servers run their own OS and offer root access for customers. The added level of separation buffers the drawbacks of shared hosting: users have more room for customisation, get a guaranteed share of computing capacity, and enjoy relatively better security.

Dedicated hosting

Dedicated hosting is the most powerful and flexible category of traditional hosting. This service allows a user to rent entire server machines, dedicated to them. Since users have whole machines to themselves, dedicated hosting is clearly comparable to having on-premise servers, with the twist of renting the infrastructure rather than owning and maintaining it. When you take the cloud into consideration, you can see that dedicated hosting is nowhere near as scalable, and since you need to rent whole server machines even if you would use only a fraction of their capacity, it's also more expensive.

For larger websites and web applications, renting a whole network of servers can improve performance and reliability.

Cloud Hosting

Cloud hosting is based on the abstraction of physical servers into a big pool of computing capacity, which can then be freely distributed to individual virtual servers. The unique feature of cloud infrastructure is its unparalleled scalability: while multiplying the capacity of a traditional hosting service can be done only by the hosting company and could take days or weeks, a cloud server's capacity can be changed by the customer at will, anytime, in a couple of seconds.

Advantages:

Flexibility: Users can scale their cloud servers and deploy new ones through a web interface or API at any time.

High availability: Cloud infrastructures have no single points of failure (SPOFs) and can therefore guarantee close to 100% availability. Cloud providers usually include their guaranteed availability in a Service Level Agreement (SLA).

Distributed infrastructure: Many cloud providers have data centres around the world, which users can leverage to reduce latency to all regions of the world.

Cost-effectiveness: The on-demand infrastructure brings convenience and cost savings to businesses, as they don't need a big IT department to purchase and maintain server machines. On top of that, the ease of scaling allows users to rent only the capacity they momentarily need and avoid overspending.

Public cloud

Public clouds are run by cloud infrastructure companies (such as ElasticHosts) and populated by customers. It's the truest form of cloud there is: it allows users to fire up dozens of servers within minutes after registration or multiply their server capacity in a heartbeat. They get to choose which OS to run on the servers, the application environment and - depending on the cloud provider - even the location of the physical servers which host the user's virtual servers.

No investment or owned hardware is necessary, and it can be self-managed from any computer via a web interface or API. All you need to do is to pay the bills based on the resources you rent.

Cloud Servers: The smallest unit you can buy in the cloud is a cloud server. This resembles a traditional VPS - you have dedicated computing capacity and storage you can harness relatively freely - but it provides much higher flexibility due to the virtualised environment of the cloud. You can run your own OS and set up a custom application environment, and you are free to add and remove storage or change the server specification with a few clicks.

Private cloud

The principles of virtualisation and cloud computing can be used on dedicated physical machines to create a private cloud. Virtualisation brings scalability and high-availability to the party while the cloud platform managing the infrastructure adds self-service, automated management and usage-billing capabilities to the mix. If the private cloud is hosted on-premise, the hardware costs and limitations work just as any other on-premise infrastructure, but using a private cloud hosting service can turn even this aspect of the infrastructure into a service.

Using dedicated physical servers for the cloud offers security benefits and strong performance guarantees, at the cost of the potentially limitless scalability and flexibility of the public cloud.

Hybrid cloud

Hybrid cloud is not a de facto service but the name for using both public and private clouds in a connected network to operate an IT infrastructure. Companies can use hybrid clouds to keep the normal workload on-premise in a private cloud while maintaining the option to leverage the public cloud for extreme workload or backup situations.

Hybrid clouds offer the best of both worlds without really sacrificing anything, but making the two clouds work together reliably is a complicated technical challenge. Private cloud workloads must access and interact with public cloud providers, so a hybrid cloud requires API compatibility and solid network connectivity.

Alternative cloud hosting categories:

Bare Metal Cloud vs Virtualised Cloud

Cloud hosting is by default virtualised cloud. It means that, although it's built on physical machines, the customers are hosted on virtual machines. The virtualisation requires an extra software layer, a so-called hypervisor, which oversees the virtual machines and provisions the computing and storage capacity. This enables hosting multiple tenants on the same physical machine but reduces performance.

Bare-metal cloud is an alternative to virtualised cloud services: a dedicated server environment that eliminates the overhead of virtualisation while keeping the scalability and efficiency of the cloud, since a cloud platform still lets you provision additional instances through a web interface or API. However, you can't set the server size yourself, since it's determined by the physical machines' hardware. Also, starting or rebooting a machine is much slower in a bare-metal cloud.

Managed Cloud Services vs. Self-Managed Cloud Services

By default, cloud providers don't involve themselves in how customers are using their servers; they give the infrastructure and the tools to use it, and users do the rest for themselves - hence the name: self-managed cloud. If a hosting provider doesn't state otherwise, it offers self-managed servers.

Certain cloud providers offer managed cloud services for an additional monthly fee. This assistance usually covers the initial steps of using a cloud platform (e.g. planning and building the cloud architecture) as well as the ongoing management of the server (e.g. configuring and updating the OS and basic applications).

Virtual machines vs. containers

With virtual machines, such as the ones ElasticHosts also offers, each customer cloud server runs its own entire operating system inside a simulated hardware environment provided by the hypervisor running on the physical hardware. This is the traditional approach to providing cloud servers but suffers from the drawback that individual virtualised customer servers must be rebooted to be resized.

Containerization allows multiple Linux cloud servers to run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers.

Performance is higher than virtualisation since there is no hypervisor overhead, and you are closer to the bare metal. Linux Container users still have full root access to install and configure their software on their cloud server. Container capacity auto-scales dynamically with server load, so customers are only billed for the CPU, memory, SSD storage and bandwidth that they actually use at each point in time.
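The kernel primitives behind containers are visible on any modern Linux machine. A quick, illustrative way to inspect them, with no ElasticHosts-specific tooling assumed:

```shell
# Every Linux process runs inside a set of namespaces; each entry here
# identifies one kind of isolation (pid, net, mnt, uts, ipc, user, ...):
ls -l /proc/self/ns

# The same process is also tracked by cgroups, which meter and limit its
# CPU, memory and I/O usage -- the basis of usage-based container billing:
cat /proc/self/cgroup
```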

Do you want to learn more about what the cloud can offer?

Case Studies

Learn about the benefits of using ElasticHosts cloud servers from our customers. Check out our case studies!

Start your 5-day trial

If you don't already have an ElasticHosts account, you can create one here: