Running an Ark Node does not require specialized hardware the way Bitcoin mining does, so you have many hosting options to choose from. AWS, Linode, DigitalOcean, Vultr, Microsoft Azure, and OVH are just a few recommended choices.

Delegate Nodes have higher minimum hardware requirements. These nodes secure our network,
so their uptime is of the utmost importance in keeping the network running smoothly.

The recommended specifications are what we would consider the minimum specs
for Delegate Nodes. Smaller nodes are fine for relays or development purposes. We recommend Ubuntu 16.04; however, you are free to use any Linux distribution you're comfortable with. Note that these guides use Debian-flavored Linux variants.

The setup process for creating a new virtual server differs with each provider. If you
choose one of the listed providers, the quick links below will help you get started.

After creating a server, we need to connect to it. Your provider should have given you an
IP address, username, and password to connect to your new server.

This information can usually be found somewhere in your provider's dashboard for your
new server.

Depending on your operating system, you will connect to your server in different ways.
Windows users will want to use something like PuTTY or the newer Windows Subsystem for Linux (WSL). When using WSL, the Linux part of this guide applies.

Open up a new terminal window and type in the following to connect to your new
server via SSH.

ssh user@ipaddress

When first connecting to your new server, you will be asked to cache the server's
host key and validate its fingerprint; type yes. If this message appears after you have already configured your server, take precautions: it might have been compromised.

The authenticity of host '{SERVER_IP}' can't be established.
ECDSA key fingerprint is SHA256:kgjgjfihut985ht984754643354+hrQ.
Are you sure you want to continue connecting (yes/no)?
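To be safe, compare the fingerprint your client shows against one obtained out-of-band, for example by running `ssh-keygen -lf` on the server itself through your provider's web console. As a small sketch of the fingerprint format, the commands below generate a throwaway key purely for demonstration; the `/tmp/demo_key` path is hypothetical:

```shell
# Generate a throwaway ECDSA key pair just to demonstrate the
# fingerprint command (no passphrase, quiet mode).
ssh-keygen -t ecdsa -f /tmp/demo_key -N '' -q

# Print its SHA256 fingerprint -- the same format your SSH client
# displays on first connect.
ssh-keygen -lf /tmp/demo_key.pub

# On your actual server (via the provider's console), you would run:
#   ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub
```

If the fingerprint printed on the server matches the one your client shows, it is safe to type yes.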

When prompted, use the password given to you by your cloud provider. Some providers
will require you to set up a root password when creating the VM, while others may
give you a temporary password.

Executing this guide as the root user is not advised. Instead, create a new, dedicated user to manage Ark-related software. On your server, type the following into the command line and press enter, where username is the name you want to log in with:

adduser username

You will be prompted to enter the user's full name and some other information.
Feel free to leave them all blank, as they are optional. When prompted, type Y and press enter.

Adding user 'ark'...
Adding new group 'ark'(1000)...
Adding new user 'ark'(1000) with group 'ark'...
Creating home directory '/home/ark'...
Copying files from '/etc/skel'...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for ark
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

Next, we need to make sure that our user can do all the things it needs to do. Type
the command below into your command line and press enter, where username is the
name of the new account you created. This adds the user to the sudo group, giving it sudo privileges.

usermod -aG sudo username

Now your user is allowed to run programs with the security privileges of another user;
by default, that other user is the superuser, root.

Select option A. Manage Ark Core, then I. Install Ark Core to install the required dependencies for Ark Core. Again, don't interrupt this process as it
will take a few minutes to install the necessary packages. Afterward, you will be prompted to select a network.

=================================================================> Which network would you like to configure?
1) mainnet
2) devnet
3) testnet
#?

If you are tinkering with Ark for the first time, select devnet and request DARK coins in our public Slack.

Now you should be prompted for database connection parameters. If you did not create a database, core-commander will attempt to create a new one using the provided parameters. Info is the preferred default log level.

...
Enter the database host, or press ENTER for the default [localhost]:
Enter the database port, or press ENTER for the default [5432]:
Enter the database username, or press ENTER for the default [$USER]:
Enter the database name, or press ENTER for the default [ark_mainnet]:
...
==> Which log level would you like to configure?
1) debug
2) info
3) warning
4) error

Afterward lerna will tidy unused dependencies. If you receive the following prompt, confirm to start the node.

Ark Core has been configured, would you like to start the relay? [Y/n]:

If you executed all steps correctly, you are returned to the main console, where Relay displays the status On.

Great! You have a working node, but now you should think about securing it.
This is especially important if you plan on using it as your Delegate Node.

In our next section, we'll discuss making sure your Ark node is as secure as possible.
As the Ark network grows, hacking attempts on delegate and relay nodes will become
more prevalent. Defending against DDOS and other various attacks is extremely
important in securing the network.

A more automated way to run an Ark Node is by using a Docker container to manage each service. Currently, the Ark team does not provide production images. However, Ark Core has Dockerfiles ready, and the community also offers public images. Due to security concerns, we recommend you only use the official images or your own for production usage.

WARNING

Only run container images that you have verified yourself. A malicious actor could have added a passphrase logger to a self-made image in an attempt to compromise your wallet.

FROM node:9
# you usually would use a separate docker container for the database and Redis server.
# this image is however intended for maximum hacking purposes, so we just put everything in it.
RUN apt update
RUN apt install postgresql postgresql-contrib -y
# Redis is used for the transaction pool. We install it from apt for simplicity,
# though building from source is the upstream-recommended way to obtain Redis.
RUN apt install redis-server -y
# Lerna grabs our dependencies for us. (it seems this one randomly fails sometimes when building the image)
RUN npm install --global lerna --loglevel verbose
RUN git clone -b master https://github.com/ArkEcosystem/core.git
RUN (cd core && lerna bootstrap)
# public API, this one is for developers
EXPOSE 4003
# webhook port
EXPOSE 4004
# JSON-RPC
EXPOSE 8080
# public GraphQL API, including GraphiQL explorer
EXPOSE 4005
# internal API, for nodes to communicate
EXPOSE 4000
# PostgreSQL port, if you want to query the DB directly
EXPOSE 5432
COPY entrypoint.sh /
RUN echo "listen_addresses = '*'" >> /etc/postgresql/9.4/main/postgresql.conf
RUN echo "host all all 0.0.0.0/0 trust" >> /etc/postgresql/9.4/main/pg_hba.conf
RUN mkdir .ark
# this will start an entire test network, including genesis block. To find the secrets, check out:
# https://github.com/ArkEcosystem/core/blob/develop/packages/core/lib/config/testnet/delegates.json
ENTRYPOINT ./entrypoint.sh

You can build this Dockerfile using the command docker build -t mycontainer:tag ., and push it to your personal repository by calling docker push mycontainer:tag. The entrypoint.sh script is called when you activate the container using docker run mycontainer:tag.

You pass your .env file to the container by providing the --env-file flag:

docker run --env-file ./.env mycontainer:tag

This configuration is not optimal for production usage. The image itself becomes quite heavy, and you should never combine multiple services inside a single Docker container. However, the resulting container is very convenient for testing purposes.