How to Install Hadoop on a Linux Virtual Machine on Windows 10

Hadoop Distributed File System Overview

This step-by-step tutorial will walk you through how to install Hadoop on a Linux Virtual Machine on Windows 10. Even though you can install Hadoop directly on Windows, I am opting to install it on Linux because Hadoop was created on Linux and runs natively there.

Prerequisites

Apache recommends that a test cluster meet certain minimum hardware specifications; keep these in mind when allocating memory and disk space for the Virtual Machine below. You will also need Oracle VM VirtualBox installed on your Windows 10 host.

Step 1: Setting up the Virtual Machine

Download the ISO image for your Linux distribution. There is no need to install it yet; just have it downloaded. Take note of the download location of the ISO file, as you will need it in a later step for installation.

This tutorial uses Ubuntu 18.04.2 LTS. You may choose another Linux platform such as Red Hat; however, the commands and screenshots used in this tutorial are specific to Ubuntu.

In the Oracle VM VirtualBox Manager, select ‘New’ to create a Virtual Machine. Choose a Name and Location for it, then set the Type to ‘Linux’ and the Version to ‘Ubuntu (64-bit)’. Select ‘Next’ to go to the next dialogue.

Select the appropriate memory size for your Virtual Machine. Be mindful of the minimum specs outlined in the Prerequisites section of this article. Click ‘Next’ to go on to the next dialogue.

Choose the default, which is ‘Create a virtual hard disk now’. Click the ‘Create’ button.

Choose the VDI hard disk file type and click ‘Next’.

Choose ‘Dynamically allocated’ and select ‘Next’.

Choose the hard drive space reserved for the Virtual Machine and hit ‘Create’.

At this point, your VM should be created! Now go back to the Oracle VM VirtualBox Manager and start the Virtual Machine by right-clicking your new instance and choosing Start > Normal Start.

After selecting Start, you will be prompted to choose a start-up disk. Navigate to where you saved your Ubuntu ISO file and select it.

At this point, you will be taken to an Ubuntu installation screen. The process is straightforward and should be self-explanatory. The installation process will only take a few minutes. We’re getting close to starting up our Hadoop instance!

Step 2: Setup Hadoop

Prerequisite Installations

First, it’s necessary to install some prerequisite software. Once logged into your Linux VM, run the following commands in a terminal window to install each package.

Java: Terminal Command:

$ sudo apt install openjdk-11-jre-headless
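
To verify the install, you can print the Java version:

$ java -version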

ssh: Terminal Command:

$ sudo apt-get install ssh
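
If you later move beyond standalone mode (for example, to run start-dfs.sh in pseudo-distributed mode), Hadoop will need to ssh to localhost without a passphrase. A typical key setup, as sketched in the Apache Hadoop docs, looks like this:

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys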

pdsh: Terminal Command:

$ sudo apt-get install pdsh
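
Note: on Ubuntu, pdsh defaults to rsh as its remote command type, which commonly causes ‘Permission denied’ errors when Hadoop’s start-up scripts run. A common workaround (assuming bash as your shell) is to point pdsh at ssh instead:

$ echo 'export PDSH_RCMD_TYPE=ssh' >> ~/.bashrc
$ source ~/.bashrc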

Download and Unpack Hadoop

Now let’s download and unpack Hadoop.

To download Hadoop, enter the following command in the terminal window:
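
One way to do this is with wget against the Apache release archive (mirror URLs vary; this assumes the 3.2.0 release used throughout this tutorial):

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz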

To unpack Hadoop, enter the following commands in the terminal window:

$ tar -xvf hadoop-3.2.0.tar.gz
$ mv hadoop-3.2.0 hadoop
$ sudo mv hadoop/ /usr/share/
$ export HADOOP_HOME=/usr/share/hadoop
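
Note that export only sets HADOOP_HOME for the current shell session. To make it persist across logins (assuming bash as your shell), you can append it to your ~/.bashrc:

$ echo 'export HADOOP_HOME=/usr/share/hadoop' >> ~/.bashrc
$ source ~/.bashrc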

Setting the JAVA_HOME Environment Variable

Open the ‘$HADOOP_HOME/etc/hadoop/hadoop-env.sh’ file in a text editor. Find the commented-out ‘export JAVA_HOME’ statement and replace it with the following line:

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
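
If you are not sure where your JDK is installed, you can resolve the real path of the java binary; JAVA_HOME is the directory above its ‘bin’ directory:

$ readlink -f $(which java)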


Standalone Operation

The first mode we will look at is Local (Standalone) Mode. In this mode, Hadoop runs as a single Java process in non-distributed fashion on your local instance; no Hadoop daemons or services are started.

Navigate to your Hadoop Directory by entering the following command in the terminal window:

$ cd /usr/share/hadoop

Next, run the following command:

$ bin/hadoop

Run with no arguments, the hadoop script prints its usage documentation, which confirms that Hadoop is installed correctly.

Next, we will try running a simple pi estimator program, which is included in the Hadoop release. Try running the following command in the terminal window:
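
The examples jar ships with the release; the jar name below matches the 3.2.0 version downloaded earlier, and the two trailing arguments are the number of map tasks and the number of samples per map:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar pi 16 1000

When the job completes, it prints an estimated value of Pi to the terminal.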


Questions?

Thanks for reading! We hope you found this blog post to be useful. Do let us know if you have any questions or topic ideas related to BI, analytics, the cloud, machine learning, SQL Server, (Star Wars), or anything else of the like that you’d like us to write about. Simply leave us a comment below, and we’ll see what we can do!



2 Comments

Thomas
on May 27, 2019 at 6:24 am

I have followed this tutorial, and other similar tutorials, but I can never get the namenode to initialize, and, subsequently, I can’t connect to http://localhost:9870. Also, even though I use those commands for passphraseless, I still get Permission denied upon running start-dfs.sh. What could be my issue?

Thomas, unfortunately that happens to me often as well. Usually, I will have to stop and restart the instance a few times and it comes back up. As a last resort, deleting the instance and reinstalling will also work.