-v /home/raf/Documents/docker-cloudera:/home/raf/Documents/docker-cloudera: this links a folder on my desktop with a folder inside the container. The first path is the one on my desktop; the second is the one inside the container.

-p maps the ports: the first port is mine, the second one is the container's.

You can see Hue at localhost:8888

Run MapReduce wordcount

Go to the folder you want to work in; for me: cd /home/raf/Documents/docker-cloudera.

Create wordcount_mapper.py

Understanding the algorithm:

You just have to emit each word in the form <word, 1>.

Create code:

vim wordcount_mapper.py

And paste this:

```python
#!/usr/bin/env python
# the above line just indicates to use Python to interpret this file
# ---------------------------------------------------------------
# This mapper code will input a line of text and output <word, 1>
# ---------------------------------------------------------------

import sys  # a Python module with system functions for this OS

# ------------------------------------------------------------
# this 'for loop' will set 'line' to an input line from the
# system standard input file
# ------------------------------------------------------------
for line in sys.stdin:
    # -----------------------------------
    # sys.stdin calls 'sys' to read a line from standard input;
    # note that 'line' is a string object, i.e. a variable, and
    # it has methods that you can apply to it, as in the next line
    # -----------------------------------
    line = line.strip()  # strip is a method, i.e. function, associated
                         # with a string variable; it will strip
                         # the carriage return (by default)
    keys = line.split()  # split line at blanks (by default),
                         # and return a list of keys
    for key in keys:     # a for loop through the list of keys
        value = 1
        print('{0}\t{1}'.format(key, value))  # {0} and {1} are replaced by the
                                              # 0th and 1st items in the format list
        # also, note that the Hadoop default is a 'tab' separating key from value
```

Close vim, saving the changes, with :wq
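Before moving on, you can sanity-check the mapper logic locally, without Hadoop. The snippet below runs the same loop as wordcount_mapper.py over one in-memory sample line (the sample text is just an illustration):

```python
import io

# Simulate standard input with one sample line of text.
sample = io.StringIO("A long time ago in a galaxy far far away\n")

pairs = []
for line in sample:
    line = line.strip()
    for key in line.split():
        # same output format as the mapper: tab-separated <word, 1>
        pairs.append('{0}\t{1}'.format(key, 1))

print(pairs)
```

Equivalently, from the shell: echo "A long time ago" | python wordcount_mapper.py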

Create wordcount_reducer.py

Understanding the algorithm:

First of all, you should notice that your input will be the combined, sorted output of all the mappers, something like this:

```
A	1
a	1
ago	1
Another	1
away	1
far	1
far	1
episode	1
galaxy	1
in	1
long	1
of	1
Star	1
time	1
Wars	1
```

Therefore you do not have to worry about collecting the files from the map phase, combining them, and sorting them; Hadoop's shuffle and sort do that for you. Since repeated words arrive on adjacent lines, you can count the repetitions by comparing each key with the previous one.
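The reducer script itself is not shown above, so here is a minimal sketch that follows exactly that algorithm (compare each key with the previous one and accumulate); the function name and structure are my own illustration, not the course's exact code:

```python
#!/usr/bin/env python
# A minimal sketch of wordcount_reducer.py: input lines are sorted
# <word>\t<count> pairs, so all occurrences of a word are adjacent and
# we only need to compare each key with the previous one.
import sys

def reduce_pairs(lines):
    results = []
    last_key = None
    running_total = 0
    for line in lines:
        this_key, value = line.strip().split('\t', 1)
        if this_key == last_key:
            running_total += int(value)   # same word as before: keep counting
        else:
            if last_key is not None:      # finished a word: emit its total
                results.append((last_key, running_total))
            last_key = this_key           # new word: start a fresh count
            running_total = int(value)
    if last_key is not None:              # don't forget the final word
        results.append((last_key, running_total))
    return results

if __name__ == '__main__':
    for key, total in reduce_pairs(sys.stdin):
        print('{0}\t{1}'.format(key, total))
```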

Changing Streaming options:

Let’s change the number of reduce tasks to see its effects. Setting it to 0 will execute no reducer and only produce the map output. (Note the output directory is changed in the snippet below because Hadoop doesn’t like to overwrite output)

```
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -input /user/cloudera/input \
    -output /user/cloudera/output_new_0 \
    -mapper /home/raf/Documents/docker-cloudera/wordcount_mapper.py \
    -reducer /home/raf/Documents/docker-cloudera/wordcount_reducer.py \
    -numReduceTasks 0
```

To see the results:

```
hdfs dfs -cat /user/cloudera/output_new_0/part-00000
```

Notice the differences between the output when the reducers run (Step 9) and the output in this step, where only the mapper runs. The point of the task is to see what the intermediate results look like: a successful run will show words with counts that are not accumulated (accumulation is what the reducer performs). Hopefully this gives you a sense of how data and tasks are split up in the MapReduce framework, and we will build on that in the next lesson.
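To make that comparison concrete, here is a local sketch (plain Python, no Hadoop; the sample text is illustrative) of map-only output versus reduced output:

```python
text = "far far away far"

# -numReduceTasks 0: the job stops after the map phase, so the output
# file contains one raw <word, 1> pair per word, nothing accumulated.
map_only = ['{0}\t1'.format(word) for word in text.strip().split()]

# With reducers, Hadoop sorts the pairs and the reducer accumulates
# them into one total per word.
counts = {}
for pair in sorted(map_only):
    word, value = pair.split('\t')
    counts[word] = counts.get(word, 0) + int(value)
reduced = ['{0}\t{1}'.format(word, counts[word]) for word in sorted(counts)]

print(map_only)
print(reduced)
```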