Install Apache Nutch (Web Crawler) on Ubuntu Server


Apache Nutch is a production-ready web crawler. Nutch can be extended with Apache Tika, Apache Solr, Elasticsearch, SolrCloud, and so on. Here is how to install Apache Nutch on Ubuntu Server. Nutch relies on Apache Hadoop data structures. Apache Lucene plays an important role in helping Nutch index and search, while Apache Tika handles parsing and Apache Solr or Elasticsearch provide search functionality. There are Python and Java projects that do similar work. The main objective of Nutch is to scrape unstructured data from resources like RSS, HTML, CSV, and PDF, and give it structure.

Apache Nutch cannot be covered in a single tutorial. In this tutorial, we will only show how to install Apache Nutch on Ubuntu Server and do basic configuration. We will not configure it with other software, such as Apache Lucene or MongoDB.

Nutch 2.x and Nutch 1.x are quite different in setup, functioning, and architecture. Nutch 2.x uses Apache Gora to manage NoSQL persistence over various database stores. Nutch 1.x has more features and many more bug fixes. For advanced needs, consider Nutch 1.x; for flexibility of database stores, use Nutch 2.x. We will install Nutch 1.x in this guide.

Install Apache Nutch on Ubuntu Server

Let us update and upgrade as the root user :

apt update && apt upgrade -y

Next, we have to install the Java runtime (JRE) :


apt install default-jre

Java runtime (JRE) and development environment (JDK)

After the JRE, we will install the Java development kit (JDK) :


apt install default-jdk

After installing both the JRE and JDK, run the following command to check whether both have been installed correctly :

java -version

Now set JAVA_HOME in the current shell with the following command :

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

Check it with the command below :

echo $JAVA_HOME
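To keep JAVA_HOME set in new shells as well, you can append the same export line to ~/.bashrc. This is a sketch assuming the default-jdk layout installed above :

```shell
# Append the JAVA_HOME export to ~/.bashrc so new shells pick it up.
# The readlink expression assumes the default-jdk layout installed above.
echo 'export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")' >> ~/.bashrc
```

Open a new shell (or run `source ~/.bashrc`) for the change to take effect.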

Download the binary distribution of Apache Nutch from here :


http://www.apache.org/dyn/closer.cgi/nutch/

It was version 1.14 at the time of publishing this guide; Nutch 2.x is a different series. You should use your closest mirror; we show wget only as an example to make the version clear.

core-default.xml, hdfs-default.xml, mapred-default.xml : used for Hadoop configuration.
mapred-default.xml : used to configure MapReduce.
hdfs-default.xml : used to implement the Hadoop Distributed File System in Nutch.

To provide a basic example, we will only minimally configure the nutch-site.xml file. Open it :

nano conf/nutch-site.xml

Edit it so that it contains a stanza like this (the http.agent.name property is required) :

<configuration>
<property>
<name>http.agent.name</name>
<value>nutch-1.14-crawler</value>
</property>

...

We need to create a directory that will hold text files with the list of URLs to crawl :

mkdir -p urls
touch urls/seed.txt

Add some URLs to that seed.txt file, like :

http://nutch.apache.org/

https://wiki.apache.org/nutch/FrontPage

http://events.linuxfoundation.org/events/apachecon-europe
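Equivalently, the seed file can be written from the shell in one step (same example URLs as above) :

```shell
# Create the urls directory and write the example seed list in one go
mkdir -p urls
cat > urls/seed.txt <<'EOF'
http://nutch.apache.org/
https://wiki.apache.org/nutch/FrontPage
http://events.linuxfoundation.org/events/apachecon-europe
EOF
```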

Save the file. Now inject those URLs into the crawl database with the following command :

bin/nutch inject crawl/crawldb urls

This is like a manual crawl. Run the following to generate a list of pages to fetch :

bin/nutch generate crawl/crawldb crawl/segments

You’ll see a segment directory under crawl/segments whose name is a timestamp, like this :

crawl/segments/201806271566721

Create a shell variable with the segment path (use your actual timestamp) :

s1=crawl/segments/201806271566721
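One step is worth noting here: the official Nutch tutorial fetches the generated segment before parsing it, otherwise there is no content to parse. With the $s1 variable set as above :

```shell
# Fetch the pages of the generated segment (run this before parse)
bin/nutch fetch $s1
```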

Then parse :

bin/nutch parse $s1

Update the crawl database with the results :

bin/nutch updatedb crawl/crawldb $s1

At this point you need some other software, such as Apache Solr, to index and search the crawled data. After successful completion of the crawling process, on a desktop computer you can run the luke-all jar tool (Luke is the Lucene Index Toolbox) and open the crawl/index directory to view the crawled pages. The official website of Apache Nutch has a good tutorial :
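Nutch also ships a bin/crawl wrapper script that chains inject, generate, fetch, parse, and updatedb for a given number of rounds. The exact flags differ between releases, so treat this as a sketch and run bin/crawl without arguments to see the usage for your version :

```shell
# Sketch: run the full crawl cycle for 2 rounds using the seed list in
# urls/ and the crawl/ directory. Flag names vary between Nutch releases.
bin/crawl -s urls crawl 2
```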