Blog

Starting with version 4.2, Lucene provides a document classification function. In this article, we will use the same corpus to perform document classification with both Lucene and Mahout and compare the results.

Lucene implements Naive Bayes and k-NN rule classifiers. The trunk, which corresponds to Lucene 5, the next major release, additionally implements a boolean (2-class) perceptron classifier. We use Lucene 4.6.1, the most recent version at the time of writing, to perform document classification with Naive Bayes and the k-NN rule.

For comparison, we will use Mahout to perform document classification with Naive Bayes and Random Forest as well.

Overview of Lucene Document Classification

Lucene’s classifier for document classification is defined as the Classifier interface.

public interface Classifier<T> {

  /**
   * Assign a class (with score) to the given text String
   * @param text a String containing text to be classified
   * @return a {@link ClassificationResult} holding assigned class of type <code>T</code> and score
   * @throws IOException If there is a low-level I/O error.
   */
  public ClassificationResult<T> assignClass(String text) throws IOException;

  /**
   * Train the classifier using the underlying Lucene index
   * @param atomicReader the reader to use to access the Lucene index
   * @param textFieldName the name of the field used to compare documents
   * @param classFieldName the name of the field containing the class assigned to documents
   * @param analyzer the analyzer used to tokenize / filter the unseen text
   * @param query the query to filter which documents use for training
   * @throws IOException If there is a low-level I/O error.
   */
  public void train(AtomicReader atomicReader, String textFieldName, String classFieldName, Analyzer analyzer, Query query)
      throws IOException;
}

Because Classifier uses an index as its training data, you need to open an IndexReader on a prepared index and pass it as the first argument of the train() method. The second argument is the name of the Lucene field that holds the tokenized and indexed text, and the third argument is the name of the Lucene field that holds the document category. Likewise, pass a Lucene Analyzer as the fourth argument and a Query as the fifth. The Analyzer is used to tokenize the unknown documents to be classified (in my personal opinion, this is a bit complicated and it would be better to pass it to the assignClass() method described below instead). The Query is used to narrow down the documents used for training; pass null if there is no need to do so. The train() method has two more overloads with different arguments, but I will skip them for now.

After calling train() on the Classifier interface, call the assignClass() method with an unknown document as a String to obtain the classification result. Classifier is an interface that uses Java generics, and assignClass() returns a ClassificationResult parameterized with the type variable T.

Calling the getAssignedClass() method of ClassificationResult gives you the classification result as type T.
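Putting the pieces together, the overall call sequence might look like the following sketch. The index path, the field names body and cat, and the use of BytesRef as the class type are assumptions for illustration, not taken from the article's program; check the Lucene 4.x javadoc for the exact signatures.

```java
// Hypothetical usage sketch; the path and field names are made up.
IndexReader reader = DirectoryReader.open(FSDirectory.open(new File("/path/to/index")));
Classifier<BytesRef> classifier = new SimpleNaiveBayesClassifier();

// Learning phase: cheap in Lucene, since the index itself acts as the model.
classifier.train(SlowCompositeReaderWrapper.wrap(reader),
    "body", "cat", new JapaneseAnalyzer(Version.LUCENE_46), null);

// Classification phase: this is where the real work happens.
ClassificationResult<BytesRef> result = classifier.assignClass("text of an unknown document");
System.out.println(result.getAssignedClass().utf8ToString() + " " + result.getScore());
```

SlowCompositeReaderWrapper is used here only because train() expects an AtomicReader while DirectoryReader is a composite reader.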

Note that Lucene's classifier is unusual in that the train() method does little work while assignClass() does most of it. This is very different from other commonly used machine learning software. In the learning phase of typical machine learning software, a model file is created by learning the corpus according to the selected machine learning algorithm (this is where most of the time and effort goes; as Mahout is based on Hadoop, it uses MapReduce to reduce the time required here). In the classification phase, an unknown document is classified by referring to the previously created model file, and this phase usually requires few resources.

As Lucene uses the index as its model file, the train() method, which corresponds to the learning phase, does almost nothing (learning is complete as soon as the index is created). Lucene's index, however, is optimized for high-speed keyword search and is not an ideal format for a document classification model. The assignClass() method, which corresponds to the classification phase, therefore performs document classification by searching the index. Contrary to typical machine learning software, Lucene's classifier requires considerable computing power in the classification phase. For sites mainly focused on search, this classification capability should be appealing, as the existing index serves as training data at no additional cost.

Now, let's quickly go through how the two implementation classes of the Classifier interface perform document classification, and actually call them from a program.

Using Lucene SimpleNaiveBayesClassifier

SimpleNaiveBayesClassifier is the first implementation class of the Classifier interface. As the name suggests, it is a Naive Bayes classifier. Naive Bayes classification finds the class c that maximizes the conditional probability P(c|d), the probability that document d belongs to class c. Applying Bayes' theorem to P(c|d) shows that finding the most probable class c amounts to maximizing P(c)P(d|c). The calculation is usually done in logarithms to avoid underflow, and the assignClass() method of SimpleNaiveBayesClassifier repeats it once per class to find the maximum likelihood class.
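To make the computation concrete, the following self-contained sketch evaluates log P(c) + Σ log P(t|c) once per class over toy in-memory counts. This illustrates the idea only; it is not Lucene's actual SimpleNaiveBayesClassifier implementation, and the class and method names are made up.

```java
import java.util.*;

// Schematic illustration of the Naive Bayes score described in the text;
// NOT Lucene's actual implementation.
class NaiveBayesSketch {
  final Map<String, Integer> classDocCounts = new HashMap<>();
  final Map<String, Map<String, Integer>> classTermCounts = new HashMap<>();
  final Set<String> vocabulary = new HashSet<>();

  void addDocument(String clazz, String... terms) {
    classDocCounts.merge(clazz, 1, Integer::sum);
    Map<String, Integer> counts = classTermCounts.computeIfAbsent(clazz, k -> new HashMap<>());
    for (String t : terms) {
      counts.merge(t, 1, Integer::sum);
      vocabulary.add(t);
    }
  }

  // Find the class c maximizing log P(c) + sum over tokens of log P(t|c),
  // evaluated once per class; add-one smoothing avoids log(0) for unseen terms.
  String assignClass(List<String> tokens) {
    int totalDocs = 0;
    for (int n : classDocCounts.values()) totalDocs += n;
    String best = null;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (String c : classDocCounts.keySet()) {
      double score = Math.log((double) classDocCounts.get(c) / totalDocs); // log P(c)
      Map<String, Integer> counts = classTermCounts.get(c);
      int classTotal = 0;
      for (int n : counts.values()) classTotal += n;
      for (String t : tokens) {
        int tf = counts.getOrDefault(t, 0);
        score += Math.log((tf + 1.0) / (classTotal + vocabulary.size())); // log P(t|c)
      }
      if (score > bestScore) { bestScore = score; best = c; }
    }
    return best;
  }
}
```

Lucene derives the equivalent counts from the index at assignClass() time, which is why its classification phase carries the cost.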

We now use SimpleNaiveBayesClassifier, but before that, we need to prepare the training data in an index. We use the livedoor news corpus as our corpus. Let's add the livedoor news corpus to the index using a Solr schema definition as follows.
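The schema definition itself is not reproduced here, but a minimal sketch of such a schema.xml might look like the following. The field names cat and body follow the text, while the type names and the uniqueKey field are assumptions for illustration:

```xml
<!-- Hypothetical excerpt of schema.xml for indexing the livedoor news corpus -->
<fields>
  <field name="url"  type="string"  indexed="true" stored="true" required="true"/>
  <field name="cat"  type="string"  indexed="true" stored="true"/>
  <field name="body" type="text_ja" indexed="true" stored="true"/>
</fields>
<uniqueKey>url</uniqueKey>
```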

Note that the cat field holds the classification class while the body field is the training target. First, start Solr with the above schema.xml and add the livedoor news corpus. You can stop Solr as soon as you finish adding the corpus.

Next, we need a Java program that uses SimpleNaiveBayesClassifier. To keep things simple, we will reuse the documents we used for training as the classification test set. The program looks as follows.

Here we specified JapaneseAnalyzer as the Analyzer (this differs slightly from index creation, where we used JapaneseTokenizer and the relevant TokenFilters through Solr). The string array CATEGORIES hard-codes the document categories. Executing this program displays a confusion matrix like Mahout's, with the matrix elements in the same order as the hard-coded category array.

Using Lucene KNearestNeighborClassifier

Another implementation class of Classifier is KNearestNeighborClassifier. KNearestNeighborClassifier takes k, which must be at least 1, as a constructor argument when creating an instance. You can use exactly the same program as for SimpleNaiveBayesClassifier; all you need to do is replace the part that creates the SimpleNaiveBayesClassifier instance with KNearestNeighborClassifier.
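Concretely, the only change to the earlier program might be the instantiation line, for example with k = 1 (an illustrative value):

```java
// Before: Naive Bayes
Classifier<BytesRef> classifier = new SimpleNaiveBayesClassifier();
// After: k-NN rule with k = 1
Classifier<BytesRef> classifier = new KNearestNeighborClassifier(1);
```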

As described before, the assignClass() method does all the work for KNearestNeighborClassifier as well, but one interesting point is that it uses Lucene's MoreLikeThis. MoreLikeThis is a tool that turns a reference document into a query and performs a search, so you can find documents similar to the reference document. KNearestNeighborClassifier uses MoreLikeThis to retrieve the k documents most similar to the unknown document passed to assignClass(). Majority voting over those k documents then determines the category of the unknown document.

Executing the same program with KNearestNeighborClassifier displays the following when k=1.

Document Classification by NLP4L and Mahout

If you want to use a Lucene index as input data for Mahout, there is a handy command available. However, since our purpose is supervised document classification, we need to output the field information that specifies the class in addition to the document vectors.

The tools that make this easy are NLP4L MSDDumper and TermsDumper, which we developed. NLP4L stands for Natural Language Processing for Lucene and is a natural language processing tool set that treats Lucene indexes as corpora.

Depending on the settings, MSDDumper and TermsDumper select and extract important words from a Lucene field according to measures like tf*idf and output them in a format that is easy for Mahout commands to read. Let's use this function to select the 2,000 most important words from the body field of the index and run the Mahout classification.

Looking only at the result, Mahout Naive Bayes shows an accuracy rate of 96%.

Summary

In this article, we used the same corpus to perform document classification with both Lucene and Mahout and compared their results. The accuracy rate seems higher for Mahout but, as already stated, its training data uses not all words but only the top 2,000 important words in the body field. On the other hand, Lucene's classifier, whose accuracy rate was only 70%, uses all the words in the body field. Lucene should be able to pass the 90% accuracy rate if you add a field that holds only words selected specifically for document classification. It may also be a good idea to create another Classifier implementation class whose train() method performs such selection.

I should add that the accuracy rate drops to around 80% when you do not test on the training data but instead test on truly unseen data.

Machine Learning Using Apache Mahout is a training course that mainly consists of hands-on sessions. The course systematically organizes basic knowledge of machine learning and uses Mahout as needed. Each unit provides applicable exercises that you can solve to improve your understanding and get a foothold in applying the knowledge to actual operations.

Training Course Features

Our textbooks use plenty of graphics and charts so that you can study the material effectively in a limited period of time. In addition, the detailed notes provided on each page will be very useful for self-study outside the class.

We are sure you will acquire a grasp of the background theory and practical knowledge by solving the approachable exercises provided in each unit with the instructor.

Here are some of the exercises:

Use the following 2 methods to show that, among rectangles with a fixed perimeter, the square has the largest area.
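The course's two methods are not reproduced here, but one standard calculus argument runs as follows:

```latex
\text{Fix the perimeter } P \text{ and let the width be } x,
\text{ so the height is } \frac{P}{2}-x \text{ and the area is }
A(x) = x\left(\frac{P}{2}-x\right). \\
A'(x) = \frac{P}{2} - 2x = 0 \;\Longrightarrow\; x = \frac{P}{4},
\qquad A''(x) = -2 < 0, \\
\text{so } A \text{ is maximized at } x = \frac{P}{4},
\text{ where width} = \text{height} = \frac{P}{4}, \text{ i.e. a square.}
```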

This training course is recommended if you:

Want to systematically study the hot topic of machine learning and put your experience to good use in your development work.

Are information processing personnel at an enterprise that holds big data and need the minimum knowledge required to commission a development project that makes use of big data to an integrator.

Want to study machine learning from books but have trouble reading them because of the mathematical expressions.

Purchased “Mahout in Action” and are using Mahout but are not comfortable using the technology.

Contents of Training Course

This training course is a 2-day course.

Day 1

Day 1 of this 2-day course first looks at "What is Machine Learning?", goes on to study pattern recognition and classification algorithms for supervised learning, and finishes with writing a handwritten character recognition program, with all students participating in creating the handwritten character data. You will be amazed at how many handwritten characters the Mahout classifier recognizes!

Machine Learning and Apache Mahout

What is Machine Learning?

Model [Exercises]

What is Apache Mahout?

Installing Mahout [Exercises]

Pattern Recognition

What is Pattern Recognition?

Feature (or Attribute) Vector

Various Distance Measures [Exercises]

Prototypes and Learning Data

Classification

Nearest Neighbor Rule

k-NN Rule [Exercises]

Prototyping by Learning

Derivation of Discriminant Function

Perceptron Learning Rule [Exercises]

Averaged Perceptron

Problems of Perceptron Learning Rule

Widrow-Hoff Learning Rule [Exercises]

Neural Network [Exercises]

Support Vector Machine

Lagrange Multiplier [Exercises]

Decision Tree [Exercises]

Learning Decision Tree [Exercises]

Naive Bayes Classifier [Exercises]

Multivariable Bernoulli Model [Exercises]

Extension to Multiclass Classification

Programming Handwritten Character Recognition

Structure of the Handwritten Character Recognition Program

Creating Learning Data [Exercises]

Facts of Handwritten Character Recognition [Exercises]

Day 2

Day 2 of this 2-day course first looks at the functions Mahout provides other than classification – recommendation and clustering – and goes on to study principal component analysis for reducing the dimensionality of feature vectors, machine learning evaluation, and machine learning for natural language processing, all through exercises using Mahout.

Recommendation

What is Recommendation?

Information retrieval and Recommendation

Types of Recommendation Architecture

User Profiles and Their Collection

Forecasting Evaluation Values [Exercises]

Pearson Correlation Coefficient

Explanation in Recommendation

PageRank

Importance of Ranking

Rating Scale of Information retrieval System – Theory and Practice

Vector Space Model [Exercises]

Score Calculation in Apache Lucene

PageRank [Exercises]

HITS

Clustering

What is Clustering?

Clustering Methods

K-Means [Exercises]

Nearest Neighbor Method [Exercises]

Evaluating and Analyzing Clustering Results

Similar Image Search – Apache alike

Information retrieval and Clustering

Principal Component Analysis

Relationship between the Number of Learning Patterns and Dimensions [Exercises]

What is Principal Component Analysis?

Average and Variance [Exercises]

Covariance Matrix [Exercises]

Eigenvalue and Eigenvector [Exercises]

Evaluating Machine Learning

Evaluating and Analyzing Results

Partitioning Training Data

Precision and Recall Ratio [Exercises]

False Positive and False Negative [Exercises]

Evaluating Features

Within-Class Variance, Between-Class Variance

Bayes Error Rate

Feature Selection

Machine Learning in Natural Language Processing

What is Natural Language Processing?

Natural Language Processing for Lalognosis

Corpus

bag-of-words

N-gram Model [Exercises]

Sequential Labeling

Hidden Markov Model [Exercises]

Viterbi Algorithm [Exercises]

Introduction to NLP4L

Prerequisite

Skill with editors such as vi and Emacs and knowledge of Linux commands are helpful, as the exercises require the use of an Ubuntu machine.

Equipment

Please bring a notebook PC that has ssh installed and running. We can provide you with one if you don’t have access to a notebook PC.

Please bring a (mechanical) pencil and an eraser as some exercises involve hand calculations.

In a field that uses this text field type, you can search documents without hassle whether you specify "International Monetary Fund" or "IMF"; the matching documents can contain either of them.

Synonym search sure is convenient. However, for an administrator to offer users these convenient search functions, he or she has to provide a synonym dictionary (the CSV file described above). New words are created every day, and so are new synonyms. A synonym dictionary may have been prepared with huge effort by the person in charge, but it is sometimes left unmaintained as time goes by or when that person's position is taken over.

That is why people start longing for automatic creation of synonym dictionaries. That request drove me to write the system I explain below. The system learns synonym knowledge from a "dictionary corpus" and outputs "original word – synonym" combinations of high similarity to a CSV file, which in turn can be applied to the SynonymFilter of Lucene/Solr as is.

A "dictionary corpus" is a corpus that contains entries consisting of "keywords" and their "descriptions". An electronic dictionary is exactly a dictionary corpus, and so is Wikipedia, which you are familiar with and can access easily.

Let's look at a method that uses the Japanese version of Wikipedia to automatically acquire synonym knowledge.

System Architecture

The following is the architecture of this system.

This system registers the contents of the dictionary corpus in a Lucene index before analyzing the index contents. Define "keyword" and "description" fields in the index and enable TermVector on the description field. Using Solr, you should easily be able to prepare this part without going through the trouble of writing a program. From here on, I will focus on how to analyze the index where the dictionary corpus is registered.

To summarize the analysis program, it repeats the following process for every keyword.

If the similarity S is greater than an appropriate threshold value, write the pair (tA, tB) to a CSV file.

The following are the details of each step.

Identifying Synonym Candidate tB

The synonym entries this system outputs are in the format "original word, abbreviation 1, abbreviation 2, …", where "original word" is the string in the "keyword" field and "abbreviation n" are abbreviations of the keyword found in "description" fields. Japanese abbreviations are similar to English acronyms. Unlike English, however, Japanese has a great many abbreviations, as it has far more characters.

This system compares keyword tA and each word tR in the description field from the following standpoints; that is, it verifies whether the acronym-like conditions are met.

The first letters of tA and tR match.

tA does not completely include tR.

Letters that make up tR are all included in tA and their orders of appearance match as well.

The system discards word tR if any one of the above conditions is not satisfied and moves on to the next word. Even when all the conditions are met, the surface forms may have matched by coincidence. The system therefore takes word tR as synonym candidate tB and goes on to the next step, where semantic similarity is calculated.
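The three conditions above can be sketched as a self-contained check like the following (a paraphrase of the idea, not the system's actual code):

```java
class AbbrevCheck {
  // True when tR looks like an abbreviation of keyword tA:
  // same first character, not a contiguous substring of tA,
  // and all characters of tR appear in tA in the same order.
  static boolean isAbbrevCandidate(String tA, String tR) {
    if (tA.isEmpty() || tR.isEmpty()) return false;
    if (tA.charAt(0) != tR.charAt(0)) return false;     // first letters must match
    if (tA.contains(tR)) return false;                  // tA must not completely include tR
    int i = 0;
    for (int j = 0; j < tA.length() && i < tR.length(); j++) {
      if (tA.charAt(j) == tR.charAt(i)) i++;            // subsequence scan
    }
    return i == tR.length();                            // every letter of tR found, in order
  }
}
```

For example, 文具 passes against 文房具 (same first character, not contiguous, in-order subsequence), while 文房 fails because it is contained in 文房具 as-is.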

Calculating Semantic Similarity S

The system next has to calculate the semantic similarity S(tA, tB) between keyword tA and synonym candidate tB, but it cannot do so directly. Instead of S, the system calculates the similarity S*(AA, {AB}) between the article AA of the keyword and the set of articles {AB} that mention synonym candidate tB. The similarity S* of articles is derived by calculating the cosine similarity between their term vectors xA and xB.

Note that if {AB} = Φ then S* = 1. This portion of the program code is as follows.

To collect the set {AB} of articles that mention synonym candidate tB, the system creates a TermQuery for the synonym candidate on the description field fieldNameDesc and performs a search. When the search result is returned as topDocs, the system collects the articles other than AA as the set {AB}. The system, however, does not keep {AB} itself but converts the articles into term vectors and accumulates them in articleVector. It then calculates the cosine of originVector, the term vector of keyword article AA, and articleVector, the accumulated term vector of the article set {AB}. The cosine calculation program is as follows.

The return type of getFeatureVector is Map, so the weight for each String term is expressed as a Float. The stopWord argument receives synonym candidate tB, ensuring that the elements of term vector x_docId do not include tB. The size argument specifies the number of term vector elements; the system collects that many elements in descending order of weight (tf*idf). For this it uses the class TermVectorEntityQueue, an extension of Lucene's PriorityQueue.
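The cosine computation over such Map-based term vectors can be sketched as follows. This is a simplified stand-in for the actual code, which is not reproduced here:

```java
import java.util.*;

// Cosine of two sparse term vectors represented as term -> weight maps,
// mirroring the Map<String, Float> returned by getFeatureVector.
class CosineSimilarity {
  static double cosine(Map<String, Float> a, Map<String, Float> b) {
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (Map.Entry<String, Float> e : a.entrySet()) {
      Float w = b.get(e.getKey());
      if (w != null) dot += e.getValue() * w;      // shared terms feed the dot product
      normA += e.getValue() * e.getValue();
    }
    for (float w : b.values()) normB += w * w;
    if (normA == 0.0 || normB == 0.0) return 0.0;  // empty vector: define cosine as 0
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
  }
}
```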

From the search above, the system decides that keyword tA and synonym candidate tB are semantically similar if the computed S* is larger than the threshold value (minScore), and outputs the pair (tA, tB) to a CSV file.

Examples of Acquired Synonym Knowledge

When this program was executed to obtain synonym knowledge from Japanese Wikipedia, it found about 11 thousand synonym entries among about 850 thousand items. The following are Japanese synonyms actually obtained.

入国管理局, 入管
文房具, 文具
社員食堂, 社食
国際連盟, 国連
リポビタンD, リポD
:

Japanese Wikipedia also has alphabetic entries, and the system obtained English synonym knowledge such as the following as well.

Applying to Other Languages and Corpora

This program is pluggably designed so that it can obtain synonym knowledge from any general "dictionary corpus". This article described how to output a synonym CSV file from Japanese Wikipedia, and the method is applicable with little modification to Wikipedias in other languages, including English, which has the notion of acronyms. Of course, "dictionary corpora" other than Wikipedia, including electronic dictionaries and encyclopedias as well as catalogs for the manufacturing industry, are applicable too, giving this program wide applicability.

The other day, I wrote a system that automatically obtains synonym knowledge from a dictionary corpus. A dictionary corpus is a collection of entries that consist of "keywords" and their "descriptions". Put simply, it's a dictionary. Familiar examples of dictionary corpora are electronic dictionaries and Wikipedia data. You could also say that the combination of "item name" and "item description" on EC sites is a dictionary corpus.

I originally wrote this system because I wanted to use Wikipedia to automatically create the synonyms.txt used by the SynonymFilter of Lucene/Solr. The SynonymFilter of Lucene/Solr can use the output CSV file, and the system itself uses Lucene 4.0 internally.

I had always thought that Lucene 4.0 would be convenient for developing NLP tools, and building this system confirmed that impression.

Lucene 4.0 classes that I used for developing this system are as follows:

IndexSearcher, TermQuery, TopDocs
This system calculates similarities for synonym candidates, which are nouns extracted from keywords and their descriptions. The system determines that a candidate is a synonym of the keyword if the similarity is greater than a threshold value and outputs it to a CSV file.
But how do I calculate the similarity between a keyword and its synonym candidate? This system determines the similarity by calculating the similarity between the keyword description Aa and the set of dictionary entry descriptions {Ab} that are written using the synonym candidate.
Thus, I have to find {Ab}, for which I used classes such as IndexSearcher, TermQuery, and TopDocs to search the description field for the synonym candidate.

PriorityQueue
Next, I have to pick out "feature words" from Aa and {Ab} to calculate the similarity of the two. To do so, I select the N most important words to build the feature vector, using TF*IDF of each word as its degree of importance. See the SlideShare above for details. I use PriorityQueue to select the "N most important words".
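The "N most important words" selection with a size-bounded min-heap can be sketched like this. java.util.PriorityQueue is used here instead of Lucene's PriorityQueue, but the idea is the same:

```java
import java.util.*;

class TopTerms {
  // Keep a min-heap of at most n entries; the heap root is always the
  // smallest weight seen so far, so evicting it keeps the n largest.
  static Set<String> topN(Map<String, Double> tfidf, int n) {
    PriorityQueue<Map.Entry<String, Double>> heap =
        new PriorityQueue<>(Map.Entry.comparingByValue());
    for (Map.Entry<String, Double> e : tfidf.entrySet()) {
      heap.offer(e);
      if (heap.size() > n) heap.poll(); // drop the current minimum
    }
    Set<String> result = new HashSet<>();
    for (Map.Entry<String, Double> e : heap) result.add(e.getKey());
    return result;
  }
}
```

This runs in O(M log N) for M candidate words, which is why a priority queue is preferred over sorting all words.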

DocsEnum, TotalHitCountCollector
I used TF*IDF as the weight for extracting the feature words above, and used DocsEnum.freq() to obtain TF. docFreq (the number of articles including the synonym candidate), a parameter required for IDF, was obtained by passing TotalHitCountCollector to the search() method of IndexSearcher.

Terms, TermsEnum
I use these classes to search “description” field for synonym candidates.

These are examples of how this system uses Lucene 4.0. I believe Lucene will be a great help for NLP tool developers in general. In a lexical knowledge acquisition task using bootstrapping, for example, you can use a cycle (1: pattern extraction, 2: pattern selection, 3: instance extraction, 4: instance selection) to obtain knowledge from a small number of seed instances. I believe you can replace the pattern extraction and instance extraction with simple search tasks if you use Lucene for them.

Several months into my school life at Japan Advanced Institute of Science and Technology (JAIST), I was assigned to a lab studying natural language processing (NLP). Meanwhile, a lecture on the theory of NLP has just begun, and we have been studying how to learn from corpora using various machine learning algorithms, including Naive Bayes, decision trees, support vector machines, and hidden Markov models, to resolve ambiguity in classification problems.

As I became curious about the career paths after graduate school for students who studied NLP, I asked my lab professor. Most of them "start their careers as SEs at major electronics companies", he said, adding that they don't use NLP in their business. As I asked in more detail, he said that some students join corporate laboratories specializing in NLP after a doctoral course, but students with master's degrees only become "ordinary SEs". Evidently, even the professor had no idea how hot the techniques we use in his lab are in parts of the IT industry.

As the professor's research theme is highly difficult and the time when we will be able to process natural language with high accuracy is still a long way off, it may be that he has no sense that NLP could be applied in the real world in the first place. If the professor feels that way, graduating students will look for jobs without the idea that (the fundamental knowledge behind) their theme is useful at work, and end up "getting a job as an SE at a major electronics company". The professor's phrase "getting a job as an SE at a major electronics company" is a typical answer to a question students ask, and it implies that you can get a decent job even after doing research in the field we specialize in.

I'm not saying it is unfortunate to get a job at a major electronics company and become an ordinary SE. You might work on a back-end system where, I believe, there are areas in which you can apply your techniques. I just think it's a shame for both academia and industry if, having grown used to an environment where you only work on back-end systems, you lose the mindset that the fundamental knowledge acquired in your research can make the system you are working on more convenient.

Since April, I have been a graduate student at the School of Information Science at Japan Advanced Institute of Science and Technology (JAIST) and a new user of Twitter. Please follow me on Twitter, @kojisays, as I tweet about topics related to my school and research life.

The classes are held on Friday nights and on weekends. We had the first morning class last Sunday, right after the first orientation session on Saturday. Our professor even gave us an essay assignment in the first class!

Our Saturday subject is Statistical Signal Processing, where we study the basic mathematical handling of stochastic processes as well as signal-processing algorithms and model estimation with statistics in mind. We haven't gotten into stochastic processes yet and have only studied basic probability theory. The timing was great for me, as I have been researching score calculation using probability models, including BM25 and the Language Model introduced in Lucene 4.0.

One of my dreams is to apply the knowledge I obtain from this class to the query log analysis of soleami. If we treat the number of searches for a certain keyword over a certain period of time as a stochastic process, we could mathematically predict the future number of searches for that keyword. What do you say?

We will apply a variety of ideas, including those I get from JAIST lectures, to soleami to make the service more exciting and valuable for you. Enjoy soleami!

Three days after the launch of soleami, an increasing number of users are registering and uploading files. It would be our greatest pleasure to see more and more users come to feel that "watching query logs is interesting."

That said, there are strict and unfortunate limitations on the query log format when you use soleami to visualize your Solr query logs.

Please see the list of limitations here. Even if you receive an e-mail informing you of the "completion of analysis", the visualization screen displays "no log data…" if your log file does not comply with the specified format. For example, a user uploaded the following log file but was unable to visualize it because:

The name of the month is not in a recognized format.

14 févr. 2012 06:39:23

We are afraid soleami recognizes only the following date formats.

Jan 14, 2012 9:21:27 AM
2012/1/14 09:21:27

Anything other than the above formats is currently unrecognizable.
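The two accepted formats can be checked against standard SimpleDateFormat patterns. The patterns below are my reading of the examples above, not soleami's published implementation:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;

// Illustrative check of the two accepted timestamp formats;
// soleami's internal parser is not published.
class LogDateCheck {
  static final SimpleDateFormat[] ACCEPTED = {
      new SimpleDateFormat("MMM d, yyyy h:mm:ss a", Locale.ENGLISH), // Jan 14, 2012 9:21:27 AM
      new SimpleDateFormat("yyyy/M/d HH:mm:ss", Locale.ENGLISH)      // 2012/1/14 09:21:27
  };
  static {
    for (SimpleDateFormat f : ACCEPTED) f.setLenient(false); // reject out-of-range fields
  }

  static boolean isRecognized(String timestamp) {
    for (SimpleDateFormat f : ACCEPTED) {
      try {
        f.parse(timestamp);
        return true;
      } catch (ParseException ignored) {
        // fall through to the next pattern
      }
    }
    return false;
  }
}
```

Locale.ENGLISH matters here: a French month name like "févr." does not match the English "MMM" month names, which is exactly why the log above was rejected.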

One query is formatted in one line. Currently, soleami can analyze nothing but JUL's 2-line-per-query format, which is the default for Solr.

Our soleami development team will continue to improve the service to ease these limitations.

Tokyo, Japan – RONDHUIT Co., Ltd. announced the launch of "soleami" today. Soleami, pronounced "so ray me", is a service that visualizes query logs collected by Apache Solr, an open source search engine, and is provided for free.

Solr is an OSS (open source software) search engine used worldwide for search within private intranets as well as on company sites published on the internet.

Solr keeps track of search requests in a log file called the query log. This query log is like "a list of visitors' needs" but has gone unexploited, because summarizing the log by search keyword requires a huge amount of work.

To solve this problem, soleami has been architected and developed so that users only need to upload query logs to have them summarized and displayed as charts in a web browser. Any Solr administrator can now upload Solr query logs to soleami and visualize visitors' needs for free.

Top 10: summarizes and displays the 10 most popular search keywords by month. You can see the seasonal variability of search keywords because it can display keywords up to 12 months old.

Trend 1000: displays a line chart of the 1,000 most popular search keywords over the last 12 months. It also displays a bar chart of the 20 most popular sub-keywords specified along with each search keyword.

Zero Hits: displays a bar chart of the number of queries in the last 12 months that ended up with "0 hits", that is, queries that failed to find any search result. Zero hits indicate that your site was unable to meet the needs of its visitors. Cutting the number of 0-hit queries would increase visitor satisfaction on general sites and the conversion rate on EC sites.

The soleami development team is making constant efforts to improve the quality and functionality of the service so that you can visualize your query logs from various angles.

Origin of soleami

Soleami is derived from the French "ami du soleil", meaning "friend of the sun". We named this service soleami hoping you would always have it and use it as a friend of Solr, whose name is derived from the word "solar", meaning sun.

About Apache Solr

Apache Solr is a search engine developed by the Apache Software Foundation, an organization that develops and stewards open source software. Solr is based on Apache Lucene, which was created by Doug Cutting, the founder of Apache Hadoop.

About RONDHUIT Co.,Ltd.

RONDHUIT provides support services to help enterprises and educational institutions implement Apache Lucene/Solr. In addition to consulting on Solr implementations, RONDHUIT provides training and support services around Solr. Koji Sekiguchi, who leads RONDHUIT, is also an active Apache Lucene/Solr committer.


These are the charts soleami currently provides, and our development team is making constant efforts to improve the quality and functionality of the service so that you can visualize your query logs from various angles. Please e-mail us if you have a request such as "I want to analyze my data from this particular angle. Can you do it?"

Take advantage of soleami along with Solr to help enhance the convenience of your site.