Implementing Aggregation Functions in MongoDB


With the amount of data that organizations generate exploding from gigabytes to terabytes to petabytes, traditional databases are unable to scale to manage such big data sets. With these solutions, the cost of storing and processing data increases significantly as the data grows. As a result, organizations are looking for more economical alternatives such as NoSQL databases, which provide the required data storage and processing capabilities along with scalability and cost effectiveness. NoSQL databases do not use SQL as the query language, and come in different types such as document stores, key-value stores, graph databases, and object databases.

Typical use cases for NoSQL databases include archiving old logs, event logging, e-commerce application logs, gaming data, social data, etc., due to their fast read-write capability. The stored data then needs to be processed to gain useful insights into customers and their usage of the applications.

The NoSQL database we use in this article is MongoDB, an open source document-oriented NoSQL database system written in C++. It provides high performance document-oriented storage as well as support for writing MapReduce programs to process data stored in MongoDB documents. It is easily scalable and supports auto-partitioning. MapReduce can be used for aggregation of data through batch processing. MongoDB stores data in BSON (Binary JSON) format, supports a dynamic schema and allows for dynamic queries. The Mongo Query Language is expressed as JSON and is different from the SQL queries used in an RDBMS. MongoDB provides an Aggregation Framework that includes utility functions such as count, distinct and group. However, more advanced aggregation functions such as sum, average, max, min, variance and standard deviation need to be implemented using MapReduce.


This article describes the method of implementing common aggregation functions like sum, average, max, min, variance and standard deviation on a MongoDB document using its MapReduce functionality. Typical applications of aggregations include business reporting of sales data such as calculation of total sales by grouping data across geographical locations, financial reporting, etc.

Let's start with installing the required software for running the sample application discussed in this article.

Software Setup

We first install and set up the MongoDB server on a local machine.

Download MongoDB from the official Mongo website and unzip the files to a preferred directory on the local machine.
For example, C:\>Mongo

Create a Data directory in the same folder.
For example, C:\Mongo\Data>

If the data files are stored elsewhere, the --dbpath command line parameter needs to be specified when starting the MongoDB server with mongod.exe.

Starting up the server

MongoDB provides a couple of executables for this purpose: mongod.exe is the database server daemon and mongo.exe is the administrative shell. Both executables are located in the Mongo\bin folder.

Change the directory to the bin folder under the Mongo home directory.
For example, C:\> cd Mongo\bin

There are two ways of starting the server, as shown below:

mongod.exe --dbpath C:\Mongo\Data

or

mongod.exe --config mongodb.config

where mongodb.config is a configuration file located in the Mongo\bin folder. We specify the location of the data folder (i.e. dbpath=C:\Mongo\Data) in this configuration file.
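For the second option, a minimal mongodb.config needs only the data path setting mentioned above; a sketch of such a file:

```shell
dbpath=C:\Mongo\Data
```

Additional server settings (port, log file, etc.) can also be placed in this file, one per line.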

Connecting to the server
At this point, the Mongo server is started and listening on localhost:27017, the default address and port that the mongo.exe shell connects to.

Now that MongoDB is up and running, let's look at the aggregation functions.

Implementing Aggregation Functions

In a relational database, we can execute SQL queries with pre-defined aggregation functions such as SUM(), COUNT(), MAX() or MIN() on a numerical column. But in MongoDB, MapReduce functionality is used for aggregation and batch processing of data. It is similar to the GROUP BY clause that is used for aggregating data in SQL. The next section describes the SQL way of performing aggregations in a relational database and the corresponding implementation using the MapReduce functionality provided by MongoDB.

For this discussion, let's consider a Sales table represented as shown below, in de-normalized form in MongoDB.

Sales Table

#    Column Name          Data Type
1    OrderId              INTEGER
2    OrderDate            STRING
3    Quantity             INTEGER
4    SalesAmt             DOUBLE
5    Profit               DOUBLE
6    CustomerName         STRING
7    City                 STRING
8    State                STRING
9    ZipCode              STRING
10   Region               STRING
11   ProductId            INTEGER
12   ProductCategory      STRING
13   ProductSubCategory   STRING
14   ProductName          STRING
15   ShipDate             STRING

Mapping SQL & Map Reduce based Implementations

We have provided a sample set of queries that use some aggregation functions, filter criteria and grouping clauses, along with their equivalent MapReduce implementations, which are the MongoDB equivalent of performing a GROUP BY in SQL. This approach is very useful for running aggregation operations on MongoDB documents. A limitation is that aggregation functions such as SUM, AVG, MIN or MAX have to be custom implemented in the map and reduce functions.

MongoDB does not support user defined functions (UDFs) out of the box, but it allows creating and saving JavaScript functions using the db.system.js.save command. The JavaScript functions thus created can then be reused in MapReduce functions. The table below shows the implementations of some commonly used aggregation functions. Later, we will discuss the usage of these functions in MapReduce jobs.
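As a sketch of this technique, a reusable average function could be written in plain JavaScript and persisted on the server. The function name and body here are our own illustration, not taken from the original table:

```javascript
// A reusable aggregation helper; in the mongo shell it can be persisted
// server-side with: db.system.js.save({ _id: "average", value: average })
var average = function (values) {
  var sum = 0;
  for (var i = 0; i < values.length; i++) {
    sum += values[i];
  }
  return values.length > 0 ? sum / values.length : 0;
};
```

Once saved, such a function can be referenced from the map, reduce or finalize functions of a MapReduce job.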

1. Average order quantity across geo locations

The first example calculates the average order quantity grouped by geographic location. In the MapReduce implementation, the GROUP BY is handled by the emit(keys, values) call in the map() function, and the result is returned inline via the out : { inline : 1 } option.
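The original code listing for this example did not survive editing, so the following is our own sketch of how an average order quantity grouped by Region could be implemented. Because reduce() may be invoked more than once for the same key, we carry partial sums and counts through reduce() and compute the actual average in finalize(); the function names and the sales collection name are our assumptions, while the field names follow the Sales table above.

```javascript
// map: emit one (Region, { qty, count }) pair per order document
var mapFn = function () {
  emit(this.Region, { qty: this.Quantity, count: 1 });
};

// reduce: only accumulate partial sums and counts, so the function is
// safe even when MongoDB re-reduces previously reduced values
var reduceFn = function (key, values) {
  var result = { qty: 0, count: 0 };
  for (var i = 0; i < values.length; i++) {
    result.qty += values[i].qty;
    result.count += values[i].count;
  }
  return result;
};

// finalize: turn the running totals into the average
var finalizeFn = function (key, value) {
  return value.qty / value.count;
};

// In the mongo shell this would be run as:
// db.sales.mapReduce(mapFn, reduceFn,
//                    { finalize: finalizeFn, out: { inline: 1 } });
```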

2. Total product sales across product categories

The following query is used to fetch the total sales amount grouped by multiple levels of product categories. The different product categories used in the following example as individual dimensions could also be defined as a complex hierarchy based dimension.

As in the previous example, the GROUP BY is handled by the emit(keys, values) call in the map() function, and the result is returned inline via the out : { inline : 1 } option.
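Only a fragment of the original listing survives here, so the following is a hedged sketch of what the map and reduce functions for this query might look like. Emitting a compound key from map() gives the multi-level GROUP BY on ProductCategory and ProductSubCategory; the function names are our own:

```javascript
// map: compound key = (category, sub-category); value = sales amount
var mapFn = function () {
  emit({ cat: this.ProductCategory, subcat: this.ProductSubCategory },
       this.SalesAmt);
};

// reduce: sum the sales amounts for each compound key; because it only
// adds values, it is safe under re-reduce
var reduceFn = function (key, values) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
};

// In the mongo shell:
// db.sales.mapReduce(mapFn, reduceFn, { out: { inline: 1 } });
```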

4. Total quantity, total sales and average profit

The requirement for this scenario is to calculate the total quantity, total sales and average profit for orders whose IDs are in the range 1 to 10 and whose ShipDate is between Jan 1 and Dec 31 of the year 2011. The following query performs multiple aggregations, namely the total quantity, total sales and average profit, in the specified year and for orders in the given range, across different regions and product categories.

As before, the GROUP BY is handled by the emit(keys, values) call in the map() function. The SQL query restricts the result set with LIMIT 3; the MapReduce equivalent passes limit : 3 as an option, and the result is returned inline via out : { inline : 1 }.
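The full listing for this scenario is also missing, so here is our own sketch of how the three aggregations could be computed in one MapReduce pass. The emitted value carries everything needed downstream: quantities and sales amounts are summed directly, while the average profit is derived in finalize() from a running profit total and count. The function names and option values below are assumptions consistent with the scenario described above.

```javascript
// map: group by (Region, ProductCategory); carry partial totals plus a
// count so the average can be computed safely in finalize
var mapFn = function () {
  emit({ region: this.Region, category: this.ProductCategory },
       { totalQty: this.Quantity,
         totalSales: this.SalesAmt,
         profit: this.Profit,
         count: 1 });
};

// reduce: accumulate all partial totals; safe under re-reduce
var reduceFn = function (key, values) {
  var r = { totalQty: 0, totalSales: 0, profit: 0, count: 0 };
  for (var i = 0; i < values.length; i++) {
    r.totalQty += values[i].totalQty;
    r.totalSales += values[i].totalSales;
    r.profit += values[i].profit;
    r.count += values[i].count;
  }
  return r;
};

// finalize: derive the average and drop the intermediate fields
var finalizeFn = function (key, value) {
  value.avgProfit = value.profit / value.count;
  delete value.profit;
  delete value.count;
  return value;
};

// In the mongo shell, the filter and limit are passed as options:
// db.sales.mapReduce(mapFn, reduceFn, {
//   query: { OrderId: { $gte: 1, $lte: 10 },
//            ShipDate: { $gte: "2011-01-01", $lte: "2011-12-31" } },
//   limit: 3,
//   finalize: finalizeFn,
//   out: { inline: 1 }
// });
```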

Now that we have looked at the code examples of aggregation functions for different business scenarios, we are ready to test these functions.

Testing the Aggregation Functions

MapReduce functionality in MongoDB is invoked using a database command. The map and reduce functions are written in JavaScript, as described in the previous section. The following is the syntax used to execute MapReduce functions on the server.
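The syntax listing itself did not survive editing; as a sketch, the mapreduce command takes a document of roughly the following shape (field names follow the standard command options; the sales collection name is our assumption):

```javascript
// The mapreduce database command document; in the mongo shell it is
// executed with db.runCommand(command), or more conveniently through
// the db.<collection>.mapReduce() helper
var command = {
  mapreduce: "sales",                        // collection to run over
  map: function () { /* call emit(key, value) per document */ },
  reduce: function (key, values) { /* return the aggregated value */ },
  query: {},                                 // optional filter
  finalize: function (key, value) { return value; },  // optional
  out: { inline: 1 }                         // return results inline
};
```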

Conclusion

MongoDB provides document-oriented storage that can easily scale to terabytes of data. It also provides MapReduce functionality that can be used for aggregation of data using SQL-like functions through batch processing. In this article, we described the process of setting up MongoDB and performing aggregation functions using the MapReduce feature. We also provided a few sample MapReduce implementations for simple SQL based aggregation queries. Using the MapReduce functionality, more complex aggregation functions can be implemented on the data stored in MongoDB documents.

About the Authors

Arun Viswanathan works as a Technology Architect with Cloud Center of Excellence (CoE) in Infosys Ltd, a global leader in IT & Business Consulting Services. Arun has around 9.5 years of experience in Java, Java EE, Cloud and Big Data application architecture definition and implementation. He is currently involved in design, development and consulting for Big Data solutions. He can be reached at Arun_Viswanathan01@infosys.com.

Shruthi Kumar works as a Technology Analyst with Cloud Center of Excellence (CoE) in Infosys Ltd, a global leader in IT & Business Consulting Services. Shruthi has 5 years of experience in Java, Grid Computing, Cloud and Big Data application architecture. She is currently involved in development and consulting for Big Data solutions. She can be reached at Shruthi_Kumar01@infosys.com.

In the "Average order quantity across geo locations" MapReduce example you assume that the reduce function will be called only once per key. But the official documentation (www.mongodb.org/display/DOCS/MapReduce#MapReduc...) says: "...reduce function might be invoked more than once for the same key..."

If all values for a given key aren't aggregated at once, it may result in a wrong avg calculation: (2+3+4)/3 != (((2+3)/2)+4)/2.
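The commenter's point can be demonstrated directly: averaging inside reduce() breaks as soon as the engine re-reduces partial results, whereas accumulating a running sum and count (with the division deferred to finalize()) does not. The small simulation below is our own illustration of both behaviours; the map function for the safe variant would emit { sum: this.Quantity, count: 1 } per document.

```javascript
// Buggy: divides inside reduce, so re-reducing a partial result is wrong
var badReduce = function (key, values) {
  var sum = 0;
  for (var i = 0; i < values.length; i++) { sum += values[i]; }
  return sum / values.length;
};

// Safe: reduce only accumulates; the average is taken once, in finalize
var goodReduce = function (key, values) {
  var r = { sum: 0, count: 0 };
  for (var i = 0; i < values.length; i++) {
    r.sum += values[i].sum;
    r.count += values[i].count;
  }
  return r;
};
var finalizeAvg = function (key, value) { return value.sum / value.count; };
```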

count, distinct, and group are the older aggregation functions. The new Aggregation Framework, which you link to, has sum, average, min, and max functions built-in. There are outstanding tickets to add statistical functions such as standard deviation to it. If the functions you need are available in the Aggregation Framework, it should be much faster than using Map/Reduce, because the AF is a native C++ implementation.

You are right. The article is written with reference to the older aggregation functions as shown in www.mongodb.org/display/DOCS/Aggregation. I guess the aggregation link has now been changed to show the newer aggregation framework. For our requirements, we started defining the custom aggregation functions using MapReduce when the new aggregation framework was still not widely used. We haven't done a performance benchmark of the MapReduce vs Aggregation Framework operators, but I feel MapReduce would provide better performance for larger data sets.

Cycle Time, Lead Time and Cumulative Flow Diagram using MongoDB
by
Balaji Muniraja

Hi,

I am planning to build an offline Kanban board application for projects using MongoDB. Please can you let me know if you have come across such implementations? If not, can you let me know whether we can create Cumulative Flow Diagrams using MongoDB?

Thanks and Regards,
Balaji M

