
A REVIEW ON EFFICIENT DATA ANALYSIS FRAMEWORK FOR INCREASING THROUGHPUT IN BIG DATA

1 V.N. Anushya and 2 Dr. G. Ravi Kumar
1 PG Scholar, Department of Computer Science and Engineering, Coimbatore Institute of Engineering and Technology, Coimbatore.
2 Assistant Professor, Department of Computer Science and Engineering, Coimbatore Institute of Engineering and Technology, Coimbatore.

ABSTRACT
Background: The promise of data-driven decision making is now being recognized broadly, and there is growing enthusiasm for the idea of Big Data. Much of today's data is not natively structured: tweets and blogs, for example, are weakly structured pieces of text, while images and video are structured for storage and display but not for semantic content and search. Transforming such content into a structured format for later analysis is a major challenge.
Objective: This paper applies a classification technique before mapping tasks onto resources, because MapReduce otherwise takes considerable time to decide which resource each task should be allocated to. Parallel database technology is used to increase the performance of big data, since it allocates tasks to resources in parallel.
Result: In this model, an Ensemble Classifier is used to classify the tasks. Support Vector Machine, Decision Tree and K-Nearest Neighbor classifiers are combined to produce the ensemble, so the data are processed with minimal scheduling time. Together with the Ensemble Classifier, the MapReduce model and parallel database technology increase the efficiency and throughput of big data by reducing the scheduling time.
Conclusion: The review demonstrates the efficiency, effectiveness and scalability of these big data analysis methods.
Index Terms: MapReduce, Hadoop, Ensemble Classifier, Parallel Database.

INTRODUCTION
Big data systems are capable of handling very large datasets at a time.
Across a broad range of application areas, data are being collected at unprecedented scale. Decisions that were previously based on guesswork, or on painstakingly constructed models of reality, can now be based on the data itself. Such big data analysis now drives nearly every aspect of modern society, including mobile services, retail, manufacturing, financial services, life sciences and the physical sciences. Big data platforms can perform data storage, data analysis, data processing and data management in parallel, and can handle both structured and unstructured data at the same time. Big data analytics is particularly useful for hospital management and government sectors, especially in climate monitoring.

Big data has three defining characteristics: volume, velocity and variety, explained in detail below. Many factors contribute to the increase in data volume. Volume refers to the scale of storage in big data; for example, Facebook processes about 2.5 petabytes of data per day. Coping with volume requires technologies that store huge amounts of data in a scalable manner and provide distributed approaches for querying and retrieving that data. Velocity describes the frequency at which data is created, captured and shared; the velocity of large data streams limits the capacity to parse the content, detect sentiment and recognize new patterns. Variety covers the range of data types: structured data refers to numeric data in traditional databases, while unstructured data includes text documents, video, audio, stock ticker data and financial transactions. Hadoop is the most popular

open source framework used in big data to handle large datasets, and it also implements the MapReduce framework (Wei Lu, Yanyan Shen, Su Chen, Beng Chin Ooi, 2012). Hadoop is a scalable, fault-tolerant distributed system for data storage and processing. It has two main components: the Hadoop Distributed File System (HDFS), and the MapReduce engine, which provides fault-tolerant distributed processing. Fig. 2 shows the architecture of Hadoop, which consists of one master node and many slave nodes; the master node hosts the MapReduce model used for computation.

Fig. 2 Architecture of Hadoop

HDFS comprises numerous Datanodes for storing data and a master node called the Namenode for monitoring the Datanodes and maintaining all the metadata. In HDFS, imported data is split into equal-sized chunks, and the Namenode allocates the chunks to different Datanodes (Wei Lu, Yanyan Shen, Su Chen, Beng Chin Ooi, 2012). The Hadoop MapReduce framework is designed to distribute storage and computation tasks across many servers, so that resources scale with demand while remaining economical. The HDFS architecture consists of a single Namenode, numerous Datanodes and the HDFS client.

A MapReduce program typically consists of a pair of user-defined map and reduce functions. The map function is invoked for each record in the input datasets and produces a partitioned, sorted set of intermediate results (Chi Zhang, Feifei Li, Jeffrey Jestes, 2012). The core idea behind MapReduce is mapping the dataset into a collection of key/value pairs and then reducing all pairs with the same key (Dean J, Ghemawat S, 2004). The master node takes the input, divides it into smaller subproblems and distributes them to worker nodes. This works efficiently on very large amounts of high-dimensional data. The paper is organized as follows.
Previous related work is reviewed in Section 2. Section 3 gives a brief introduction to the proposed system, Section 4 is devoted to the system architecture, and Section 5 contains the conclusion of this study.

LITERATURE SURVEY
In the literature survey, various research papers are reviewed to identify the problems with big data and the solutions for solving those problems. In data mining and data warehousing, 95% of the time is spent on gathering and retrieving the data and only 5% on analyzing it. Data mining and data warehousing cannot process large amounts of data in parallel; a multidimensional database system is used for storing and managing their data. Data mining is a single technology that applies many older computational techniques from statistics, machine learning and pattern recognition. To overcome these limitations, big data platforms process large amounts of data using MapReduce Technology, and Parallel Database

Technology is combined with MapReduce in order to perform the computation in parallel. The relevant research papers are discussed below.

Big data integrates storage, analysis, management, processing and application in parallel with the help of MapReduce and parallel database technology, improving the efficiency of handling large amounts of data; this direction is explored in work on big-data-oriented data analyzing and processing technology. The MapReduce architecture also provides better scalability and fault-tolerance mechanisms (Chi Zhang, Feifei Li, Jeffrey Jestes, 2012). Cluster ensembles aim to join numerous clusterings together for prediction: for a given test set, each clusterer derives a label vector (A. Strehl et al., 2002). MapReduce is a simple parallel programming abstraction for distributed environments; Hadoop was later developed as the open-source counterpart of GFS and MapReduce (Arinto Murdopo, 2013). To perform computation on large amounts of data, a key-value storage system with data versioning and a partitioning algorithm can provide reliability, greatly increasing scalability, availability and durability at large scale (Giuseppe DeCandia et al., 2007). The PROXIMUS framework reduces large datasets into smaller ones; it uses both sub-sampling and compression of the data before applying computationally expensive algorithms, and improves performance and scalability by means of algebraic techniques and data structures (Xiao Dawei, Ao Lei, 2013). Searching for the nearest neighbor of an object in a high-dimensional data space is expensive and highly time-consuming; the search time is O(dn log n) (S. Arya, D. Mount, N. Netanyahu, R. Silverman, A. Wu, 1994).
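The ensemble-combination idea surveyed above, in which each base learner derives a label vector for the same test set and the vectors are then combined, can be sketched as a simple per-object majority vote. This is a toy illustration, not the knowledge-reuse framework of Strehl et al.; the label vectors below are invented for the example.

```python
from collections import Counter

def majority_vote(label_vectors):
    """Combine several label vectors by per-object majority vote.

    Each element of `label_vectors` is one base learner's predicted
    labels for the same test set, in the same order.
    """
    return [
        Counter(votes).most_common(1)[0][0]   # most frequent label per object
        for votes in zip(*label_vectors)      # align predictions object-by-object
    ]

# Hypothetical predictions from three base learners on four test objects.
svm_labels  = ["spam", "ham",  "spam", "ham"]
tree_labels = ["spam", "spam", "spam", "ham"]
knn_labels  = ["ham",  "spam", "spam", "ham"]

majority_vote([svm_labels, tree_labels, knn_labels])
# -> ["spam", "spam", "spam", "ham"]
```

In a real ensemble the vote can be weighted by each learner's validation accuracy; the unweighted vote shown here is the simplest consensus scheme.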
A high-quality decision tree is produced by dividing the dataset into training, scoring and test sets (Zhiwei Fu, 2001). Local greedy search is used throughout the dataset, so search-time consumption is low.

PROPOSED SYSTEM
A classification technique is used to classify the whole dataset before the tasks are mapped onto resources, which reduces the overall time span; previously, each item of the whole dataset was analyzed individually and then mapped onto a resource, which took more time to complete the task. To classify and analyze the data before mapping, an Ensemble Classifier is used together with the MapReduce model and parallel database technology to increase the efficiency and throughput of big data. In the MapReduce model, the map step maps the tasks onto resources as key/value pairs for computation, and the reduce step aggregates all the results from the map step into a single output. Parallel database technology performs the computation on large data in parallel, which further improves the performance of big data. The input to this work is a large dataset that is classified and analyzed in full before the tasks are mapped onto resources, reducing the time needed to analyze the data. An Ensemble Classifier is a group of different classifiers that process in parallel and share the knowledge of the fastest-processing classifier with the others. The data are therefore processed with minimal scheduling time, and the combination of the Ensemble Classifier, the MapReduce model and parallel database technology increases the efficiency and throughput of big data by reducing the scheduling time.
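The map and reduce steps described above can be illustrated with a minimal in-memory word count. This is a hypothetical toy sketch of the programming model, not the Hadoop API: the map step emits key/value pairs for each input record, a shuffle groups values by key, and the reduce step aggregates all pairs that share a key into one output.

```python
from collections import defaultdict

def map_step(record):
    # Emit a (word, 1) key/value pair for every word in one input record.
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Group intermediate values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_step(key, values):
    # Aggregate all values for one key into a single result.
    return key, sum(values)

def map_reduce(records):
    intermediate = [pair for record in records for pair in map_step(record)]
    return dict(reduce_step(k, v) for k, v in shuffle(intermediate).items())

map_reduce(["big data", "big analysis"])
# -> {"big": 2, "data": 1, "analysis": 1}
```

In real MapReduce the map and reduce calls run on different worker nodes and the shuffle moves data over the network; the single-process version above only shows the data flow.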
System Architecture:
An Ensemble Classifier is a group of different classifiers that are made to run in parallel, with the knowledge of the fastest-processing classifier shared with the other

classifiers. An ensemble classifier gives more accurate results than the individual classifiers alone. Fig. 4 shows the proposed architecture: the dataset is loaded into an Ensemble Classifier built from three classifiers, namely Support Vector Machine, K-Nearest Neighbor and Decision Tree.

Fig. 4 System Architecture

The SVM is a supervised learning algorithm that analyzes data and recognizes patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes forms the output, making it a non-probabilistic binary linear classifier. Here, the SVM checks the incoming dataset and verifies whether the data is already known. If the incoming data is known, the SVM passes it on for processing; if it is new, the SVM analyzes it first, after which it can be used for further processing.

The Decision Tree classifier is a simple and widely used classification technique. It applies a straightforward idea to the classification problem: it poses a series of carefully crafted questions about the attributes of the test record, and each answer triggers a follow-up question until a conclusion about the record's class label is reached. The Decision Tree classifier examines the incoming dataset and splits it by category (Mr. D. V. Patil, Prof. Dr. R. S. Bichkar, 2006); among the attributes in the dataset, the one with the highest information gain is chosen as the splitting attribute.

K-NN is a non-parametric method for classification and regression that predicts an object's value or class membership based on the k closest training examples in the feature space.
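The k-nearest-neighbor rule just described can be sketched in a few lines of plain Python. This is a toy illustration under the usual assumptions (Euclidean distance, unweighted majority vote), not the implementation used in the reviewed system; the sample points are invented for the example.

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training points by Euclidean distance to the query point.
    distances = sorted(
        (math.dist(point, query), label)
        for point, label in zip(train_points, train_labels)
    )
    # Majority vote over the labels of the k closest points.
    votes = Counter(label for _, label in distances[:k])
    return votes.most_common(1)[0][0]

points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]
knn_predict(points, labels, (0.5, 0.5), k=3)   # -> "a"
knn_predict(points, labels, (5.5, 5.5), k=3)   # -> "b"
```

With k = 1 the function reduces to assigning the class of the single nearest neighbor, matching the special case noted below.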
K-NN is a type of instance-based, or lazy, learning, where the function is only approximated locally and all computation is deferred until classification. The k-nearest-neighbor algorithm is among the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors and assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor. Here, the incoming dataset is analyzed by K-NN: if an item is similar to the data held by its nearest neighbors, it is accepted and processed; otherwise it is analyzed further before processing.

Parallel computing has two aspects: data-parallel processing and task-parallel processing. In data-parallel processing, a large-scale task is decomposed into sub-tasks of the same scale and each sub-task is processed independently; compared to the whole task, each sub-task is easy to process. Adopting the task-parallel processing mode

might cause the management of the tasks and the coordination of their relationships to become overly complicated. Parallel database technology is one means of realizing the parallel processing of data. A parallel database supports the standard SQL language and provides data access services through SQL, which is widely used because it is simple and easy to apply. In big data analysis, however, the SQL interface faces great challenges: the advantage of SQL comes from encapsulating the underlying data access, but that encapsulation limits its openness to some extent. The user-defined functions provided by parallel databases are mostly designed for a single database instance and therefore cannot be executed across a parallel cluster, which means this traditional approach is not suitable for the processing and analysis of big data.

CONCLUSION
The MapReduce programming model was studied as a way to reduce the workload on resources and to allocate tasks to them, and parallel database technology was used to perform the computation tasks in parallel, which increases the performance of big data. To reduce the scheduling time for allocating tasks to resources, a classification technique is applied before MapReduce and the parallel database technology. An ensemble classifier, a group of different classifiers such as the Support Vector Machine (SVM), Decision Tree and K-Nearest Neighbor (KNN) classifiers, is used to classify the tasks. These classifiers were studied so that the knowledge of the fastest-processing classifier can be shared with the others, which greatly reduces the scheduling time. Together with the ensemble classifier, the MapReduce programming model and parallel database technology increase the efficiency and throughput of big data.

REFERENCES
Arinto Murdopo (2013), Distributed Decision Tree Learning for Mining Big Data Streams, July.
S. Arya, D. Mount, N. Netanyahu, R. Silverman, A. Wu (1994), An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions, Proc. Fifth Symp. on Discrete Algorithms (SODA).
Chi Zhang, Feifei Li, Jeffrey Jestes (2012), Efficient Parallel kNN Joins for Large Data in MapReduce.
Dean J, Ghemawat S (2004), MapReduce: Simplified Data Processing on Large Clusters, Proceedings of the 6th Symposium on Operating System Design and Implementation (OSDI '04), San Francisco, California, USA.
Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall and Werner Vogels (2007), Dynamo: Amazon's Highly Available Key-value Store, SOSP '07, October.
Mr. D. V. Patil, Prof. Dr. R. S. Bichkar (2006), A Hybrid Evolutionary Approach to Construct Optimal Decision Trees with Large Data Sets, IEEE.
A. Strehl et al. (2002), Cluster Ensembles - A Knowledge Reuse Framework for Combining Partitionings, JMLR, Vol. 3.
Wei Lu, Yanyan Shen, Su Chen, Beng Chin Ooi (2012), Efficient Processing of k Nearest Neighbor Joins using MapReduce, Proceedings of the VLDB Endowment, Volume 5, Issue 10, June.
