Skilled in Tableau Desktop for data visualization using a range of chart types (bar charts, line charts, combination charts, pivot tables, scatter plots, pie charts, and packed bubbles), and in comparing multiple measures with Individual Axis, Blended Axis, and Dual Axis techniques.

Published dashboard reports to Tableau Server, making the developed dashboards accessible through the web.

Coordinated with business stakeholders to gather and understand analytics requirements.

Automated jobs that pull data from different sources and load it into tables on HDFS using Oozie workflows.
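
A workflow of the kind described above is defined in Oozie's XML format. The following is a minimal sketch only; the workflow name, property names, and Sqoop command are illustrative placeholders, not the actual production configuration:

```xml
<workflow-app name="load-hdfs-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="import-data"/>
  <!-- Sqoop action pulls rows from a source RDBMS into HDFS;
       connection details come from the job properties file. -->
  <action name="import-data">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <command>import --connect ${jdbcUrl} --table ${sourceTable} --target-dir ${targetDir} -m 4</command>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Import failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

Such a workflow is typically triggered on a schedule by an Oozie coordinator, which is what makes the job fully automated.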

Interfaced with SMEs, the analytics team, account managers, and domain architects to review the to-be-developed solution.

Collaborated with the infrastructure, network, database, application and BI teams to ensure data quality and availability.

Involved in architecture design, development, and implementation of Hadoop deployments and backup-and-recovery systems, following Agile methodology.

Contributed to the development of key data integration and advanced analytics solutions for leading organizations, leveraging Apache Hadoop and other big data technologies on the Cloudera Hadoop distribution (CDH).

Ingested log data directly into HDFS using Flume on Cloudera CDH, and loaded data from the Linux file system into HDFS.

Experienced in running Hadoop streaming jobs to process terabytes of XML-format data; imported and exported data to and from HDFS, and assisted in exporting analyzed data to an RDBMS using Sqoop on Cloudera.
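
An RDBMS export of the kind mentioned above is typically a single Sqoop invocation. The following is a hedged sketch; the JDBC URL, credentials, table name, and HDFS path are placeholders, not the actual environment:

```shell
# Export analyzed rows from an HDFS directory into an RDBMS table.
# All connection details below are illustrative placeholders.
sqoop export \
  --connect jdbc:mysql://db-host:3306/analytics \
  --username etl_user -P \
  --table daily_summary \
  --export-dir /user/hive/warehouse/daily_summary \
  --input-fields-terminated-by '\001' \
  -m 4
```

The `-m 4` flag splits the export across four parallel map tasks, the same mechanism Sqoop uses to parallelize imports.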

Designed and developed integration APIs using data structure concepts and the Java Collections Framework, with exception handling to return responses within 500 ms; used Java threads to handle concurrent requests.
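
One common way to combine a thread pool with a hard response deadline is `Future.get(timeout)`. This is a minimal sketch of that pattern only, with an invented handler and payload, not the actual integration API:

```java
import java.util.List;
import java.util.concurrent.*;

// Sketch: serve requests on a thread pool and enforce a 500 ms deadline
// via Future.get(timeout). The handler below (mapping strings to their
// lengths with the Collections Framework) is an illustrative placeholder.
public class DeadlineApi {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Simulated request handler running on a worker thread.
    public Future<List<Integer>> handle(List<String> items) {
        return pool.submit(() -> items.stream().map(String::length).toList());
    }

    public List<Integer> respondWithin(List<String> items, long millis) throws Exception {
        Future<List<Integer>> f = handle(items);
        try {
            return f.get(millis, TimeUnit.MILLISECONDS); // fail fast past the deadline
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the worker; caller maps this to an error response
            throw e;
        }
    }

    public void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        DeadlineApi api = new DeadlineApi();
        System.out.println(api.respondWithin(List.of("ab", "cde"), 500)); // [2, 3]
        api.shutdown();
    }
}
```

Submitting each request to a shared `ExecutorService` is what lets the API serve concurrent callers without spawning a thread per request.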

Installed and configured MapReduce, Hive, and HDFS; developed Spark scripts in Java as required to read and write JSON files; imported and exported data between HDFS and Hive using Sqoop.

Hands-on experience with Hadoop administration, development, and NoSQL on Cloudera; loaded and transformed large sets of structured, semi-structured, and unstructured data.

Implemented HBase tables to load large sets of structured, semi-structured, and unstructured data coming from UNIX systems, NoSQL stores, and a variety of portfolios.

Created reports for the BI team, using Sqoop to export data into HDFS and Hive; configured and installed Hadoop and Hadoop ecosystem components (Hive, Pig, HBase, Sqoop, Flume).
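
Report data landed in HDFS this way is usually exposed to BI tools through a Hive external table over the target directory. A hedged HiveQL sketch; the table name, columns, and path are illustrative placeholders:

```sql
-- Illustrative DDL for a Hive external table over Sqoop-loaded data;
-- dropping the table leaves the underlying HDFS files intact.
CREATE EXTERNAL TABLE IF NOT EXISTS report_events (
  event_id   BIGINT,
  event_type STRING,
  event_ts   TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/etl/report_events';
```

Because the table is external, Sqoop can keep appending files under the `LOCATION` path and the BI reports pick them up without any reload step.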

Designed and implemented a distributed data storage system based on HBase and HDFS; imported and exported data between HDFS and Hive.

Designed and implemented a data warehouse, creating fact and dimension tables and loading them with Informatica PowerCenter tools, fetching data from the OLTP system into the analytics data warehouse.

Coordinated with business users to gather new requirements and work on existing issues; read multiple data formats on HDFS using Scala.

Loaded data into Parquet files, applying transformations using Impala; executed parameterized Pig, Hive, Impala, and UNIX batches in production.

Converted Hive/SQL queries into Spark transformations using Spark RDDs and Scala; analyzed SQL scripts and designed solutions implemented in Scala.
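
The query-to-transformation conversion described above has the same map/filter/aggregate shape whether written as Spark RDD operations in Scala or, as in this stdlib-only Java analogue, as a stream pipeline. The `Order` record and the SQL query are invented for illustration; this is not the actual Spark job:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Stdlib-only analogue of rewriting a SQL aggregate as a chain of
// transformations; a Spark RDD version would use filter/map/reduceByKey
// in the same order. The data and query are illustrative placeholders.
public class QueryToTransform {
    record Order(String region, double amount) {}

    // Equivalent of: SELECT region, SUM(amount) FROM orders
    //                WHERE amount > 0 GROUP BY region
    static Map<String, Double> totalsByRegion(List<Order> orders) {
        return orders.stream()
                .filter(o -> o.amount() > 0)                        // WHERE
                .collect(Collectors.groupingBy(Order::region,       // GROUP BY
                         Collectors.summingDouble(Order::amount))); // SUM
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("east", 10.0),
                new Order("east", 5.0),
                new Order("west", -3.0),
                new Order("west", 7.0));
        System.out.println(totalsByRegion(orders)); // east totals 15.0, west 7.0
    }
}
```

Each SQL clause maps to exactly one transformation step, which is what makes this style of conversion mechanical once the query is analyzed.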
