Hadoop Administrator

This job is no longer available.

Job ID: 11331

Buchanan Technologies is currently seeking an experienced Hadoop Administrator to join an emerging data technologies team in a large enterprise environment, for a full-time, direct hire career opportunity in the Dallas/Ft. Worth, TX area!

Overview

Deployment activities for big data platform technologies include identifying and enabling opportunities to automate fast integration and deployment of applications. Operational activities include monitoring and troubleshooting incidents, enforcing security policies, and managing data storage and compute resources. The role is also responsible for coding, testing, and documenting new or modified automation for deployment and monitoring. This role works with team counterparts to architect an end-to-end framework built on a group of core data technologies, and develops standards and processes for big data platforms in support of projects and initiatives.

Responsibilities

Manage Hadoop and Spark cluster environments, on bare-metal and container infrastructure, including service allocation and configuration for the cluster, capacity planning, performance tuning, and ongoing monitoring.

Work with data engineering groups to support the deployment of Hadoop and Spark jobs.

Work with IT Operations and Information Security Operations on monitoring and troubleshooting incidents to maintain service levels.

Work with Information Security Vulnerability Management and vendors to remediate known, impactful vulnerabilities.

Contribute to the planning and implementation of new or upgraded hardware and software releases.

Monitor the Linux, Hadoop, and Spark communities and vendors, and report important defects, feature changes, and enhancements to the team.

Research and recommend innovative and, where possible, automated approaches to administration tasks. Identify approaches that improve resource utilization, provide economies of scale, and simplify support.

Qualifications

Knowledge

Excellent knowledge of Linux, AIX, or other Unix flavors

Deep understanding of Hadoop and Spark cluster security, network connectivity, and I/O throughput, along with other factors that affect distributed-system performance

Strong working knowledge of disaster recovery, incident management, and security best practices