Full Stack & Big Data Engineer

155993BR

Job Description

Do you have a passion for learning and applying modern software engineering practices to help solve business problems or bring new insight? If so, we are looking for energetic team members to join our agile squad at 88 University Place, NYC.

Job Duties: We are looking for someone to develop a modern, cloud-based solution for a strategic initiative that enables IBM to make business decisions more effectively and efficiently. Your team's responsibility will be to create the working foundation of the infrastructure and architecture design for a large-scale, big data solution. This includes developing a prototype through to a fully operational system that many different teams (including yours) will develop within.

The key experience required includes Apache Spark, Apache Kafka, Python, and Parquet formats. Significant experience with ETL (extract, transform, and load) tools beyond those mentioned is also needed. Experience with Scala is a nice-to-have.

You will also be responsible for selecting and using the DevOps tools for continuous integration, builds, and monitoring of the solution. You will join an exciting team in an open-plan, collaborative environment using agile delivery methods. Any experience converting legacy ETL to newer technologies is highly valued as well.

Above all, we are looking for applicants who will thrive in an open, energetic, flexible, fun-spirited, collaborative environment and desire creative freedom and an opportunity to work on high performing teams!

No remote opportunities exist.

Will consider relocation for candidates with the right skills/experience.

Must have the ability to work in the US without current/future need for IBM sponsorship.

Experience in Jasmine/Karma or a similar unit/functional testing framework.

Experience in Test Driven Development (TDD).

Experience in automated testing.

Experience with data science / data analysis and their associated tools.

Required Professional and Technical Expertise:

1 year professional experience using Apache Spark and Kafka.

2 years professional experience using Python.

1 year professional experience using Parquet formats.

2 years professional experience using ETL (extract, transform, and load) tools.