Big Data Architect

Description

Synechron is one of the fastest-growing digital and business consulting technology firms in the world. Specialized in financial services, the business's focus on embracing the most cutting-edge innovations, combined with expert knowledge and technical expertise, has allowed Synechron to reach $500 million in annual revenue, 8,000 employees, and 18 offices worldwide. Synechron is agile enough to invest in R&D on the latest technologies to help financial services firms stand at the cutting edge of innovation, yet also large enough to scale any global project.

Synechron draws on over 15 years of financial services IT consulting experience to provide expert systems integration and technical development work in highly complex areas within financial services. This includes Enterprise Architecture & Strategy, Application Development & Maintenance, Quality Assurance, Infrastructure Management, Data & Analytics, and Cloud Computing. Synechron is one of the world's leading systems integrators for specialist technology solutions including Murex, Calypso, Pega, and others. It also provides traditional offshoring capabilities, with offshore development centers located in Pune, Bangalore, Hyderabad, and Chennai, as well as near-shoring capabilities for European banks through development centers in Serbia. Synechron's technology team works with traditional technologies and platforms like Java, C++, and Python, as well as the most cutting-edge technologies, from blockchain to artificial intelligence. Learn more at http://synechron.com/technology.

Synechron Inc. is seeking a Big Data Architect with experience in financial services to join our Charlotte, NC team.
Must Haves

- At least 5 years of experience in designing and architecting solutions using Spark, Hadoop (specifically HDP), and Python (PySpark)
- Expert problem-solving skills
- Experience in data ingestion and management using Kafka, Sqoop, Hive, and HBase/Cassandra databases
- At least 5 years of experience in developing solutions in Python, PySpark, Scala, and R
- Hands-on experience in infrastructure sizing and analytical tool configuration
- Experience in managing Hadoop (specifically HDP) clusters
- Experience in configuring and tracking cluster performance using Ambari, Dr. Elephant, and related tools
- Experience in container management, configuration, and deployment of HDP clusters
- Experience in integrating Jenkins/Anthill Pro build processes with a configurable, container-managed deployment environment
- Experience in data engineering (data loading, clean-up, transformation, partitioning, bucketing)
- Experience in optimizing data and compute performance (network bandwidth, utilization, etc.)
- Ability to translate requirements into architectural design

Nice to Have

- Experience in NoSQL and graph databases
- Experience in project planning and management
- Experience in Dataiku, Domino Data Lab, or IBM DSX