Description
For a project at our client's site, an international bank based in Zurich, we are looking for an experienced Big Data Engineer – Apache Kafka (60862).
In this role you will operate and maintain the entire platform end to end and work closely with the development teams.
Your Qualifications:
• Strong knowledge of designing and operating Kafka clusters (Confluent and Apache Kafka) on premises
• 5+ years of experience in designing, sizing, implementing and maintaining Hortonworks-based Hadoop clusters
• Deep knowledge of securing and protecting Hadoop clusters (Ranger, Sentry, Kerberos, Knox, SSL, Shuffle)
• 5+ years of experience in designing Big Data architectures, with demonstrated experience in gathering and understanding customer business requirements in order to introduce Big Data technologies
• Well versed in tools from the Hadoop ecosystem, such as Hadoop, Hive, Impala, Spark, Kafka, Solr and Flume
• 5+ years of experience in DevOps automation with Ansible and Terraform
• Experience with IBM DB2 and IBM Power Systems is a plus
• Hands-on experience implementing complex security requirements in the financial industry
• Good abstraction and conceptual skills, combined with a self-reliant, team-minded, communicative and proactive personality
• Fluent in English; German is an advantage
Your Responsibilities:
• Engineer and integrate the platform from a technology point of view
• Engineer the core Big Data platform capabilities
Off to new destinations! Apply now directly on or contact our team on .