Description
Role: DevOps for Cloudera Hadoop Ecosystem
Location: Lucerne, Switzerland (remote work during pandemic/lockdown)
Language requirement: English only
Joining date: on short notice
Essential Skills:
DevOps for Cloudera Hadoop Ecosystem
Good knowledge of Linux, ETL, debugging, and interfaces
Solid understanding of cloud system concepts
The person should have:
1. Experience with one or more major Hadoop distributions and various ecosystem components (e.g. HDFS, Sqoop, Impala, Spark, Flume, Kafka, NiFi) is a must; experience with data concepts (ETL, near-/real-time streaming, data structures, metadata, and workflow management) is a strong plus.
2. Detailed knowledge of the Hadoop ecosystem, i.e. Hadoop, Spark, nodes, clusters, YARN, schedulers, encryption, etc.
3. Good programming/scripting skills (Python, Java, C/C++, Scala, Bash, Korn shell) and an understanding of DevOps tools (Chef, Docker, Puppet, Bamboo, Jenkins)
4. Prior hands-on Big Data platform development experience in large projects, with the ability to translate technical designs into working solutions and to build and maintain positive relationships with architects and Big Data project teams to work out platform features