Description
We are looking for an individual who can work independently toward agreed targets and goals and who brings a creative approach to their work.
Responsibilities:
-Ingest data from files, streams, and databases, and process it using Spark and Python
-Develop PySpark and Python programs for data cleaning and processing
-Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems
-Develop efficient software for the various use cases built on the platform, leveraging Python and Big Data technologies
-Maintain high operational excellence, guaranteeing high availability and platform stability
-Implement scalable solutions to meet ever-increasing data volumes, using Big Data/Palantir technologies such as PySpark and cloud computing
Expectations:
-Overall 7 to 12 years of IT experience, with extensive experience in Big Data, Analytics, and ETL technologies
-Application development background, along with knowledge of analytics, statistical, and Big Data computing libraries
-At least 4 years of experience programming in Spark with Python, Scala, or Java
-Hands-on experience in coding, designing, and developing complex data pipelines using Big Data technologies
-Experience developing applications on Big Data platforms; ability to design and build highly scalable data pipelines
-Expertise in Python, SQL databases, Spark, non-relational databases, Kafka, and Hadoop
-Knowledge of Palantir would be an added advantage
NO REMOTE WORK IS ALLOWED; THE POSITION IS BASED IN ZÜRICH.