Description
For an assignment with our customer in Zürich, we are looking for an Azure Data Engineer.
Main tasks/activities:
The department of our client is responsible for the finance integration of all special-case legal entities (e.g., new acquisitions and joint ventures) globally. For their data engineering use case, they use Azure Databricks, Azure Data Factory and Azure Data Lake (Delta Lake) to extract, load and transform data from various source systems and deliver it in a harmonized data model into the new global finance platform built on SAP FPSL.
Must have skills:
3+ years of experience in data engineering application development using Spark, preferably in an enterprise environment on Microsoft Azure
5+ years of experience in building enterprise big data / data engineering applications using continuous integration tools like Azure DevOps
Strong understanding of reusable software design patterns
Experience in Azure Data Factory, Azure Data Lake and Azure Log Analytics
Design and implementation of production-grade solutions
Delivering high-quality code, focusing on simplicity, performance, maintainability and scalability
Practical experience in applying an agile way of working and in applying DevOps methodologies
Excellent analytical and conceptual skills to understand complex technology stacks and their dependencies
Nice to have skills, or willingness to learn:
Proficient in Python/PySpark and SQL (Spark SQL)
Enterprise system integration experience
Insurance know-how
Finance & accounting or actuarial knowledge
Start: asap
Duration: 12 months (with option to extend)
Degree of employment: 100%
Working location: Zürich (remote work possible)
For more information, please get in touch with Mr. Patrick Casutt or send us your CV. We look forward to hearing from you.