Requirements:
- 3–5+ years of experience in Data Engineering, MLOps, or DevOps roles
- Strong experience with GCP (GCS, BigQuery, Cloud Run, Vertex AI) or similar cloud environments
- Hands-on experience with Airflow, dbt, and similar data pipeline orchestration tools
- Strong proficiency in Python and SQL, following clean code principles
- Experience with GitLab CI/CD and automated deployment pipelines
- Strong knowledge of Docker, containerized workloads, and microservices
- Experience with infrastructure-as-code (Terraform)
- Understanding of event-driven and API-based system architectures
- Familiarity with the full ML lifecycle (training, deployment, monitoring, scaling)
- Strong understanding of observability, monitoring, and production reliability practices
- Collaborative mindset, with a focus on supporting Data Science and Engineering teams
- Strong problem-solving skills, with the ability to balance technical and business requirements
- Strong English communication skills (German is a plus)
- Degree in Computer Science, Software Engineering, or related field
Responsibilities:
- Take ownership of the MLOps framework and drive its adoption across ML projects
- Design, build, and maintain ETL and ML pipelines using GCP services and orchestration tools such as Airflow
- Implement and maintain CI/CD pipelines (GitLab) for automated ML deployment workflows
- Manage cloud infrastructure using Terraform, Docker, and cloud-native GCP services
- Provide technical support and guidance to Data Science teams on MLOps/DevOps practices
- Monitor and optimize production ML systems for performance, scalability, and stability
- Develop and integrate APIs for model serving and data flow automation
- Design and maintain ML infrastructure and data pipeline architectures
- Improve system maturity, reliability, and maintainability of ML platforms
- Collaborate with cross-functional teams to ensure operational excellence of ML systems