Senior DevOps Engineer
About On The Spot
About the team
Unity Aura is a large-scale mobile platform focused on content discovery at scale. The product serves millions of users and is built in a fast-moving, high-growth environment.
The stack reflects that: modern technologies, with a strong emphasis on scalability and resilience as the platform continues to expand.
This role suits someone who wants to work on a technically complex product with real reach and see the impact of their decisions in production.
A good fit here is a team-oriented engineer with strong technical judgment, a sense of ownership, and the ability to choose the right tools for the problem at hand.
About the role
You’ll take a leading role in shaping our data infrastructure, setting clear architectural standards for AI/ML workloads and influencing decisions across data and engineering teams.
The focus is on building a smooth, self-service developer experience while improving pipeline reliability and day-to-day productivity for data engineers.
You’ll work closely with DevOps, Data Engineering, and ML Ops to align on infrastructure-as-code and orchestration practices, covering the full DataOps stack from pipelines to AI/ML infrastructure.
Responsibilities
- Design Data-Native Cloud Solutions: Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
- Accelerate Data Engineer Experience: Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
- Engineer Robust Data Platforms: Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
- Drive DataOps Excellence: Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale
- Collaborate on high-level technical designs with ML and Backend engineers to build resilient systems
Requirements
- 5+ years of hands-on DevOps experience building, shipping, and operating production systems
- Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration)
- Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM)
- Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike)
- GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar
- Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar
- Scalability & Performance: Proven experience managing high-traffic, data-heavy production environments while maintaining system reliability and cost-efficiency at scale
Nice to have
- Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools
- Data Pipeline Orchestration: Demonstrated success building and optimizing data pipeline deployment using modern tools (Airflow, Temporal, Kubernetes operators) and implementing GitOps practices for data workloads
- Data Engineer Experience Focus: Track record of creating and improving self-service platforms, deployment tools, and monitoring solutions that measurably enhance data engineering team productivity
- Data Infrastructure Deep Knowledge: Extensive experience designing infrastructure for data-intensive workloads including streaming platforms (Kafka, Kinesis), data processing frameworks (Spark, Flink), storage solutions, and comprehensive observability systems
Benefits
- Work in a highly professional team with an informal, friendly atmosphere
- Paid vacation — 20 business days per year, 100% sick leave payment
- Equipment provision
- Partially compensated educational costs (for courses, language learning, certifications, professional events, etc.)
- Legal and Accounting support in Poland
- Option to work from our office in central Warsaw
- 3 additional Fridays off (U days) per contract year
- 5 sick days per year
- Medical insurance
- Flexible working hours — we care about you and your output
- Online English classes
- Bright and memorable corporate life: company parties and gifts for employees on important occasions
Join us
To be considered for this position, you must have Polish citizenship, a D-type work visa, or a residence permit. Thank you!
Contribute to our growth
Know someone perfect for this role? Let us know about them. We have a referral program to recognize your support.

