Job description

We’re looking for a Data Engineer to build scalable, production-grade data platforms.

We’re expanding our data team and are looking for a Data Engineer (3+ years of experience) who enjoys working with large-scale data systems, Spark, Databricks, and AWS-based lakehouse architectures.

Tasks
  • Build and maintain ETL/ELT pipelines for batch and streaming workloads;
  • Implement transformations, cleansing, and enrichment in Databricks;
  • Automate deployments and pipeline orchestration;
  • Ensure data quality via validation, monitoring, and alerting;
  • Operate an AWS-based lakehouse and optimize Databricks jobs/clusters for cost and performance;
  • Implement governance, lineage, access controls, and auditing across workspaces and cloud platforms.
Hard skills
  • 3+ years in data engineering on cloud and distributed systems;
  • Strong Apache Spark skills;
  • Hands-on experience with Databricks (Spark, PySpark, SQL, Delta Lake, MLflow);
  • Experience with AWS data services and building data lake/lakehouse solutions;
  • Familiarity with orchestration tools for data pipelines;
  • DevOps/CI/CD experience (Terraform, GitHub Actions, Jenkins).
What we offer
  • 24 calendar days of paid vacation per year (after the trial period);
  • Paid sick days (after the trial period);
  • Option to work remotely;
  • English language courses;
  • Consultations with a psychologist.
