Platform Administrator (Databricks)

Income level not specified

Work experience: 3–6 years

Full-time employment

Schedule: 5/2

Working hours: 8

Work format: remote


Nitka Technologies develops software for customers in the US and Europe and brings together about 300 professionals from Eastern Europe, North and South America, Armenia, Georgia and Kazakhstan.

We are looking for an experienced Platform Administrator (Databricks) for a long-term project. The customer is a California-based company and an industry leader in event ticket sales in Europe and the USA.

We offer 100% remote, full-time work.

Main tasks:
  • Manage multiple Databricks workspaces (dev/qa/prod);
  • Configure cluster policies, governance & compliance;
  • Create & maintain Unity Catalog objects: catalogs, schemas, grants, service principals, external storage, etc.;
  • Monitor and debug failed or long-running jobs using system tables (job_run_timeline, node_timeline, workflow_run);
  • Troubleshoot cluster crashes, driver OOM, executor failures, memory leaks;
  • Investigate Python/Spark errors and dependency conflicts (PyPI, WHL, Maven);
  • Assist users with cluster/job configuration, notebook errors, Unity Catalog permissions;
  • Explain platform limitations and best practices to data engineers;
  • Maintain Confluence pages with platform rules & troubleshooting guides.
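The job-monitoring task above can be sketched in miniature. This is an illustrative helper, not Databricks code: the record shape (`run_id`, `period_start_time`, `period_end_time`) is a simplified, hypothetical stand-in for columns you would query from the job_run_timeline system table.

```python
from datetime import datetime, timedelta

def flag_long_runs(runs, threshold=timedelta(hours=2)):
    """Return run IDs whose duration exceeds `threshold`.

    `runs` is a list of dicts with hypothetical keys mimicking a
    job_run_timeline query result; field names are assumptions.
    """
    flagged = []
    for run in runs:
        duration = run["period_end_time"] - run["period_start_time"]
        if duration > threshold:
            flagged.append(run["run_id"])
    return flagged

# Sample data: run 101 took 3.5 hours, run 102 took 45 minutes.
runs = [
    {"run_id": 101,
     "period_start_time": datetime(2025, 12, 1, 0, 0),
     "period_end_time": datetime(2025, 12, 1, 3, 30)},
    {"run_id": 102,
     "period_start_time": datetime(2025, 12, 1, 0, 0),
     "period_end_time": datetime(2025, 12, 1, 0, 45)},
]
print(flag_long_runs(runs))  # -> [101]
```

In practice the filtering would happen in SQL against the system tables; a local helper like this is only useful for post-processing exported results.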

Requirements:

  • Experience in implementation, integration, or technical support;
  • Experience with Linux and Bash;
  • Confident SQL skills;
  • Strong experience working with AWS infrastructure (EC2, S3, IAM, VPC, secrets management, SQS);
  • Python skills sufficient for writing small scripts or applications;
  • Experience maintaining Databricks jobs & environments;
  • Experience with REST APIs/SDKs;
  • Understanding of file-based table formats (Delta Lake, Parquet, Hive);
  • Understanding of cluster types & node families;
  • Spoken English at Intermediate level or higher.
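As a small illustration of the REST API requirement: Databricks workspace APIs are called over HTTPS with a bearer token. The sketch below only builds the request object; the host, token, and endpoint version are placeholders to verify against the current API reference.

```python
import urllib.request

def list_clusters_request(host, token):
    """Build an authenticated GET request for the Databricks
    clusters-list endpoint. Host and token are placeholders;
    confirm the endpoint version against the current REST docs.
    """
    url = f"https://{host}/api/2.1/clusters/list"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})

req = list_clusters_request("example.cloud.databricks.com", "dapi-XXXX")
print(req.full_url)
```

The same call is wrapped by the official Databricks SDK for Python, which also handles auth profiles and pagination.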

Would be a plus:

  • Databricks REST API / Databricks SDK;
  • Working with schema evolution, time travel, vacuum, compaction, Z-ordering;
  • Debugging corrupted Delta tables (conflicting commits, tombstones, missing checkpoints);
  • Understanding of ACID implementation on top of object storage;
  • Spark knowledge (jobs, partitions, queries), experience with Kafka or similar technology, familiarity with Terraform / GitLab CI;
  • Cost monitoring for platform services and objects;
  • Experience with enterprise data platforms.
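The vacuum/compaction/Z-ordering items above map to routine Delta Lake maintenance SQL (OPTIMIZE and VACUUM). The helper below just generates those statements; the table and column names are illustrative placeholders, not anything from this project.

```python
def maintenance_statements(table, zorder_cols=None, retain_hours=168):
    """Generate Delta Lake maintenance SQL for one table.

    OPTIMIZE compacts small files (optionally clustering by the
    given Z-order columns); VACUUM removes files older than the
    retention window. Names here are hypothetical examples.
    """
    optimize = f"OPTIMIZE {table}"
    if zorder_cols:
        optimize += " ZORDER BY (" + ", ".join(zorder_cols) + ")"
    vacuum = f"VACUUM {table} RETAIN {retain_hours} HOURS"
    return [optimize, vacuum]

for stmt in maintenance_statements("main.sales.orders", ["event_date"]):
    print(stmt)
```

A platform administrator would typically schedule such statements as a Databricks job rather than run them by hand, and keep the VACUUM retention at or above the default 7 days (168 hours) to avoid breaking time travel.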

Working conditions:

  • Remote work;
  • Full-time (8 hours/day);
  • Flexible schedule;
  • Attractive USD compensation;
  • Paid vacation, holidays.

Key skills

  • Linux
  • SQL
  • Python
  • Databricks
  • REST API
  • Apache Hive
  • Bash
  • AWS
  • EC2
  • S3
  • IAM
  • VPC
  • Secrets management
  • SQS
  • SDK
  • Delta Lake
  • Parquet
  • Kafka
  • Terraform
  • GitLab CI
  • English — B1 — Intermediate

Vacancy published 12 December 2025 in Serbia