Last updated: September 9, 2025
Data Engineer - Data Platform
Via Ashbyhq
About
The team
Join our Data Engineering Team at Kraken!
Are you passionate about designing and building scalable data systems that power one of the fastest-growing companies in cryptocurrency? We’re seeking a skilled Data Engineer to join our Data Platform team and help us architect the future of Kraken’s data ecosystem.
As a Data Engineer at Kraken, you’ll be responsible for building and maintaining high-performance data pipelines, ensuring the reliability and scalability of our data infrastructure, and enabling teams across the company to access clean, consistent, and timely data. You’ll work with modern technologies and large-scale datasets, playing a key role in making data accessible for analytics, machine learning, and product innovation.
The opportunity
- Build scalable and reliable data pipelines that collect, transform, load, and curate data from internal systems
- Augment the data platform with data pipelines from external systems
- Ensure high data quality for the pipelines you build, and make them auditable
- Drive data systems to be as near real-time as possible
- Support the design and deployment of a distributed data store that will be the central source of truth across the organization
- Build data connections to the company's internal IT systems
- Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
- Evaluate new technologies and build prototypes for continuous improvement in data engineering
Skills you should HODL
- 5+ years of work experience in a relevant field (Data Engineer, DWH Engineer, Software Engineer, etc.)
- Experience with data-lake and data-warehousing technologies and relevant data-modeling best practices (Presto, Athena, Glue, etc.)
- Proficiency in at least one of the main programming languages we use: Python or Scala. Expertise in additional programming languages is a big plus!
- Experience building data pipelines/ETL in Airflow, and familiarity with software design principles
- Excellent SQL and data-manipulation skills using common frameworks like Spark/PySpark or similar
- Expertise in Apache Spark or similar Big Data technologies, with a proven record of processing high-volume, high-velocity datasets
- Experience gathering business requirements for data sourcing
- Bonus: Kafka and other streaming technologies such as Apache Flink