Last updated: October 8, 2025
About
The Opportunity
As a Data Engineer, you will play a key role in shaping the data infrastructure that powers our AWS-based data lake. You’ll design, deploy, and maintain data pipelines to ensure a seamless, scalable, and reliable flow of information that underpins our analytics and decision-making.
By leveraging your expertise with AWS, Databricks, Airbyte (or similar ELT/ETL tools), and custom solutions in Python or Go, you’ll collaborate closely with our team to transform raw data into a strategic asset for the company.
Responsibilities
- Data Infrastructure Setup: Architect and maintain robust, scalable, and secure data infrastructure on AWS leveraging Databricks.
- Data Pipeline Development: Design, develop, and maintain data pipelines, primarily using tools like Airbyte and custom-built services in Go, to automate data ingestion and ETL processes.
- Data Lake Management: Oversee the creation and maintenance of the data lake, ensuring efficient storage, high data quality, effective partitioning and organization, strong performance, and reliable monitoring and alerting.
- Integration and Customization: Integrate tools like Airbyte with various data sources and customize data flows to align with specific business needs. Where necessary, build custom connectors in Go to support unique data requirements.
- Performance and Scalability: Optimize data pipelines and data lake storage for performance and scalability, ensuring low latency and high availability.
- Data Governance and Security: Implement best practices for data governance, security, and compliance in AWS and Databricks, including access control, encryption, and monitoring.
- Collaboration and Documentation: Work closely with platform engineers, data analysts and other stakeholders to understand data requirements and document infrastructure, processes, and best practices.
Qualifications
- Experience in Data Engineering: 3+ years of experience as a Data Engineer, with a focus on data lake architecture and ETL pipeline development.
- AWS Proficiency: Strong experience with Databricks and AWS services including but not limited to S3, Glue, Lambda, Redshift, and IAM.
- ETL Expertise: Hands-on experience with Airbyte or similar ETL tools for data ingestion and transformation.
- Proficiency in Python or Go: Experience writing services and connectors in Python or Go, particularly for data pipeline automation.
- SQL and Data Modeling: Solid understanding of data modeling, SQL, and database concepts.
- Data Governance and Security: Experience implementing security and governance best practices in cloud environments.
Working at Trust Wallet
- Do something meaningful; be a part of the future of finance technology and the No. 1 company in the industry
- Fast-moving, challenging, and unique business problems
- International work environment, flat organization, flexible working hours
- Great career development opportunities in a growing company
Benefits
- Be a part of the world’s leading blockchain ecosystem that continues to grow.
- Excellent learning and career development opportunities.
- Work alongside diverse, world-class talent, in an environment where learning and growth opportunities are endless.
- Tackle fast-paced, challenging, and unique projects.
- Work in a truly global organization, with international teams and a flat organizational structure.
- Work fully remotely with flexible working hours.
- Enjoy competitive salary and benefits.