About the company
Kraken, the trusted and secure digital asset exchange, is on a mission to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion. Our 2,350+ Krakenites are a world-class team ranging from the crypto-curious to industry experts, united by our desire to discover and unlock the potential of crypto and blockchain technology. As a fully remote company, we already have Krakenites in 70+ countries (speaking 50+ languages). We're one of the most diverse organizations on the planet and this remains key to our values. We continue to lead the industry with new product advancements like Kraken NFT, on- and off-chain staking and instant bitcoin transfers via the Lightning Network.
About The Role
The Data Engineering team designs and implements scalable solutions that allow the company to make fast, accurate, data-driven decisions across several terabytes of data. The team maintains the company's data warehouse and data lake, and builds the pipelines that move and process vast amounts of data, both batch and streaming, through different data products. In this role, you will work closely with the Finance team to build data products that support Finance transformation, enabling the company to be an industry pioneer.
Responsibilities
- Build scalable and reliable data pipelines that collect, transform, load, and curate data from both internal systems and blockchains
- Ensure high data quality for the pipelines you build, and make them auditable
- Drive data systems to be as near real-time as possible
- Build data connections to the company's internal IT systems
- Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
- Evaluate new technologies and build prototypes for continuous improvement in data engineering
Requirements
- 5+ years of work experience in a relevant field (Data Engineer, DW Engineer, Software Engineer, etc.)
- Experience with data warehouse technologies and relevant data modeling best practices (Athena, Iceberg, Druid, etc.)
- Experience building data pipelines/ETL and familiarity with their design principles (Apache Airflow is a big plus!)
- Excellent SQL and data manipulation skills using common frameworks like Spark/PySpark, Pandas, or similar
- Proficiency in a major programming language (e.g. Scala, Python, ...)
- Experience with business requirements gathering for data sourcing
Nice to have
- Experience working with cloud services (e.g. AWS, GCP, …) and/or Kubernetes
- Experience building and contributing to data lakes in the cloud
- Experience designing and writing CI/CD pipelines
- Experience working with petabytes of data
Location Tagging: #EU #US
We’re powered by people from around the world with their own unique and diverse experiences. We value all Krakenites and their talents, contributions, and perspectives, regardless of their background. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!
As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status, or any other protected characteristic as outlined by federal, state, or local laws.