About the company
BitMEX is the world's leading cryptocurrency derivatives trading platform. It has pioneered cryptocurrency trading through a relentless commitment to change, and continues to set benchmarks for innovation, liquidity, and security today. As the world's most advanced peer-to-peer crypto-products trading platform and API, BitMEX gives knowledge, confidence, and precision to hundreds of thousands of traders transacting billions of USD a day. Join us as we build a thriving cryptocurrency ecosystem through strategic investments in emerging cryptocurrency technology, and create the future of digital financial services.
Responsibilities
📍 Design and maintain enhancements to our data warehouse, data lake, and data pipelines to increase their reliability and consistency
📍 Improve the queryability of large historical datasets through industry-standard tools and careful data representation and aggregation, for both technical and business units
📍 Ensure data governance and security/retention policies can be implemented and enforced
📍 Ensure that operational system integrations driven from the data stack are running, monitored, and available
📍 Continually review the data platform to ensure that it is fit for purpose and meeting the needs of the business
📍 Support and maintain downstream integrations from our data lake, for example business intelligence and visualization tools and third-party systems
Qualifications
📍 Experience in the data engineering field, with demonstrated design and technical implementation of data warehouses
📍 Experience with OLAP databases and how they differ from OLTP databases, and with data structuring/modeling, including an understanding of which data points matter to the business when trading off storage and performance against usability
📍 Experience building, deploying, and troubleshooting reliable, idempotent, and consistent data pipelines that work with disparate in-house and external data sources, e.g. using Airflow DAGs
📍 Experience with AWS Redshift, Glue Data Catalog, S3, PostgreSQL, Parquet, Iceberg, and Trino, and how they are managed using Terraform & Kubernetes
📍 Experience with data loading, extraction, and manipulation, and with preparing data for ingestion and integration with visualization platforms such as Tableau
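For candidates wondering what "idempotent" means in the pipeline context above: a load task that can be safely retried without duplicating data. A common pattern is delete-then-insert for a whole partition inside one transaction. This is a minimal illustrative sketch using stdlib sqlite3 in place of a warehouse table; the table, columns, and data are hypothetical, not BitMEX's actual schema or stack.

```python
import sqlite3

def load_partition(conn, day, rows):
    """Idempotently (re)load one day's partition: delete existing rows for
    that day, then insert the fresh batch. Running it twice with the same
    inputs leaves the table in the same state as running it once."""
    with conn:  # one transaction: the delete and insert commit together
        conn.execute("DELETE FROM trades WHERE day = ?", (day,))
        conn.executemany(
            "INSERT INTO trades (day, symbol, qty) VALUES (?, ?, ?)",
            [(day, symbol, qty) for symbol, qty in rows],
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (day TEXT, symbol TEXT, qty REAL)")

rows = [("XBTUSD", 100.0), ("ETHUSD", 25.0)]
load_partition(conn, "2024-01-01", rows)
load_partition(conn, "2024-01-01", rows)  # simulated retry: no duplicates

count = conn.execute("SELECT COUNT(*) FROM trades").fetchone()[0]
print(count)  # 2, not 4
```

In an orchestrator such as Airflow, the same idea applies per scheduled run: each task instance rewrites exactly its own partition, so scheduler retries and backfills converge to the same result.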