About the company
Nansen is a blockchain analytics platform that enriches on-chain data with millions of wallet labels. Crypto investors use Nansen to discover opportunities, perform due diligence, and defend their portfolios with our real-time dashboards and alerts.
Job Summary
What You'll Do:
- Design, build, and scale performant data pipelines and infrastructure, primarily using ClickHouse, Python, and dbt.
- Build systems that handle large-scale streaming and batch data, with a strong emphasis on correctness and operational stability.
- Own the end-to-end lifecycle of data pipelines, from raw ingestion to clean, well-defined datasets consumed by downstream teams.
- Improve pipeline observability, data quality checks, and failure handling to ensure predictable operation at scale.
- Collaborate closely with data consumers (data engineers, product engineers, researchers) to define clear dataset contracts and schemas.
- Use AI tools and agents such as Cursor, MCPs, and LLMs to accelerate development, automate repetitive work, and boost quality.
- Bring fresh thinking to the table, staying current with best practices and evolving your toolkit over time.
What We're Looking For:
- Proven experience building and operating production data pipelines that run continuously and reliably.
- Strong data engineering and software engineering fundamentals, with deep experience in Python and SQL.
- Hands-on experience with ClickHouse and dbt in production environments.
- Solid understanding of streaming and batch ingestion patterns, including backfills, reprocessing, and schema evolution.
- Comfortable working across the data infrastructure stack: ingestion, transformation, storage, and exposure to downstream systems.
If this role isn't the perfect fit, there are plenty of exciting opportunities in blockchain technology, cryptocurrency startups, and remote crypto jobs to explore. Check them out on our Jobs Board.