About the company
Our mission is to bring blockchain to a billion people. The Alchemy Platform is a world-class developer platform designed to make building on the blockchain easy. We've built leading infrastructure in the space, powering over $105 billion in transactions for tens of millions of users in 99% of countries worldwide. The Alchemy team draws on decades of deep expertise in massively scalable infrastructure, AI, and blockchain from leadership roles at leading companies and universities like Google, Microsoft, Facebook, Stanford, and MIT. Alchemy recently raised a Series C1 at a $10.2B valuation led by Lightspeed and Silver Lake. Previously, Alchemy raised from a16z, Coatue, Addition, Stanford University, Coinbase, the Chairman of Google, Charles Schwab, and the founders and executives of leading organizations. Alchemy powers the top blockchain companies globally and has been featured in TechCrunch, Forbes, Bloomberg, and elsewhere.
Job Summary
What You'll Do:
- Design and Implementation: Architect and develop data infrastructure solutions leveraging Snowflake's capabilities to meet business needs.
- Data Integration: Manage and optimize data pipelines, ensuring seamless integration from diverse data sources.
- Performance Tuning: Conduct performance optimization for Snowflake environments, including storage, compute, and query tuning.
- Security and Compliance:
  - Implement best practices for data security, privacy, and governance in alignment with organizational policies and industry standards.
  - Implement best practices to meet SLA requirements for business continuity and disaster recovery.
- Collaboration:
  - Partner with data analysts, scientists, and business stakeholders to understand requirements and deliver solutions and data insights that drive impact.
  - Build production DAG workflows for batch data processing and storage.
- Monitoring and Maintenance:
  - Establish monitoring systems for reliability and proactively address issues to ensure system uptime and data integrity.
  - Set up frameworks and tools that help team members create and debug pipelines on their own.
What We're Looking For:
- BS degree in Computer Science or a similar technical discipline; MS/PhD a plus.
- 6+ years of experience in software engineering, with at least 4 years in data engineering or data infrastructure.
- Experience with Airflow, Temporal, or other workflow orchestration tools.
- Experience with streaming data architectures using Kafka and Flink is a plus.
- Experience with Snowflake, Spark, Trino, or other query engines.
- Experience with data modeling frameworks such as dbt and SQLMesh.
- Experience working with Apache Iceberg or other data lake formats.
- Familiarity with data lake ingestion patterns.