About the company
Creating a culture of collaboration in the U.S. From Austin and Atlanta to Highlands Ranch and our space in Mission Rock, our coast-to-coast hubs offer benefits, amenities, and flexibility that complement each office, its vibe, and its unique people.
Job Summary
Responsibilities include, but are not limited to:
📍 Build and optimize Spark or Hive queries for both batch and interactive data processing on Hadoop.
📍 Develop and automate ETL pipelines using Apache Airflow to schedule, monitor, and manage dependencies and data flows across multiple systems.
📍 Support the integration of data sources and facilitate reporting solutions for business stakeholders using Tableau and Power BI.
📍 Enhance and maintain existing data workflows, collaborating with operations and analytics teams to ensure reliable, high-performance data delivery.
📍 Troubleshoot issues related to data pipeline performance or job orchestration on Spark, Hadoop, and Airflow platforms.
Qualifications
Basic Qualifications:
📍 Pursuing a Bachelor's or Master's degree in Computer Science, Data Science, Software Engineering, Information Systems, or a related field, graduating between December 2026 and August 2027.
📍 Strong communication skills: clear, concise written and spoken communication, free of repeated grammatical or typographical errors, that demonstrates professional judgment.