About the company
We combine the power of Web3 and creativity to build experiences that connect people from all corners of the globe.
Responsibilities: What you’ll do
📍 Own and develop critical infrastructure and new features for our next-generation analytics platform, supporting central functions such as marketing, live ops, and finance.
📍 Build scalable, accurate, and extensible stream processing applications using Spark, Kafka, Hive, Druid, Airflow, and Scala/Java (see the sketch after this list).
📍 Ensure best practices and standards across our data ecosystem.
📍 Establish and improve automation processes and optimize our data infrastructure.
📍 Collaborate with cross-disciplinary teams and communicate effectively with stakeholders.
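To make the stream processing work concrete, here is a minimal sketch of the kind of pipeline this role builds: Spark Structured Streaming reading events from Kafka and producing windowed aggregates. The broker address, topic name, and event schema are hypothetical placeholders, and running it assumes the spark-sql-kafka connector is on the classpath.

```python
# Minimal Spark Structured Streaming sketch: Kafka in, windowed counts out.
# Broker, topic, and schema below are illustrative assumptions, not real config.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("analytics-events").getOrCreate()

# Hypothetical event schema for illustration only.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream and parse each message's JSON payload.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "game-events")                   # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Count events per type in 5-minute windows, tolerating 10 minutes of lateness.
counts = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"), col("event_type"))
    .count()
)

# Write running counts to the console; a real pipeline would sink to Druid/Hive.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```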
Requirements: Who you are and what you’ve done
📍 5+ years of software engineering experience.
📍 3+ years of data engineering experience, especially on back-end data infrastructure.
📍 Extensive experience with Java, Python, Kafka, Druid, and Hadoop/EMR.
📍 Experience building highly scalable real-time and batch data pipelines.
📍 Bachelor's degree in computer science, statistics, mathematics, data engineering, or another field with equivalent proven engineering experience.
📍 Understanding of the tradeoffs between off-the-shelf services and home-grown solutions.
📍 Experience with workflow orchestration tools such as Oozie, Luigi, and Airflow (a minimal Airflow sketch follows this list).
📍 Experience with SQL and SQL-like languages.
📍 Proficiency in writing well-tested code, including unit, integration, and end-to-end tests.
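For candidates less familiar with the orchestration tools named above, here is a minimal sketch of a daily batch workflow in Airflow (assuming Airflow 2.4+). The DAG id, schedule, and task bodies are hypothetical placeholders chosen for illustration.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform chain.
# DAG id, schedule, and task callables are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Placeholder: pull yesterday's raw events from object storage."""


def transform():
    """Placeholder: clean and aggregate events into reporting tables."""


with DAG(
    dag_id="daily_analytics_rollup",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run transform only after extract succeeds.
    extract_task >> transform_task
```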