Analytics Data Engineer
at OpenAI
4 months ago | 573 views | 4 applications


Full-time | San Francisco | $245,000 to $385,000 per year

About the company

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. Our Communications team is composed of PR/Media Relations, Events, Design, and other external-facing functions. The team’s ethos is to support OpenAI's mission and goals by clearly and authentically explaining our technology, values, and approach to safely building powerful AI. The Events team is a dynamic group dedicated to crafting extraordinary experiences that encompass our company's values and mission. Our team is driven by a passion for bringing people together to connect in meaningful ways.

Job Summary

In this role, you will:

📍 Design, build, and manage our data pipelines, ensuring all user event data is seamlessly integrated into our data warehouse.
📍 Develop canonical datasets to track key product metrics, including user growth, engagement, and revenue.
📍 Work collaboratively with various teams, including Infrastructure, Data Science, Product, Marketing, Finance, and Research, to understand their data needs and provide solutions.
📍 Implement robust and fault-tolerant systems for data ingestion and processing.
📍 Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to bear.
📍 Ensure the security, integrity, and compliance of data according to industry and company standards.

You might thrive in this role if you:

📍 Have 3+ years of experience as a data engineer and 8+ years of software engineering experience overall (including data engineering).
📍 Are proficient in at least one programming language commonly used in data engineering, such as Python, Scala, or Java.
📍 Have experience with distributed processing technologies and frameworks, such as Hadoop and Flink, and with distributed storage systems (e.g., HDFS, S3).
📍 Have expertise with an ETL scheduler such as Airflow, Dagster, Prefect, or a similar framework.
📍 Have a solid understanding of Spark and the ability to write, debug, and optimize Spark code.
