About the company
Figure is transforming the trillion-dollar financial services industry using blockchain technology. In six years, Figure has unveiled a series of fintech firsts on the Provenance blockchain across the loan origination, equity management, private fund services, banking, and payments sectors - bringing speed, efficiency, and savings to both consumers and institutions. Today, Figure is one of fewer than a thousand companies globally considered a unicorn. Our mission requires a creative, team-oriented, and supportive environment where everyone can do their absolute best. The team is composed of driven, innovative, collaborative, and curious people who love architecting ground-breaking technologies. We value individuals who bring an entrepreneurial mindset to every task and who embrace our culture of innovation. Every day at Figure is a journey of continuous learning, paired with a daily focus on getting work done that makes a difference. Join a team of proven leaders who have already created billions of dollars in value in the FinTech space!
What You’ll Do
📍Leverage Spark, Airflow, Apache Beam, Google Kubernetes Engine, BigQuery, and other tools to build robust and efficient data pipelines.
📍Expand and optimize our data and data pipeline architecture, and optimize data flow and collection for cross-functional teams.
📍Collaborate with project leads and other software engineers across multiple teams.
📍Be a leader, use your voice, and apply your tech skills to solve real-world problems.
What We Look For
📍BS degree in Computer Science or related technical field, or equivalent practical experience.
📍6+ years of proven experience as a data engineer.
📍5+ years writing production-quality Python.
📍3+ years’ experience with Spark, either in Python or in Scala. Bonus points for Spark on EMR, Dataproc, or Kubernetes.
📍Working knowledge of Google Cloud tools (Compute Engine, Cloud Storage, GKE, GCR, AutoML, etc.).
📍Expertise building and optimizing data pipelines using Kafka (preferred), Kinesis, or another event bus.
📍Deep experience with data frameworks and tools like Spark, Spark Streaming, Apache Beam, and Airflow.
📍Comfort and experience working with CI/CD processes, tooling, and sound software development practices.
📍Advanced SQL knowledge, including query authoring and experience with relational databases, as well as working familiarity with a variety of SQL and NoSQL databases, including BigQuery (preferred), MySQL/MariaDB, Postgres, and Cassandra.
📍Expertise in data modeling for data science, reporting, and analytics, including dimensional and transactional models.
📍Ability to thrive in a fast-paced growing company.