About the company
FalconX is the most advanced digital asset platform for institutions. We provide trade execution, credit & treasury management, prime offering, and market making services. Given our global operations, industry-leading technology, and deep liquidity, we have facilitated client transactions totaling $1 trillion in volume. Our products & services are regulated, compliant, and trusted.

We are a team of engineers, product builders, institutional sales and trading leaders, operations experts, and business strategists. Our teammates have entrepreneurial experience and come from companies such as Google, Apple, PayPal, Citadel, Bridgewater, and Goldman Sachs. And we embody our values: Think big; Drive bold outcomes; Be one team; Iterate with speed; and Be an entrepreneur.

We prioritize learning. Outcomes are mission-critical, but we also believe that learning in success and in failure will drive our continued success. Our industry is emergent - there's no shortage of experiments to get involved with as we continue growing and learning together.
Job Summary
What you’ll be working on:
📍Provide technical and thought leadership for Data Engineering and Business Intelligence
📍Create, implement, and operate the strategy for robust and scalable data pipelines for business intelligence and machine learning
📍Develop and maintain the core data framework and key infrastructure
📍Design data warehouses and data models for efficient, cost-effective reporting
📍Define and implement Data Governance processes for data discovery, lineage, access control, and quality assurance
Skills you'll need:
📍Degree in Computer Science, a related field, or equivalent professional experience
📍3+ years of strong experience with data transformation & ETL on large data sets using open technologies such as Spark, SQL, and Python
📍3+ years of complex SQL, with strong knowledge of SQL optimization and an understanding of logical & physical execution plans
📍1+ year working in an AWS environment, with familiarity with modern technologies such as AWS cloud services, MySQL databases, Redis caching, and messaging tools like Kafka/SQS
📍Experience with advanced Data Lake and Data Warehouse concepts, and data modeling experience (e.g., relational, dimensional, internet-scale logs)
📍Knowledge of Python, Spark (batch/streaming), SparkSQL, and PySpark
📍Proficiency in at least one object-oriented programming language: Python, Java, or C++