About the company
IMC is a leading trading firm, known worldwide for our advanced, low-latency technology and world-class execution capabilities. Over the past 30 years, we’ve been a stabilizing force in the financial markets – providing the essential liquidity our counterparties depend on. Across offices in the US, Europe, and Asia Pacific, our talented employees are united by our entrepreneurial spirit, exceptional culture, and commitment to giving back. It's a strong foundation that allows us to grow and add new capabilities, year after year. From entering dynamic new markets, to developing a state-of-the-art research environment and diversifying our trading strategies, we dare to imagine what could be and work together to make it happen.
Job Summary
Core Responsibilities
📍 Architect, develop, and deploy our Big Data environment (Kafka, Hadoop, Dremio, etc.)
📍 Build, deploy, and monitor our data processing pipelines (Java, Python, Spark, Flink)
📍 Collaborate with development teams on data modeling, data ingestion, and capacity planning
📍 Work with users to ensure data integrity and availability
📍 Act as a Big Data SME and consult on a variety of data-related questions from users and developers
Skills and Experience
📍 5+ years' experience working in a mature data engineering environment
📍 3+ years of experience building Kafka streaming applications and/or maintaining Kafka clusters
📍 2+ years of experience building applications/pipelines with Big Data backends (S3, HDFS, Databricks, Iceberg, etc.)
📍 Experience with Apache Spark, Apache Flink, or similar tools
📍 Strong Java, Python, and SQL development skills
