About the company
IMC is a leading trading firm, known worldwide for our advanced, low-latency technology and world-class execution capabilities. Over the past 30 years, we've been a stabilizing force in the financial markets, providing the essential liquidity our counterparties depend on. Across offices in the US, Europe, and Asia Pacific, our talented employees are united by our entrepreneurial spirit, exceptional culture, and commitment to giving back. It's a strong foundation that allows us to grow and add new capabilities, year after year. From entering dynamic new markets, to developing a state-of-the-art research environment and diversifying our trading strategies, we dare to imagine what could be and work together to make it happen.
Job Summary
Your Core Responsibilities:
- Develop large-scale distributed training pipelines to manage datasets and complex models
- Build and optimize low-latency inference pipelines, ensuring models deliver real-time predictions in production systems
- Develop libraries to improve the performance of machine learning frameworks
- Maximize performance in training and inference using GPU hardware and acceleration libraries
- Design scalable model frameworks capable of handling high-volume trading data and delivering real-time, high-accuracy predictions
- Collaborate with quantitative researchers to automate ML experiments, hyperparameter tuning, and model retraining
- Partner with HPC specialists to optimize workflows, improve training speed, and reduce costs
- Evaluate and roll out third-party tools to enhance model development, training, and inference capabilities
- Dig into the internals of open-source ML tools to extend their capabilities and improve performance
Your Skills and Experience:
- 5+ years of experience in machine learning with a focus on training or inference systems
- Hands-on experience with real-time, low-latency ML pipelines in high-performance environments is a strong plus
- Strong engineering skills in Python, CUDA, or C++
- Knowledge of machine learning frameworks such as PyTorch, TensorFlow, or JAX
- Proficiency in GPU programming for training and inference acceleration (e.g., cuDNN, TensorRT)
- Experience with distributed training for scaling ML workloads (e.g., Horovod, NCCL)
- Exposure to cloud platforms and orchestration tools
- A track record of contributing to open-source projects in machine learning, data science, or distributed systems is a plus



