About the company
Instead of waiting for the best talent to come to you (the majority of inbound applications are irrelevant), do targeted outreach using AI and India's largest talent database.
Job Summary
We are looking for an experienced Data Engineer to design, build, and operate large-scale batch and streaming data pipelines on Databricks, Spark, and Delta Lake.
Key Responsibilities
📍 Design, build, and maintain large-scale batch and streaming data pipelines using Databricks and Spark (PySpark/Scala)
📍 Develop and optimize data transformations leveraging Delta Lake and lakehouse architectures
📍 Implement job scheduling, workflow orchestration, and performance tuning for scalable data processing
📍 Write clean, maintainable, and well-documented Python and SQL code following best practices and version control standards
📍 Manage distributed data storage formats, metadata, and performance optimization strategies
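To illustrate the kind of pipeline work this role involves, here is a minimal sketch of a Structured Streaming job on Spark that ingests JSON events and writes them to a Delta table. It is illustrative only: the storage paths, schema, and event fields are hypothetical placeholders, and the Delta format assumes a Databricks or Delta Lake-enabled runtime.

```python
# Minimal sketch (illustrative only): stream raw JSON events from object
# storage into a Delta table. Paths and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", StringType()),
    StructField("amount", DoubleType()),
])

# Read a stream of raw JSON files as they land in object storage.
raw_events = (
    spark.readStream
    .schema(event_schema)
    .json("s3://example-bucket/raw/events/")  # hypothetical path
)

# Light transformation: parse the timestamp and drop malformed rows.
clean_events = (
    raw_events
    .withColumn("event_ts", to_timestamp(col("event_ts")))
    .dropna(subset=["event_id", "event_ts"])
)

# Append to a Delta table, with checkpointing for fault-tolerant processing.
query = (
    clean_events.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .start("s3://example-bucket/delta/events/")
)

query.awaitTermination()
```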
What Makes You a Great Fit
📍 4–6+ years of experience in Data Engineering or Data Platform roles
📍 Strong hands-on expertise in Databricks, Spark (PySpark/Scala), and Delta Lake
📍 Proficiency in Python and SQL with a deep understanding of distributed data systems
📍 Experience building and maintaining scalable batch and streaming pipelines
📍 Solid understanding of cloud-native data architectures across AWS, Azure, or GCP
📍 Knowledge of data governance, security, access control, and compliance best practices
📍 Experience with data modeling and data warehousing concepts
📍 Familiarity with Django/Django REST Framework or similar API development frameworks is a plus