About the company
We work with businesses globally to deliver tailored, end-to-end Artificial Intelligence, Consulting, Data, Digital, Cloud & DevOps, and Software Engineering solutions that drive value and growth. Our domain expertise spans hi-tech, financial services, and insurance, and we explore the art of the possible in our groundbreaking Financial Labs (FinLabs).
Job Summary
Responsibilities:
📍 Data Pipeline Development:
📍 Build and maintain scalable ETL/ELT pipelines using Databricks (a minimal sketch follows this list).
📍 Leverage PySpark/Spark and SQL to transform and process large datasets.
📍 Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems.
📍 Collaboration & Analysis:
📍 Work closely with multiple teams to prepare data for dashboards and BI tools.
📍 Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.
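To give a sense of the day-to-day work described above, here is a minimal sketch of a Databricks ETL job in PySpark. The ADLS path, column names, and target table are illustrative assumptions, not details from the posting.

```python
# Minimal ETL sketch for a Databricks notebook or job.
# Assumptions: the abfss:// path, columns, and target table are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Extract: raw CSV files landed in an ADLS Gen2 container
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplestorageaccount.dfs.core.windows.net/sales/orders/")
)

# Transform: basic cleansing and typing with PySpark functions
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write a Delta table that downstream dashboards/BI tools can query
(orders.write
       .format("delta")
       .mode("overwrite")
       .saveAsTable("analytics.sales_orders"))
```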
Requirements:
📍 Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse Architecture, table triggers, Delta Live Tables pipelines, Databricks Runtime, etc.); see the sketch after this list.
📍 Proficiency in Azure cloud services.
📍 Solid understanding of Spark and PySpark for big data processing.
📍 Experience with relational databases.
📍 Knowledge of Databricks Asset Bundles and GitLab.
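As context for the Delta Live Tables (DLT) requirement, a minimal sketch of a DLT pipeline in Python is shown below. The Auto Loader source path, table names, and expectation are assumptions for illustration only.

```python
# Minimal Delta Live Tables (DLT) sketch; the abfss:// source path and
# table/expectation names are illustrative assumptions.
import dlt
from pyspark.sql import functions as F  # `spark` is provided by the DLT runtime

@dlt.table(comment="Raw orders ingested from ADLS with Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("abfss://raw@examplestorageaccount.dfs.core.windows.net/orders/")
    )

@dlt.table(comment="Cleaned orders ready for BI consumption")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the check
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_date", F.to_date("order_date"))
        .dropDuplicates(["order_id"])
    )
```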