About the company
We work with businesses globally to deliver tailored, end-to-end Artificial Intelligence, Consulting, Data, Digital, Cloud & DevOps, and Software Engineering solutions that drive value and growth. Our business domain expertise covers hi-tech, financial services, and insurance, while we explore the art of the possible in our groundbreaking Financial Labs (FinLabs).
Job Summary
Responsibilities:
- Lead the design and development of data pipelines for ingestion, transformation, and loading of data from various sources (databases, APIs, streaming platforms) into our data warehouse/lake.
- Write optimized and maintainable SQL queries and leverage SQLAlchemy for efficient database interaction.
- Implement robust data quality checks and monitoring systems to ensure data integrity and accuracy.
- Develop comprehensive documentation and contribute to knowledge sharing within the team.
- Contribute to the design and implementation of data governance policies and procedures.
Required skills and qualifications
- 5+ years of hands-on experience in a Data Engineering role, with strong proficiency in Python (version 3.6+).
- Extensive experience working with relational databases and writing complex SQL queries.
- Proven expertise with SQLAlchemy or similar ORM libraries.
- Experience with workflow management tools like Airflow (experience with PySpark or PyFlink is a major plus).
- Solid understanding of data warehousing concepts and experience working with large datasets.
- Ability to guide and mentor junior developers, fostering a collaborative team environment.
- Strong communication skills, both written and verbal, with the ability to explain complex technical concepts to both technical and non-technical audiences.
- Experience deploying and managing applications on cloud platforms such as OpenShift, ECS, or Kubernetes.


