About the company
Our UniQue IT people are the most valuable part of Uni Systems; their knowledge and experience have made us the leading and reliable systems integrator of today and have contributed to our steady financial growth. We have created and maintain a stable working environment for our employees, with countless opportunities to innovate and thrive. Our work culture recognizes our UniQue IT people, supports the free sharing of ideas and the flow of information through open communication, and appreciates and effectively utilizes the talents, skills and perspectives of each employee. At Uni Systems, we provide equal employment opportunities and prohibit any form of discrimination on the grounds of gender, religion, race, color, nationality, disability, social class, political beliefs, age, marital status, sexual orientation or any other characteristic.
Job Summary
Description
Our UniQue Team is growing, and we are looking for an AWS Data Engineer to join us. You will work on a number of exciting projects in a challenging, dynamic and fast-paced environment.
• Design, implement, and maintain the data architecture for all AWS data services
• Optimize data models for performance and efficiency
• Write SQL queries to support data analysis and reporting
• Monitor and troubleshoot data pipelines
• Generate reports and dashboards to visualize data
• Collaborate with software engineers to design and implement data-driven features
• Perform root cause analysis on data issues
• Keep up to date with new AWS data services and features
• Maintain documentation of the data architecture and ETL processes
• Train other team members on how to use the data architecture and ETL processes
Requirements
• Bachelor's degree in computer science, engineering, or a related field
• Minimum of 3 years' professional experience working with data in a business setting
• Experience with AWS data services
• Proficiency in SQL for data analysis and manipulation
• Experience with ETL (extract, transform, load) processes
• Strong understanding of modern approaches to data engineering and of the best approach to solving our customers' unique data problems
• Knowledge of SQL and Python; knowledge of other languages (PySpark, Java, Scala) is a plus
• Experience with Big Data technologies (Hadoop, Spark, Databricks, Snowflake)
• Strong experience building ETL/ELT pipelines, data lakes and data warehouses
• Experience with Apache Spark