Job Type
Work Type
Location
Experience
1. Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and load data from various sources into our data warehouse or data lake.
2. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and implement solutions that support business insights and decision-making.
3. Optimize data pipelines and processes for performance, reliability, and scalability.
4. Design and implement data models, schemas, and metadata to support data governance and analytics requirements.
5. Monitor and troubleshoot data processing jobs, performance issues, and data quality problems.
6. Ensure data security, privacy, and compliance with regulatory requirements.
7. Stay updated on emerging technologies and best practices in data engineering, and contribute to the continuous improvement of our data infrastructure and processes.
8. Document technical specifications, data flows, and system architecture.
9. Experience with data security in the data management domain, especially for data lakes.
10. Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
11. Experience with cloud platforms (AWS, Azure, or GCP) and familiarity with DevOps practices.
Required Skills: