Experience:
1. Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and load data from various sources into our data warehouse or data lake.
2. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and implement solutions that support business insights and decision-making.
3. Optimize data pipelines and processes for performance, reliability, and scalability.
4. Design and implement data models, schemas, and metadata to support data governance and analytics requirements.
5. Monitor and troubleshoot data processing jobs, performance issues, and data quality problems.
6. Ensure data security, privacy, and compliance with regulatory requirements.
7. Stay updated on emerging technologies and best practices in data engineering, and contribute to the continuous improvement of our data infrastructure and processes.
8. Document technical specifications, data flows, and system architecture.
9. Experience with data security in the data management domain, especially at the data lake level.
10. Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
11. Experience with cloud platforms (AWS, Azure, or GCP) and familiarity with DevOps practices.

Required Skills:
- Knowledge of agile development methodologies.
- Familiarity with version control systems (e.g., TFS).
- Excellent analytical, problem-solving, and troubleshooting skills.
- Ability to work collaboratively in a team-oriented environment.
- Strong proficiency in SQL and experience with databases (SQL and NoSQL) and distributed storage systems.
- Proficiency in at least one programming language (Python, Java, Scala) and data manipulation libraries (e.g., Pandas, NumPy).
- Proficiency in big data technologies such as Hadoop, Spark, or Kafka.
- Proficiency in data integration and ETL tools.
- Skill in data engineering both in the cloud and on-premises.
- Proficiency in developing data pipelines.
- Proficiency in building data lakes, data warehouses, and data architecture.