What you'll do
You will work primarily on backend data platforms, focusing on data modeling, database design, and efficient data processing pipelines implemented in Python.
Your responsibilities will include:
- Designing, developing, and maintaining scalable data processing pipelines in Python
- Designing and optimizing data models and database schemas in line with industry best practices
- Building efficient, performant data storage structures for large, complex datasets
- Developing and maintaining ETL/ELT processes for structured and semi-structured data
- Designing and developing data-focused microservices in Python
- Working with data formats such as XML and JSON; experience with XBRL, DPM, or SDMX is a strong plus
- Ensuring data quality, consistency, traceability, and performance across the data lifecycle
- Collaborating closely with data architects, analysts, and domain experts to translate requirements into robust technical solutions
- Writing clean, maintainable, and well-tested code, following development standards and participating in code reviews
- Supporting system integration via APIs and messaging-based communication
- Participating in Agile ceremonies and contributing to the continuous improvement of engineering practices