PP – Data Engineer B. – Job0219

Summary

We are seeking a highly skilled and experienced Senior Data Engineer to join our Credit Platform Data team. In this pivotal role, you will be responsible for designing, building, and maintaining robust data pipelines and ETL processes that enable the ingestion, transformation, and loading of data from diverse sources into our data warehouse. Your work will directly impact the reliability, scalability, and efficiency of our data infrastructure, supporting critical credit platform operations and analytics. Collaborating closely with product managers, analysts, and other stakeholders, you will help translate business requirements into scalable data solutions that drive informed decision-making and business growth.

Responsibilities

As a Senior Data Engineer on the Credit Platform Data team, your core responsibilities will include:

  • Architect, develop, and maintain scalable and reliable ETL pipelines to ingest, transform, and load data from multiple sources into the data warehouse, ensuring data integrity and performance.
  • Develop and optimize data models that support efficient querying and reporting, aligning with business needs and platform scalability.
  • Work closely with product managers, data analysts, and other stakeholders to understand data requirements and translate them into technical solutions that meet business objectives.
  • Implement automated data quality checks and validation processes to ensure accuracy, completeness, and consistency of data across systems.
  • Proactively identify, diagnose, and resolve data-related issues, minimizing downtime and ensuring continuous data availability.
  • Maintain comprehensive documentation of data pipelines, processes, and system architecture. Participate actively in design and code reviews to uphold high-quality standards.
  • Stay abreast of emerging data engineering technologies and industry best practices to recommend and implement improvements that enhance system performance and scalability.

Requirements

Must-Have Skills

* Bachelor’s degree in Computer Science, Engineering, or a related field.

* SQL: Expert-level proficiency in SQL for querying, manipulating, and optimizing data in relational databases. Ability to write complex queries and optimize them for performance.

* Python: Strong programming skills in Python, including experience with data manipulation libraries and scripting for automation.

* PySpark: Proficient in using PySpark for distributed data processing and building scalable ETL pipelines on big data platforms.

* Pandas: Experience with Pandas library for data analysis and transformation in Python, enabling efficient handling of structured data.

* ETL (Extract, Transform, Load): Deep understanding of ETL concepts and hands-on experience designing and implementing ETL workflows that ensure data accuracy and timeliness.

* Data Modeling: Skilled in designing logical and physical data models that support efficient data storage, retrieval, and analytics.

* Data Warehousing: Experience working with data warehouse architectures, including star and snowflake schemas, and optimizing data storage for analytical workloads.

* Unix: Proficient in Unix/Linux operating systems, including command-line tools and shell scripting for automation and system management.

* Shell Scripting: Ability to write and maintain shell scripts to automate routine tasks and data pipeline operations.

* Database Development: Experience in developing and maintaining databases, including schema design, indexing, and query optimization.

* Automation Testing: Knowledge of automation testing frameworks and practices to ensure the reliability and quality of data pipelines and ETL processes.

Target Start Date: 9/29/2025
Expected Duration: Ongoing
Time Zone: PST / EST / CT
Country Restrictions: Brazil and Argentina ONLY

Job Type: Remote
Allowed Countries: Argentina, Brazil

Apply for this position
