102469 – Sr. Data Engineer

Summary

As a Senior Data Engineer, you will design, build, and support the core systems that power Milwaukee Tool’s data platform, enabling fast, data-driven decisions across the organization. This role focuses on delivering scalable data pipelines, self-service tools, and governance solutions that ensure trusted, accessible data for analytics and machine learning.

You will work closely with business partners and the Data Platform team to create enterprise-grade data products, influence engineering best practices, and mentor other engineers. The role offers the opportunity to shape long-term data strategy, improve operational reliability, and accelerate adoption of a modern data platform built on Azure and Databricks technologies.

Responsibilities

  • Design and build scalable data pipelines to ingest, transform, and curate data from APIs, databases, files, and event streams.
  • Lead technical design reviews and translate complex business needs into enterprise-grade data solutions.
  • Develop and optimize advanced data models (dimensional, data vault, domain-driven, canonical) to support analytics and productized datasets.
  • Champion SDLC best practices, continuous delivery, and data infrastructure automation using CI/CD and Infrastructure-as-Code.
  • Optimize distributed workloads using SQL, Python, and Spark; mentor others on tuning and scalable design patterns.
  • Build reusable data frameworks, libraries, and reference architectures to accelerate team productivity.
  • Perform root-cause analysis for major data incidents and lead long-term remediation to improve operational reliability.
  • Provide technical mentorship, guide code reviews, and help shape engineering capability maturity.
  • Collaborate with Architects, Data Leads, Product Owners, and cross-functional teams to define long-term data strategies.
  • Perform other duties as assigned.

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, or equivalent experience.
  • 5 to 7+ years of experience in data engineering or a related technical field.
  • Expertise in SQL and advanced proficiency in at least one programming language (Python preferred).
  • Hands-on experience with Azure.
  • Extensive hands-on experience building scalable pipelines and workflows in Databricks (Delta Lake, Spark, Unity Catalog, Jobs, Workflows).
  • Hands-on experience with distributed data processing technologies such as Apache Spark.
  • Proven experience designing and implementing complex data models across multiple business domains.
  • Strong knowledge of version control, CI/CD, DevOps/DataOps, automated testing, and engineering best practices.
  • Strong problem-solving, debugging, and analytical skills in complex, multi-system environments.
  • Ability to lead cross-functional engineering initiatives and thrive in agile, collaborative teams.

Nice to Have

  • Experience with Databricks Delta Live Tables (Unity Catalog and Databricks Workflows experience is already covered under Requirements).
  • DataOps experience (pipeline observability, monitoring, automated quality).
  • Knowledge of metadata management or cataloging platforms (Purview, Collibra, Alation).
  • Experience with streaming frameworks (Kafka, Event Hubs, Kinesis) used with Spark Structured Streaming.
  • Experience working in an Agile environment.
