Senior Data & Platform Engineer

Data Engineer · Full Time · Remote · Team 1,001-5,000

Location: United States

Posted: 4 days ago

Salary: $175K - $200K / year


Job Description


Role Description

The Senior Data & Platform Engineer is responsible for the day‑to‑day technical execution, reliability, and evolution of GID’s enterprise data platform. This role ensures the data foundation is scalable, secure, cost‑optimized, and ready to power analytics, AI, operational automation, and internal applications. The role spans hands‑on engineering, platform ownership, standards development, and technical leadership to accelerate data‑driven decision making across the organization. The Senior Data & Platform Engineer will embrace our company values of accountability, inclusiveness, energy, and courage.

Responsibilities

  • Platform Ownership & Architecture
    • Own the design, architecture, reliability, and performance of the enterprise data platform, including its ingestion, storage, processing, orchestration, and consumption layers.
    • Implement and maintain modern data stack components (e.g., Snowflake, dbt, orchestration frameworks, metadata tools, quality frameworks).
    • Ensure platform scalability, availability, security posture, and cost efficiency.
  • Pipeline Engineering & Data Products
    • Build and maintain analytics‑ready and AI‑ready data pipelines, transformation models, semantic layers, and shared data services.
    • Develop and execute a unified and forward-looking vision for data products and engineering.
    • Lay the foundation for AI use cases by implementing a semantic layer, context graphs, and vector databases, and stay current with the latest best practices.
    • Design and implement reusable data assets, domain models, and standardized transformation patterns.
  • Governance, Quality, and Controls
    • Establish and enforce standards for data quality, data contracts, observability, lineage, and metadata management.
    • Implement access controls, RBAC, PII protection, and compliance with privacy regulations (GDPR, CCPA, internal retention policies).
    • Partner with data governance to establish stewardship practices, certified datasets, and SLA expectations.
  • Collaboration & Delivery
    • Partner with application engineering, data scientists, and data analytics teams to develop data products that unlock enterprise value.
    • Translate business requirements into scalable data architecture and reusable technical solutions.
    • Drive technical prioritization, sprint planning, and execution of the platform roadmap.
  • Leadership & Team Development
    • Mentor data engineers and act as the principal technical lead for engineering best practices.
    • Introduce modern engineering patterns.
    • Create documentation standards, operational runbooks, and incident response processes.

Qualifications

  • 7–10+ years in data & analytics engineering, platform engineering, data architecture or related technical fields.
  • Proven experience designing and operating modern cloud data platforms, especially Snowflake and dbt.
  • Strong understanding of the Snowflake platform, including advanced features, and of Microsoft Azure, including its data services, security, and cost governance.
  • Hands‑on expertise with building data pipelines using tools such as Azure Data Factory, Fivetran, Matillion, or similar ingestion & ETL frameworks.
  • Proficiency in SQL, Python, and data modeling.
  • Strong understanding of data lifecycle management, DevOps practices, data orchestration, and production data operations.
  • Strong problem‑solving and systems thinking abilities.
  • Ability to work in fast‑moving environments with ambiguous or evolving requirements.
  • Excellent communication skills with the ability to simplify complexity for non‑technical audiences.
  • A mindset of automation, reusability, and continuous improvement.
  • Experience operating in organizations modernizing legacy data landscapes into a modern cloud data stack.

Compensation

Our company considers a range of factors, including education and experience, when determining base compensation. Compensation range: $175,000 - $200,000, plus 15% bonus potential.

Benefits

  • Comprehensive benefits package, including medical, dental, vision, 401k, and PTO.
  • 1 hour of paid sick and safe time for every 30 hours worked.
  • 10 days of paid vacation time accrued bi-weekly.
  • 6 weeks of paid parental leave.
  • 10 paid holidays annually.
  • Up to 3 floating days.

