Quilt

Ever wondered how your favorite local shops compete with the big guys? That’s where we come in. We’re Quilt Software, providing Main Street's unsung heroes – from quirky cheese shops to family-run jewelry stores – with the tools they need to compete. Last year, we helped 14,000+ shops make over $2 billion in sales with our family of industry-specific software solutions. If you get a kick out of supporting local businesses, love great software, and want to be part of a company that’s powering Main Street, we’d love to chat. Come join us in our quest to keep local retail not just alive, but thriving!

Senior Data Engineer

Data Engineer · Full Time · Remote

Location

United States

Posted

11 days ago

Salary

Not specified

Job Description

Role Description

We’re looking for a Senior Data Engineer to design, build, and optimize our data platforms so teams across the company can make fast, reliable, data-driven decisions. You’ll be a key technical leader, owning end-to-end data pipelines and modeling, and setting best practices around how we work with data.

You’ll work heavily with Databricks, Spark, SQL, and Python, building scalable data solutions that power analytics, reporting, and data products. Experience in payments or financial services is a strong plus.

What You’ll Do

  • Design and build data pipelines
      ◦ Develop, maintain, and optimize ETL/ELT pipelines on Databricks and Spark
      ◦ Integrate data from multiple internal and external sources into a centralized data platform
  • Own data modeling & architecture
      ◦ Design and maintain robust data models (e.g., star/snowflake schemas, data vault, dimensional models) to support analytics and self-service BI
      ◦ Establish and enforce data modeling standards and documentation
  • Ensure data quality, reliability, and performance
      ◦ Implement data quality checks, validation frameworks, and monitoring
      ◦ Tune queries and jobs for performance and cost efficiency in Databricks and downstream systems
  • Collaborate and lead
      ◦ Partner with data analysts, data scientists, and product/engineering teams to understand data needs and translate them into technical solutions
      ◦ Provide technical leadership and mentorship to other data engineers; help review designs and code
  • Governance & best practices
      ◦ Contribute to and refine our data governance, security, and access control practices
      ◦ Drive best practices around version control, CI/CD for data, and code standards

Qualifications

  • 7+ years of professional experience as a Data Engineer, Software Engineer, or similar role
  • Strong hands-on experience with Databricks (or a very similar cloud data platform) including cluster management, jobs, and notebooks
  • Advanced experience with Apache Spark for batch and/or streaming data processing
  • Expert-level SQL skills (complex joins, window functions, query optimization)
  • Strong Python skills for data engineering (e.g., PySpark, data processing libraries, scripting)
  • Proven experience in data modeling and designing schemas for analytics and reporting
  • Experience building and maintaining data pipelines in a cloud environment (AWS, Azure, or GCP)
  • Strong understanding of data warehousing concepts, ETL/ELT best practices, and data lifecycle
  • Solid software engineering fundamentals: version control (git), testing, code reviews, and CI/CD
  • Excellent communication skills and the ability to collaborate with technical and non-technical stakeholders

Preferred Qualifications

  • Experience in payments, fintech, banking, or broader financial services (e.g., transaction data, ledgers, risk, fraud, reconciliation)
  • Experience with streaming technologies (e.g., Spark Structured Streaming, Kafka, Kinesis, or similar)
  • Familiarity with dbt or similar transformations-as-code frameworks
  • Experience with orchestration tools (e.g., Airflow, Databricks Workflows)
  • Knowledge of BI tools (e.g., Power BI, Tableau, Looker) and how data models power them
  • Exposure to machine learning workflows and supporting data science teams
  • Experience implementing data governance, lineage, and catalog tools

What You’ll Bring

  • A product mindset: you think about the end users of data and build with usability in mind
  • A bias for automation, reliability, and scalability over one-off solutions
  • Comfort with ambiguity, ownership of complex problems, and a desire to continuously improve the data ecosystem

Benefits

  • 401(k) plan with company match
  • Medical, Dental, and Vision Plans
  • Paid Time Off
  • Paid Parental Leave
  • Paid Volunteer Leave
  • Fully Remote Work environment

Role Information

Full-Time, Remote in the United States. Applicants must be authorized to work for any employer in the U.S.; we are unable to sponsor or take over sponsorship of an employment visa at this time.
