Data Engineer
Location: District of Columbia
Posted: 2 days ago
Salary: Not specified
Job Requirements
- 3–6+ years of experience in a data engineering or analytics engineering role
- Strong dbt fundamentals
- Solid SQL knowledge
- Python for pipeline development or data quality tooling is a plus
- Hands-on experience with Snowflake
- Comfort with GCP data services (BigQuery, Cloud Storage, Pub/Sub) is a plus
- Experience building dashboards for business stakeholders in tools like Hex, Looker, or similar
- Genuine curiosity about electricity pricing, competitive markets, and the grid is preferred
- An AI-native approach to productivity
Benefits
- Competitive salary + meaningful equity + benefits
- Flexible on location, though regular in-person collaboration with the team is valued