Fractal River
We help growth companies scale faster by building their customer operations, analytics, and data engineering capabilities.
Data Engineer
Location
United States
Posted
123 days ago
Salary
Not specified
Bachelor's Degree · 1 yr experience · Experience accepted · English · Amazon Redshift · AWS · Azure · BigQuery · Cloud · Docker · Google Cloud Platform · Python · SQL · Terraform
Job Description
• Help design and implement data pipelines and analytics infrastructure with the latest technologies.
• Create and deploy AI tools and agentic infrastructure to enhance both client and internal capabilities.
• Build data warehouses using tools like dbt, working with platforms such as Snowflake, Redshift, and BigQuery.
• Create beautiful dashboards and reports, and work with customers to create self-service data exploration capabilities.
• Build data pipelines and integrations to connect systems, automate processes, and ensure data flows seamlessly across platforms.
• Leverage APIs from multiple systems to extract and update data, trigger and monitor processes, and in general help tie our customers’ infrastructures into cohesive platforms that power their growth.
• Maintain and oversee cloud infrastructure to ensure it runs with the reliability and performance our customers expect.
• Help create data models, establish data governance frameworks, define metrics, and ensure data quality across reporting layers.
• Develop technical documentation and best practices for our customers.
• Drive internal improvement initiatives by identifying opportunities for efficiency gains, discovering new best practices, and proposing internal projects that enhance our capabilities and processes.
• Contribute to our evolving DevOps and DataOps practices, helping shape how we work as we continuously improve.
• Coordinate projects, activities, and tasks to ensure objectives and key results are met.
• Identify opportunities, generate innovative solutions, and improve existing product features.
Job Requirements
- Our ideal candidate is someone with 1-5 years of working experience in fast-paced environments, a high tolerance for ambiguity, and a passion for constant learning.
- Comfortable with Python and SQL fundamentals, but more importantly, you know when and how to leverage AI tools to accelerate your work while maintaining quality and understanding.
- Eager to leverage AI tools (ChatGPT, Claude, Gemini, etc.) to fuel creativity and problem-solving—not to replace your deliverables, but to expand your thinking and improve them.
- Collaborative and communicative, able to work with customers and team members effectively.
- Adaptable: Thrives in environments with constant iteration and welcomes creative solutions to challenging problems.
- Process-minded: Capable of developing and improving best practices in data, DevOps, and AI-assisted workflows.
- Detail-oriented: You have strong attention to detail and care about the quality of your work.
- English Language Requirements: CEFR level C1 (Advanced), meaning you can:
- Understand a wide range of demanding, longer texts and recognize implicit meaning
- Express yourself fluently and spontaneously without much obvious searching for expressions
- Use language flexibly and effectively for social, academic, and professional purposes
- Produce clear, well-structured, detailed text on complex subjects
- Nice-to-haves (not required):
- Familiarity with data warehouses like Snowflake, Redshift, or BigQuery
- Experience with data modeling, data governance, and creating complex visualizations
- Experience with BI tools such as Looker, Sigma, or Power BI
- AWS/GCP/Azure services knowledge
- Experience with development tools such as Terraform, Docker, CircleCI, and dbt
- Certifications (AWS, Google Professional Data Engineer, etc.)
- Knowledge of Google Analytics, Salesforce, HubSpot, and/or Zendesk
- Comfort working in non-hierarchical, diverse work environments
- Bachelor’s degree in computer science or similar field
Benefits
- Personal development plan with an accelerated career track.
- Access to an extensive reference library of books, case studies, and best practices.
- Unlimited access to AI tools (ChatGPT, Claude, Gemini, etc.).
- Unlimited budget for work-related books.
- Online training (Udemy, nanodegrees, etc.) and English language training.
- Stretch assignments, pair programming, and code reviews with new technologies.
- Yearly performance bonus based on personal performance.
- Yearly success bonus based on company performance.
- Home office setup including fast internet, large monitor, and your favorite keyboard and mouse.
- After a few months: a gaming chair, espresso maker, standing desk, and speakers (or other items like these).
- Monthly wellness budget to cover items such as sodas, tea, snacks, pet care, yoga, or other wellness-related items.
- Company card for wellness budget and company expenses.
- Three floating holidays and unlimited (but reasonable) PTO starting the first year.
- Fun company off-sites!