Senior Data Engineer - Data Science & AI
Location
United States
Posted
8 days ago
Salary
Not specified
Job Description
By joining Sedgwick, you'll be part of something truly meaningful. It’s what our 33,000 colleagues do every day for people around the world who are facing the unexpected. We invite you to grow your career with us, experience our caring culture, and enjoy work-life balance. Here, there’s no limit to what you can achieve.
Newsweek Recognizes Sedgwick as America’s Greatest Workplaces National Top Companies
Certified as a Great Place to Work®
Fortune Best Workplaces in Financial Services & Insurance
Role Overview
As a Senior Data Engineer within the Transformation Office, you are the hands-on architect of the data supply chain for our most advanced initiatives. You will be responsible for the "heavy lifting" required to fuel Data Science models and AI applications with high-fidelity data. Your mission is to build the pipelines that bridge our legacy on-prem systems (Mainframes, SQL Server, DB2) with our modern Snowflake environment and AWS/Azure AI stacks. You are a "day-one" builder who ensures that data is not just moved, but engineered for the specific requirements of model training, feature stores, and RAG-based AI systems.
Key Responsibilities
• Hybrid Data Pipeline Execution: Design and implement robust ETL/ELT pipelines to ingest data from legacy on-prem sources, AWS (S3/RDS), and Azure (Blob/SQL), centralizing it for consumption in Snowflake and AI services.
• Engineering for Data Science: Build and maintain Feature Stores and specialized datasets optimized for machine learning, ensuring Data Scientists have immediate access to clean, versioned, and statistically valid data.
• Engineering for AI (RAG & LLMs): Develop the data pipelines required for Generative AI, including the automated extraction, chunking, and loading of unstructured data into vector stores across AWS and Azure.
• Snowflake Power-User Execution: Act as the technical lead for our Snowflake data warehouse, implementing sophisticated data modeling, Snowpipe automation, and compute optimization to support high-concurrency AI workloads.
• Legacy "Back-Reach" Engineering: Execute non-invasive data extraction patterns to unlock mission-critical data from decades-old on-premise systems without disrupting core business operations.
• Multi-Cloud Orchestration: Manage complex, cross-platform data workflows using Airflow, Step Functions, or Azure Data Factory, ensuring the synchronization of data across our multi-cloud AI posture.
• IT & Security Diplomacy: Partner directly with central IT, Database Administrators, and Security teams to solve connectivity hurdles (PrivateLink, IAM, firewalls) and secure "license to operate" for new data flows.
• Data Quality for Model Integrity: Implement automated validation and observability layers to detect data drift and quality issues that could compromise the accuracy of production AI and Data Science models.
• Cost & Performance Management: Drive the efficiency of our data stack by optimizing storage and query performance in Snowflake, AWS, and Azure to manage the ROI of the Transformation Office.
• Direct Stakeholder Collaboration: Work as a dedicated engineering partner to MLOps and Data Science teams to rapidly iterate on data requirements for evolving AI use cases.
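To give a flavor of the "Engineering for AI (RAG & LLMs)" work described above, here is a minimal, illustrative sketch of the chunking step in a RAG ingestion pipeline: splitting unstructured text into overlapping windows ready for embedding and loading into a vector store. The function name and the chunk-size/overlap parameters are hypothetical choices for illustration, not part of the posting.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character-window chunks for embedding.

    chunk_size and overlap are illustrative defaults; production pipelines
    typically tune these per document type and embedding model.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # advance by this much so windows overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # the final window reached the end of the text
    return chunks
```

The overlap preserves context that straddles chunk boundaries, so a retrieval query matching text near a cut point still surfaces a chunk containing the full passage.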
Qualifications
• Education: Bachelor’s degree in Computer Science, Data Engineering, or a related field is required. A Master’s degree is highly desirable.
• Proven Execution: 6+ years of hands-on data engineering experience, with a track record of building production-grade pipelines for Data Science and AI in multi-cloud environments.
• Snowflake Mastery: Expert-level proficiency in Snowflake architecture, including data sharing, performance tuning, and the integration of Snowflake with external cloud AI services.
• Multi-Cloud Proficiency: Advanced, hands-on knowledge of AWS (S3, Glue, Lambda) and Azure (Data Factory, Synapse) data services.
• Technical Stack: Mastery of Python, SQL, and PySpark. Deep experience with data orchestration and containerization (Docker).
• Legacy Expertise: Proven ability to interface with "old world" tech (on-premise SQL, Mainframe extracts, flat files) and transform it for modern cloud consumption.
• AI/DS Fluency: A strong understanding of the specific data needs for Machine Learning (feature engineering) and Generative AI (vectorization and embedding pipelines).
• Execution Mindset: A "get-it-done" attitude, capable of navigating enterprise bureaucracy and technical debt to ship code at the speed required by a Transformation Office.
#LI-TS1 #remote
Sedgwick is an Equal Opportunity Employer and a Drug-Free Workplace.
If you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, consider applying for it anyway! Sedgwick is building a diverse, equitable, and inclusive workplace and recognizes that each person possesses a unique combination of skills, knowledge, and experience. You may be just the right candidate for this or other roles.