Senior Data Engineer (#23158-1)
Content + Source + Freshness • 16 Dec 2025 • 95% confidence
Offer value
With a favorable compensation range and significant responsibilities, this position is well-suited for seasoned data engineers looking to advance their careers.
- Hourly rate: $80–$85/hr
- Participation in impactful AI/ML projects
- Mentorship roles available
- Requires significant AWS experience
Pros
- Hourly compensation ($80–$85/hr) is attractive for the data engineering field
- Engagement in cutting-edge AI/ML projects
- Mentoring opportunities for junior engineers
Cons
- Contract role may affect job security
- Expectations around AWS proficiency may limit some applicants
- Hybrid role requires onsite presence for part of the week
Who it's for
Senior / Lead • Hybrid with onsite requirements
Good fit
- Senior data engineers with a passion for AI and analytics
- Mentors looking to guide junior team members
- Professionals eager to innovate in data solutions
Not recommended for
- Entry-level candidates without sufficient experience
- Individuals uninterested in hybrid work formats
- Those not keen on mentoring opportunities
About the job
Apply now: Senior Data Engineer, hybrid in Burbank, CA. This 4-month contract position (with potential extension) starts September 30, 2025.
Job Title: Senior Data Engineer
Location-Type: Hybrid (3 days onsite – Burbank, CA)
Start Date: September 30, 2025 (or 2 weeks from offer)
Duration: 4 months (Contract, potential extension)
Compensation Range: $80.00 – $85.00/hr W2
Job Description:
We are seeking a Senior Data Engineer to join a lean-agile product delivery team focused on building scalable, governed, and AI/ML-ready data solutions. As part of a cross-functional pod, you will design and implement high-performance data pipelines, support analytics and machine learning workflows, and embed governance into all aspects of data delivery. This role requires strong AWS expertise, hands-on engineering, and the ability to collaborate across engineering, product, and architecture teams.
Day-to-Day Responsibilities:
- Design & Build Scalable Data Pipelines: Develop batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and orchestration frameworks like Airflow.
- Optimize & Monitor: Ensure pipelines are resilient, cost-efficient, and scalable.
- Enable Analytics & AI/ML: Deliver structured, high-quality data to BI tools and ML workflows; partner with data scientists to support feature engineering and model deployment.
- Ensure Governance & Quality: Embed validation, lineage, tagging, and metadata standards into pipelines; contribute to the enterprise data catalog.
- Collaborate & Mentor: Participate in Agile ceremonies, architecture syncs, and backlog refinement. Mentor junior engineers and advocate for reusable services across pods.
Requirements:
Must-Haves:
- 7 years of experience in data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3).
- Proficiency with SQL, Python, and PySpark for data transformations.
- Experience with orchestration tools such as Airflow or Step Functions.
- Proven ability to optimize pipelines for both batch and streaming use cases.
- Understanding of data governance practices, including lineage, validation, and cataloging.
- Experience with modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
Nice-to-Haves:
- Experience influencing platform-first approaches across pods.
- Strong collaboration and mentoring skills.
- Knowledge of advanced governance practices and large-scale data platform operations.
Soft Skills:
- Excellent communication skills for cross-functional collaboration.
- Ability to mentor and guide junior engineers.
- Proactive problem solver with strong organizational and teamwork skills.
