Turrior - Let work find you
Recruiters get AI-ranked shortlists and automated outreach, filling roles up to 5× faster.
Careers at TRIA Recruitment
All open opportunities, right here. Explore, apply, grow.
Apply now

Senior Data Engineer - Azure

Other • 4 Nov 2025
Verified by Turrior

Content + Source + Freshness • 12 Dec 2025 • 95% confidence

88 / 100

Offer value

This role is well-compensated with advanced technical challenges and excellent career advancement opportunities.

  • Competitive salary up to £75k plus bonuses.
  • Senior role with significant responsibilities.
  • Opportunity to innovate in a key industry.
Pros
  • Substantial salary and bonus structure.
  • Pivotal role in data transformation within a leading FMCG.
  • Collaboration with cross-functional teams for impactful results.
Cons
  • High expectations for advanced technical knowledge.
  • Fast-paced environment with potential for high workloads.

Who it's for

Senior • Hybrid

Good fit
  • Experienced data engineers with senior-level experience.
  • Professionals looking to advance in data architecture.
Not recommended for
  • Individuals without a solid technical background.
  • Candidates seeking less challenging roles.

Motivation fit

  • Desire to lead data initiatives.
  • Interest in advanced data processing technologies.

Key skills

Data Engineering • Azure Databricks • ETL/ELT processes
Score: 88/100 • AI-verified analysis

About the job

Senior Databricks Data Engineer • 2-3 days on-site, central London • up to £75k + 20% Bonus + Excellent Benefits

Our client is a leading global Retail/FMCG brand undergoing an exciting period of rapid growth and transformation. With significant investment in data and technology, they are building a world-class data platform to power decision-making across every area of the business - from supply chain and logistics to marketing, customer sales, and in-store operations.

We are seeking an experienced Senior Data Engineer with deep expertise in Databricks and Medallion Architecture (Bronze-Silver-Gold) to design, build, and optimize our next-generation data platform. This role will be pivotal in developing scalable data pipelines, enabling advanced analytics, and driving data quality and governance across the organization. You'll work closely with data scientists, analysts, and business stakeholders to transform raw data into trusted, actionable insights that power critical business decisions.

Key Responsibilities
  • Design and implement scalable data pipelines and ETL/ELT workflows in Databricks using PySpark, SQL, and Delta Lake.
  • Architect and manage the Medallion (Bronze, Silver, Gold) data architecture for optimal data organization, transformation, and consumption.
  • Develop and maintain data models, schemas, and data quality frameworks across multiple domains.
  • Integrate data from a variety of structured and unstructured sources, including APIs, relational databases, and streaming data.
  • Optimize performance, scalability, and cost efficiency of Databricks clusters and workflows.
  • Collaborate with cross-functional teams to support analytics, machine learning, and business intelligence use cases.
  • Implement data governance, lineage, and observability best practices using tools such as Unity Catalog, DataHub, or Collibra.
  • Mentor junior engineers, fostering best practices in data engineering, testing, and DevOps for data (DataOps).
  • Stay current with emerging technologies in cloud data platforms, Lakehouse architecture, and data engineering frameworks.

Required Qualifications
  • 6+ years of experience in data engineering.
  • 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred).
  • Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments.
  • Strong knowledge of data modeling, ETL/ELT design, and data lakehouse concepts.
  • Proficiency in Python, SQL, and Spark optimization techniques.
  • Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery.
  • Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows.
  • Excellent communication skills and the ability to collaborate across teams.

Preferred Qualifications
  • Experience with Databricks Unity Catalog and Delta Live Tables.
  • Familiarity with streaming frameworks (Structured Streaming, Kafka, etc.).
  • Background in data observability and metadata management.
  • Exposure to machine learning pipelines or MLflow within Databricks.
  • Knowledge of infrastructure as code (IaC) using Terraform or similar tools.

To apply for this role, please email across your CV ASAP.
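For candidates unfamiliar with the Medallion pattern mentioned above, the sketch below shows the Bronze→Silver→Gold flow in plain Python. In Databricks this would be implemented with PySpark DataFrames and Delta Lake tables; the field names and cleaning rules here are illustrative assumptions, not details from the listing.

```python
# Illustrative Medallion (Bronze-Silver-Gold) flow using plain Python
# stand-ins for what would be PySpark/Delta Lake tables in Databricks.
# Field names and data-quality rules are hypothetical examples.

def bronze_ingest(raw_records):
    """Bronze: land raw data as-is, tagged with its layer for lineage."""
    return [{**r, "_layer": "bronze"} for r in raw_records]

def silver_clean(bronze_records):
    """Silver: validate and conform - reject keyless rows, normalize types."""
    cleaned = []
    for r in bronze_records:
        if r.get("order_id") is None:
            continue  # example data-quality rule: drop rows without a key
        cleaned.append({
            "order_id": str(r["order_id"]),
            "amount": float(r.get("amount", 0.0)),
        })
    return cleaned

def gold_aggregate(silver_records):
    """Gold: business-level aggregate ready for analytics consumption."""
    totals = {}
    for r in silver_records:
        totals[r["order_id"]] = totals.get(r["order_id"], 0.0) + r["amount"]
    return totals

raw = [
    {"order_id": 1, "amount": "9.5"},
    {"order_id": None, "amount": "3"},   # rejected at the Silver layer
    {"order_id": 1, "amount": 0.5},
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'1': 10.0}
```

Each layer only consumes the layer beneath it, which is the core idea the role's "optimal data organization, transformation, and consumption" responsibility refers to.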


End-to-end AI hiring for modern HR teams

Turrior uses artificial intelligence to create job listings, automate candidate screening, conduct video interviews, and apply comprehensive AI scoring — helping companies hire faster, more accurately, and with lower operational costs.

Key benefits:

  • AI-powered job creation and structured job data
  • Intelligent candidate screening and automated shortlisting
  • Video interviews with AI-based answer analysis
  • Comprehensive AI scoring of skills, experience, and role fit
  • Recruitment process automation and reduced time-to-hire
