Senior Data Engineer
23 Oct 2025
Europe
Offer value
This role represents a significant opportunity to work at the intersection of AI and data engineering, offering a competitive package and room for individual growth.
- Role at the forefront of AI and automation technology
- Collaborative work environment with opportunities for growth
- Requires extensive knowledge of data engineering principles
Pros
- Innovative role at a leading edge of AI technology
- Work with diverse teams across various functions
- Supportive environment encouraging continuous learning
Cons
- Technical complexity may be overwhelming for some
- Requires up-to-date knowledge on AI and data tools
- Potential for tight deadlines and demanding deliverables
Who it's for
Senior • Remote
Good fit
- Seasoned data engineers and AI practitioners
- Individuals wanting to be at the forefront of AI integration
- Professionals excited to work remotely and collaborate across teams
Not recommended for
- Inexperienced candidates or those lacking data engineering background
- Individuals preferring traditional, non-technical roles
- Those who prefer highly structured environments over agile ones
Motivation fit
- Aspiration to innovate in data infrastructure and AI
- Interest in collaborative engineering environments
- Desire to influence and mentor junior engineers
Key skills
- Distributed systems and data pipeline engineering
- Expertise in AI integration and data modeling
- Collaboration in multi-functional teams
- Problem-solving and performance optimization
About the job
Your Responsibilities
At Hypatos, we build vertical AI Agents that automate document-heavy, back-office workflows across enterprise functions like finance, compliance, procurement, and customer service. To scale our impact, we are looking for a Senior Data Engineer who will architect and build robust, scalable data infrastructure powering our AI-driven automation systems.
This role is deeply technical and cross-functional: You’ll work closely with AI Engineers, Product, and Delivery teams to design distributed data pipelines, optimize real-time data flows, and ensure our systems are reliable, performant, and secure at scale.
Key Responsibilities
- Design and implement distributed data pipelines using Spark (PySpark), Kafka, and Kubernetes.
- Build and maintain scalable data infrastructure for real-time and batch processing.
- Develop and optimize data models and storage solutions using ClickHouse and vector databases.
- Collaborate with AI engineers to support Retrieval-Augmented Generation (RAG) pipelines and other LLM-powered workflows.
- Ensure data reliability, performance, and security across all environments.
- Contribute to CI/CD workflows, testing frameworks, and monitoring solutions for data systems.
- Translate business requirements into scalable data solutions in collaboration with Product and Delivery teams.
- Mentor junior engineers and promote best practices in data engineering and distributed systems.
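The responsibilities above mention supporting Retrieval-Augmented Generation (RAG) pipelines backed by vector databases. As a rough illustration of the retrieval step such pipelines rely on, here is a minimal, self-contained Python sketch using toy embeddings and plain cosine similarity. This is not Hypatos's implementation: a production system would use a real vector database and learned embeddings, and all names and data below are hypothetical.

```python
# Toy sketch of RAG-style retrieval: rank documents by cosine similarity
# between a query embedding and stored document embeddings.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def retrieve_top_k(query_vec, documents, k=2):
    """Return the ids of the k documents whose embeddings best match the query."""
    scored = [(cosine_similarity(query_vec, vec), doc_id)
              for doc_id, vec in documents.items()]
    scored.sort(reverse=True)  # highest similarity first
    return [doc_id for _, doc_id in scored[:k]]


# Hypothetical 3-dimensional embeddings keyed by document id.
docs = {
    "invoice_42": [0.9, 0.1, 0.0],
    "contract_7": [0.1, 0.8, 0.2],
    "po_13": [0.85, 0.2, 0.05],
}

print(retrieve_top_k([1.0, 0.0, 0.0], docs, k=2))  # → ['invoice_42', 'po_13']
```

In a real deployment the brute-force scan above would be replaced by an approximate-nearest-neighbor index inside the vector database, but the ranking logic is the same idea.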
Your Profile
- Attitude: Team player
- Background: Degree in Computer Science, Software Engineering, or a related discipline (or equivalent professional experience).
- Experience: 5+ years of experience in data engineering or backend systems, with a strong track record of building production-grade distributed systems.
- Distributed Systems: Deep understanding of distributed computing patterns and technologies (e.g., Spark, Kafka, Kubernetes).
- Data Infrastructure: Hands-on experience with ClickHouse, vector databases, and data modeling for high-performance applications.
- DevOps: Familiarity with CI/CD pipelines, GitHub workflows, Docker, and cloud native environments.
- AI Integration: Exposure to RAG pipelines and LLM-powered systems is a strong plus.
- Problem-Solving: Strong analytical mindset with attention to detail, debugging skills, and performance optimization experience.
- Communication Skills: Ability to clearly articulate technical concepts and collaborate effectively across teams.
- Growth Mindset: Openness to learning new technologies and adapting to evolving problem spaces.
Nice to Have
- Experience with Python in production environments.
- Familiarity with AI/ML-powered products or enterprise automation platforms.
- Knowledge of cloud platforms (AWS, GCP, or Azure).
- Experience mentoring or leading engineering teams.
- Exposure to enterprise system integrations (e.g., SAP, Salesforce, Oracle).
Our Promise
- We trust amazing people to do amazing things and make a long-term impact: you get freedom and ownership of meaningful work that directly affects the business
- Beyond top-of-market compensation, you will enjoy a personal development budget, a meal allowance, sporting activities, and free beers, as well as a hybrid working model
- We're building a positive organizational culture where personal and professional growth are just as important as business growth
- We believe different perspectives make Hypatos a better community - that is why we're committed to building a diverse and inclusive environment where you feel you belong
About Us
Hypatos is redefining enterprise work by deploying LLM-powered AI Agents into the heart of business operations. Backed by leading investors (Elaia, Blackfin Tech, Grazia Equity, UVC Partners, DTFC, Plug & Play), we are expanding rapidly and building the foundation for the next generation of intelligent business systems.
Join us to shape the future of enterprise automation and help improve the way hundreds of millions of people work every day.

