Staff ML Engineer, Robotics
12 Dec 2025
Offer value
High value: significant impact on robotics technology, the opportunity to apply advanced machine learning, and leadership potential.
- Focus on deploying cutting-edge machine learning in robotics
- Critical role in advancing perception and navigation systems
- Requires significant experience and expertise
Pros
- Influence on the future of robotic perception and navigation
- Engage in advanced machine learning applications
- Mentoring opportunities for junior engineers
Cons
- Demanding workload and high expectations for innovation
- Requires deep expertise in machine learning and robotics
- High level of cross-functional collaboration can be demanding
Who it's for
Senior • Remote or On-site in the US
Good fit
- Senior ML engineers passionate about robotics
- Individuals eager to influence technological innovations
- Candidates with mentoring aspirations
Not recommended for
- Entry-level engineers without domain experience
- Those avoiding collaborative work
- Professionals disinclined to engage with hands-on projects
About the job
What we’re doing isn’t easy, but nothing worth doing ever is.
We envision a future powered by robots that work seamlessly with human teams. We build artificial intelligence that enables service robots to collaborate with people and adapt to dynamic human environments. Join our mission-driven, venture-backed team as we build out current and future generations of humanoid robots.
As a Staff ML Engineer, Perception / Robotics, you will develop, deploy, and optimize machine learning models that enable robots to understand and navigate complex human environments. You will lead the design of ML systems, from sensor fusion to real-time inference, ensuring robustness in safety-critical, real-world deployments.
Responsibilities
- Develop and deploy ML models for perception/navigation tasks such as object detection, semantic segmentation, tracking, scene understanding, localization, and path prediction.
- Design and implement sensor fusion and mapping pipelines combining vision, depth, LIDAR, IMU, and other signals for robust perception and navigation in dynamic spaces.
- Build real-time ML inference pipelines optimized for robotic hardware and embedded compute.
- Set up data collection, labeling strategies, dataset curation, and synthetic data augmentation for training and evaluation.
- Establish metrics, benchmarks, and test frameworks to validate ML models in both simulation and real-world environments.
- Collaborate with robotics software engineers to integrate perception and navigation intelligence into autonomy stacks.
- Work with operations to analyze field data, diagnose performance gaps, and iterate on model improvements.
- Contribute to long-term ML, perception, and navigation architecture decisions, influencing the roadmap for future robots.
- Mentor junior ML engineers and contribute to building strong applied ML best practices within the team.
Skills and Experience
- Master’s or PhD in Computer Science, Robotics, Machine Learning, or a related field.
- 8+ years of experience in applied machine learning, computer vision, or robotics perception.
- Strong background in deep learning frameworks (PyTorch, TensorFlow, JAX).
- Hands-on experience with real-time perception/navigation tasks (detection, tracking, segmentation, path planning).
- Expertise in one or more sensor modalities: RGB/depth cameras, LIDAR, radar, or multimodal fusion.
- Experience deploying ML models on edge/embedded hardware (e.g., Jetson, TPU, ARM-based platforms).
- Familiarity with SLAM, mapping, and navigation pipelines.
- Solid software engineering skills in Python and C++ for ML system integration.
- Proven ability to take ML models from research prototype to production deployment.
- Strong debugging skills for diagnosing ML performance gaps in fielded systems.
