
Principal Research Scientist – Vision AI, Simulation & Physical AI

$350k – $400k Yearly | Seattle, Washington, United States | Palo Alto, California, United States (Hybrid) | Full-time

About Centific

Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem—comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets—to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions can reduce GenAI costs by up to 80% and bring solutions to market 50% faster.

Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets.

About the Job

Principal Research Scientist – Foundation Models for Vision AI & Physical AI

Company: Centific

Location: Seattle, WA or Palo Alto, CA (Hybrid/Remote)

About the Team

Centific’s Physical AI Lab is building the next generation of embodied intelligence at the intersection of multimodal foundation models, simulation, agentic AI, and real-world robotics. Our mission is to move from perception and reasoning to robust real-world action across safety, industrial, healthcare, warehouse, autonomous systems, and smart environment use cases.

We are looking for a research leader with deep experience building foundation models, a strong publication record, and the ability to translate frontier research into deployable systems and long-term IP.

The Role

As Principal Research Scientist, you will define and drive Centific’s research agenda in Vision AI, multimodal foundation models, simulation-first learning, agentic AI, and embodied intelligence. You will lead a small team of researchers, engineers, and interns while contributing directly to model design, large-scale training, benchmarking, and external scientific visibility.

This role is for someone who has gone beyond applying existing models and has materially advanced architectures, training methods, datasets, or evaluation frameworks in AI, robotics, vision, autonomous driving, or multimodal learning.

What You’ll Do

· Lead high-impact research in multimodal foundation models, world models, embodied AI, vision-language-action systems, and agentic AI.

· Develop new approaches for perception, temporal reasoning, spatial intelligence, affordance understanding, autonomous decision-making, and sim2real transfer.

· Advance challenging robotics capabilities including dexterous manipulation, contact-rich interaction, bimanual coordination, long-horizon task execution, navigation in dynamic environments, and robust action under uncertainty.

· Contribute to large-scale model building, including multimodal pretraining, distributed training, fine-tuning, distillation, and evaluation of models for vision, robotics, and autonomous systems.

· Help shape research relevant to autonomous driving and mobile autonomy, including scene understanding, multimodal sensor reasoning, planning-aware perception, and edge-case robustness.

· Guide integration of research with simulation and digital twin platforms such as Isaac Sim, Isaac Lab, MuJoCo, Omniverse, or related environments.

· Establish rigorous benchmarks and reproducible evaluation frameworks for robustness, safety, generalization, manipulation success, policy performance, and real-world deployment readiness.

· Mentor Ph.D. interns and engineers, and help build a strong research culture grounded in rigor, speed, originality, and scientific excellence.

Minimum Qualifications

· Ph.D. in Computer Science, Robotics, Machine Learning, Computer Vision, Autonomous Systems, or a related field.

· Strong publication record in top venues such as CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, CoRL, RSS, or leading autonomous driving/robotics venues.

· 5+ years of research experience in academia, industry, or advanced R&D environments.

· Demonstrated experience building or advancing large-scale foundation models, novel architectures, or training methods in multimodal AI, vision, robotics, autonomous driving, embodied AI, world models, or simulation-based learning.

· Deep expertise in PyTorch and/or JAX, GPU training, distributed experimentation, and large-scale model development.

· Proven ability to lead ambitious technical programs and mentor junior researchers.

Preferred Qualifications

· Publications or patents in multimodal foundation models, dexterous robotics, autonomous driving, spatial intelligence, simulation-based learning, manipulation, or embodied AI.

· Strong experience in Vision AI, including perception, tracking, grounding, 3D scene understanding, video understanding, sensor fusion, or multimodal reasoning.

· Familiarity with agentic AI systems, tool-using agents, planning frameworks, and memory-based architectures; experience with agentic memory, knowledge graphs, or long-horizon reasoning systems is a plus.

· Experience with Isaac Sim, MuJoCo, OpenUSD/Omniverse, Open3D, PyTorch3D, NeRF/3DGS, or related simulation and 3D stacks.

· Familiarity with imitation learning, reinforcement learning, planning, MPC, control, teleoperation data pipelines, or policy learning for robotics and autonomous systems.

· Experience with Ray, Kubernetes, Triton, TensorRT, Docker, W&B, or large-scale training and deployment infrastructure.

· Background in trustworthy AI, robotics safety, evaluation, or explainability for autonomous systems.

What Success Looks Like

· Publishable, reproducible, and deployable research that strengthens Centific’s Physical AI portfolio.

· New technical IP in multimodal AI, simulation, dexterous robotics, autonomous systems, and embodied intelligence.

· Strong mentorship and research leadership across a growing team.

· Demonstrable impact on model robustness, large-scale training capability, sim2real performance, manipulation capability, and real-world deployability.

Our Stack

Modeling: PyTorch, JAX, Hugging Face, xFormers

Simulation: Isaac Sim, Isaac Lab, MuJoCo, OpenUSD, Omniverse, Open3D

Systems: Python, Ray, FastAPI, Docker, Kubernetes, Triton, TensorRT

Multimodal AI: CLIP, SAM, VLMs, world models, vision-language-action architectures, agent frameworks

Why Join Centific

Help define Centific’s research direction in Physical AI, publish frontier work, mentor the next generation of researchers, and see your science move into real systems.

How to Apply

Email your CV, publication list, GitHub/Google Scholar, and optional 1-page research statement to diana.moeck@centific.com

Subject: Principal Research Scientist – Foundation Models, Vision AI & Physical AI

Salary Range: $350k – $400k per year

Centific is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, ancestry, citizenship status, age, mental or physical disability, medical condition, sex (including pregnancy), gender identity or expression, sexual orientation, marital status, familial status, veteran status, or any other characteristic protected by applicable law. We consider qualified applicants regardless of criminal histories, consistent with legal requirements.