14 Perception Engineering Jobs (Including Remote) You Can Apply for Today


Today’s List at a Glance

A hand-picked list of top-tier roles for ambitious professionals. Here’s the breakdown:

  • 💰 Salary Range: $120K – $356K
  • 🏢 Top Companies Hiring: NVIDIA, Waymo, General Motors
  • 📍 Geographic Spread: 2 fully remote positions, with a strong concentration in Bay Area hubs (Mountain View, Santa Clara, Palo Alto, Sunnyvale, Fremont), plus opportunities in Bellevue (WA), Pittsburgh (PA), and Southfield (MI).
  • 🪜 Seniority Level: Focus on senior, staff, and lead-level engineering roles—designed for experienced contributors and technical managers.

Featured Autonomous Vehicle Perception & ML Roles

Senior Software Engineer, Perception – Autonomous Vehicles at NVIDIA

📍 Location: Santa Clara, CA 95051

💰 Salary: $184K – $356K

Why it’s a great opportunity: High-paying senior role focused on perception systems and production-grade pipelines—ideal for Python and deep-learning engineers building real-time AV perception.

View Job Post →

Senior ML Engineer – Perception at General Motors

📍 Location: Mountain View, CA

💰 Salary: $158K – $241K

Why it’s a great opportunity: Strong applied ML role working on vehicle perception models and deployment—great for engineers experienced in model development and validation for AVs.

View Job Post →

Computer Vision Engineer, Geometry & Perception at Anduril

📍 Location: Bellevue, WA

💰 Salary: $191K – $253K

Why it’s a great opportunity: Emphasis on computer vision and sensor fusion for autonomous platforms—perfect for engineers who combine geometry, CV, and Python-based tooling.

View Job Post →

Engineer/Senior Engineer – Perception Capabilities at Motional

📍 Location: Pittsburgh, PA

💰 Salary: $136K – $225K

Why it’s a great opportunity: Focus on CV and deep learning for self-driving vehicles—an excellent fit for engineers building real-time perception systems and inference pipelines.

View Job Post →

Our AI Resume Optimizer can help you tailor your resume’s content, section by section, for each of these specific roles.

Optimize Your Resume Now

Senior Software Engineer, Perception State Estimation at Latitude AI

📍 Location: Palo Alto, CA

💰 Salary: $174K – $261K

Why it’s a great opportunity: Lead state-estimation and multi-object tracking—ideal for those experienced in sensor fusion, tracking, and low-latency inference in Python.

View Job Post →

Software Engineer, ML Inference, Simulation Infrastructure at Waymo

📍 Location: Mountain View, CA

💰 Salary: $170K – $216K

Why it’s a great opportunity: Blend of ML inference and simulation infra—perfect for engineers building scalable deployment, validation, and simulation pipelines for perception models.

View Job Post →

Parking Perception DNN Engineer at NVIDIA

📍 Location: Santa Clara, CA

💰 Salary: $184K – $356K

Why it’s a great opportunity: Specialized DNN role focusing on 3D obstacle detection and multi-sensor fusion—excellent compensation for deep-learning engineers working in parking and close-range perception.

View Job Post →

Systems Engineer, Perception at Waymo

📍 Location: Mountain View, CA

💰 Salary: $196K – $248K

Why it’s a great opportunity: Systems-level role evaluating and qualifying perception algorithms—ideal for engineers interested in safety-critical qualification and cross-functional validation.

View Job Post →

Software Engineer – E2E Autonomy at Applied Intuition

📍 Location: Sunnyvale, CA

💰 Salary: $153K – $222K

Why it’s a great opportunity: End-to-end autonomy role focused on ML tooling and large datasets—great for engineers building perception data pipelines and training infrastructure in Python.

View Job Post →

Staff ML Engineer, Dynamic World Perception at AV Virtual Travel

📍 Location: US-Anywhere (Remote)

💰 Salary: $130K – $180K

Why it’s a great opportunity: Remote leadership role concentrating on real-time road and dynamic scene detection—well-suited to engineers driving perception stack productionization.

View Job Post →

Senior Machine Learning Engineer, LLM/VLM Visual Reasoning at Waymo

📍 Location: Mountain View, CA

💰 Salary: $204K – $259K

Why it’s a great opportunity: Advanced ML role combining visual reasoning with large models—ideal for engineers applying VLM/LLM techniques to perception and decision-making.

View Job Post →

Sr. Machine Learning Engineer, Autonomous Driving & Parking at Lucid Motors

📍 Location: Southfield, MI

💰 Salary: $120K – $150K

Why it’s a great opportunity: Focus on autonomous driving and parking systems—strong fit for engineers working on perception and mobility-focused ML solutions.

View Job Post →

Software Engineer, Autonomous Vehicles at Helmai Virtual Travel

📍 Location: US-Anywhere (Remote)

💰 Salary: $120K – $150K

Why it’s a great opportunity: Remote role centered on real-time AV systems—good for Python developers building autonomy software and perception integration across platforms.

View Job Post →

Sr. Perception Engineer at Gusto

📍 Location: Fremont, CA

💰 Salary: $150K – $250K

Why it’s a great opportunity: Senior perception role focused on 3D understanding from cameras and LiDAR—ideal for CV experts transitioning into AV perception.

View Job Post →


Strategic Playbook for Landing These Roles

🎯 Profile of an Ideal Candidate

  • Core Responsibility: Design, implement, and productionize real-time perception systems (3D object detection, tracking, and state estimation) that fuse camera, LiDAR, and other sensors for safe autonomous operation.
  • Essential Experience: A strong background in computer vision and deep learning with hands-on experience in sensor fusion, state estimation, and deploying models in production using Python (and often C++), plus familiarity with ML inference optimizations.
  • Key Competencies: Beyond technical prowess, these roles require cross-functional collaboration, clear communication about trade-offs and safety, and the ability to drive end-to-end projects from data to deployment.
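State estimation and sensor fusion come up in nearly every posting above, and interviewers often ask candidates to sketch the basics on a whiteboard. Here is a minimal constant-velocity Kalman filter in plain Python as a refresher—a toy sketch whose motion model, noise values, and simulated target are illustrative assumptions, not taken from any job posting:

```python
import random

# Toy 1D constant-velocity Kalman filter (stdlib only).
# State x = (position, velocity); we measure position only (H = [1 0]).
# Covariance P is stored flat as (p00, p01, p10, p11).
def kalman_step(x, P, z, dt=0.1, q=0.01, r=0.25):
    px, vx = x
    p00, p01, p10, p11 = P

    # Predict: x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + q*I
    px = px + dt * vx
    p00, p01 = p00 + dt * (p10 + p01) + dt * dt * p11 + q, p01 + dt * p11
    p10, p11 = p10 + dt * p11, p11 + q

    # Update: innovation y, scalar innovation covariance s, gain K
    y = z - px
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    px, vx = px + k0 * y, vx + k1 * y
    p00n, p01n = (1 - k0) * p00, (1 - k0) * p01
    p10n, p11n = p10 - k1 * p00, p11 - k1 * p01
    return (px, vx), (p00n, p01n, p10n, p11n)

# Track a simulated target moving at 1.0 m/s from noisy position readings.
random.seed(0)
x, P = (0.0, 0.0), (1.0, 0.0, 0.0, 1.0)
for t in range(1, 51):
    true_pos = 0.1 * t                       # constant 1.0 m/s, dt = 0.1 s
    z = true_pos + random.gauss(0.0, 0.5)    # noisy position measurement
    x, P = kalman_step(x, P, z)

print(round(x[0], 2), round(x[1], 2))  # estimate near position 5.0, velocity 1.0
```

The same predict/update structure extends to multi-object tracking and multi-sensor fusion—exactly the topics the Latitude AI and NVIDIA roles above emphasize.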

📄 The Resume Blueprint: Keywords & Metrics

Keywords to Target:

Sensor Fusion
3D Object Detection
State Estimation
Real-time Inference
Model Optimization (TensorRT)

Metrics that Matter:

Reduced inference latency by 30–50% by converting models to optimized runtimes (e.g., TensorRT) and pruning/quantizing networks to meet embedded real-time constraints.

Increased 3D detection AP by 6–12 points through targeted dataset augmentation, sensor calibration improvements, and architecture changes validated on holdout benchmarks.

Scaled validation to 10k+ simulation scenarios with automated CI pipelines to catch regressions before fleet testing, improving release confidence and reducing field faults.
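Latency claims like the ones above carry far more weight when you can explain exactly how you measured them. Here is a stdlib-only sketch of a percentile-latency harness; the two `lambda` workloads are hypothetical stand-ins for a baseline and an optimized inference call, not real models:

```python
import statistics
import time

def benchmark(fn, warmup=10, runs=200):
    """Measure per-call latency percentiles (ms) for an inference callable."""
    for _ in range(warmup):           # warm caches before timing
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[int(0.99 * len(samples)) - 1],
    }

# Hypothetical stand-ins: an "optimized" variant that does half the work.
baseline = lambda: sum(i * i for i in range(20_000))
optimized = lambda: sum(i * i for i in range(10_000))

base, opt = benchmark(baseline), benchmark(optimized)
speedup = 1.0 - opt["p50_ms"] / base["p50_ms"]
print(f"median latency cut by {speedup:.0%}")  # roughly half for this toy pair
```

Reporting p50 and p99 separately matters in safety-critical systems: a model that is fast on average but has long-tail stalls can still miss a real-time deadline.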

💬 Nailing the Narrative: Your Interview Strategy

Be prepared to answer tough, strategic questions. Here are some specific examples:

“Describe a complex perception pipeline problem where sensor misalignment or calibration caused failures. How did you diagnose it, and what long-term fixes did you implement?”

“Walk us through a time you optimized an inference stack for latency and throughput—what trade-offs did you make between accuracy, model size, and safety?”

“How have you convinced cross-functional stakeholders (safety, systems, product) to accept a model change that reduced false positives but increased compute cost?”

💡 Pro Tip: Structure answers with Situation → Task → Action → Result, quantify impact (latency, AP, simulation coverage), and explicitly call out safety trade-offs and verification steps you put in place.

🚀 Put Your Playbook into Action


Sign Up for Free!
