
Rivian Autonomy & AI Roadmap

Dec 12, 2025

Summary

  • Rivian presented advances in autonomy, sensors, compute, and AI across its vehicles and business operations.
  • Transitioned from rules-based autonomy to AI-centric, end-to-end learned driving models.
  • Gen 2 R1 (launched mid-2024) deployed a new sensor and compute stack; Gen 3 (R2, late 2026) expands the sensor suite and introduces the in-house Rivian Autonomy Processor.
  • Introduced Rivian unified intelligence and Rivian Assistant for integrated in-vehicle AI experiences and operations.

Action Items

  • (late 2026 – Rivian) Deploy Gen 3 autonomy platform on R2 vehicle platform.
  • (later this month – Rivian) Issue OTA update expanding hands-free coverage to >3.5M miles in North America.
  • (early 2026 – Rivian) Roll out Rivian Assistant to Gen 2 and Gen 1 customers.
  • (2026 – Rivian) Begin rolling out point-to-point address-to-address driving capabilities.
  • (post-launch – Rivian) Continue testing silicon, systems, and fleet integration; expand ADR triggers and ground truth usage.

Autonomy Platform Overview

  • Shift from a rules-based perception/planner stack to neural-network-based, end-to-end learned driving models.
  • Data flywheel: the deployed fleet collects triggered events for offline training, and improved models are distilled back to the vehicles (see the sketch after this list).
  • Development benefits from vertically integrated control of network, software platforms, and vehicles.
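
A minimal sketch of that flywheel loop, with all class names, triggers, and the distillation step invented for illustration:

```python
# Hypothetical sketch of the fleet data flywheel: vehicles flag interesting
# events, the cloud trains on them, and a distilled model ships back via OTA.
from dataclasses import dataclass, field


@dataclass
class FleetEvent:
    """A triggered capture from a deployed vehicle (illustrative fields)."""
    trigger: str          # e.g. "disengagement", "hard_brake"
    sensor_clip: bytes    # compressed multi-sensor snippet


@dataclass
class Flywheel:
    events: list[FleetEvent] = field(default_factory=list)

    def collect(self, event: FleetEvent) -> None:
        # Vehicles upload only triggered events, not raw continuous logs.
        self.events.append(event)

    def train_offline(self) -> str:
        # Placeholder for large-model training on the aggregated events.
        return f"model_trained_on_{len(self.events)}_events"

    def distill_and_deploy(self, big_model: str) -> str:
        # Compress the offline model to fit the in-vehicle compute budget,
        # then ship it back to the fleet over the air.
        return big_model.replace("model", "distilled_model")


wheel = Flywheel()
wheel.collect(FleetEvent("hard_brake", b"..."))
ota_payload = wheel.distill_and_deploy(wheel.train_offline())
print(ota_payload)
```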

Gen 2 (R1) Capabilities

  • Sensor set: cameras totaling 55 megapixels, plus five radars.
  • Inference platform: roughly 10x the inference performance of Gen 1.
  • Current hands-free coverage is near 150,000 miles; an upcoming OTA expands it to >3.5M miles.

Gen 3 (R2) Hardware Summary

| Component | Key Details |
| --- | --- |
| Cameras | 11 cameras totaling 65 megapixels (10 MP more than R1) |
| Radars | Five radars: one front imaging radar, four corner radars with dual short/long-range modes |
| Lidar (LAR) | Front-facing long-range LAR; active illumination, 3D point clouds, ~5M points/sec density |
| Autonomy Processor (RAP One) | In-house multi-chip module (MCM) on TSMC 5nm; neural engine delivers 800 sparse INT8 TOPS per die |
| Gen 3 Compute | Up to 1,600 TOPS when integrated; processes 5 billion pixels/sec |
| Cooling Options | Liquid-cooled deployment for R2; proven configurable as air-cooled for other uses |

RAP One (Rivian Autonomy Processor) Details

  • MCM combining Rivian silicon and memory dies for tight SoC-memory integration.
  • Neural engine: 800 sparse INT8 TOPS; supports transformers, multi-headed/deformable attention, and nonlinear ops.
  • Application processor: Arm Cortex-A720AE (first OEM use of the Armv9 compute platform in automotive).
  • Safety/real-time: eight Arm Cortex-R52 cores for the safety island and real-time processing.
  • Memory bandwidth: net ~205 GB/s across three LPDDR5 channels, with weight-decompression support (a back-of-envelope check follows this list).
  • Scalability: supports single-chip to multi-chip configurations connected via high-speed Riblink (up to 128 Gbps).
  • Functional safety: designed per ISO 26262 (ASIL) with hardware redundancy, ECC, and runtime safety software.
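
The ~205 GB/s figure is consistent with, for example, three 64-bit channels at LPDDR5X-8533 rates; the actual channel width and data rate were not stated, so the numbers below are assumptions:

```python
# Back-of-envelope check on the ~205 GB/s figure. Channel width and data
# rate are assumptions (not stated in the talk); weight decompression would
# raise the *effective* bandwidth further.
channels = 3
data_rate_mts = 8533          # assumed LPDDR5X transfer rate, MT/s
channel_width_bytes = 8       # assumed 64-bit channel

gb_per_s = channels * data_rate_mts * 1e6 * channel_width_bytes / 1e9
print(f"{gb_per_s:.0f} GB/s")  # -> 205 GB/s
```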

Sensor Strategy & Integration

  • Multimodal sensing: cameras are the primary workhorse; radar and LAR cover edge cases and poor-visibility conditions.
  • LAR rationale: reduced cost, improved resolution, compact integration enabling OEM-grade hidden mounting.
  • Corner radar dual-mode enables removal of ultrasonic sensors in R2.
  • Early fusion approach: pixels, radar returns, and LAR returns are encoded and fused into a shared geometric feature space for downstream transformers (see the sketch below).
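
A schematic sketch of that early-fusion idea in PyTorch; the per-modality feature dimensions, token counts, and backbone depth are all hypothetical:

```python
# Schematic early-fusion sketch (all dimensions hypothetical): each modality
# is encoded separately, then the tokens are fused into one shared feature
# space that downstream transformer blocks consume.
import torch
import torch.nn as nn


class EarlyFusion(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        # Per-modality encoders projecting raw returns into a shared space.
        self.cam_enc = nn.Linear(3, d_model)    # per-pixel RGB features
        self.radar_enc = nn.Linear(4, d_model)  # range, azimuth, doppler, rcs
        self.lar_enc = nn.Linear(3, d_model)    # x, y, z point coordinates
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, pixels, radar, lar):
        # Fuse by concatenating encoded tokens from all modalities, so
        # attention can mix them from the very first layer.
        tokens = torch.cat(
            [self.cam_enc(pixels), self.radar_enc(radar), self.lar_enc(lar)],
            dim=1,
        )
        return self.backbone(tokens)


model = EarlyFusion()
fused = model(torch.rand(1, 100, 3), torch.rand(1, 40, 4), torch.rand(1, 60, 3))
print(fused.shape)  # torch.Size([1, 200, 256])
```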

Software & Large Driving Model (LDM)

  • LDM: end-to-end model from raw sensors to trajectories, trained on millions of miles of fleet data.
  • Uses transformers, autoregressive prediction, reinforcement learning, and tokens representing trajectory segments.
  • Reinforcement learning: sample many candidate trajectories, rank them with road-rule rankers, and backpropagate so the model prefers the top-ranked trajectories (see the sketch after this list).
  • Supports concurrent execution of up to four models on chip.
  • Tooling: full in-house compiler, profiling, middleware stack; same middleware used across Gen 2 and Gen 3.
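
A toy sketch of the sample-rank-backpropagate loop; the road-rule reward terms and REINFORCE-style update are stand-ins, not Rivian's actual training recipe:

```python
# Toy sketch of the RL step described above: sample candidate trajectories,
# score them with a road-rule ranker, and nudge the policy toward the best.
import torch


def road_rule_ranker(trajs: torch.Tensor) -> torch.Tensor:
    # Stand-in reward: penalize lateral deviation and harsh longitudinal jerk.
    lateral = trajs[..., 1].abs().mean(dim=-1)
    jerk = trajs[..., 0].diff(dim=-1).diff(dim=-1).abs().mean(dim=-1)
    return -(lateral + jerk)


policy_mean = torch.zeros(16, 2, requires_grad=True)  # 16 waypoints, (x, y)
opt = torch.optim.SGD([policy_mean], lr=0.1)

for _ in range(100):
    # Sample many candidate trajectories around the policy's current plan.
    noise = torch.randn(64, 16, 2)
    candidates = policy_mean + noise
    rewards = road_rule_ranker(candidates)
    # REINFORCE-style update: raise the likelihood of high-reward samples.
    log_prob = -((candidates.detach() - policy_mean) ** 2).sum(dim=(1, 2)) / 2
    loss = -(rewards.detach() * log_prob).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```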

Data Pipeline And Training Workflow

  • Autonomy Data Recorder (ADR): selective, event-triggered capture; events are tagged, compressed, and uploaded to the cloud for training and evaluation (a sketch follows this list).
  • ADR supports live push of new triggers and rapid developer iteration.
  • The majority of training data is auto-labeled by large offline models, reducing the need for human annotation.
  • Ground-truth fleets: R2 vehicles with LAR become production ground-truth sources at fleet scale, far exceeding prototype fleets.
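
A hypothetical sketch of an event-triggered recorder in the spirit of ADR; the trigger names, buffer size, and compression step are invented:

```python
# Hypothetical sketch of an event-triggered recorder: a rolling buffer is
# kept on-vehicle, and only clips matching pushed triggers are tagged,
# compressed, and queued for upload.
import collections
import zlib

TRIGGERS = {"hard_brake": lambda f: f["decel_mps2"] > 6.0}  # live-updatable


class AutonomyDataRecorder:
    def __init__(self, buffer_frames: int = 300):
        self.buffer = collections.deque(maxlen=buffer_frames)
        self.upload_queue = []

    def on_frame(self, frame: dict) -> None:
        self.buffer.append(frame)
        for name, predicate in TRIGGERS.items():
            if predicate(frame):
                self._capture(tag=name)

    def _capture(self, tag: str) -> None:
        # Compress the buffered clip and queue it for cloud upload.
        clip = repr(list(self.buffer)).encode()
        self.upload_queue.append({"tag": tag, "clip": zlib.compress(clip)})


adr = AutonomyDataRecorder()
adr.on_frame({"decel_mps2": 7.2})
print(len(adr.upload_queue))  # 1
```

Pushing new triggers is then just a dictionary update, which matches the "live push of new triggers" point above.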

Validation And Release Processes

  • Cloud-based simulator runs autonomy stack across millions of real-world scenarios for statistically significant safety/performance metrics.
  • Apprentice mode runs new software versions in the background, comparing their performance against human-driven miles and the previous release before rollout (see the sketch after this list).
  • Continuous OTA updates deliver feature improvements across stack levels.
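
One way apprentice mode's comparison could look, assuming a simple waypoint-error metric and a release margin (both invented here):

```python
# Illustrative apprentice-mode comparison: the candidate stack runs in the
# background (shadow mode) and its proposed trajectories are scored against
# what the human actually drove. Metric and threshold are assumptions.
import statistics


def shadow_score(candidate_plans, human_paths):
    # Mean waypoint error between what the model would have done and what
    # the human did; lower is better.
    errors = [
        statistics.fmean(abs(c - h) for c, h in zip(plan, path))
        for plan, path in zip(candidate_plans, human_paths)
    ]
    return statistics.fmean(errors)


def ready_for_release(candidate, previous, human, margin=0.95):
    # Release only if the new version at least matches the shipping one.
    return shadow_score(candidate, human) <= margin * shadow_score(previous, human)


new = [[0.1, 0.2], [0.0, 0.1]]
old = [[0.3, 0.4], [0.2, 0.3]]
human = [[0.0, 0.0], [0.0, 0.0]]
print(ready_for_release(new, old, human))  # True
```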

Product Roadmap And Features

  • Near-term: OTA expanding universal hands-free coverage to >3.5M miles in the US and Canada.
  • 2026: Point-to-point address-to-address driving rollout begins.
  • Post-2026: Eyes-off driving (hands off + eyes off), then personal Level 4 autonomy (full autonomy for owned vehicles).
  • Autonomy Plus: paid tier for universal hands-free driving (free for Gen 2 customers until the March following the announcement); includes future features such as point-to-point driving and automatic parking.

Rivian Unified Intelligence & In-Vehicle AI

  • Unified intelligence: in-house multimodal, multi-agent, multi-model platform with strong data governance and privacy focus.
  • Edge/cloud integration: orchestrates foundation models, supports memory/context, natively multimodal (audio, vision, text).
  • R2 cabin compute: ~100 TOPS dedicated to the in-cabin experience, enabling offline, low-latency agentic interactions (a routing sketch follows this list).
  • Factory/service applications: diagnostics agent connects telemetry to validate production quality; AI-powered service tools speed technician repairs and enable customer self-troubleshooting.
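
A minimal sketch of an edge/cloud routing policy consistent with that description; the latency budget and routing rules are assumptions:

```python
# Hypothetical edge/cloud routing for unified intelligence: prefer the
# ~100-TOPS in-cabin model for offline or low-latency requests, and
# escalate to a cloud foundation model only when connected and needed.
def route(request: dict, online: bool) -> str:
    if not online or request.get("latency_budget_ms", 1000) < 200:
        return "on_device_model"   # offline-capable, low-latency agent
    if request.get("needs_long_context"):
        return "cloud_foundation_model"
    return "on_device_model"


print(route({"latency_budget_ms": 50}, online=True))  # on_device_model
```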

Rivian Assistant (Demo Highlights)

  • Agentic in-vehicle assistant integrated into OS; activation via button or “Hey, Rivian.”
  • Integrates third-party agents (example: Google Calendar) to read and manage calendar items.
  • Context-aware: connects calendar, navigation, EV planner, drive modes, and vehicle controls.
  • Natural language examples: move a calendar event, navigate to a destination, estimate battery on arrival, change drive mode, adjust seats, discover restaurants, and compose/send messages with contextual memory (see the sketch after this list).
  • Availability: planned for Gen 2 and Gen 1 customers in early 2026.
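
A minimal sketch of the tool-calling pattern the demo implies; the tool names and keyword-based intent parsing are stand-ins for the real agent:

```python
# Minimal sketch of the agentic pattern the demo implies: the assistant maps
# an utterance to a tool call over vehicle and third-party integrations.
TOOLS = {
    "set_drive_mode": lambda mode: f"drive mode -> {mode}",
    "navigate_to": lambda dest: f"routing to {dest}",
    "move_calendar_event": lambda title, when: f"'{title}' moved to {when}",
}


def assistant(utterance: str) -> str:
    # A real system would use the LLM to pick the tool and its arguments;
    # here a keyword match stands in for intent parsing.
    if "drive mode" in utterance:
        return TOOLS["set_drive_mode"]("all-purpose")
    if "navigate" in utterance:
        return TOOLS["navigate_to"]("charger near Seattle")
    return TOOLS["move_calendar_event"]("Standup", "3pm")


print(assistant("navigate to the nearest charger"))
```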

Decisions

  • Build in-house autonomy silicon (RAP One) and in-house full software/toolchain for velocity, optimization, and cost control.
  • Adopt multimodal sensor strategy (camera + radar + LAR) for production vehicles.
  • Vertically integrate sensors, compute, and software to enable faster iteration and higher-performing autonomy features.

Open Questions

  • Timeline details for wider public availability of point-to-point, eyes-off, and personal L4 beyond initial 2026 indications.
  • Pricing and licensing specifics for Autonomy Plus beyond the one-time or monthly payment options.
  • Scale and schedule for rolling R2 production volumes and expected fleet growth impact on the data flywheel.
  • Long-term roadmap for RAP silicon successors beyond RAP One and expected TOPS/performance targets.