Rivian’s Roadmap to AI Architecture and Autonomy with Founder and CEO RJ Scaringe

No Priors Podcast

Sarah Guo & RJ Scaringe

Rivian’s Autonomy Strategy, Proprietary Chips, the R2, and the Future of EVs

[Cold Open]

RJ Scaringe: By 2030, it’ll be inconceivable to buy a car and not expect it to drive itself. Every single one of our cars, we want to have the ability for it to operate at very high levels of autonomy. Radars are extremely cheap. LiDARs are very cheap. But the really expensive part of the system is actually the onboard inference—an order of magnitude more expensive than any of the perception stack. My view is EV adoption in the United States is a reflection of the lack of choice. As consumers, we need lots of choices. We need to have variety. We self-identify with the thing we drive. The world doesn’t need another Model Y. The world needs another choice.

[Introduction]

Sarah Guo: Hi listeners, welcome back to No Priors. Today I’m here with RJ Scaringe, the founder and CEO of Rivian. We’re here to talk about their autonomy strategy, proprietary chips, their coming R2 model, whether Americans want EVs, and what our relationship to cars is going to be in the age of AI. Let’s get into it. RJ, thanks so much for doing this.

RJ Scaringe: Thank you for having me.

Becoming an Autonomy Company

Sarah Guo: So, Rivian’s already an incredibly cool company. How did you decide it was going to become an autonomy company? When did that happen?

RJ Scaringe: From the beginning, we thought of it as a transportation and mobility company. Even before Rivian became Rivian, when I was thinking about the first products, it was unclear what kind of car it would be—or even if it was a car—but it was always clear we wanted to be at the front edge of helping to redefine what it means to have access to personal transportation. Autonomy has always been part of the strategy, but it’s now fully coming to life with the technology that we’re building.

Sarah Guo: When you think about the function of Rivian, there’s transportation, there’s also the experience. How long ago did you guys start investing in the autonomy strategy?

RJ Scaringe: We launched R1 at the very end of 2021. And we used what I’ll broadly characterize as a 1.0 approach to autonomy. We had a perception platform. We used a third-party front-facing camera—essentially a third-party solution that then plugged into an overall framework that we built—but it was all rules-based. So the camera fed a rules-based planner. The planner would then make a bunch of decisions around the feeds from the perception. And the moment we launched, we knew it was the wrong approach, but it was the thing we’d started working on well before the launch. So at the end of 2021, beginning of 2022, we made the decision to completely reset the platform.

Sarah Guo: Was that hard as a decision?

RJ Scaringe: No, because it was so clear. When you’re building something like this, you recognize you’re going to spend many, many billions of dollars creating it. We knew that at the core of transportation is driving, and at the core of that is a shift to having the vehicle be capable of driving itself. So we made the decision to redo it—clean sheet, no legacy of what we had built in Gen One. That first launched from a hardware point of view in the middle of 2024, with our Gen Two vehicles. Not a single line of shared code, not a single piece of common hardware on the perception or compute side. And then we had to build the actual data flywheel—grow the car park to build enough of a data flywheel to then start to train the model.

RJ Scaringe: What we showed at our Autonomy Day in late 2025 was the beginnings of a series of really exciting steps for how this is going to grow and expand. I say this all the time: the last three years compared to the next three years are going to look very different. The rate of progress we saw in autonomy between 2021 and 2025, versus what we’re going to see between today and 2029 or 2030—they’re completely different slopes. That really comes back to entirely new architectures now being used to develop self-driving. Truly AI architectures, whereas before, these were not AI architectures in the true sense. They were using machine vision but really rules-based environments that we defined as humans. We codified them, which is very different than how it works today.

The Architectural Revolution in Autonomy

Sarah Guo: You might actually have perfect timing here. At my last investing firm, eight to ten years ago, I got to be part of investing in the first wave of independent autonomy bets that were working with the OEMs. And as you mentioned, there have been several architectural revolutions since then. I ask because making that shift from separate perception and planning systems to more end-to-end neural networks felt, from a technical perspective, like quite a hard decision for companies choosing their partners.

RJ Scaringe: Well, you can see it. If you go back to the very beginning of the idea of self-driving, a lot of effort, a lot of spend happened for companies to build these rules-based environments and these more classic systems. And when transformer-based encoding came along just a couple years ago, it shifted very rapidly—it was clear that the future state was going to be neural net–based. It was hard because if you’re a company that built all these systems, the question is: do I keep investing in what I had? What do I do with all this work that was built before? And the reality is the vast majority of it is going to be pure throwaway. It wasn’t a gradual shift. It was a complete rethink of how things are architected.

Building In-House vs. Partnering

Sarah Guo: How did you decide that this was going to be an in-house effort versus a partner effort, given that most people who made cars said they were going to partner or buy something here?

RJ Scaringe: The emotional and philosophical answer is: on things that are really important, we’ve taken the approach of vertically integrating them. Electronics, our software, all the high-voltage systems in the vehicle—motors, inverters, all the power electronics—these are all things we develop and build in-house. In a few cases, we had to start with something that was either off the shelf or partially off the shelf, but today all of that’s completely in-house.

RJ Scaringe: In the case of self-driving, we knew long-term it needed to be developed internally. We started, as I said, with a Mobileye-centric solution, which a lot of folks did—particularly in that 2015 to 2021 time frame. But when you really look at what’s necessary to be successful in a neural net–based approach, there’s a core set of ingredients that very few people have, and I think we uniquely have them.

RJ Scaringe: First and foremost, you need to have complete control of the perception platform—everything the system is capable of observing, whether that’s cameras, radars, or LiDAR, or some combination of all three. You need to have control of that, meaning there’s no intermediary company processing some of the information. That’s powerful because you can then feed raw signals into your system.

RJ Scaringe: The system needs to be capable of triggering unique or interesting or noteworthy events that you can then use to train on. Those triggered moments need to be captured, saved on the vehicle, and then when the time arises—ideally over Wi-Fi—sent up. The reason I say Wi-Fi is this is a lot of data. You could of course do it over LTE, but it’s expensive. You have to have a really robust data architecture on the vehicle. Then you need to be able to send it off-board and use that with a lot of GPUs to train a model.
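The pipeline RJ describes—on-vehicle triggers, clips saved locally, upload deferred until a cheap link like Wi-Fi is available—can be sketched in miniature. This is a hypothetical illustration, not Rivian's actual stack; every class, field, and trigger condition here is invented for clarity:

```python
import queue

# Hypothetical sketch of an on-vehicle event-capture pipeline:
# a trigger flags a noteworthy moment, the clip is queued locally,
# and upload is deferred until a cheap (Wi-Fi) link is available.

class EventCapturePipeline:
    def __init__(self):
        self.pending = queue.Queue()   # clips saved on the vehicle
        self.uploaded = []             # clips sent off-board for training

    def on_frame(self, frame):
        # The trigger condition stands in for whatever the real system
        # uses (model disagreement, hard braking, rare object classes...).
        if frame.get("noteworthy"):
            self.pending.put(frame)

    def on_connectivity(self, link):
        # Defer the heavy transfer until Wi-Fi: LTE would work, but for
        # this much data it is expensive.
        if link != "wifi":
            return
        while not self.pending.empty():
            self.uploaded.append(self.pending.get())


pipeline = EventCapturePipeline()
for i in range(5):
    pipeline.on_frame({"id": i, "noteworthy": i % 2 == 0})

pipeline.on_connectivity("lte")   # nothing moves: too expensive
pipeline.on_connectivity("wifi")  # queued clips go off-board
print(len(pipeline.uploaded))     # 3 noteworthy frames (ids 0, 2, 4)
```

The off-board half—aggregating those clips across the fleet and feeding them to GPU training—is where the "car park as data flywheel" argument comes in: the bigger the fleet, the more corner cases the triggers surface.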

RJ Scaringe: Companies that are developing independent solutions and are not a car company typically don’t have access to the type of mileage that we do—the huge amount of data that our vehicles generate. If you’re developing from a sensor set point of view, you typically don’t have the vehicle architecture and the vehicle car park. So we just came to the view that we have all these ingredients to do it really well.

RJ Scaringe: And it’s not an optional thing. The companies that do this well will exist. The companies that don’t do this well—I feel really strongly about this—they will not exist. They will shrink to nothing. Asymptotically approach zero.

Sarah Guo: You think it can only be delivered in a vertically integrated way?

RJ Scaringe: No, I think there’s more than one, less than five companies outside of China that have the necessary ingredients to do this. The capital, the GPUs, the car park with enough vehicles to generate enough data. I say more than one, less than five—it’s probably more than one, less than three, maybe four. There’s a very small number of companies that can do this.

RJ Scaringe: I think the unique spot we are in right now is the 1.0 era ending.

Sarah Guo: Can I ask explicitly then—it’s you, it’s Tesla, it’s Waymo. Is that the three?

RJ Scaringe: I would include all three of those, yeah. And there’s maybe one or two others in the mix. But the challenge is you have to look at not just the moment in time for performance where we are today. Do you have the ingredients to continue making progress at a very high rate over the next four or five years? A lot of the solutions that are more 1.0-based and are sort of stuck in that framework, I think, have a truly zero percent chance of progressing to be competitive with a neural net–based approach.

RJ Scaringe: The neural net–based approach does take a lot. You have to build a ton of inference—either buy it or build it—a lot of onboard compute. We decided to build it, so we built an in-house chip to do this. You need to have a car park this large.

Sarah Guo: You just mean enough onboard compute to actually run the models in the vehicle?

RJ Scaringe: Yeah, in the vehicle. You could buy that—of course, NVIDIA makes those—but you need to be able to do that at scale and have it in every car. So we took the decision to make our chip in-house.

Sarah Guo: Is that more a capability decision or a cost decision?

RJ Scaringe: It’s cost. And we want to have it on everything. Every single one of our cars, we want to have the ability for it to operate at very high levels of autonomy. So we design, spec, and build the cameras. Radars are extremely cheap. LiDARs are now very, very cheap. But the really expensive part of the system is actually the onboard inference. It’s an order of magnitude more expensive than any of the perception stack. I think people focus on the perception because it’s the things we can visualize. But the brain is actually the most expensive part. So we brought that in-house as a way to remove cost from the system so that we can easily deploy this on every car.

Levels of Autonomy and Safety

Sarah Guo: You’re taking a step-by-step approach to levels of autonomy at Rivian. How do you think about how quickly you approach Level 4, the safety case around each of these steps, and how fast your team moves against this?

RJ Scaringe: Even this question is unique because just a few years ago—2019, 2021 even—there were very clearly delineated ways to approach autonomy. There was a Level 2 approach, which was camera-heavy, maybe with a few radars. And then there was a Level 4 approach, which of course had cameras but had a lot of LiDARs. It was sort of inconceivable to think of the Level 2 system becoming a Level 4. And similarly, the Level 4 system was way overbuilt to even conceivably think about putting that on every consumer vehicle.

Sarah Guo: You didn’t want all these parts—tens of thousands of dollars of perception.

RJ Scaringe: Exactly. So what’s happened is those two worlds have just started to very clearly merge. The delineation between a Level 2, a Level 3, and a Level 4—in terms of perception and compute—has started to fade. It’s now essentially just about how capable the system is at addressing all these corner cases.

RJ Scaringe: This is what’s hard for a consumer to recognize. If you’re in a Level 2 system, a Level 3 system, or a Level 4 system, for 99.9 or 99.99 percent of the time—three or four nines—it feels identical. The difference is the fifth or sixth or seventh nine—these extreme corner cases. That’s actually led to a lot of confusion, where you’ll be in a Level 2 system and think the car can drive itself. And the answer is yes, it can, under most road conditions, except these very unique corner cases.
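To put rough numbers on why each additional nine matters so much, here is a back-of-envelope calculation (purely illustrative assumptions, not anyone's actual safety data): if a system handles each mile correctly with some per-mile reliability, every added nine cuts expected failure events by a factor of ten.

```python
# Back-of-envelope: expected failure events over 1 million fleet miles,
# assuming each mile is handled correctly with probability 1 - 10^-nines.
# Illustrative numbers only, not any company's real safety data.

MILES = 1_000_000

def expected_failures(nines):
    p_fail_per_mile = 10 ** (-nines)  # 3 nines -> 1 failure per 1,000 miles
    return MILES * p_fail_per_mile

for nines in (3, 4, 5, 6, 7):
    print(f"{nines} nines: ~{expected_failures(nines):,.1f} failures per {MILES:,} miles")
```

At three nines the system fails roughly a thousand times per million miles; at seven nines, roughly once per ten million. That gap is invisible on any single drive, which is exactly the confusion RJ describes.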

RJ Scaringe: So to your point on safety cases, the question becomes: how confident are we in the system’s capability in covering these really obscure, unlikely, rare events—which of course, if they’re not covered well, can lead to a terrible outcome, like the vehicle in a bad collision? That’s where the neural net–based approach has just changed things a lot. The capabilities are so much stronger, and the ability now for us to deploy on a lot more vehicles—to have a car park that’s very large—has been transformative.

RJ Scaringe: We went from, a few years ago, state-of-the-art being a test development fleet of maybe a few hundred vehicles to now thousands and thousands. Every single car on the road is part of your data fleet, identifying these unique corner cases and then running the model against them to test. And now of course we’re simulating those unique cases, and we can do a lot there.

RJ Scaringe: The whole nature of it has changed so dramatically. I think by 2030, it’ll be inconceivable to buy a car and not expect it to drive itself. Maybe that’s sooner—we hope it’s sooner, we’re targeting a little sooner than that—but certainly in the very near future, that will become a must-have in a car. Sort of like it’s hard to imagine buying a car today without airbags or without air conditioning. These things were at a moment in time optional. I think in not too much time—a couple years—it’ll be hard to conceive of buying a car that can’t drop you at the airport or pick up your kids from school.

Market Implications for the Auto Industry

Sarah Guo: I would argue that right now most of the biggest car makers do not have the ingredients that you described to make this a reality. Do you think that’s going to play out in the market, where autonomy will be so important as a core feature that there’s just going to be a big market share shift to those who can figure it out?

RJ Scaringe: It’s a hard question to answer. I always characterize it like this: I think it’s inconceivable for a car company to continue to operate at scale—mass market—without a software-defined architecture. And that’s even before you get to autonomy. Just the basics—can you do over-the-air updates? Do you have control of a unified architecture?

Sarah Guo: Can you define software-defined architecture?

RJ Scaringe: Yeah, before we even get to autonomy, these are basics. The way car electronic systems have been designed, built, and evolved—with the exception of Tesla and Rivian—every car on the road has what is called a domain-based architecture. You could also call it a function-based architecture. All the functions across the vehicle—chassis control, door system control, your air conditioning system—all have little computers associated with them, what we call ECUs, electronic control units. In a modern car, you might have 100 to 150 of these. Each runs its own little island of software, and that software is written by a supplier, or more likely a supplier to the supplier. You go to a tier one, and they hire a tier two who writes the code base.

Sarah Guo: That’s why it’s impossible to debug a software system like that.

RJ Scaringe: And it’s also why it’s really hard to do an update. Imagine you have 100 different islands of software written by 100 different teams that all have to coordinate. And a change that manifests as a feature often involves combining functions from different domains.

RJ Scaringe: A simple one to visualize: when you walk up to your car to get into it, you want it to automatically unlock. You want the HVAC to go to your preset. You want your seats to adjust. You want it to make an audible noise outside. You want the lights to do something. You probably want the audio system to do something. Those are all different little ECUs in a traditional car, and the coordination cost is really high. It’s very unlikely that a car company will make a change to that sequence because it involves coordinating among maybe ten different players.

RJ Scaringe: In contrast, with a zonal architecture where you have a very small number of computers—ideally one, two, maybe three depending on the size of the car—running one operating system that controls everything, it’s very easy. That sequence, you could make updates to in a matter of minutes—maybe an hour. You change the whole sequence of what happens when you walk up to the car, issue an over-the-air update, and it’s very straightforward.
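Under a zonal architecture, the walk-up sequence RJ describes collapses into one code path owned by one team. A toy sketch of that idea—every subsystem and function name here is hypothetical, not any real vehicle OS:

```python
# Toy sketch of a walk-up "welcome" sequence under a zonal architecture:
# one operating system owns every subsystem, so the whole sequence is a
# single function one team can change and ship over the air.
# All subsystem and function names are hypothetical.

class VehicleOS:
    def __init__(self):
        self.log = []  # records actions, standing in for real actuators

    def _do(self, action):
        self.log.append(action)

    def on_owner_approach(self, profile):
        # The entire multi-domain sequence lives in one place. In a
        # traditional domain-based car, each line below would be a
        # separate ECU running supplier-written firmware, and changing
        # the sequence would mean coordinating all of those suppliers.
        self._do("unlock_doors")
        self._do(f"hvac_preset:{profile['hvac']}")
        self._do(f"seat_position:{profile['seat']}")
        self._do("exterior_chime")
        self._do("welcome_lights")
        self._do(f"resume_audio:{profile['audio']}")


car = VehicleOS()
car.on_owner_approach({"hvac": "68F", "seat": "memory_1", "audio": "podcast"})
print(car.log)
```

Reordering the steps or adding a new one is an edit to a single function followed by an OTA push, rather than a coordination exercise across ten suppliers—which is the "matter of minutes, maybe an hour" point.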

Sarah Guo: How often does Rivian update?

RJ Scaringe: We do about one a month. We typically add a couple new features, add refinements to existing features. We’re listening to what customers are seeing and asking for. Every month the car gets notably better, and it’s created this really amazing dynamic where customers are excited for the update—they’re like, “When’s the next OTA going to drop?”

RJ Scaringe: The irony of all this is that these domain-based architectures go back to fuel injection systems. Up until the early 1960s, every car on the road was completely analog—no computers at all. The first computers were there to drive the fuel injection systems, and car companies said, “This isn’t a core competency. Let’s push that little computer to a supplier.” This is where you saw things like the Bosch fuel injection systems. It was never planned—sort of like a field of weeds. Then over the next 60 to 70 years, everything that became computer-controlled to any degree suddenly started to have a little ECU, a little computer associated with it. It just grew into this absolute disastrous mess that is, today, the network architecture in truly every car on the road with the exception of two companies.

RJ Scaringe: What I just described is what underpins the large software licensing deal we did—a $5.8 billion deal with Volkswagen Group, the second largest car company in the world—to essentially leverage our network architecture and ECU topology for all their various brands.

RJ Scaringe: So to tie it back to your question about market share: First, I think it’s inconceivable that a car company can exist at scale without a software-defined architecture that allows features to become better and better—particularly thinking about how AI starts to integrate into those features. Second, it’s inconceivable to think about a car company existing at scale without the vehicles having very high levels of autonomy.

RJ Scaringe: Car companies have a choice on both of those. Choice one: they can accept that they’re going to shrink. Choice two: go build it themselves, which is really hard because they don’t typically have these skill sets—they’re not software and electronics companies in terms of their organizational DNA. Or they can find a third party to source it from. And in both cases, there’s not great third parties to go to. In the case of autonomy, most of the third parties that emerged over the last 10 to 15 years tend to be very much classic, rules-based—what I’d call “autonomous vehicle 1.0” solutions. Those work pretty well for the business construct of selling a sensor and a function. But that structure is really flawed when you want a large data flywheel that’s constantly learning and evolving and you’re issuing updates constantly. It’s really hard to imagine that with an arm’s-length transaction. The vertically integrated stacks are going to naturally have some big advantages.

Will Autonomy Models Converge or Differentiate?

Sarah Guo: This might be an irrelevant question, but I’m curious. Do you think that the autonomy models that the three or five companies develop are fundamentally different over time? I spend a lot of time in the AI ecosystem, and the language-oriented foundation models feel like they’re converging at this moment in time. When I look at Rivian—people adventure in that thing. Do you actually want it to do different things, have different styles or capabilities? Or is it really just as much autonomy as possible, safety case?

RJ Scaringe: This is a great question. In the LLM world, a lot of it has converged because the training data sets are nearly the same—we’re taking the breadth of knowledge contained on the internet and training models off of that. In the case of driving a vehicle, there is no internet of driving data. You need both a robust sensor set to capture the data and a car park with enough vehicles in it.

RJ Scaringe: Tesla has the largest car park of vehicles by far. Our approach is that we have a higher level of capability on our perception stacks. We have better cameras, we have radar, and of course with R2 we’ll have LiDAR as well. A huge part of that strategy is that those don’t just cover corner cases better—the cameras have incredible low-light and bright-light performance, so the dynamic range is stronger. We have more cameras, a lot more megapixels. We have radar, which is great for object detection. And the LiDAR is a very powerful tool for training the models.

RJ Scaringe: Imagine 800 feet in front of us there’s a little speck in a camera. It’s hard to figure out what that is. Historically, what we would do to train that is have a LiDAR sitting on a ground-truth fleet to help train the cameras. Putting that on every single one of our cars turns our entire fleet into this amazing training platform, this data acquisition machine. That was a core part of how we thought about our strategy: we’re going to go not as heavy as, say, a Waymo on perception, but heavier than, say, Tesla, to build a really robust data platform on a vehicle-by-vehicle basis—and then with a car park that’s going to grow significantly with the expansion of R2.

RJ Scaringe: So first and foremost, there is no common internet data. The data sets we’re going to be picking up are going to be very similar, but you have to go acquire them.

Sarah Guo: But there are still different decisions about what data you care about acquiring.

RJ Scaringe: I think this gets to how a car feels. Ultimately it needs to be safe, and the differences in the way it drives or feels are going to be more about the UI—the user interface of it. We just updated some of our features. We have three settings for how the vehicle drives: Mild, Medium, and Spicy. Spicy is the highest one. Over time, I think this will start to become part of a key decision—how the vehicle behaves.

RJ Scaringe: There’s work we’re doing to think about how the vehicle can behave in a way that, against a set of heuristics, drives like you. So the overall model is trained on how to perform in a safe way, but it actually learns some of your driving preferences and creates a model around you. Of course, in a world where you never drive the car because it’s always driving for you, there’s a way for you to set preferences—I’d like it to aggressively change lanes, I’d like it to reside in the right-hand lane. Those kinds of decisions are less around the tech, more about the product or the UI.

Sarah Guo: The ability to collect those preferences.

RJ Scaringe: Preference-based. And I think we will see that, and it’ll be a decision that a Tesla makes that may be different than how Rivian makes it. It’s hard to say today.

What the R2 Means for Rivian

Sarah Guo: Can we talk about what the R2 means for the company and some of the key design decisions here? I was just talking to Jonathan, one of your lead designers, about the constraints and aiming for more mass market and more volume.

RJ Scaringe: Yeah. R1 is a flagship product. Its average selling price is around $90,000. The R1S is the best-selling premium electric SUV in the country—electric SUVs over $70,000—and we’re the best-selling premium SUV, electric or non-electric, in the state of California. It sells really well—it outsells everything in its class, like the Tesla Model X, about 2 to 1. But because of the price, it’s limiting in terms of how much volume we can achieve with that platform.

RJ Scaringe: R2 is our first truly mass-market product, with pricing that’s going to start at $45,000. The average price of a new car in the United States is $50,000, so people in that $45,000 to $55,000 price range will have a really great choice. To date, there haven’t been a lot of great choices there. There’s sort of a singular set of great choices with the Model 3 and Model Y. And of course that’s shown through extreme market share capture—roughly 50 percent of the EV market is Model 3 or Model Y.

RJ Scaringe: There’s just such an untapped opportunity to pull customers out of ICE vehicles—internal combustion vehicles—with a choice that has characteristics that are different and unique relative to a Tesla.

Do Americans Want EVs?

Sarah Guo: Do Americans want EVs? Why haven’t they adopted them faster?

RJ Scaringe: Causality is always hard to understand, but let’s zoom out. The overall adoption rate of EVs in the United States is around 8 percent. The vast majority of vehicle buyers are buying vehicles under $70,000, with the average sale price of about $50,000. If you look at the number of vehicle choices at a price point under $70,000, there are well in excess of 300 different vehicle model line choices—hatchbacks, minivans, SUVs, two-seaters, convertibles. In the EV space, I think there’s more than one, less than three great choices. Tesla with the Model 3 and Model Y is absolutely one. But there are so few choices that if you’re looking for a form factor that’s not a Tesla, there’s almost nothing.

Sarah Guo: So you think it’s just missing product—choices people want?

RJ Scaringe: An extreme lack of choice. A shocking lack of choice. And this gets into interesting corporate psychology. Because of the success of the Model Y in particular, the EV choices that do exist outside of Tesla are often very similar to a Model Y. If you drew an outline of the side view profile of a lot of its alternatives and put it next to a Model Y, it’s almost identical.

Sarah Guo: There’s a design sketch over here of basically the Model Y and all its competitors—they’re all basically the same.

RJ Scaringe: Exactly. If you want a Model Y, buy a Model Y, versus getting a copy. All these companies are trying to create their own version of Model Y, and it’s unfortunate because they didn’t say, “Well, what can we do that’s unique and different?” For us, we think the Model Y is a great car—I’ve owned one, many folks on our team have owned one—but the world doesn’t need another Model Y. The world needs another choice.

RJ Scaringe: This is a reframing of how we look at transportation. It’s such a big space, such an area of personal expression. We need, as consumers, lots of choices. We need variety. We self-identify with the thing we drive. We just haven’t had it. So my view is that EV adoption in the United States is a reflection of the lack of choice. There’s one set of really great choices with Model 3 and Model Y. I think there needs to be many more.

RJ Scaringe: Even looking at our partnership with Volkswagen Group—a big motivator for that, which ties to our mission, was: can we take our technology platform and allow that to be expressed through a variety of really interesting and storied brands, different form factors, different price points, different segments? The more choices we have, the more it’s going to lead to broader-based adoption of electric vehicles, which creates a very positive level of momentum around the space.

RJ Scaringe: It’s worth noting—when we look at how we develop a car like R2, we don’t think of it as “this is someone who’s going to buy an EV, let’s make it good.” We think of it as: let’s make the best possible vehicle we can imagine. Incredible performance, great range, great dynamics, tons of storage—and the person buying it will be drawn into electrification because the car is just the best choice they have.

RJ Scaringe: We took that same view with R1, and on R1, the vast majority of our customers are buying an EV for the first time ever—their first EV is a Rivian. Which is really good. If all we were doing is moving customers between one or two brands, it wouldn’t be accomplishing the goal. We have to create new EV customers with products that are so compelling that it just draws people in.

Our Relationship with Cars in the Age of AI

Sarah Guo: So that leads into my very last question. I grew up thinking a car is a huge part of my identity. Love cars. Drew them. Still think they’re pretty cool. As they become more like utilitarian services with the rise of robotaxis—serving some of the function your car did before—how do you think our relationship with vehicles changes over time?

RJ Scaringe: I do think we’re going to see a shift. It’s an interesting philosophical question—why are cars such a part of our society, and why do we have this affinity for them in a way that we don’t have for other things in our life that are really important? I don’t look at my refrigerator and think, “I really love that,” in the same way that I do with a car.

RJ Scaringe: I think part of it is that a car enables personal freedom. It allows you to explore. It’s something that you not only ride in, but it becomes part of an expression of self. And I think that’s probably going to continue to some degree, but it is going to evolve.

RJ Scaringe: The way we look at it with our products—and even how we’ve laid out and contemplated the purpose of the brand—we really look at it through this lens: the vehicles and products we make need to enable people to go do the kinds of things they’d hope to have memories of for years to come. We often say the kinds of things you’d want to take photographs of. But more than just enabling it—which is a functional requirement, like can it fit your stuff, your pets, your gear, your friends—can it inspire it? Can the brand, the way we present what we’re building, and the way we make design decisions inspire you to go do the things you want to remember for years to come?

RJ Scaringe: There are little design decisions we make that link to that. A flashlight in the door is an invitation to explore—an invitation to go look at things in the night.

Sarah Guo: Or the treehouse.

RJ Scaringe: Yeah, exactly. There are all these little decisions throughout the whole car that are designed to engage that element of inspiring people to imagine the life they want to have.

[Outro]

Sarah Guo: Awesome. Thank you so much, RJ. Congrats on the R2 and on the autonomy program.

RJ Scaringe: Thank you.

Sarah Guo: Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.