While CES may have started out as the Consumer Electronics Show, over the years the global event has expanded to include everything from enterprise technologies to automobiles. As a result, the show has not only been a regular keystone event for things like CPU and GPU announcements, but has also become home to major automotive technology announcements. And for Intel’s autonomous driving subsidiary, Mobileye, those two paths are coming directly together this year, as this morning the group is announcing their next-generation Level 4 autonomous driving SoC, the EyeQ Ultra.

Aimed at a 2025 release, the EyeQ Ultra is Mobileye’s most ambitious SoC yet, and not just for performance. In fact, as a site that admittedly rarely covers automotive-related announcements, it’s the relative lack of performance that makes today’s announcement so interesting to us. But we’re getting ahead of ourselves here, so let’s start with the basics.

The EyeQ Ultra is Mobileye’s seventh-generation automotive SoC, and is designed to enable Level 4 autonomous driving – otherwise known as “high automation” driving. Though not quite the holy grail that is Level 5 (full automation), L4 is the more immediate goal for automotive companies working on self-driving cars, as it is a degree of automation that allows cars to start, and if necessary, safely stop themselves. In practice, Level 4 systems are likely to be the basis of robo-taxis and other fixed-area vehicles, where such self-driving cars will only need to operate across a known and well-defined area under limited weather conditions.

Mobileye already has the hardware to do L4 automation today; however, that hardware comprises six to eight EyeQ chips working together. For the research and development phase that’s more than sufficient – just making it all work is quite a big deal, after all – but with L4 now within Mobileye’s grasp, the company is working on the next step of productizing the technology: making it cheap enough and compact enough for mass market vehicles. And that’s where the EyeQ Ultra comes in.

At a high level, the EyeQ Ultra is intended to be Mobileye’s first mass market autonomous driving SoC. To accomplish this, Mobileye is designing a single-chip L4 driving system – that is, all of the necessary processing hardware is contained within a single high-end SoC. So when attached to the appropriate cameras and sensors, the EyeQ Ultra – and the Ultra alone – would be able to drive a car as per L4 standards.

But perhaps the most interesting thing about the EyeQ Ultra is that Mobileye intends to do L4 driving with a chip that is, on paper, not particularly powerful. The official performance figure for the chip is 176 TOPS, which, to be sure, is a lot of performance today (in 2022). But it’s a fraction of the performance that’s planned for high-end SoCs in the 2025 timeframe that Mobileye is targeting. In short, Mobileye not only believes that they can do L4, but that they can do it at a fraction of the performance and power consumption of their competitors.

Ultimately, the argument that Mobileye will be coming to market with is that autonomous driving has matured enough as a field that not everything needs to be done in software or in highly flexible accelerators. Instead, it’s time to start building true ASICs, with highly specialized fixed-function (or otherwise limited-flexibility) components that do one thing, and do it well. Overall it’s the natural path of progression for most task-specific processors, and Mobileye believes that self-driving automotive systems are finally ready to head in that direction as well.
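To make that fixed-function-versus-flexible trade-off a bit more concrete, here’s a deliberately simplified sketch in Python – ours, not Mobileye’s – contrasting a hard-wired pipeline, which bakes one sequence of operations into the design, with a programmable engine, which can run any sequence but pays dispatch overhead on every step. All of the function names and operations are hypothetical stand-ins.

```python
# A toy contrast between "fixed-function" and "programmable" processing.
# Entirely illustrative: this models the design trade-off described above,
# not any real Mobileye or NVIDIA hardware. All names are hypothetical.

# Minimal stand-in operations so the sketch runs end to end
def denoise(frame):      return frame
def detect_edges(frame): return frame
def classify(frame):     return frame

def fixed_function_pipeline(frame):
    """One hard-wired sequence: minimal overhead, but it can only ever do this."""
    return classify(detect_edges(denoise(frame)))

OPS = {"denoise": denoise, "edges": detect_edges, "classify": classify}

def programmable_engine(frame, program):
    """Runs any op sequence: flexible, but decodes/dispatches every step."""
    for op in program:  # per-step dispatch is the flexibility tax
        frame = OPS[op](frame)
    return frame

frame = [0, 1, 2]
fixed_function_pipeline(frame)                                # the one fixed job
programmable_engine(frame, ["denoise", "edges", "classify"])  # any job you like
```

The hard-wired version spends nothing on deciding what to do next, but the moment the algorithm changes it’s a new chip; the programmable version absorbs algorithm changes in software at the cost of per-step overhead. That, in miniature, is the bet each side is making.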

Diving into speeds and feeds then, the EyeQ Ultra is going to have several different types of cores, each for a different task involved in operating an L4 self-driving vehicle. These are:

  • 12 RISC-V CPU cores
  • Arm GPU
  • Arm DSP
  • SIMD cores
  • VLIW cores
  • Coarse grained reconfigurable array (CGRA) cores
  • Deep learning cores

The latter four groups of cores comprise Mobileye’s “proprietary accelerators”, and are where the bulk of the work is done. According to Mobileye there are 64 such accelerator cores in all, though the company isn’t breaking down just how many cores are in each specific group. The entire chip, in turn, will be built on a 5nm process (presumably TSMC’s).

Ultimately, Mobileye’s investment in so much limited-flexibility hardware means that the chipmaker has relatively little hardware available for high throughput general computing – and, as their reasoning goes, relatively little need for it either. With sufficient accelerator throughput, even 176 TOPS should be enough for an L4 autonomous vehicle.

In Mobileye’s eyes, focusing so much on task-specific hardware will bring a couple of benefits. First and foremost is that it keeps down the total amount of silicon required. This not only allows them to get everything onto a single chip, but it keeps the associated system cost down. The second benefit is power: the fewer transistors you have to light up, the less power is spent. And even though power is relatively easy to come by in a car – particularly EVs – it can add up with multiple chips. So Mobileye is looking to make that a major feature differentiator with the EyeQ Ultra: if the lower cost doesn’t woo automakers, hopefully the lower power and cooling requirements will.

Meanwhile, although Mobileye doesn’t name any specific competitor, overall their press release/sales pitch seems like a thinly veiled shot at NVIDIA, whose own Atlan SoC is due in the same 2025 time period. Indeed, even the timing of today’s announcement (8am PT) is set to coincide with the start of NVIDIA’s CES 2022 keynote, rather than with Intel’s own 10am keynote. So Mobileye would seem to have a very clear idea of who they believe is their chief rival.

It’s a notable rivalry not only because of the firms involved (essentially Intel versus NVIDIA), but because of NVIDIA’s TOPS-centric promotion of their automotive SoCs. For reference, NVIDIA is promoting Atlan as offering over 1,000 TOPS of throughput for the SoC alone, and throughput would be greater still in a high-end multi-chip Drive PX setup, as NVIDIA likes to do.
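For a quick sense of scale, the two headline numbers work out as follows – a back-of-envelope calculation using only the figures quoted in this article, with the caveat that throughput is unlikely to be spread evenly across the EyeQ Ultra’s four accelerator families:

```python
# Back-of-envelope math using only the headline figures quoted above.

eyeq_ultra_tops = 176    # Mobileye's stated figure for the EyeQ Ultra
atlan_tops = 1000        # NVIDIA's "over 1,000 TOPS" claim for Atlan
accelerator_cores = 64   # Mobileye's stated count of proprietary accelerator cores

# Raw-throughput gap between the two chips' headline numbers
print(f"Atlan vs. EyeQ Ultra, raw TOPS: {atlan_tops / eyeq_ultra_tops:.1f}x")   # ~5.7x

# Naive per-core average; real throughput is unlikely to be evenly
# distributed across the four accelerator families
print(f"Average TOPS per accelerator core: {eyeq_ultra_tops / accelerator_cores:.2f}")  # 2.75
```

On paper, then, NVIDIA is promising nearly six times the raw throughput; Mobileye’s bet is that its task-specific cores will get more useful work out of each of those operations.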

Consequently, Mobileye is very wary of ending up in a specs war with NVIDIA, as the latter’s high deep learning throughput certainly looks attractive compared to Mobileye’s lower-throughput, accelerator-based approach. Fundamentally, the difference in TOPS throughput just reflects the difference in design philosophy between the two groups – NVIDIA’s software-defined approach versus Mobileye’s more task-specific accelerators – but the company is rightfully concerned about how much better a big TOPS number will look.

But perhaps the more important – albeit lingering – question is which approach is going to produce better results. Mobileye’s approach essentially locks into place their current technology stack and associated self-driving algorithms, while NVIDIA leaves the door open to more flexible approaches. But then what good is being flexible if it’s going to cost an arm and a leg, especially at a time when the industry as a whole is trying to bring costs down in order to get self-driving tech into more vehicles?

To bring things full circle, it’s this contrast that makes Mobileye’s approach with the EyeQ Ultra so interesting. NVIDIA is taking the equivalent of the brute force approach, and there’s little reason to doubt it will work. Mobileye’s greater investment in task-specific hardware, on the other hand, carries more risk; but if they can deliver on what they promise, then they’d be doing so with a fraction of the silicon and a fraction of the power consumed, all of which would make for a big advantage over NVIDIA.

For better or worse, it’s still early days in the autonomous vehicle industry. Everything up until now has been one big string of experiments, and even when 2025 rolls around and the EyeQ Ultra heads out the door, it’s still going to be just the beginning of a much larger shift. So there’s still plenty of time for companies to jostle for position in the automotive market – but true mass market commercialization isn’t too far away. For Mobileye and the other automotive SoC makers, then, it’s time to start playing for keeps.

Source: Mobileye

Comments

  • mmrezaie - Tuesday, January 4, 2022 - link

    This area is exploding, and I kinda wish car manufacturers would design cars in a way that they can be upgraded, e.g., the CPU side of things. Otherwise, cars are going to become obsolete sooner than we’re used to.
  • arashi - Tuesday, January 4, 2022 - link

    RIP vehicle inspectors lol if this ever pans out.
  • tipoo - Tuesday, January 4, 2022 - link

    Tesla already did that years ago. Both the media control unit and the FSD computer are modules that can be upgraded.
  • whatthe123 - Tuesday, January 4, 2022 - link

    They did that because they sell their cars on features that don't exist yet, like true FSD. They also just fill up their memory pool with OTA updates, so if they didn't have that feature they'd just brick older cars. Your average car doesn't require it.
  • DanCar - Monday, February 28, 2022 - link

    Agree! It would be great if we could buy a car that won't be obsolete after the L4 breakthrough!
  • StormyParis - Tuesday, January 4, 2022 - link

    I'm really puzzled by the way everyone is doing self-driving. Most people around me do only 5-10 different trips 90+% of the time. Instead of going for full automation all the time, why don't cars just learn those 5-10 trips, and hand back to the driver when something new along them requires human attention?
  • spoorthyv - Tuesday, January 4, 2022 - link

    I think it's because it's not that much easier. The hard part isn't the shape of the road. It's the pedestrians, weather, random debris, and other cars. Even with the same route, those are different every single time.
  • StormyParis - Tuesday, January 4, 2022 - link

    but isn't it easier to spot them when you already "know" the road?
  • whatthe123 - Tuesday, January 4, 2022 - link

    The mapping and training everyone is doing with AI is basically "knowing" the road. They can use oversized sensors and gather data to find common patterns. If you did it individually, the rides would be pretty dangerous until your car learned the path, since it would have very little information to go off of.
  • tipoo - Tuesday, January 4, 2022 - link

    That's 5-10 trips among hundreds of millions of people, i.e., at that point you've already solved general autonomy. It's not like it can just remember the route you show it a few times, because all the variables and other people driving are what make this hard.
