Peggy Aycinena is a freelance journalist and Editor of EDA Confidential. She can be reached at peggy at aycinena dot com.

Roadmap @ ISSCC: When Will We Stop Driving Our Cars?

February 16th, 2017 by Peggy Aycinena

Millions of people are talking about when we will stop driving our cars, many thousands are working on it, and six among those thousands made an appearance Tuesday evening, February 7th, on a panel at IEEE’s International Solid-State Circuits Conference in San Francisco.

Over the course of the hour, the six speakers outlined their different visions of the technical roadmap that must be pursued to achieve fully autonomous cars. Of the six speakers, however, only three actually attempted to answer the panel prompt and their answers were wildly disparate.

So when will we stop driving our cars? 1) It’s impossible to know. 2) Not until 2030. 3) We already are beginning to stop driving our cars.

The panel was moderated by a senior Intel engineer, heavily involved in the company’s newly organized business unit specifically focused on autonomous driving systems.

Intel’s Umberto Santoni said autonomous driving systems must be viewed as systems within systems within systems – silicon, storage, communication, software, security.

Additionally, the infrastructure needed to provide deep knowledge of everything from changing road conditions, to info about surrounding vehicles and optimized routing decisions, must be thoroughly imbued with end-to-end functional safety features – starting at the IP-block level within the chip, through to the SoC, to the full system, and the software that sits on top of it all.

And it all must meet, among other standards, those laid out in ISO 26262, the second edition of which is about to be released.

Per Santoni, the key challenges in making all of this happen include: establishing the right balance between hardware and software to maximize both high-performance computing and high-reliability safety guarantees; verification and validation processes that extend all the way from the IC to the data center directing the vast traffic system, with partitioning tools to help determine which type of verification applies at each level of abstraction within the design; and detailed attention to fault tolerance across the entire system, again from the lowliest IP block all the way up to the data center.

Finally, Santoni said, there is the time requirement. These systems will be very sensitive to signal latency within the network.

With so much to accomplish to make fully autonomous driving a reality, Santoni chose not to answer the panel prompt. He did not tell his audience when we will stop driving our cars; he left that task to the five panelists.

They included Roger Berg from Denso International America and Sahin Kirtavit from NVIDIA, both companies based in California, along with Patrick Leteinturier from Infineon, Markus Tremmel from Robert Bosch, and Jurgen Dickmann from Daimler, these last three companies all based in Germany.

Not appearing on the panel were representatives of some of the more highly publicized companies working on autonomous driving: Google, Tesla, Apple [by rumor], Sony, and a host of universities. Nonetheless, the presentations were sufficiently diverse that they no doubt provided a good sampling of the different engineering approaches being contemplated to achieve the goal.

Roger Berg, VP of R&D at Denso, emphasized the international nature of his huge company – over 150,000 employees and 38,000 active patents – and their commitment to the ideals of environmental integrity, safety, comfort and convenience in their work on self-driving cars.

He declared autonomous driving sits within a larger contemplation of the mobile society of the future, and said the problem can be parsed into four quadrants – the widespread use of IT services, the smart grid, automated systems, and mobility technology.

There’s a blurring across these quadrants, Berg said, that’s both a cause and an effect of work on autonomous driving. He also noted that Denso’s research on radar, camera technology, and the human-machine interface are all helping to push the envelope towards self-driving cars, along with their work on cyber-security.

He said quality, functional safety, and security are the over-arching concerns in autonomous driving systems and ended on a tease: A self-driving car will soon generate two petabytes of data per year. What will the data center look like that will manage this load for millions of autonomous vehicles?
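Berg’s tease invites a quick back-of-envelope calculation. The sketch below is only illustrative: the 2-petabytes-per-year figure is his, but the one-million-vehicle fleet size is an assumption standing in for his “millions of autonomous vehicles.”

```python
# Back-of-envelope sizing of Berg's data-center tease (illustrative only).
SECONDS_PER_YEAR = 365 * 24 * 3600          # ~3.15e7 seconds

per_car_bytes_per_year = 2e15               # 2 PB/year per car, Berg's figure
fleet_size = 1_000_000                      # assumed fleet; Berg said only "millions"

# Average sustained data rate generated by a single vehicle.
per_car_rate = per_car_bytes_per_year / SECONDS_PER_YEAR

# Aggregate yearly volume the data center would have to manage.
fleet_total = per_car_bytes_per_year * fleet_size

print(f"per-car average rate: {per_car_rate / 1e6:.0f} MB/s")   # -> 63 MB/s
print(f"fleet total per year: {fleet_total / 1e21:.0f} ZB")     # -> 2 ZB
```

Even under this conservative one-million-car assumption, the yearly load lands in zettabyte territory, which is the scale behind Berg’s question about what such a data center would look like.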

Berg did not answer the panel prompt. The challenges he laid out implied there’s too much to do to know when we will stop driving our cars.

Infineon’s Patrick Leteinturier organized the engineering outlook, not into four quadrants, but into a pyramid of problems.

At the bottom sits the “uber-physical” – practical research into braking, suspension, steering, transmission, motors and engines. The second layer in the pyramid – the “functional layer” – includes sensors to monitor the vehicle’s surroundings, the fusion of all sensor data, and the connections required to funnel that data to the central compute platform. This platform – the “organization layer” – sits atop the pyramid and represents the place where all input data is analyzed, and all subsequent decisions made.

Beyond this idealized pyramid of engineering focus, Leteinturier emphasized that the self-driving car must also be affordable and energy-efficient, as well as comfortable and fully kitted-out with a state-of-the-art infotainment system.

These last issues are strikingly prosaic, but given the reality of the fiercely competitive automotive market worldwide, they are also of paramount concern to the marketeers who want to sell these vehicles to the consuming public.

Bosch’s Markus Tremmel said we will stop driving our cars when the car is redefined as a system with a computer in the trunk and a battery that’s sufficient to get you anywhere with confidence. He said achieving that vision of the car is so complex, “We just don’t know when we will get there.”

But when we do, he added, the car will react to its environment like a real human brain: it will sense its surroundings and execute decisions seamlessly, with an optimized use of energy.

This will require, Tremmel said, more than just an automated car. It will require total electrification of our environment, total ubiquitous connectivity, widespread automation, and trustworthy system integration. All of these ingredients are moving quickly toward realization, he added, but they need a lot of refinement before we will stop driving our cars.

Drilling down, Tremmel noted there are numerous key elements yet to be achieved that must precede highly autonomous driving: Legislation and a set of globally accepted standards; rigorous safety and security for the system; highly robust surround sensors; a fault-tolerant human machine interface; system intelligence to interpret a situation instantly – sense, plan, decide, execute; localization information and maps; and a highly redundant system architecture.

What’s needed, Tremmel concluded, is a new mixture of hardware, software and test to implement systems that are not only robust, but economical as well. When we’ve done that, he said, “The car will be the biggest, most complex IoT node out there!”

Daimler’s Jurgen Dickmann started off his presentation with a compelling video of the driverless car of the future, the passenger sitting comfortably inside, ignoring his surroundings, reading the paper, and enjoying a beverage as he is whisked to his destination.

[Quite honestly, it looked like an ad for BART and raised the question: Why don’t we all just use public transportation?]

Dickmann also had a slide to emphasize the ever-increasing pace of technology adoption. Electricity, telephones, [cars], radio, [airplanes], TVs, computers, personal computers, the Internet, and smart phones. Each of these technologies was embraced at a faster and faster clip, and self-driving cars will gain traction even faster, he said, citing the large number of players at CES in Las Vegas in January showcasing technology for autonomous driving systems.

Based on this evidence, Dickmann predicted that we will finally stop driving our cars somewhere around 2030.

Although, he warned, low power consumption and a comprehensive understanding of the balance between static and dynamic environmental data – the former handled by radar, the latter by machine learning – must become a reality before fully autonomous driving can be achieved.

NVIDIA’s Sahin Kirtavit offered the most aggressive evaluation: “Self-driving cars are clearly already here, but they’re not yet perfect.”

There will not be, he said, a moment when “kaboom” we stop driving our cars. Instead there will be a gradual acceptance of the various parts of these autonomous systems, as the engineering related to each is perfected.

Ultimately, Kirtavit noted, the answer to all of this is AI. Artificial intelligence will provide solutions for the necessary levels of perception, reasoning, driving, mapping, and response-computing needed to make self-driving cars a reality.

It’s just like gaming, he said, and that is the reason NVIDIA has so much to offer in the area of self-driving technology: it’s all about predicting what will happen around you, making a decision, and making a move based on that decision. Kirtavit said his company is perfectly positioned to use their sophisticated GPU expertise to push the envelope in autonomous driving systems.

He also made a specific comment about the human machine interface that will command the car.

With visual sensors inside the vehicle tracking the motions of the human ‘driver’, including eye movement and shifting centers of attention, the self-driving car of the future will be micro-managed by voice activation from the driver, and will in turn micro-manage the situation by improving on the choices made by that driver – either through voice command or by taking over control of the system.

And, he added, it’s important to realize that self-driving cars will only succeed in some areas, those with intense digital connectivity to the grid and the data center. Hence, as the process continues whereby we stop driving our cars, it’s possible to take comfort in knowing that we will not stop driving our cars everywhere – just in those locations where there is the necessary and sufficient infrastructure to support that move.

Clearly each one of these speakers was slicing and dicing the engineering problem of developing autonomous driving systems, each envisioning the requisite roadmap in a different way. Yet, each managed to shed light on one or more unique considerations not mentioned by the other speakers.

If you ever needed evidence that something as complex as a self-driving car must and will be designed by a large community of experts, the Tuesday evening panel at ISSCC provided it, vividly and irrefutably.

