How do on-board processors in automotive electronics enable self-driving cars to “see” and “think”?

In July 2017, the veteran carmaker Audi released its new-generation A8, set to become the world's first production car to achieve Level 3 automated driving. Clearly, both emerging players and established car companies are actively developing self-driving cars, and that development is inseparable from the support of processors, sensors, and networks. Focusing on the on-board processor alone: how is it paired with vision sensors so that an autonomous car can “see”? And once fused sensor data has been collected, how does the processor let the car “think”?

More cameras multiply the data volume: image-processing cores let the car “see”

Tan Hong, Deputy Director of China Automotive Industry Marketing Department, Toshiba Electronics (China) Co., Ltd.

The Audi A8, an L3 self-driving car, is expected to be legally on the road by the end of the year. It has attracted attention not only because mass-produced self-driving cars have so far reached only L2, but also because consumers have high expectations for autonomous driving. Tan Hong, Deputy Director of the China Automotive Industry Marketing Department at Toshiba Electronics (China) Co., Ltd., said in an interview with Huaqiang Electronics: "Autonomous driving is currently the hottest topic, but mass production and large-scale adoption will take time. ADAS (Advanced Driver Assistance Systems) is the foundation of autonomous driving, and the major automotive electronics manufacturers have been working in this field for many years. A vehicle market equipped with ADAS functions lays the groundwork for the development of autonomous driving. Chinese automakers have tracked global technology trends closely in recent years and keep increasing their investment in automotive electronics; we expect them to follow the major international manufacturers closely in launching near-production models to test market reaction."

Lu Xueliang, Marketing Manager, Internet of Things and Automotive Electronics Solutions Group, Socionext

As for the sensors ADAS requires, Lu Xueliang, Marketing Manager of Socionext's Internet of Things and Automotive Electronics Solutions Group, explained that current ADAS and autonomous-driving solutions are based on lidar, cameras, and sensor fusion. Lidar has its drawbacks, such as the inability to distinguish lane markings, debris, or potholes. A camera's visual processing can better recognize road signs, pedestrians, and similar information; compared with radar, it offers lower cost, more comprehensive functionality, and higher accuracy. Lin Zhien, Head of Automotive Electronics Application Technology at Renesas Electronics' Greater China Automotive Electronics Sales Center, added: "The camera is one of the main sensors for autonomous driving. Short-range scenes and objects around the vehicle, such as traffic-sign information and lane lines, must be sensed by cameras. The camera data is sent to the automotive image processor for computation and recognition, and the result is passed to the central control system."

Lin Zhien, Head of Automotive Electronics Application Technology, Renesas Electronics Greater China Automotive Electronics Sales Center

However, Tesla's accidents have called the safety of monocular ADAS into question, so Tesla, Mercedes-Benz, BMW, and other manufacturers have begun adopting binocular (stereo) ADAS solutions, which let the car quickly capture road-surface information through binocular vision under complex road conditions and in varied weather, improving vehicle safety. In addition, 360-degree surround-view applications require wide-angle cameras, different from the high-dynamic-range cameras used in monocular or binocular ADAS, and four or more of them. Whether the goal is better safety or a better autonomous-driving experience, the growing number and variety of cameras inevitably multiplies the data volume and complicates the algorithms. Tan Hong therefore believes that the real-time computing power, power consumption, tolerance of harsh operating conditions, and cost of the automotive image processor have become the focus of competition among manufacturers.
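To see what binocular vision buys over a single camera, note that a stereo rig recovers depth directly from the disparity between the two images via the standard pinhole relation Z = f·B/d. The sketch below illustrates only that textbook formula; the focal length, baseline, and disparity values are hypothetical and do not come from any manufacturer cited in this article.

```c
#include <stdio.h>

/* Depth from stereo disparity: Z = f * B / d
 * f: focal length in pixels, B: baseline between the two cameras (m),
 * d: disparity in pixels. All values below are illustrative only. */
static double depth_from_disparity(double focal_px, double baseline_m,
                                   double disparity_px)
{
    return (focal_px * baseline_m) / disparity_px;
}

int main(void)
{
    double f = 1400.0;   /* hypothetical focal length, pixels  */
    double B = 0.12;     /* hypothetical 12 cm camera baseline */

    /* A nearer object yields a larger disparity, hence a smaller depth. */
    for (double d = 10.0; d <= 40.0; d += 10.0)
        printf("disparity %5.1f px -> depth %5.2f m\n",
               d, depth_from_disparity(f, B, d));
    return 0;
}
```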

So how can automotive processor manufacturers meet the "seeing" needs of vision sensors? Lin Zhien said: "The automotive processor must have dedicated hardware accelerators to process large volumes of image data. Renesas R-Car products integrate image-processing IP cores such as IMR, IMP, CNN, and CV, and can simultaneously process data from up to eight camera channels. The IMP multi-instruction image-processing core not only handles the signals from multiple cameras but also offloads work from the CPU and GPU, offering strong performance, low latency, and low power consumption."
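A rough back-of-the-envelope calculation shows why dedicated accelerators become necessary once eight cameras are attached. The per-camera resolution, frame rate, and pixel format below are illustrative assumptions, not the specifications of any chip mentioned here.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical per-camera parameters, for illustration only. */
    const long width  = 1280, height = 960;  /* pixels                        */
    const long fps    = 30;                  /* frames per second             */
    const long bpp    = 2;                   /* bytes per pixel (e.g. YUV422) */
    const long ncam   = 8;                   /* camera channels               */

    long per_cam = width * height * bpp * fps;  /* bytes per second */
    long total   = per_cam * ncam;

    printf("per camera: %.1f MB/s\n", per_cam / 1e6);
    printf("8 cameras : %.1f MB/s\n", total   / 1e6);
    /* Roughly 74 MB/s per camera and about 590 MB/s aggregate: far more
     * than a general-purpose CPU can warp, filter, and classify in real
     * time, hence the dedicated accelerator cores described above. */
    return 0;
}
```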

Toshiba also has an image-processing solution. Tan Hong said: "Toshiba's Visconti 2 dedicated image-processing chip was adopted by Denso for its excellent performance and reliability, and went into mass-production vehicles at the end of 2015. The next-generation Visconti 4 is currently in volume production. With 8 cores plus 14 hardware image/algorithm accelerators, it more than doubles performance and can run eight functions simultaneously, such as lane keeping, leading-vehicle recognition and collision avoidance, pedestrian recognition, traffic-sign recognition, cyclist recognition, and obstacle recognition. Visconti 4 also has the advantages of real-time operation, low power, and maturity."

Socionext uses a hardware-plus-software approach to meet automotive vision needs. Lu Xueliang said: "Processors running ADAS algorithms usually come in a GPU flavor and a hardwired flavor. The GPU approach has better performance and flexibility, but its cost and power consumption are very high; the hardwired approach gives high performance and low power consumption, but low flexibility. For this, we provide algorithms such as MOD and AOD, and the MIRANDA processor includes a specially developed VPU core. The VPU combines hardware acceleration (HWA) with software acceleration (DWA) to process the relevant video and graphics, achieving high performance, low power consumption, and good flexibility. Basic algorithms such as the Sobel operator or Fourier transforms can be placed in the hardware accelerators, while the rest runs in the DWA; developing against the standard OpenVX interface is fast and improves code reusability."
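Since the quote names both the Sobel operator and the standard OpenVX interface, here is a minimal, generic OpenVX graph that runs a 3x3 Sobel filter and combines the gradients into an edge magnitude. It uses only standard OpenVX 1.x API calls and sketches the programming model in general; it is not Socionext's actual SDK, and the image dimensions are arbitrary.

```c
#include <stdio.h>
#include <VX/vx.h>

int main(void)
{
    vx_context ctx = vxCreateContext();
    vx_graph graph = vxCreateGraph(ctx);

    /* Input frame (e.g. one camera channel) and output edge map.
     * 640x480 is an arbitrary illustrative size. */
    vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
    vx_image mag = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);

    /* Virtual images: intermediates the runtime may keep on-accelerator. */
    vx_image gx = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image gy = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);

    /* Build the graph: Sobel gradients, then gradient magnitude. An
     * OpenVX implementation is free to map each node onto a hardware
     * accelerator or a software/DSP path, which is exactly the
     * portability argument made in the quote above. */
    vxSobel3x3Node(graph, in, gx, gy);
    vxMagnitudeNode(graph, gx, gy, mag);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);            /* execute one frame */
    else
        printf("graph verification failed\n");

    vxReleaseContext(&ctx);               /* also releases graph and images */
    return 0;
}
```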

Multi-sensor fusion and AI demand higher performance: integrated artificial-intelligence IP lets the car "think"

Reportedly, autonomous-driving processors already require tens of thousands of DMIPS of general-purpose performance plus tens of TOPS of deep-learning throughput, and more sensor fusion and higher levels of autonomy will naturally push those requirements higher. Fusion here does not mean integrating the various sensors into one chip or one device; it means combining the signals collected by the different sensors and analyzing and judging them comprehensively with artificial-intelligence technology, so that the vehicle's actuators can achieve a higher degree of automated driving. Lin Zhien commented: "Multi-sensor fusion requires processors with different communication interfaces and buses, and enough computing power to process and fuse the different signals. But the current CAN bus, whether viewed from bandwidth or from safety, can hardly meet such a vehicle's communication needs." Tan Hong agreed: "As the information to be processed and the required bandwidth grow, the automotive communication bus has hit a bottleneck. The new Ethernet AVB communication bus has been adopted by core companies in the industry and may become mainstream in the future. Through in-depth cooperation with Qualcomm, Intel, and other companies, Toshiba's professional Ethernet AVB bridge chip Neutrino has become a standard part of Qualcomm's and Intel's automotive platforms."
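As a toy illustration of what "combining and comprehensively judging" sensor signals means in the simplest case, the sketch below fuses a radar and a camera estimate of the distance to the same object by inverse-variance weighting, the one-dimensional core of Kalman-style fusion. All numbers are hypothetical, and the approach is generic textbook math, not any vendor's algorithm.

```c
#include <stdio.h>

/* Inverse-variance fusion of two noisy estimates of the same quantity.
 * The less noisy sensor gets the larger weight, and the fused variance
 * is smaller than either input variance. */
static void fuse(double x1, double var1, double x2, double var2,
                 double *x_out, double *var_out)
{
    double w1 = 1.0 / var1, w2 = 1.0 / var2;
    *x_out   = (w1 * x1 + w2 * x2) / (w1 + w2);
    *var_out = 1.0 / (w1 + w2);
}

int main(void)
{
    /* Hypothetical readings: radar is precise in range, camera less so. */
    double radar_d = 42.3, radar_var = 0.25;  /* metres, metres^2 */
    double cam_d   = 40.8, cam_var   = 4.0;

    double d, var;
    fuse(radar_d, radar_var, cam_d, cam_var, &d, &var);
    printf("fused distance: %.2f m (variance %.3f)\n", d, var);
    return 0;
}
```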

Still, whether the on-board processor has enough computing power, and whether it can do deep learning, determines whether it can imitate the human brain and "think", which matters even more. To this end, Nvidia has introduced Xavier, a chip with a 10-core 64-bit ARM CPU and a 512-core Volta-based GPU, delivering 30 TOPS of deep-learning performance at 30 W. Lin Zhien said: "To achieve autonomous driving in the future, the on-board processor must meet the requirements of deep learning. It not only needs to integrate CNN IP; the CNN will play an even larger role in chip design. Renesas' third-generation R-Car products add proprietary IP such as a deep-learning CNN and CV cores to meet the needs of artificial intelligence. These dedicated IP blocks will handle all kinds of big data and keep up with constantly evolving AI algorithms, further optimizing the on-board processor and the whole autonomous-driving system. Our next-generation CNN IP will also deliver 30 TOPS of deep-learning performance, but at the same performance our products will consume less power. The reduction comes from new production processes: our second-generation products use a 28 nm process, the third generation will move to 16 nm, and the next generation will adopt an even more advanced process. As the process improves, cost can still be controlled, because more chips can be produced from each 12-inch wafer. Currently, R-Car incorporates the "Emotion Engine", an artificial emotional intelligence developed by cocoro SB, a subsidiary of SoftBank Group, to identify the driver's emotional state from his or her speech. Based on machine learning, the car can continuously learn from dialogue with the driver and respond ever better to the driver's needs. In the future, to meet the needs of L5 autonomous driving, the deep-learning capability of next-generation R-Car products will be increased further."
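To ground the TOPS figures above: a CNN's cost is dominated by the multiply-accumulate (MAC) operations in its convolution layers, which is exactly what dedicated CNN IP hardwires. The sketch below implements one small 2D convolution and counts its MACs; the layer and kernel sizes are arbitrary illustrative choices, not the dimensions of any network named here.

```c
#include <stdio.h>

#define H 32
#define W 32
#define K 3

/* One single-channel 2D convolution (valid padding), counting the
 * multiply-accumulate operations that TOPS ratings are quoted in. */
static long conv2d(float in[H][W], float k[K][K],
                   float out[H - K + 1][W - K + 1])
{
    long macs = 0;
    for (int y = 0; y <= H - K; y++)
        for (int x = 0; x <= W - K; x++) {
            float acc = 0.0f;
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K; j++) {
                    acc += in[y + i][x + j] * k[i][j];
                    macs++;                  /* one multiply-accumulate */
                }
            out[y][x] = acc;
        }
    return macs;
}

int main(void)
{
    static float in[H][W];                   /* zero-initialised input  */
    static float out[H - K + 1][W - K + 1];
    float edge[K][K] = {                     /* arbitrary 3x3 kernel    */
        { -1, -1, -1 }, { -1, 8, -1 }, { -1, -1, -1 }
    };

    long macs = conv2d(in, edge, out);
    printf("MACs for one %dx%d conv on a %dx%d image: %ld\n",
           K, K, H, W, macs);
    /* Real networks run many such kernels per frame across hundreds of
     * channels and layers, which is why tens of TOPS are needed. */
    return 0;
}
```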

Toshiba and DENSO are jointly developing an artificial-intelligence technology called Deep Neural Network Intellectual Property (DNN-IP), which will be used in the image-recognition systems the two companies have each developed independently; the more accurate image recognition it enables will help realize advanced driver assistance and autonomous driving. Tan Hong also revealed that Toshiba's latest Visconti 5 integrates the newest technologies, such as deep learning and neural networks, on top of further-improved computing power, and is expected to reach the market in 2018-2019. Socionext, it is understood, will likewise add a hardware unit to its next-generation products, integrating convolutional-neural-network (CNN) IP.
