Dr. Zhang Fan shared an online course on autonomous driving simulation R&D and engineering practice

Recently, Dr. Zhang Fan, head of the 51VR Intelligent Driving and Transportation Division, was invited to deliver the course "51Sim-One Autonomous Driving Simulation Test Platform: Independent R&D and Engineering Practice" at an online conference of the Chinese electric vehicle industry.

The course is aimed at core audiences in the autonomous driving field: OEMs, core component suppliers, technical experts at Tier 1 companies, middle managers, and engineers and researchers working on the development and design of intelligent connected vehicles. The session was very well received.

After the presentation, engineers from major OEMs and autonomous driving algorithm companies asked many questions, and Dr. Zhang Fan answered them in detail. The following is the transcript of the online course. The full text is about 9,000 words; we suggest saving it first and reading it later.

*This article is reproduced from the WeChat publication "Future Travel Academy".

Hello everyone! I am Zhang Fan from 51VR.

Welcome, everyone, to this live stream of the intelligent connected vehicle series, and thank you for joining! Today I will share the 51Sim-One autonomous driving simulation test platform, our in-house R&D experience, and some recent engineering practice.

51VR is a high-tech company working in AR and VR. Founded in 2015, its overall vision is to create a truly complete and permanent virtual world.

The company's current business focuses on computer graphics and simulation technology, with the main purpose of supporting artificial intelligence training in the virtual world. Our business is divided into two major sectors: one is smart cities, the other is smart vehicles and transportation.

This business unit now has about 100 employees from diverse educational backgrounds. Roughly 70% of the team has a software engineering background, including deep learning, information engineering, and related fields; the other 30% comes from application domains such as vehicle engineering and traffic engineering, bringing the domain knowledge of the automotive industry.

Over the past year and a half, our main work has been building an original Chinese piece of industrial software: 51Sim-One, an autonomous driving simulator. I will introduce it in detail.

Looking back at the history of simulation and its engineering applications: autonomous driving is obviously a new technology, but simulation itself is a very mature field.

The definition of simulation here is using a computer to calculate physical phenomena instead of running actual experiments. This point is important; I will explain it in more detail later.

Let’s take a look at the picture.

In 1960, China used vacuum-tube computers while developing the hydrogen bomb. The picture on the right concerns the conspiracy theories that arose after the World Trade Center was struck.

In fact, there have been many studies in mechanics and architecture that use simulation to explain why the buildings collapsed the way they did, in a chimney-like progressive collapse.

We simplified the simulation process into four steps: modeling, programming, envisioning, and computing. Later, we will apply the same steps to autonomous driving simulation.

Let me explain the urgency of autonomous driving simulation. When you hear "L5 autonomous driving", you may feel it is still far away, but applications such as highway cruising, traffic jam assist, AVP (automated valet parking), and other L3 or L4 functions are basically scheduled to launch between 2020 and 2025. A vehicle development cycle requires about five years of preparation, so now is the key time to build test capabilities.

Therefore, it is urgent to establish guidelines and measures to reduce the risks that future new technologies pose to product safety. The following figure shows how widely traditional simulation is applied at each stage of product development.

These models are relatively mature and ultimately support product certification; basically, the purpose of all of them is to ensure product safety.

Let's talk about the necessity of autonomous driving simulation. We are in fact already building simulation capabilities, and the most important driver is safety. There are two international standards: ISO 26262, which focuses on functional safety, and ISO 21448, which focuses on the safety of the intended functionality (SOTIF).

Simulation is an R&D tool that supports meeting these safety standards. In the diagram on the left, you can see three types of hazardous scenarios; the most dangerous are the unknown potential hazards, which require simulation to explore.

The V-model diagram on the right should be familiar to those in the industry.

It shows how requirements are decomposed into systems and then into components, followed by component verification, system verification, and vehicle-level verification.

The latest challenges come from the new technology of autonomous driving, the challenges addressed in ISO 21448: driver takeover, weather and signal interference, the operational design domain (ODD), and so on. We must solve these problems through simulation.

If we want to design an experiment, first we must clarify the test object: what exactly is under test? Is it the decision algorithm? Or only the perception fusion or planning module? Different test objects have different characteristics and therefore different design goals.

The second step is to design a reasonable test matrix based on the characteristics of the object and the test goal, and to define reasonable experimental parameters.

With the experimental parameters in hand, the next steps are to simplify the simulation model, perform error analysis, and finally output an overall evaluation.

For autonomous driving, consider a schematic of an ODD test in simulation. We can define the safe portion of the vehicle's operating range, some gray areas, and areas unsuitable for operation, each represented by a different color. Simulation clearly gives us better coverage, like a color map. The stars mark the physical experiments actually chosen; since physical experiments are limited, the simulation tool is a good guide for deciding where to run them.
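The idea of coloring an ODD map from simulation runs and then placing the scarce physical tests on the uncertain boundary can be sketched as follows. This is a toy illustration only: the `simulated_safety` risk score, its thresholds, and the parameter ranges are all hypothetical stand-ins for real simulation output.

```python
# Sketch: mapping an ODD with simulation coverage. The risk model and
# thresholds below are illustrative assumptions, not from the talk.
import itertools

def simulated_safety(speed_kph, visibility_m):
    """Toy stand-in for a full simulation run: returns a risk score."""
    return speed_kph / 120 + (200 - min(visibility_m, 200)) / 200

def classify(risk):
    if risk < 0.6:
        return "safe"          # green area of the ODD map
    if risk < 1.0:
        return "gray"          # uncertain area, needs closer study
    return "unsuitable"        # outside the operating design domain

# Sweep the two example ODD parameters and color each grid cell.
odd_map = {}
for speed, vis in itertools.product(range(30, 121, 30), (50, 100, 200)):
    odd_map[(speed, vis)] = classify(simulated_safety(speed, vis))

# Physical track tests are expensive, so place them in the gray zone,
# where the simulation verdict is least certain.
physical_candidates = [p for p, label in odd_map.items() if label == "gray"]
```

Physical results from those candidate points can then be fed back to recalibrate the simulated risk model, matching the two-way guidance described in the next paragraph.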

At the same time, if the physical experiments are consistent with the simulation results, they can in turn guide and validate the simulation experiments.

Let’s take a look at some of the modules of our own automated driving simulation test platform. Currently divided into five major modules:

The first module is the automatic generation and editing of the road environment. It contains two core parts: a built-in high-precision map and a highly realistic rendered scene.

The second module covers dynamic scenario conversion and traffic agent models. We have two: a continuous traffic flow model and an intelligent adversary vehicle model.

The third module, sensor and vehicle simulation, mainly describes the physical characteristics of the car, whether sensors, vehicle dynamics, or algorithm access interfaces; this is where we configure people, vehicles, and roads.

The fourth module covers the scenario library and training sets. We build some third-party and self-defined scenario libraries, mainly for testing. We also produce training sets, such as camera training sets for semantic segmentation and lidar point cloud training sets; we have done a great deal of annotation to provide customers with standard test cases.

The fifth module, data analysis and computation, provides data playback and report generation.

Now introduce the types of simulation tests and platform solutions.

On the left side of the picture, I have drawn three dotted boxes. The first box is software-in-the-loop (or model-in-the-loop).

Basically, we close the simulation loop at the software and code level: from the sensor model to perception, decision, and planning; or we can skip the sensor model and feed the object list of targets directly to the planning algorithm, then to the control algorithm, to the vehicle dynamics, and back into the virtual environment and traffic scenario. This is the simplest simulation: software-in-the-loop testing.

The second closed loop is hardware-in-the-loop, which integrates the pre-controller (shown in blue), motion actuators, and real sensor components into the overall loop. At this stage, errors can come from the hardware as well as from the algorithm.

The third closed loop is driver-in-the-loop or vehicle-in-the-loop, in which a more complete system is integrated into the simulation environment. These are the three types of simulation tests: SIL, HIL, and VIL.
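The simplest of the three, a software-in-the-loop tick, can be sketched as the cycle environment → perception → planning → dynamics. All component models below are trivial placeholders for the real sensor, planner, and dynamics modules; the dictionary field names and the braking policy are assumptions for illustration.

```python
# Minimal software-in-the-loop (SIL) step sketch. Every model here is a
# placeholder; a real setup swaps in rendered sensors, a planner, and a
# full vehicle dynamics model.

def sensor_model(world):
    # A real SIL setup would render camera/lidar frames here; passing the
    # object list through is the "skip the sensor model" variant.
    return world["objects"]

def planner(object_list):
    # Toy policy: brake hard if any object is within 10 m, else coast.
    gap = min((o["distance"] for o in object_list), default=float("inf"))
    return {"accel": -3.0 if gap < 10.0 else 0.0}

def dynamics(ego, cmd, dt=0.1):
    # Integrate longitudinal speed; never go below standstill.
    ego["speed"] = max(0.0, ego["speed"] + cmd["accel"] * dt)
    return ego

def sil_step(world, ego):
    """One tick of the closed loop: environment -> perception -> plan -> dynamics."""
    objects = sensor_model(world)
    cmd = planner(objects)
    return dynamics(ego, cmd)
```

Running `sil_step` repeatedly while the virtual environment updates the object list closes the loop; HIL and VIL replace the software stages with real hardware or a real vehicle.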

I have been studying recently and just read a report called Safety First for Automated Driving, jointly released by several OEMs; I quote part of it here.

I think it is quite authoritative. Look at the table on the right: besides the functional safety items, the small icons of a computer or a car indicate that each component is tested through simulation in each test type, whether at the component level, in perception fusion, in hardware-in-the-loop or software-in-the-loop, or at the system level. Simulation is useful in most of these areas. If you return to the idea that a simulation is equivalent to an experiment, it is easier to understand why simulation has such a large effect: it lets you supply an input and observe the output you need.

This page explains the process in detail, taking software-in-the-loop testing of a decision algorithm as an example. First we create the virtual scene, mainly in two steps: we process the survey data, the raw point cloud and vector graphics, and build high-precision maps with our own editing software, WorldEditor; then we render the model through UE, import the map, configure the vehicle's decision algorithm, the ego vehicle's dynamic parameters, and the environment. The final step is the test process: we load the cases into the simulation environment, run them, and finally analyze and output the test results. That is the whole process.

That was just a simple example of how our simulation modules fit together.

What characteristics should this product, or an ideal simulation environment, have? We summarized them with the acronym MATRIX.

First, Massive. Scale is reflected in three aspects: the scale of the scene, support for large volumes of background traffic, and support for large amounts of roadside data. That is what we mean by large-scale.

Secondly, Accurate. Simulation must stay consistent with reality. In many cases people simplify, but "high precision" here means precision that matches the accuracy my target requires. When reproducing the physical laws of the world as a machine perceives them, for example, what a camera sees is different from what a millimeter-wave radar sees. Many aspects of precision require physical experiments to validate in reverse.

Thirdly, True. Realism means both the realism of the scene and the realism of the traffic flow.

Fourthly, parallel acceleration. We use a distributed hardware cluster architecture that can be deployed in the cloud or locally. By optimizing GPU and CPU computing power, we achieve large-scale parallel accelerated computation for autonomous driving.
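As a simple local analogue of that distributed cluster, independent test cases can be fanned out across worker threads or processes so that many simulations run concurrently. The `run_case` body below is a hypothetical placeholder for launching one real simulation; a production system would dispatch to cluster nodes rather than a local pool.

```python
# Sketch: parallel execution of independent simulation cases. A thread
# pool stands in for the distributed cluster described in the talk.
from concurrent.futures import ThreadPoolExecutor

def run_case(case_id):
    # Placeholder for launching one simulation case and collecting its
    # verdict; the pass/fail rule here is purely illustrative.
    return {"case": case_id, "passed": case_id % 7 != 0}

def run_batch(case_ids, workers=4):
    # map() preserves input order, so results line up with case_ids.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_case, case_ids))
```

Because each case is independent, throughput scales roughly with the number of workers until the CPU or GPU budget is exhausted, which is the point made again in the Q2 answer below.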

The fifth feature is Integrated. The industry already uses good vehicle dynamics software and good real-time hardware simulators, whether from NI or dSPACE, so we need smooth interoperation and integration with them.

The sixth feature is eXtensive compatibility. Whether for HD map formats or dynamic scenario formats, there are now the OpenX series of definitions, with more to come; we maintain good compatibility with these world standards.

This is the last page before our engineering practice. Let me briefly describe the automated software toolchain we built before the simulation software itself, for example for generating the test scenario library. We collect data from basically three sources: first, roadside sensors such as roadside cameras; second, information transmitted from autonomous vehicles; and third, information perceived from the environment.

We reconstruct and restore these three types of data 1:1, which yields digital simulation cases on the order of thousands or tens of thousands, and with the automated toolchain up to hundreds of thousands. With this raw data we do two things. First, parameter generalization: we extend the point data locally so one case becomes many, enabling further analysis. Second, we extract driving behavior from the data, the interactions between vehicles and between vehicles and people; through deep learning we can train the traffic agents and make the traffic flow simulation more realistic.
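Parameter generalization, turning one recorded case into a family of variants, can be sketched as a sweep over perturbations of the recorded parameters. The parameter names (`ego_speed`, `cut_in_gap`) and the delta ranges are hypothetical; a real toolchain would draw them from the scenario format in use.

```python
# Sketch: generalizing one recorded case into many test variants by
# perturbing its parameters. Names and ranges are illustrative.
import itertools

base_case = {"ego_speed": 60, "cut_in_gap": 15, "rain": False}

def generalize(case, speed_deltas=(-10, 0, 10), gap_deltas=(-5, 0, 5)):
    variants = []
    for ds, dg in itertools.product(speed_deltas, gap_deltas):
        v = dict(case)                       # copy, keep untouched fields
        v["ego_speed"] = case["ego_speed"] + ds
        v["cut_in_gap"] = case["cut_in_gap"] + dg
        variants.append(v)
    return variants

cases = generalize(base_case)  # 9 variants from one recorded case
```

Cross-products like this are how thousands of recorded cases can grow into the hundreds of thousands mentioned above.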

Next, I will share with you three practical engineering practices.

The first is traffic accident case reconstruction and simulation testing. In the virtual test environment, we reproduce a traffic accident, and by restoring it we can analyze its causes.

The second is that we can connect an autonomous driving algorithm to the dangerous case and check whether, if the new technology were adopted, such dangerous accidents could be avoided, and when the algorithm should engage.

Let me introduce the traffic accident shown on the right. A cement tanker was approaching from a distance and had to turn into its cement plant. A car was coming from the opposite direction; its driver saw the tanker ahead about to cross the road. Because both were moving fast, the car did not decelerate, assuming the tanker would clear the road quickly. Each driver felt the other could pass. Unfortunately, as the tanker turned, its driver saw an electric bicycle crossing in the bicycle lane, braked hard, and stopped in the middle of the road, so the car had no time to brake and hit the tanker.

The cause of this accident is very typical: neither driver could observe the full activity of all participants in the traffic scene. The tanker driver could not see the electric bicycle before turning, and the car driver could not predict that the truck would brake suddenly.

The outcome could change if both vehicles were supported by simulation. With simulation, together with 5G or V2X technology, such accidents can be avoided. This is the problem we have been working on; in the videos we made, you can see such accidents being effectively avoided.

The second engineering practice I will share is multi-vehicle interactive virtual testing. This case was provided by one of our OEM customers. As you can see, there are four steps.

In the first step, we will restore a virtual scene.

Secondly, we connect the vehicle's autonomous driving algorithm to the virtual environment.

Thirdly, we set up disturbing vehicles to interfere with the ego vehicle's autonomous driving algorithm.

Finally, we analyze whether the self-driving car shows good resistance to interference.

In the picture on the right, the ego vehicle's perspective is the view of the autonomous vehicle.

You can see P1 in front of the ego vehicle performing a blocking maneuver by changing lanes, after which the ego vehicle brakes hard. The lower-right picture shows the third car, the observation car, following the first two; from its view you can see the two cars interacting ahead of it. In total there are three cars: one manually driven and two autonomous. Such situations do occur in reality, but testing them at a proving ground would be very dangerous; we use the advantages of simulation to compute and simulate this dangerous situation.

The third case is a camera hardware- and software-in-the-loop test we built. In fact, camera-in-the-loop applications for L2 lane recognition in ADAS appeared relatively early (and are relatively mature). You can see two test methods: one is the black-box method, the other is direct RGB data injection into the algorithm. We have implemented both in two project cases.

The biggest difference from the earlier L2 case is that, besides the algorithm being different, we must do pixel-level semantic segmentation. This is the biggest change, so the requirements on the simulation are very high, and the situation is very different from the traditional one.

So what pain point of the OEMs is camera hardware-in-the-loop trying to solve? OEMs cannot access the camera supplier's algorithm; they can only read its recognition rate and some indicators, which makes effective testing very difficult.

Therefore, through simulation testing, by creating more dangerous cases and more environmental variation, we verify whether the algorithm provided by the supplier is stable and whether its performance meets the requirements. This is the real purpose of camera hardware-in-the-loop. Vehicles will carry more cameras in the future, so demand for component-level hardware-in-the-loop testing should be very strong.

After these three simple engineering cases, let's look at the room for development of simulation applications, that is, the areas where autonomous driving simulation technology can be applied.

We can provide a simulation training set for algorithm companies and a test evaluation system for the supplier.

When Robotaxi fleets appear, the simulation platform will focus on feeding operational data into a data center and become an operations management system. So when it comes to vehicle fleets, the system can be upgraded to an operations management platform.

At the same time, the simulator includes road elements such as traffic-signal control and decision logic, vehicle control signals, intelligent traffic agents, and traffic flow; extending it to simulation strategy in the transportation field is clearly a natural transition.

Optimization of larger-scale transportation systems can also support other industries, such as aircraft simulation, military simulation, and general robotics.

Here is a brief introduction to our understanding of smart traffic in China.

Roads in China differ from those abroad, which is why we speak of "intelligent connected vehicles" (in Chinese). "Connected" refers to using richer roadside information to give the vehicle better, or secondary, perception and so ensure its safety. The system contains in-vehicle data, roadside data, environmental data, and so on, all of which are physical data.

With more input conditions and data, the system can be upgraded to a higher level and become a simulation analysis system. It can contain digital data such as autonomous driving simulations, traffic flow, and historical road conditions. We can set a variable in the system to analyze traffic situations, to see whether a situation is dangerous and how likely it is to occur; the system also supports reviews.

We can also control the system through the cloud, which can be used for emergency plan management, automatic identification, and triggering actions. When this closed-loop system runs smoothly for a period of time, it can become a database for the entire life cycle of smart traffic.

We extended the simulation of the autonomous driving vehicle to the transportation field.

We can provide the following capabilities, for example a basis for evaluation: when building a bridge or a tunnel, or judging whether road signs are clear enough, we can use the system to make a prediction and get feedback.

For another example, traffic control measures can also be evaluated. With traffic flow inputs, the system can perform traffic operation analysis and short-term forecasting, supporting short-term public information notices and nationwide traffic announcements.

The figure shows a management system for a smart community. The left picture shows road sensors in the community: cameras, 5G communication modules, and millimeter-wave radar. The platform can save historical files and data for further use.

For example, a self-driving vehicle operating in the demonstration area can be connected to this visualization platform, which collects vehicle-to-vehicle (V2X) communication data and roadside data. So this is more like a simulation complex built on digital twin technology.

The above is basically all the products and solutions we offer.

As our company develops, we hope to bring more vitality to the industry. Thank you all!

The following is the Q&A section.

Q1/ In the large-scale test, how to inject the required traffic flow information? And how to accurately implement vehicle-vehicle interaction (smart cars and non-smart cars) in a virtual test environment?

There are two ways to inject: one is random traffic flow, and the other is case-based defined traffic flow.

For random traffic flow, we can adjust traffic volume and specify vehicle routes, vehicle types, and driving modes. Once configured, the system generates a random traffic flow. On top of open-source software, we developed many internal tools to realize this traffic flow.

For case-defined traffic flow, we can customize the opponent vehicle's trajectory. We pre-edit a case and a triggering mechanism to realize the traffic flow case by case.
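A case-defined flow typically ties an opponent maneuver to a trigger condition evaluated each simulation tick. The sketch below is a minimal, hypothetical illustration of such a one-shot trigger; the field names, distance threshold, and event string are assumptions, not 51Sim-One's actual API.

```python
# Sketch: a trigger that fires an opponent maneuver once, when the ego
# vehicle closes within a given distance. Names are hypothetical.

def make_cut_in_trigger(trigger_distance=20.0):
    fired = {"done": False}   # one-shot latch shared across calls
    def trigger(ego_pos, opponent_pos):
        if not fired["done"] and abs(opponent_pos - ego_pos) < trigger_distance:
            fired["done"] = True
            return "start_cut_in"   # event consumed by the scenario runner
        return None
    return trigger

trigger = make_cut_in_trigger()
# Evaluate the trigger as the ego vehicle approaches a parked opponent
# at position 100.0; it fires exactly once, at the third tick.
events = [trigger(ego, 100.0) for ego in (50.0, 70.0, 85.0, 95.0)]
```

Chaining several such triggers is one way a pre-edited case can reproduce a specific interaction deterministically, in contrast to the random flow described above.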

Q2/ The 51VR test results do not directly help upgrade the in-car AI as in a racing game. Is there cooperation in this regard, for example to accelerate testing?

In fact, there are two ways to accelerate.

One is real-time simulation computed at a higher frequency; this depends mainly on the CPU clock frequency to achieve acceleration.

The other is parallel computing (the traditional way). Basically, the load is balanced across all available computing power: more machine nodes can share your test tasks and compute simultaneously, instead of waiting for one task to finish before starting the next.

To sum up: the more nodes in your CPU cluster or computer group, the stronger your total computing power; and the stronger your single-machine computing power, the faster each calculation.

Q3/ What’s your opinion on importing actual road data scene library into a simulator?

If the scenario library is built entirely from actual road acquisition, obviously you can only do regression verification in the simulator, because your algorithm is basically stable; you can compare different parameters against the vehicle's configuration at the time of recording.

Such recorded scenes lack reactive interaction between vehicles, but they are a good basis for algorithm regression. I am not sure whether that answers your question. We import basically a 1:1 reconstruction of the target into the simulation library. Besides algorithm regression, we can also extract dangerous cases or edge cases, though the probability of such cases is lower.

We can also generalize scenes from these edge cases; finding new cases is part of the point of collecting actual road data. There are restrictions, though: when you collect data from a single vehicle, you can only see the cars around it. Roadside collection, by contrast, can capture chain reactions at a larger scale and may yield richer information than single-vehicle collection, just like the cement truck example I shared earlier.

Q4/ The simulation tool introduced by 51VR effectively improves simulation speed by deploying on a cloud platform, and analyzing the simulation results exceeds manual processing capacity. Therefore, I understand the 51VR tool should be capable of automated evaluation.

I would like to ask: how does 51VR automatically analyze and evaluate simulation results?

Our large-scale simulations are fully automated; there may be tens of thousands of cases running automatically. The premise is that we preset the evaluation criteria, generate reports automatically, and filter at the same time, just like DOE analysis of an experiment. The cases you see are the top-level results; you can take the failed cases, or the cases you care about, find the local optimum, and pull them out for analysis instead of going through them one by one. Basically, this is the process of experimental analysis.

Automated simulation evaluation basically means you preset the standard evaluation criteria for the target, and we perform the evaluation automatically against criteria such as "crossing the stop line", "crossing the stop line and running a red light", intrusion into a dangerous following distance, or rapid deceleration. The computer can generate customized reports on all these indicators automatically.
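Such criteria amount to predicates evaluated over the logged trajectory of each case. The sketch below illustrates the idea; the log field names, thresholds, and criterion names are illustrative assumptions that mirror the examples above, not the platform's actual schema.

```python
# Sketch: automated pass/fail evaluation over one case's trajectory log.
# Each log frame is assumed to carry position, light state, following
# gap, and deceleration; thresholds are illustrative.

def evaluate(log, stop_line=100.0, min_gap=2.0, max_decel=6.0):
    report = {
        # Crossed the stop line while the light was red.
        "ran_red_light": any(f["pos"] > stop_line and f["light"] == "red"
                             for f in log),
        # Intruded into a dangerous following distance.
        "dangerous_distance": any(f["gap"] < min_gap for f in log),
        # Braked harder than the comfort/safety limit.
        "harsh_braking": any(f["decel"] > max_decel for f in log),
    }
    report["passed"] = not any(report.values())  # fail on any violation
    return report
```

Running `evaluate` over every case in a batch and filtering on `passed` yields exactly the kind of automatic filtering and report generation described, leaving only the failed or interesting cases for manual review.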

How effective this is depends on how efficiently the engineer designs the experiment; that lies beyond the software's functionality and in the experiment design itself.

A leading digital solution provider specialized in VR and AI