
Shaping Safety at Toyota with Big Data, Surrogate Grass and Collaboration

Sept. 23, 2020
Regional quirks, roadside obstructions and sprinting e-scooters are all part of the data Rini Sherony gathers for deep learning.

Rini Sherony studies not just how cars behave in their environment, but how the environment behaves around cars. As the lead for Toyota’s university research collaboration on active safety/advanced driver assistance systems, she studies details like how safety cameras interpret the light reflecting off a particular pattern of grass growing on the roadside (and how to recreate that reading with “surrogate” grass made of polyurethane-based material and specialized paint). And how drivers and pedestrians behave (sometimes counterintuitively) when they encounter the “New York left turn” or the “Michigan U-turn.”

Among her recent projects, Sherony—who is senior principal engineer for Toyota’s Collaborative Safety Research Center—collaborated with Indiana University-Purdue University Indianapolis (IUPUI) and others to develop human test mannequins that do a better job of simulating pedestrians walking at different speeds and angles. The mannequins, which have a “skin” readable by camera, lidar and radar sensors during road testing, are modeled on data measurements from human volunteers, including kids.

Sherony’s team also recently worked with Virginia Tech to predict the benefits of available automotive safety technology in 2050, when the most advanced safety features available today are projected to finally be in every car on the road. In addition, Sherony is active in committees for developing SAE standards for active safety/ADAS.

The data from CSRC’s projects is publicly available, so automakers, policymakers, research universities and tech companies can come together to solve problems.

Sherony talked with IndustryWeek about the limits of safety technology now and in the future, the challenges ahead, and how she stumbled upon her new e-scooter research project while dropping her daughter off at college.

Rini Sherony

It looks like you're working on a few projects right now—are there things that you want to highlight?

In one project that just ended, we looked at, “What are the remaining safety issues in 2050, when all the cars on the road are projected to have all the crash avoidance and passive safety systems?” Approximately 46% of crashes today would still occur in 2050. Now we’re looking at the data more in depth, so we can understand why those crashes were not avoided or mitigated. When we understand what happened, then we can develop a countermeasure. Without in-depth understanding, it’s very difficult to know what you need a sensor to do, what detection, what algorithm.
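As a purely illustrative sketch of how that kind of fleet-wide projection works (the crash categories, counts and effectiveness rates below are hypothetical, not CSRC or Virginia Tech figures), the share of remaining crashes can be estimated by applying an assumed per-crash-type avoidance rate to today's crash counts:

```python
# Purely illustrative: project how many of today's crashes would remain
# once every vehicle has a given safety feature, by applying an assumed
# avoidance/mitigation rate per crash type. All numbers and categories
# below are hypothetical, not CSRC or Virginia Tech results.
crashes_today = {          # annual crashes by type (hypothetical)
    "rear_end": 1_000_000,
    "road_departure": 600_000,
    "intersection": 800_000,
}
effectiveness = {          # assumed avoidance/mitigation rate per type
    "rear_end": 0.7,
    "road_departure": 0.4,
    "intersection": 0.5,
}

remaining = {k: n * (1 - effectiveness[k]) for k, n in crashes_today.items()}
share_remaining = sum(remaining.values()) / sum(crashes_today.values())
print(f"Share of today's crashes still occurring: {share_remaining:.0%}")
```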

I just finished another project with MIT, and we made the big data from that project publicly available. It's called the DriveSeg database. The project goal was to develop a segmentation algorithm based on deep learning: a camera-based system that, when it's running on a real road, can separate different classes of objects, on the road but also on the roadside—people, bicycles and trees, for instance. That segmentation is really, really important: What is your imminent threat and how will you address that? The big data from that is made available for any researcher to take, and they can train their own internal algorithm if they want to use it.
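For readers who want a feel for what pixel-level segmentation of a road scene involves, here is a minimal sketch that uses an off-the-shelf pretrained DeepLabV3 model from torchvision rather than the MIT project's own algorithm; the image path is a placeholder. It assigns each pixel a class and counts pixels per class:

```python
# A generic camera-based segmentation pass, not the MIT DriveSeg algorithm:
# assign each pixel of one road-scene frame to a class, then count pixels
# per class. "road_frame.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("road_frame.jpg").convert("RGB")
batch = preprocess(frame).unsqueeze(0)          # shape: [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]                # shape: [1, num_classes, H, W]
class_map = logits.argmax(dim=1).squeeze(0)     # per-pixel class index

# Pixel counts per predicted class (VOC-style labels: person, bicycle, car, ...)
for cls_id, count in zip(*torch.unique(class_map, return_counts=True)):
    print(int(cls_id), int(count))
```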

Then, I just started a project a couple of months ago on e-scooters. I first encountered them when I was dropping my daughter off at college at the University of Michigan a few years ago. They were everywhere—some kids were riding two or three on one scooter.

My colleagues and I started thinking, “This is going to be a big safety issue because the riders don’t follow any path. They just come in at high speed. They’re not wearing helmets.” After a lot of discussion, we started this research project to understand the behavior. We have to cluster and classify the behaviors because then we can do countermeasures to offset those behaviors. I’m doing that project with IUPUI, our long-term collaboration partner. They are doing a major data collection to understand the e-scooter–vehicle interaction and the safety issues—so that we can not only develop safety countermeasures, but also convey that information to the regulatory agencies, the city planners that allow these e-scooters, and even scooter makers so they can understand what safety issues there are and how they can improve.

Are there any things that you’ve learned so far about people's behavior around e-scooters?

Definitely one is the crossing scenario. A lot of times with scooters, people do not cross at designated crosswalks or intersections; they just come flying down. And sometimes they just go in the middle of the road. And since their speed is much faster than if you were walking, it's difficult to even know that they are coming, they just come so quickly. So we can totally see that that's going to be one key issue we have to address and develop countermeasures for.
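As a rough illustration of what clustering those crossing behaviors could look like (this is not the IUPUI team's actual pipeline, and the features and values are assumptions), the sketch below groups hypothetical e-scooter encounters by approach speed, crossing angle and distance from the nearest designated crossing, using scikit-learn's KMeans:

```python
# A rough illustration (not the IUPUI pipeline): cluster e-scooter
# crossing encounters by a few hypothetical features so that distinct
# behavior patterns can later get their own countermeasures.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: approach speed (m/s), crossing angle vs. the crosswalk (deg),
# distance from the nearest designated crossing (m). Values are made up.
encounters = np.array([
    [1.4,  5.0,  2.0],   # walking-speed crossing near the crosswalk
    [1.6,  8.0,  1.0],
    [1.2,  3.0,  0.5],
    [5.8, 40.0, 25.0],   # fast, oblique, mid-block entry
    [6.2, 35.0, 30.0],
    [4.9, 60.0, 18.0],
])

# Scale so speed, angle and distance contribute comparably.
X = StandardScaler().fit_transform(encounters)

# k=2 simply separates "crosswalk-like" from "fast mid-block" behavior here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g., [0 0 0 1 1 1]
```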

Testing e-scooters.

And then, with the MIT project, you're looking at different obstacles on the road.

The MIT project looked at pedestrians, bicycles, other cars, trees and people crossing with strollers or on the side of the road, just to see how you accurately segment them in real-world driving conditions. Especially in the U.S., the number-one source of fatal crashes is road departure; that means people depart the road and then either impact something on the side of the road or roll over. When we analyzed the roads where these crashes happen across the entire country, we found that 40% of the roads don't have good lane markings, or don't have any lane markings at all. So, the systems in general—the first generation of systems, which use lane markings—are not going to be fully effective in all of these scenarios.

Then we needed to understand how we determine the road and roadside objects, so that could be added into the detection feature. You have to test with these “real” roadside objects—real from the sensors' perspective, but not a real guardrail or real concrete divider, because then it's a safety issue. So you have to create a surrogate you can test against, so that you know exactly what your performance is and how to improve it.
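To illustrate the idea of falling back on detected roadside objects where lane markings are missing (a toy sketch only, not Toyota's production logic), the code below assumes some upstream perception step has already estimated the lateral offsets of roadside objects such as a grass edge or guardrail, and warns when the vehicle's predicted lateral drift would cross the nearest one:

```python
# Toy sketch only (not a production Toyota feature): if an upstream
# perception step has estimated the lateral offsets of roadside objects
# (grass edge, guardrail, curb), warn when the vehicle's predicted
# lateral drift would cross the nearest one on the drifting side.
from dataclasses import dataclass

@dataclass
class RoadsideDetection:
    kind: str                # e.g., "grass_edge", "guardrail"
    lateral_offset_m: float  # + = right of vehicle centerline, - = left

def departure_warning(detections, lateral_speed_mps,
                      horizon_s=2.0, margin_m=0.5):
    # Constant-velocity prediction of lateral drift over the horizon.
    drift = lateral_speed_mps * horizon_s
    same_side = [d.lateral_offset_m for d in detections
                 if (d.lateral_offset_m > 0) == (drift > 0)]
    if drift == 0 or not same_side:
        return False         # no drift, or no roadside reference on that side
    nearest = min(same_side, key=abs)
    return abs(drift) + margin_m >= abs(nearest)

# Example: drifting right at 0.4 m/s toward a grass edge 1.0 m away.
detections = [RoadsideDetection("grass_edge", 1.0),
              RoadsideDetection("guardrail", -3.5)]
print(departure_warning(detections, lateral_speed_mps=0.4))  # True -> warn
```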

So we developed a concrete divider, a guardrail, a concrete curb and several versions of grass surrogates, which have the same camera, radar and lidar characteristics as the real objects, and they can be used for testing. I just started an SAE group to standardize some of these roadside surrogates. Basically, at the end of it, we will come up with a recommendation document that shows, if you are going to test with these roadside objects, what the size, length and color should be, and how they should be built.

Testing surrogate grass.

What do you see as the biggest hurdles to getting to the next level with advanced crash avoidance?

We continue to work on improving and adding new features for road departure. Lane-marking-based systems are in production, but we are adding new features so that in the absence of lane markings, you can know when the car is going to depart the road. Then, in addition to that, we're looking into different intersection systems. Intersections are also very critical: high fatalities and injuries, and intersection crashes are usually very, very complex. In fact, you cannot even put intersection accidents into one category. When you analyze all the crash data, you have four or five different variations—you have to look at each of them.

But another big challenge to all of this is humans, both inside and outside the vehicle. I'm sure you're familiar with the different levels of automation for crash avoidance systems—some of them would fall under either Level One or Level Two. And even Level Three, which is a much higher level, still needs human engagement. So when the system needs to hand over, when it's outside its ODD (operational design domain), when it's only giving a warning, or in a scenario where it's not supposed to work, a human needs to effectively take over immediately and drive the vehicle. And there are a lot of issues with that. Regular drivers don't pay attention, they are not engaged—especially when they are not doing all the driving.

Then outside the vehicle you also see a lot of variation in behavior, from pedestrians and bicyclists to e-scooter riders. And the human behavior is so different, even within the same country, even region to region. How somebody behaves in Ann Arbor is not how somebody behaves in Boston.

We have done many data collections in different cities and in the rural area around Ann Arbor. And there's such diverse behavior. It is very challenging to have a system work with humans when you have such diverse behavior across the country. And forget about trying to do this globally—that's a whole other issue.

And that will continue to be a challenge for a while, but that's why a lot of data collection is necessary in real-world driving conditions: to capture all of these behaviors, then do data mining, then apply machine learning and deep learning algorithms to recognize different clusters of behavior and train the system, so that next time it sees a particular situation, the system is able to recognize, “OK, this is potentially a safety situation.”
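Building on the clustering idea above, a minimal sketch of that "recognize it next time" step might look like the following; the features, sample values and the notion of which clusters count as risky are all assumptions for illustration, not the actual trained system:

```python
# Illustration of the "recognize it next time" step, with hypothetical
# features (approach speed m/s, lateral offset m, time-to-crossing s):
# fit behavior clusters offline, decide which clusters are risky, then
# flag new observations that fall into a risky cluster at runtime.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

training = np.array([
    [1.3, 0.5, 6.0], [1.5, 0.8, 5.5], [1.1, 0.4, 7.0],   # slow, far in time
    [5.9, 3.0, 1.2], [6.4, 2.5, 1.0], [5.2, 3.5, 1.5],   # fast, imminent
])
scaler = StandardScaler().fit(training)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    scaler.transform(training))

# Mark the cluster containing a known fast/imminent example as risky.
risky = {int(clusters.predict(scaler.transform([[6.0, 3.0, 1.1]]))[0])}

# Runtime: a new observation is flagged if it lands in a risky cluster.
new_obs = scaler.transform([[5.7, 2.8, 1.3]])
if int(clusters.predict(new_obs)[0]) in risky:
    print("Potential safety situation recognized")
```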

What would be an example of a different behavior from region to region?

For example, on New York roads, there is a very complex left turn. It is at an angle—sometimes it has five lanes instead of four, with a fifth lane coming from only one direction. All of these are controlled, sometimes by traffic lights and sometimes by stop signs. To take a turn there is extremely complex. The car has to know exactly where everybody is and which direction they're going, track them, and decide what's the safest way to do it. Versus in Michigan, we have the wonderful Michigan U-turn, which is not common in many places. So knowing that somebody is going to suddenly take a U-turn and come into close proximity is very challenging.

Can you highlight some of the partnerships you’re bringing to the table?

The partnerships we have with different universities are immensely critical to success. From Toyota's side, we bring in complex real-world problems we need to solve. From the partner side, they bring in expertise to break down those problems and develop countermeasures. Our university partners always say, “Hey, we are very good at writing AI algorithms, but we don't know what to use them for. We don't know what problem to solve, but you guys bring some amazing problems.” Working together organically, that collaboration, is amazing for me, because you get to work not only with smart people, but with all these young people who are so passionate, who feel so great about working on something that will help improve road safety and help save people's lives. They can bring in some amazing out-of-the-box ideas. This collaboration with universities—with the students, graduate students and professors—is very, very critical for us to produce meaningful results.

Can you think of an example when a student or someone at the university came up with an out-of-the-box idea?

Some of the first projects we worked on with CSRC were with Indiana University-Purdue University Indianapolis. One of the first tasks was to develop a pedestrian mannequin representative of real people, and it was not easy. We did have real people come into the lab and scanned them with radar and all the other sensors. But to develop a mannequin that not only looks like a person but is also articulated (when people walk, their hands and legs move in a certain way), and to make the mannequin do that accurately, was very, very challenging. One student took the mannequin home, put it in his yard and changed different things to make it move ... It was so realistic his neighbors thought it was an intruder!

Steve the pedestrian dummy has 'skin' that emits a radar signature.

Do you have an idea of how the open-source information is being used? Do you have dialogue with the other organizations that are using it?

Oh yes, absolutely. One example is the SAE standards group, which I co-chaired with a professor. Other automakers were part of the task force—there were GM, Ford, Daimler and suppliers. We provided all the details and, together with their feedback, developed the standards. We also had many meetings with the National Highway Traffic Safety Administration, which is also developing testing protocols for some of these methods. They used some of our mannequins for their testing. And for a bicyclist mannequin we developed, we ended up doing a global harmonization effort with the European new car assessment program, Euro NCAP, and the suppliers who are building it. They ended up changing part of their mannequin's design because some of the adaptations we made were a lot more realistic to what you see in real conditions, although it was much harder to build.

So yes, we have worked very closely, not only in the U.S. but globally, with other OEMs, suppliers and regulatory agencies to provide this information, because we want everybody to learn from it and make the systems better, make the standards better. So overall we can together achieve a lot of crash and injury reduction and make the roads safer.

