Updated: Aug 26, 2019
This episode is a live recording of our interview with German Ros at the CVPR 2019 conference. German Ros is a Research Scientist at Intel Intelligent Systems Labs. Prior to joining Intel Labs, Ros was a Research Scientist at Toyota Research Institute and has advised companies such as Yandex, Drive.ai, and Volkswagen on machine learning. Ros received his PhD in Computer Science from the Autonomous University of Barcelona.
Ros also co-organized the CARLA autonomous driving challenge, in which teams submitted autonomous agents to be tested on the cloud. During the interview, he discussed the CARLA autonomous driving challenge and the gap between the industry and academic research in the field.
Robin.ly is a content platform dedicated to helping engineers and researchers develop leadership, entrepreneurship, and AI insights to scale their impacts in the new tech era.
Wenli: Thank you so much for joining us here. Can you tell us about yourself?
German Ros: I'm German Ros. I'm a research scientist. I have been one of the co-organizers of the CARLA autonomous driving challenge, and I'm also the leader of the CARLA Simulator team.
Wenli: Can you tell us a little bit more about this challenge?
German Ros: The CARLA autonomous driving challenge happens on the cloud. And the purpose is simple: to have teams submit autonomous agents that are able to move from a starting location to a destination while going through very challenging traffic situations, like intersections and roundabouts. So they have to deal with all these complex scenarios in a safe way.
The challenge has many different tasks. We have a set of different towns and different weather conditions. Basically, we evaluate how good the different agents are in terms of traffic safety and infractions, and whether they are also able to complete the route.
Wenli: Interesting. What are the criteria that you use to evaluate each team? Are you timing them?
German Ros: We're not timing them. We don't think it's about time; we think it's about safety. So basically, we're considering whether they were able to reach the target location, the destination, and also the number of infractions they make. For example, we account for collisions with other vehicles, running red lights, and running stop signs. Basically, we discount points for all those infractions.
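The scoring scheme he describes, completion credit reduced by a penalty for each infraction, could be sketched roughly as below. The penalty multipliers, infraction names, and function are hypothetical illustrations, not the official CARLA leaderboard values:

```python
# Hypothetical sketch of an infraction-based driving score, in the spirit of
# the scoring German Ros describes. Multipliers are illustrative only.
PENALTIES = {
    "collision_vehicle": 0.60,  # multiplier applied per collision with a vehicle
    "red_light": 0.70,          # per red light run
    "stop_sign": 0.80,          # per stop sign ignored
}

def driving_score(route_completion: float, infractions: dict) -> float:
    """route_completion is in [0, 1]; infractions maps infraction type -> count.

    Each infraction multiplies the score by its penalty factor, so safer
    runs keep more of their route-completion credit.
    """
    score = route_completion
    for kind, count in infractions.items():
        score *= PENALTIES.get(kind, 1.0) ** count
    return score
```

Under this sketch, an agent that completes the full route but runs one red light would score 1.0 × 0.70 = 0.70, while a perfectly safe full run keeps a score of 1.0.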
Wenli: What type of groups are in the competition? Are they university students or PhD students?
German Ros: There is some of everything. There are many, many labs. We had more than 200 participants register for the challenge, organized into about 60 teams. We see participants from labs all around the world. For example, in the workshop, we invited some of the top participants to join us. They came from Japan, China, Brazil, France; they're from all around the world. And you see undergrad students who are super passionate and motivated, PhD students, postdocs, and senior researchers. It's very interesting.
Wenli: How long did you give them to prepare?
German Ros: Actually, this kind of thing takes forever. They didn't have much time; I think they just had a couple of months. But the teams reacted very well, and they worked super hard.
Wenli: How did they collect enough data?
German Ros: We provided them with some maps and some mechanisms for collecting data. We sent them baseline code they could use to acquire data, along with routes that reflect situations similar to the evaluation. Based on that, they were able to start with some advantages, so they didn't have to start from scratch.
Wenli: Going back to the workshop you were organizing, I wanted to know: when you were inviting speakers, who did you invite?
German Ros: For us, the concept we had in mind was to bring a good representation of what's happening in industry and academia, so we wanted to balance those two. For academia, we wanted the most important labs in Europe and in the US, at least, to be represented in the workshop. We decided to invite people like Andreas Geiger, who has contributed to the autonomous driving community for a long time. He is the creator of the very famous KITTI benchmark, which has really brought a lot of progress to the autonomous driving community.
We invited senior professors like Trevor Darrell from UC Berkeley. He has been really fostering the development of autonomous driving with all his activities; for example, Berkeley released a really amazing autonomous driving dataset. We felt they could represent the academic side well.
And then for the industry side, it is clear that Tesla, Uber, and Waymo are the big players. We were trying to get representation from those companies at a scientific level. So we brought Raquel Urtasun from Uber, who is the chief scientist in charge of Uber ATG. From Waymo, we brought Drago Anguelov, who is one of the perception leaders. And from Tesla, we had Andrej Karpathy, who is leading the Autopilot effort.
Wenli: That's really an amazing thing to be able to invite so many big names in the industry.
German Ros: Yeah, they were very kind to participate, and actually some of them are sponsoring the challenge; Uber and Waymo are sponsoring it. We are very grateful to them for helping us make this happen.
Wenli: I know there are a few corporations out there that are hosting such challenges in different fields in computer science. I know this is only the first year, but do you have some amazing results from the challenge?
German Ros: We have seen a lot of progress. I'm very happy about all the progress that has happened over the last two years, especially because of the motivation the challenge provides. I remember that a couple of years ago, from the academic perspective, when we tried to do navigation in autonomous driving, we were barely able to follow the lane. It took a lot of work. It wasn't a commodity to do this kind of thing.
Now I see teams from all around the world really dealing with very complex situations, like intersections, roundabouts, huge highways, and different scenarios. For me, seeing that this is becoming a commodity, and that everyone can do it, is amazing. I never expected that. Now the focus is on safety and on dealing with more complex situations. For example, what if you have an adversarial agent on the road, like a car that is trying to provoke an accident? How is your vehicle going to deal with that situation? We're moving on to new challenges, and I think that's pretty exciting.
Wenli: I know that you are a research scientist at Intel yourself. How has your work been applied to the commercial side?
German Ros: Basically, we try to anticipate what's going to be needed. For example, there are more classical approaches in autonomous driving now based on modularity: people create a perception module, a planner, some sort of forecasting mechanism, and a controller. And we're trying to ask: what will happen if that needs to change fundamentally? Do we need to think of different ways of approaching the problem? Can we be more data-driven? For instance, we have done a lot of research on agents that are trained without full supervision.
One big project that I'm working on, in collaboration with Berkeley, is trying to do exactly that. We have a simulator, in this case CARLA. We train our agents to learn how to drive and cope with different traffic situations in simulation, and we have techniques to bring that AI model back to the car without any change. The car, the agent, doesn't need to see any real data; by experiencing the virtual world, it becomes able to navigate in the real world. I think that's pretty exciting: how far can we get by learning from simulation? Of course, at some point we will need real data, but I think it's very promising to see that most of the things you need to learn in order to drive safely can be learned in a simulator or a video game.
Wenli: Nice, it is exciting. What are some differences that you noticed between the academic world and the industry? Any gap?
German Ros: I think that's a very interesting topic. We have recently noticed that the gap between industry and academia is increasing, basically because industry has a lot of resources in terms of access to data and access to hardware, so they can collect millions of miles. They have large teams.
What we have seen is that companies like Waymo or Uber are able to deal with more complex cases. They are already exploring the "long tail", as we call it, while academia is still learning how to cope with simpler situations. Academia has recently moved from follow-the-rules situations to more complicated traffic scenarios where agents have to deal with dynamic objects and so on. But they're still not facing the same problems that Uber, Waymo, and other companies are facing: what those companies call the "long tail".
Wenli: What are some long-term effects of this situation? And how do we solve this?
German Ros: I'm concerned that if innovation comes only from industry, it is going to be pretty limited. I really believe that autonomous driving is a problem that is far from solved. If we really want to expedite a solution, we should all be collaborating. But if part of the research power, the part that sits in academia, doesn't have access to the same resources, how are they going to be able to help?
Wenli: Because in a lot of corporations, the data is not public I guess.
German Ros: Exactly. Academics don't have public data or public tools. I think we speak different languages; that's the problem. There are not enough standards, so we're trying to push for standard platforms and for ways to interchange data. Actually, that's the motivation behind the CARLA simulator. The challenge is also meant to provide a way to evaluate different agents under fair conditions, because so far, another problem is that it is very hard to tell the reality behind autonomous driving: different companies report different numbers, and they are out of context. It's impossible to put those numbers together in context. We need mechanisms for fair assessment. I think the CARLA challenge is a way to do that: send your stack, and all the stacks get evaluated under the same conditions. Then you can really understand the whole picture.
Wenli: Will you still be hosting the workshop and also organizing the challenge next year?
German Ros: Yes. We think this is important for the community. It can really benefit the research community to have these sorts of tools that allow you to understand where you stand globally. We plan to keep hosting the challenge for many years.
Wenli: Nice. What are some of the expectations or goals you're hoping to achieve in next year's challenge?
German Ros: We would definitely like to get more participants, but we would also like to encourage participants to share their code. We would like to see new teams not starting from scratch, but starting from AI stacks that worked well in the previous year and building on top of them, so that all together we can improve step by step. I would like to see real collaboration between teams. The idea is not just about competing; competing is just our way to foster innovation. We also want to share; we want to create a community where people are able to share these tools and improve all together.
Wenli: It's really meaningful, what you're working on, devoting your time to helping with this challenge and sharing it with the community.
German Ros: Thank you very much.
Wenli: Thank you so much for joining us.