
A cyclist in Austin, Texas, had an awkward encounter with a Google self-driving car last month. He happened to approach a four-way stop at the same time as the automated vehicle. The self-driving car reached the stop sign slightly before him, so he intended to wait for it to clear the intersection before proceeding. While waiting, the cyclist did what is called a track stand: remaining on his bike with his feet on the pedals, balancing in place. Road cyclists do track stands because their shoes clip into their pedals, which improves the efficiency of their pedal stroke but can make stopping cumbersome. As the cyclist tried to maintain his balance, he rocked back and forth on his bike. Unfortunately, the Google self-driving car, which had started to move through the intersection, was unsure what to make of this and kept abruptly stopping whenever it sensed that the cyclist was moving. The dance continued for two minutes, until the safety drivers inside the self-driving car were able to command the car to proceed.


This incident was first shared on RoadBikeReview.com and subsequently reported on in Robotics Trend. Brad Templeton, a self-driving car consultant, explained why this situation was so tricky: four-way stops are not as rule-governed as we would like to believe. You cannot simply teach a self-driving car that when it stops first, it gets to go first, because some drivers will illegally jump the queue and go when it is not their turn. Conversely, overly cautious drivers might wait for cars that stopped after them. Therefore, a self-driving car must be taught to wait its turn, but also to stop if another car jumps the queue and to go if the other cars are taking too long. In the case of the cyclist, the self-driving car likely interpreted the slight movements of his track stand as an indication that he might jump the queue.
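The turn-taking logic Templeton describes can be sketched as a simple decision rule. To be clear, this is an illustrative toy, not Google's actual software: the `Vehicle` fields and the stall timeout are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    arrival_order: int   # 0 = arrived at the stop line first
    is_moving: bool      # sensed motion near the intersection
    wait_time: float     # seconds spent stopped so far

# Hypothetical threshold: how long we wait for a stalled car
# before assuming it is yielding its turn.
STALL_TIMEOUT = 4.0

def should_proceed(me: Vehicle, others: list[Vehicle]) -> bool:
    """Decide whether to enter a four-way stop.

    Encodes the three rules from the article: take your turn,
    stop if someone jumps the queue, and go if the cars ahead
    of you are taking too long.
    """
    # Yield to anyone already moving into the intersection,
    # even if they arrived after us (a queue jumper) -- this is
    # the rule the track-standing cyclist kept triggering.
    if any(o.is_moving for o in others):
        return False
    # Cars that arrived before us and are still waiting.
    ahead = [o for o in others if o.arrival_order < me.arrival_order]
    # Proceed if it is our turn, or if everyone ahead has stalled.
    return all(o.wait_time > STALL_TIMEOUT for o in ahead)
```

In this sketch, the cyclist's rocking would flip `is_moving` to true on every sensor update, so `should_proceed` would keep returning false, which matches the stop-and-start dance the article describes.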

Does this incident mean that Google's self-driving cars are unsafe? Probably not. All reported accidents involving a Google self-driving car have been caused by human drivers. For example, on July 1, 2015, a distracted driver behind a Google self-driving car drove into its rear bumper after failing to notice that traffic had stopped. Even the cyclist in this incident reported feeling safe. "I felt safer dealing with a self-driving car than a human-operated one," he said on RoadBikeReview.com. In reality, 90% of all car accidents are caused by human error. Self-driving cars, on the other hand, do not rubberneck or brake unpredictably, and can react in milliseconds, as opposed to normal human reaction times of about three seconds.

So how do self-driving cars work? Self-driving cars use radar, LiDAR (light detection and ranging), and cameras to sense their surroundings. Radar senses objects far away; LiDAR senses objects in mid-range; and cameras spot nearby objects. Once the car locates the objects in its environment, it calculates their likely trajectories in order to plot a course around them. Because it is impossible to predict those movements perfectly, the car calculates many different possibilities and then uses probabilistic algorithms to decide how to proceed.
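That last step, sampling many possible futures and weighing their probabilities, can be illustrated with a toy Monte Carlo estimate. Nothing here reflects Google's actual algorithms; the function names, noise model, and thresholds are invented for illustration.

```python
import random

def predict_positions(pos, vel, dt, steps, noise=0.5):
    """Roll out one possible trajectory for a tracked object,
    randomly perturbing its velocity at each step to model
    uncertainty about what it will do next."""
    x, y = pos
    vx, vy = vel
    traj = []
    for _ in range(steps):
        vx += random.gauss(0, noise)
        vy += random.gauss(0, noise)
        x += vx * dt
        y += vy * dt
        traj.append((x, y))
    return traj

def collision_probability(own_traj, obj_pos, obj_vel,
                          samples=500, radius=2.0):
    """Monte Carlo estimate: the fraction of sampled object
    trajectories that come within `radius` metres of our own
    planned path at the same time step."""
    hits = 0
    for _ in range(samples):
        obj_traj = predict_positions(obj_pos, obj_vel,
                                     dt=0.1, steps=len(own_traj))
        if any((ox - sx) ** 2 + (oy - sy) ** 2 < radius ** 2
               for (sx, sy), (ox, oy) in zip(own_traj, obj_traj)):
            hits += 1
    return hits / samples
```

A planner built on this idea would compute such an estimate for every candidate course and pick one whose collision probability falls below some safety threshold, braking when none does.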

The Google self-driving car can distinguish between cars, cyclists, pedestrians, and even traffic cones. Google has been testing and refining the software that controls the car for six years and over a million miles, giving their cars the equivalent of 75 years of driving experience. The vision of the project is to create safe roads with cars that can pick people up at the push of a button and deliver them safely to their destination, though they might delay a track-standing cyclist in the process.

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Science News, Vol. 152, No. 11 (1997), pp. 168-169
Society for Science & the Public