News

Too Many Cooks, Or Too Many Robots?

Scientists quantify how to make crowded teams solve navigational problems

Key Takeaways

  • Harvard SEAS researchers show mathematically that when many robots share a space, adding a certain amount of randomness to their paths improves their efficiency.
  • Their study exemplifies how simple local rules can lead to the emergence of complex, self-organized task completion.
  • Their formulas could guide the design of robot swarms or crowded public spaces. 

Picture a futuristic swarm of robots deployed on a time-sensitive task, like cleaning up an oil spill or assembling a machine. At first, adding robots is advantageous, since many hands make light work. But a tipping point comes when too many crowd the space, getting in each other’s way and slowing the whole task down. 

It’s a deceptively simple too-many-cooks problem: Given a fixed area, how many robots should you deploy to optimize a task? Harvard applied mathematicians think they have an elegant solution. 

A study from the lab of L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Organismic and Evolutionary Biology, and Physics, combines mathematics, computer simulations, and experiments to show that in crowded environments, adding just the right amount of randomness, or “noise,” to how individuals move can ease gridlock and dramatically improve efficiency. It’s an example of how simple, local rules can lead to the emergence of complex task completion, with implications for the design of coordinated robotic fleets, crowded public spaces, and more. Published in Proceedings of the National Academy of Sciences, the study was led by applied mathematics Ph.D. student Lucy Liu.

Mathematical analysis of crowd density is notoriously complex because there are so many possible paths and interactions to consider, Liu said. To get around this difficulty, the researchers embraced the idea of randomness – treating each individual as a simple agent with a tunable amount of “wiggle” in its path.   

“This might be counterintuitive, because how could randomness make things easier to work with?” said Liu. “But in this case, when you have a lot of randomness, it becomes possible to take averages – average distances, average times, average behaviors. This makes it a lot easier to make predictions.” 

To test their ideas, the researchers built computer simulations of fleets of robots, or agents, each starting at a random position and assigned an equally random goal location. Once an agent reached its goal, it was immediately given a new destination, a setup meant to mimic fleets of robots or workers deployed on tasks.

Each agent headed toward its goal with an adjustable amount of wiggle in its path, or what the researchers called “noise.” With zero noise, the agents would march in straight lines; with high noise, they zigzagged aimlessly. The zigzagging, while inefficient, helped the agents slide around each other. 

By running large simulations, the team observed that if agents were allowed to beeline toward their goal locations, they formed dense traffic jams where everyone got stuck. If their movements were too random, traffic jams ceased, but the incessant wandering made them very inefficient. A Goldilocks zone of just the right amount of noise – agents bumping into each other and forming short-lived jams but still slipping past – kept the flow moving. 

The researchers used these observations to build mathematical formulas that could approximate “goal attainment rate” – how many goal destinations are reached per unit of time. Those formulas then allowed them to compute the optimal crowd density and noise levels to maximize output. 
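The setup described above can be sketched in a short toy simulation. The grid world, the cell-blocking rule, and the specific noise mechanism below are illustrative assumptions, not the paper's actual model; the sketch only shows how a tunable noise level trades goal-directed motion against jamming, and how a goal attainment rate (goals reached per agent per step) falls out of the simulation.

```python
import random

def simulate(n_agents, size=20, noise=0.3, steps=1000, seed=0):
    """Toy sketch (not the study's model): agents on a periodic grid
    step toward random goals. With probability `noise` an agent takes a
    random step instead of a greedy one; occupied cells block movement,
    so dense, low-noise crowds jam. Returns goals reached per agent per step."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(size) for y in range(size)]
    pos = rng.sample(cells, n_agents)           # distinct start positions
    goal = [rng.choice(cells) for _ in range(n_agents)]
    occupied = set(pos)
    reached = 0
    for _ in range(steps):
        for i in range(n_agents):
            x, y = pos[i]
            gx, gy = goal[i]
            if rng.random() < noise:
                # noisy move: a random neighboring cell
                dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            else:
                # greedy move: shortest direction toward the goal on the torus
                dx = 0 if x == gx else (1 if (gx - x) % size <= size // 2 else -1)
                dy = 0 if y == gy else (1 if (gy - y) % size <= size // 2 else -1)
                if dx and dy:  # move along one axis at a time
                    if rng.random() < 0.5:
                        dx = 0
                    else:
                        dy = 0
            nxt = ((x + dx) % size, (y + dy) % size)
            if nxt not in occupied:             # blocked cells create jams
                occupied.discard(pos[i])
                occupied.add(nxt)
                pos[i] = nxt
            if pos[i] == goal[i]:
                reached += 1
                goal[i] = rng.choice(cells)     # immediately reassign a new goal
    return reached / (n_agents * steps)
```

Sweeping `noise` at a fixed density in a sketch like this is one way to look for the Goldilocks zone the study describes: zero noise lets jams lock up, high noise wastes steps wandering, and intermediate values keep agents slipping past one another.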

To test whether their ideas would play out in the physical world, Liu and the team collaborated with physicist Federico Toschi at Eindhoven University of Technology in the Netherlands, where Liu helped set up swarms of small, wheeled robots in a lab outfitted with an overhead camera. 

Each robot carried a QR code so the camera could track its position and reassign it to new destinations. While the robots turned and moved more slowly and less precisely than in the computer simulations, the key emergent behaviors persisted.

The study confirmed a core theoretical insight: A powerful central computer or ultra-intelligent robots aren’t necessary to achieve coordinated tasks. A simple local set of navigational rules, at least up to certain densities, may be all you need. 

“Understanding how active matter, whether it is a swarm of ants, a herd of animals, or a group of robots, become functional and execute tasks in crowded environments using the principles of self-organization, is relevant to many questions in behavioral ecology,” Mahadevan said.  “Our study suggests strategies that might well be much broader than the instantiation we have focused on.”

Liu said she has always been drawn to research that focuses on the safe design of highly trafficked spaces. The study hints at a future where crowd dynamics could be mathematically predicted and tuned – whether the cooks in the kitchen are humans, robots, cars, or a mix of all.

Funding for the research came from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 2140743, along with grants from the Simons Foundation and the Henri Seydoux Fund.

Topics: Applied Mathematics, Research, Robotics

Scientist Profiles

L. Mahadevan

Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics

Press Contact

Anne J. Manning | amanning@seas.harvard.edu