Saturday, September 1, 2007

swarm theory


A single ant or bee isn't smart, but their colonies are. The study of swarm intelligence is providing insights that can help humans manage complex systems, from truck routing to military robots.



I used to think ants knew what they were doing. The ones marching across my kitchen counter looked so confident, I just figured they had a plan, knew where they were going and what needed to be done. How else could ants organize highways, build elaborate nests, stage epic raids, and do all the other things ants do?


Turns out I was wrong. Ants aren't clever little engineers, architects, or warriors after all-at least not as individuals. When it comes to deciding what to do next, most ants don't have a clue. "If you watch an ant try to accomplish something, you'll be impressed by how inept it is," says Deborah M. Gordon, a biologist at Stanford University.


How do we explain, then, the success of Earth's 12,000 or so known ant species? They must have learned something in 140 million years.


"Ants aren't smart," Gordon says. "Ant colonies are." A colony can solve problems unthinkable for individual ants, such as finding the shortest path to the best food source, allocating workers to different tasks, or defending a territory from neighbors. As individuals, ants might be tiny dummies, but as colonies they respond quickly and effectively to their environment. They do it with something called swarm intelligence.


Where this intelligence comes from raises a fundamental question in nature: How do the simple actions of individuals add up to the complex behavior of a group? How do hundreds of honeybees make a critical decision about their hive if many of them disagree? What enables a school of herring to coordinate its movements so precisely it can change direction in a flash, like a single, silvery organism? The collective abilities of such animals-none of which grasps the big picture, but each of which contributes to the group's success-seem miraculous even to the biologists who know them best. Yet during the past few decades, researchers have come up with intriguing insights.


One key to an ant colony, for example, is that no one's in charge. No generals command ant warriors. No managers boss ant workers. The queen plays no role except to lay eggs. Even with half a million ants, a colony functions just fine with no management at all-at least none that we would recognize. It relies instead upon countless interactions between individual ants, each of which is following simple rules of thumb. Scientists describe such a system as self-organizing.


Consider the problem of job allocation. In the Arizona desert where Deborah Gordon studies red harvester ants (Pogonomyrmex barbatus), a colony calculates each morning how many workers to send out foraging for food. The number can change, depending on conditions. Have foragers recently discovered a bonanza of tasty seeds? More ants may be needed to haul the bounty home. Was the nest damaged by a storm last night? Additional maintenance workers may be held back to make repairs. An ant might be a nest worker one day, a trash collector the next. But how does a colony make such adjustments if no one's in charge? Gordon has a theory.


Ants communicate by touch and smell. When one ant bumps into another, it sniffs with its antennae to find out if the other belongs to the same nest and where it has been working. (Ants that work outside the nest smell different from those that stay inside.) Before they leave the nest each day, foragers normally wait for early morning patrollers to return. As patrollers enter the nest, they touch antennae briefly with foragers.


"When a forager has contact with a patroller, it's a stimulus for the forager to go out," Gordon says. "But the forager needs several contacts no more than ten seconds apart before it will go out."


To see how this works, Gordon and her collaborator Michael Greene of the University of Colorado at Denver captured patroller ants as they left a nest one morning. After waiting half an hour, they simulated the ants' return by dropping glass beads into the nest entrance at regular intervals-some coated with patroller scent, some with maintenance worker scent, some with no scent. Only the beads coated with patroller scent stimulated foragers to leave the nest. Their conclusion: Foragers use the rate of their encounters with patrollers to tell if it's safe to go out. (If you bump into patrollers at the right rate, it's time to go foraging. If not, better wait. It might be too windy, or there might be a hungry lizard waiting out there.) Once the ants start foraging and bringing back food, other ants join the effort, depending on the rate at which they encounter returning foragers.


"A forager won't come back until it finds something," Gordon says. "The less food there is, the longer it takes the forager to find it and get back. The more food there is, the faster it comes back. So nobody's deciding whether it's a good day to forage. The collective is, but no particular ant is."


That's how swarm intelligence works: simple creatures following simple rules, each one acting on local information. No ant sees the big picture. No ant tells any other ant what to do. Some ant species may go about this with more sophistication than others. (Temnothorax albipennis, for example, can rate the quality of a potential nest site using multiple criteria.) But the bottom line, says Iain Couzin, a biologist at Oxford and Princeton Universities, is that no leadership is required. "Even complex behavior may be coordinated by relatively simple interactions," he says.


Inspired by the elegance of this idea, Marco Dorigo, a computer scientist at the Université Libre de Bruxelles, used his knowledge of ant behavior in 1991 to create mathematical procedures for solving particularly complex human problems, such as routing trucks, scheduling airlines, or guiding military robots.


In Houston, for example, a company named American Air Liquide has been using an ant-based strategy to manage a complex business problem. The company produces industrial and medical gases, mostly nitrogen, oxygen, and hydrogen, at about a hundred locations in the United States and delivers them to 6,000 sites, using pipelines, railcars, and 400 trucks. Deregulated power markets in some regions (the price of electricity changes every 15 minutes in parts of Texas) add yet another layer of complexity.


"Right now in Houston, the price is $44 a megawatt for an industrial customer," says Charles N. Harper, who oversees the supply system at Air Liquide. "Last night the price went up to $64, and Monday when the cold front came through, it went up to $210." The company needed a way to pull it all together.


Working with the Bios Group (now NuTech Solutions), a firm that specialized in artificial intelligence, Air Liquide developed a computer model based on algorithms inspired by the foraging behavior of Argentine ants (Linepithema humile), a species that deposits chemical substances called pheromones.


"When these ants bring food back to the nest, they lay a pheromone trail that tells other ants to go get more food," Harper explains. "The pheromone trail gets reinforced every time an ant goes out and comes back, kind of like when you wear a trail in the forest to collect wood. So we developed a program that sends out billions of software ants to find out where the pheromone trails are strongest for our truck routes."


Ants had evolved an efficient method to find the best routes in their neighborhoods. Why not follow their example? So Air Liquide combined the ant approach with other artificial intelligence techniques to consider every permutation of plant scheduling, weather, and truck routing-millions of possible decisions and outcomes a day. Every night, forecasts of customer demand and manufacturing costs are fed into the model.


"It takes four hours to run, even with the biggest computers we have," Harper says. "But at six o'clock every morning we get a solution that says how we're going to manage our day."


For truck drivers, the new system took some getting used to. Instead of delivering gas from the plant closest to a customer, as they used to do, drivers were now asked to pick up shipments from whichever plant was making gas at the lowest delivered price, even if it was farther away.


"You want me to drive a hundred miles? To the drivers, it wasn't intuitive," Harper says. But for the company, the savings have been impressive. "It's huge. It's actually huge."


Other companies also have profited by imitating ants. In Italy and Switzerland, fleets of trucks carrying milk and dairy products, heating oil, and groceries all use ant-foraging rules to find the best routes for deliveries. In England and France, telephone companies have made calls go through faster on their networks by programming messages to deposit virtual pheromones at switching stations, just as ants leave signals for other ants to show them the best trails.


In the U.S., Southwest Airlines has tested an ant-based model to improve service at Sky Harbor International Airport in Phoenix. With about 200 aircraft a day taking off and landing on two runways and using gates at three concourses, the company wanted to make sure that each plane got in and out as quickly as possible, even if it arrived early or late.


"People don't like being only 500 yards away from a gate and having to sit out there until another aircraft leaves," says Doug Lawson of Southwest. So Lawson created a computer model of the airport, giving each aircraft the ability to remember how long it took to get into and away from each gate. Then he set the model in motion to simulate a day's activity.


"The planes are like ants searching for the best gate," he says. But rather than leaving virtual pheromones along the way, each aircraft remembers the faster gates and forgets the slower ones. After many simulations, using real data to vary arrival and departure times, each plane learned how to avoid an intolerable wait on the tarmac. Southwest was so pleased with the outcome, it may use a similar model to study the ticket counter area.



WHEN IT COMES TO SWARM intelligence, ants aren't the only insects with something useful to teach us. On a small, breezy island off the southern coast of Maine, Thomas Seeley, a biologist at Cornell University, has been looking into the uncanny ability of honeybees to make good decisions. With as many as 50,000 workers in a single hive, honeybees have evolved ways to work through individual differences of opinion to do what's best for the colony. If only people could be as effective in boardrooms, church committees, and town meetings, Seeley says, we could avoid many of the problems we have making decisions in our own lives.


During the past decade, Seeley, Kirk Visscher of the University of California, Riverside, and others have been studying colonies of honeybees (Apis mellifera) to see how they choose a new home. In late spring, when a hive gets too crowded, a colony normally splits, and the queen, some drones, and about half the workers fly a short distance to cluster on a tree branch. There the bees bivouac while a small percentage of them go searching for new real estate. Ideally, the site will be a cavity in a tree, well off the ground, with a small entrance hole facing south, and lots of room inside for brood and honey. Once a colony selects a site, it usually won't move again, so it has to make the right choice.


To find out how, Seeley's team applied paint dots and tiny plastic tags to identify all 4,000 bees in each of several small swarms that they ferried to Appledore Island, home of the Shoals Marine Laboratory. There, in a series of experiments, they released each swarm to locate nest boxes they'd placed on one side of the half-mile-long (0.8 kilometer) island, which has plenty of shrubs but almost no trees or other places for nests.


In one test they put out five nest boxes, four that weren't quite big enough and one that was just about perfect. Scout bees soon appeared at all five. When they returned to the swarm, each performed a waggle dance urging other scouts to go have a look. (These dances include a code giving directions to a box's location.) The strength of each dance reflected the scout's enthusiasm for the site. After a while, dozens of scouts were dancing their little feet off, some for one site, some for another, and a small cloud of bees was buzzing around each box.



The decisive moment didn't take place in the main cluster of bees, but out at the boxes, where scouts were building up. As soon as the number of scouts visible near the entrance to a box reached about 15-a threshold confirmed by other experiments-the bees at that box sensed that a quorum had been reached, and they returned to the swarm with the news.


"It was a race," Seeley says. "Which site was going to build up 15 bees first?"


Scouts from the chosen box then spread through the swarm, signaling that it was time to move. Once all the bees had warmed up, they lifted off for their new home, which, to no one's surprise, turned out to be the best of the five boxes.


The bees' rules for decision-making-seek a diversity of options, encourage a free competition among ideas, and use an effective mechanism to narrow choices-so impressed Seeley that he now uses them at Cornell as chairman of his department.


"I've applied what I've learned from the bees to run faculty meetings," he says. To avoid going into a meeting with his mind made up, hearing only what he wants to hear, and pressuring people to conform, Seeley asks his group to identify all the possibilities, kick their ideas around for a while, then vote by secret ballot. "It's exactly what the swarm bees do, which gives a group time to let the best ideas emerge and win. People are usually quite amenable to that."


In fact, almost any group that follows the bees' rules will make itself smarter, says James Surowiecki, author of The Wisdom of Crowds. "The analogy is really quite powerful. The bees are predicting which nest site will be best, and humans can do the same thing, even in the face of exceptionally complex decisions." Investors in the stock market, scientists on a research project, even kids at a county fair guessing the number of beans in a jar can be smart groups, he says, if their members are diverse, independent minded, and use a mechanism such as voting, auctioning, or averaging to reach a collective decision.


Take bettors at a horse race. Why are they so accurate at predicting the outcome of a race? At the moment the horses leave the starting gate, the odds posted on the pari-mutuel board, which are calculated from all bets put down, almost always predict the race's outcome: Horses with the lowest odds normally finish first, those with second lowest odds finish second, and so on. The reason, Surowiecki says, is that pari-mutuel betting is a nearly perfect machine for tapping into the wisdom of the crowd.
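The arithmetic behind the tote board is simple, which is part of why it works so well as a collective forecast. Here is a toy sketch with invented bet amounts and an invented track takeout, showing how the posted odds are just the crowd's pooled bets restated as probabilities.

```python
# Pari-mutuel odds as a crowd forecast: the money bet on each horse acts
# like a weighted vote. Bet amounts and the takeout are invented.
bets = {"Horse 1": 5000.0, "Horse 2": 3000.0, "Horse 3": 2000.0}
TAKEOUT = 0.15  # fraction the track keeps

gross_pool = sum(bets.values())
net_pool = gross_pool * (1.0 - TAKEOUT)
for horse, amount in bets.items():
    implied_probability = amount / gross_pool       # the crowd's estimate
    odds_to_one = net_pool / amount - 1.0           # payout per dollar beyond the stake
    print(f"{horse}: crowd probability {implied_probability:.0%}, "
          f"odds about {odds_to_one:.1f}-to-1")
```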


"If you ever go to the track, you find a really diverse group, experts who spend all day perusing daily race forms, people who know something about some kinds of horses, and others who are betting at random, like the woman who only likes black horses," he says. Like bees trying to make a decision, bettors gather all kinds of information, disagree with one another, and distill their collective judgment when they place their bets.


That's why it's so rare to win on a long shot.



THERE'S A SMALL PARK near the White House in Washington, D.C., where I like to watch flocks of pigeons swirl over the traffic and trees. Sooner or later, the birds come to rest on ledges of buildings surrounding the park. Then something disrupts them, and they're off again in synchronized flight.


The birds don't have a leader. No pigeon is telling the others what to do. Instead, they're each paying close attention to the pigeons next to them, each bird following simple rules as they wheel across the sky. These rules add up to another kind of swarm intelligence-one that has less to do with making decisions than with precisely coordinating movement.


Craig Reynolds, a computer graphics researcher, was curious about what these rules might be. So in 1986 he created a deceptively simple steering program called boids. In this simulation, generic birdlike objects, or boids, were each given three instructions: 1) avoid crowding nearby boids, 2) fly in the average direction of nearby boids, and 3) stay close to nearby boids. The result, when set in motion on a computer screen, was a convincing simulation of flocking, including lifelike and unpredictable movements.
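Reynolds's three rules are compact enough to sketch directly. The following is not his original code, just a minimal two-dimensional version of separation, alignment, and cohesion; the neighborhood radius and rule weights are arbitrary choices for illustration.

```python
import math

# Minimal 2-D boids update: avoid crowding (separation), match heading
# (alignment), and stay close to neighbors (cohesion).
NEIGHBOR_RADIUS = 50.0
SEPARATION_DIST = 10.0
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0   # illustrative weights

def step(boids, dt=0.1):
    """boids: list of dicts with 'x', 'y', 'vx', 'vy'. Returns the next state."""
    new = []
    for b in boids:
        neighbors = [o for o in boids if o is not b and
                     math.hypot(o["x"] - b["x"], o["y"] - b["y"]) < NEIGHBOR_RADIUS]
        ax = ay = 0.0
        if neighbors:
            n = len(neighbors)
            cx = sum(o["x"] for o in neighbors) / n    # center of nearby boids
            cy = sum(o["y"] for o in neighbors) / n
            avx = sum(o["vx"] for o in neighbors) / n  # average neighbor velocity
            avy = sum(o["vy"] for o in neighbors) / n
            ax += W_COH * (cx - b["x"]) + W_ALI * (avx - b["vx"])  # cohesion + alignment
            ay += W_COH * (cy - b["y"]) + W_ALI * (avy - b["vy"])
            for o in neighbors:                        # separation: push away when crowded
                d = math.hypot(o["x"] - b["x"], o["y"] - b["y"])
                if 0 < d < SEPARATION_DIST:
                    ax += W_SEP * (b["x"] - o["x"]) / d
                    ay += W_SEP * (b["y"] - o["y"]) / d
        new.append({"x": b["x"] + b["vx"] * dt, "y": b["y"] + b["vy"] * dt,
                    "vx": b["vx"] + ax * dt, "vy": b["vy"] + ay * dt})
    return new

# Usage: start ten boids in a line and let the rules shape the flock.
flock = [{"x": i * 5.0, "y": 0.0, "vx": 1.0, "vy": 0.0} for i in range(10)]
for _ in range(100):
    flock = step(flock)
```

Nothing in the update looks at the flock as a whole; each boid reacts only to the boids near it, yet the group-level motion looks coordinated.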


At the time, Reynolds was looking for ways to depict animals realistically in TV shows and films. (Batman Returns in 1992 was the first movie to use his approach, portraying a swarm of bats and an army of penguins.) Today he works at Sony doing research for games, such as an algorithm that simulates in real time as many as 15,000 interacting birds, fish, or people.


By demonstrating the power of self-organizing models to mimic swarm behavior, Reynolds was also blazing the trail for robotics engineers. A team of robots that could coordinate its actions like a flock of birds could offer significant advantages over a solitary robot. Spread out over a large area, a group could function as a powerful mobile sensor net, gathering information about what's out there. If the group encountered something unexpected, it could adjust and respond quickly, even if the robots in the group weren't very sophisticated, just as ants are able to come up with various options by trial and error. If one member of the group were to break down, others could take its place. And, most important, control of the group could be decentralized, not dependent on a leader.


"In biology, if you look at groups with large numbers, there are very few examples where you have a central agent," says Vijay Kumar, a professor of mechanical engineering at the University of Pennsylvania. "Everything is very distributed: They don't all talk to each other. They act on local information. And they're all anonymous. I don't care who moves the chair, as long as somebody moves the chair. To go from one robot to multiple robots, you need all three of those ideas."


Within five years Kumar hopes to put a networked team of robotic vehicles in the field. One purpose might be as first responders. "Let's say there's a 911 call," he says. "The fire alarm goes off. You don't want humans to respond. You want machines to respond, to tell you what's happening. Before you send firemen into a burning building, why not send in a group of robots?"

Taking this idea one step further, Marco Dorigo's group in Brussels is leading a European effort to create a "swarmanoid," a group of cooperating robots with complementary abilities: "foot-bots" to transport things on the ground, "hand-bots" to climb walls and manipulate objects, and "eye-bots" to fly around, providing information to the other units.

The military is eager to acquire similar capabilities. On January 20, 2004, researchers released a swarm of 66 pint-size robots into an empty office building at Fort A. P. Hill, a training center near Fredericksburg, Virginia. The mission: Find targets hidden in the building.

Zipping down the main hallway, the foot-long (0.3 meter) red robots pivoted this way and that on their three wheels, resembling nothing so much as large insects. Eight sonars on each unit helped them avoid collisions with walls and other robots. As they spread out, entering one room after another, each robot searched for objects of interest with a small, Web-style camera. When one robot encountered another, it used wireless network gear to exchange information. ("Hey, I've already explored that part of the building. Look somewhere else.")

In the back of one room, a robot spotted something suspicious: a pink ball in an open closet (the swarm had been trained to look for anything pink). The robot froze, sending an image to its human supervisor. Soon several more robots arrived to form a perimeter around the pink intruder. Within half an hour, all six of the hidden objects had been found. The research team conducting the experiment declared the run a success. Then they started a new test.

The demonstration was part of the Centibots project, an investigation to see if as many as a hundred robots could collaborate on a mission. If they could, teams of robots might someday be sent into a hostile village to flush out terrorists or locate prisoners; into an earthquake-damaged building to find victims; onto chemical-spill sites to examine hazardous waste; or along borders to watch for intruders. Military agencies such as DARPA (Defense Advanced Research Projects Agency) have funded a number of robotics programs using collaborative flocks of helicopters and fixed-wing aircraft, schools of torpedo-shaped underwater gliders, and herds of unmanned ground vehicles. But at the time, this was the largest swarm of robots ever tested.

"When we started Centibots, we were all thinking, this is a crazy idea, it's impossible to do," says Régis Vincent, a researcher at SRI International in Menlo Park, California. "Now we're looking to see if we can do it with a thousand robots."


IN NATURE, OF COURSE, animals travel in even larger numbers. That's because, as members of a big group, whether it's a flock, school, or herd, individuals increase their chances of detecting predators, finding food, locating a mate, or following a migration route. For these animals, coordinating their movements with one another can be a matter of life or death.

"It's much harder for a predator to avoid being spotted by a thousand fish than it is to avoid being spotted by one," says Daniel Grünbaum, a biologist at the University of Washington. "News that a predator is approaching spreads quickly through a school because fish sense from their neighbors that something's going on."

When a predator strikes a school of fish, the group is capable of scattering in patterns that make it almost impossible to track any individual. It might explode in a flash, create a kind of moving bubble around the predator, or fracture into multiple blobs, before coming back together and swimming away.

Animals on land do much the same, as Karsten Heuer, a wildlife biologist, observed in 2003, when he and his wife, Leanne Allison, followed the vast Porcupine caribou herd (Rangifer tarandus granti) for five months. Traveling more than a thousand miles (1,600 kilometers) with the animals, they documented the migration from winter range in Canada's northern Yukon Territory to calving grounds in Alaska's Arctic National Wildlife Refuge.

"It's difficult to describe in words, but when the herd was on the move it looked very much like a cloud shadow passing over the landscape, or a mass of dominoes toppling over at the same time and changing direction," Karsten says. "It was as though every animal knew what its neighbor was going to do, and the neighbor beside that and beside that. There was no anticipation or reaction. No cause and effect. It just was."

One day, as the herd funneled through a gully at the tree line, Karsten and Leanne spotted a wolf creeping up. The herd responded with a classic swarm defense.


"As soon as the wolf got within a certain distance of the caribou, the herd's alertness just skyrocketed," Karsten says. "Now there was no movement. Every animal just stopped, completely vigilant and watching." A hundred yards (90 meters) closer, and the wolf crossed another threshold. "The nearest caribou turned and ran, and that response moved like a wave through the entire herd until they were all running. Reaction times shifted into another realm. Animals closest to the wolf at the back end of the herd looked like a blanket unraveling and tattering, which, from the wolf's perspective, must have been extremely confusing." The wolf chased one caribou after another, losing ground with each change of target. In the end, the herd escaped over the ridge, and the wolf was left panting and gulping snow.

For each caribou, the stakes couldn't have been higher, yet the herd's evasive maneuvers displayed not panic but precision. (Imagine the chaos if a hungry wolf were released into a crowd of people.) Every caribou knew when it was time to run and in which direction to go, even if it didn't know exactly why. No leader was responsible for coordinating the rest of the herd. Instead each animal was following simple rules evolved over thousands of years of wolf attacks.

That's the wonderful appeal of swarm intelligence. Whether we're talking about ants, bees, pigeons, or caribou, the ingredients of smart group behavior-decentralized control, response to local cues, simple rules of thumb-add up to a shrewd strategy to cope with complexity.

"We don't even know yet what else we can do with this," says Eric Bonabeau, a complexity theorist and the chief scientist at Icosystem Corporation in Cambridge, Massachusetts. "We're not used to solving decentralized problems in a decentralized way. We can't control an emergent phenomenon like traffic by putting stop signs and lights everywhere. But the idea of shaping traffic as a self-organizing system, that's very exciting."

Social and political groups have already adopted crude swarm tactics. During mass protests eight years ago in Seattle, anti-globalization activists used mobile communications devices to spread news quickly about police movements, turning an otherwise unruly crowd into a "smart mob" that was able to disperse and re-form like a school of fish.

The biggest changes may be on the Internet. Consider the way Google uses group smarts to find what you're looking for. When you type in a search query, Google surveys billions of Web pages on its index servers to identify the most relevant ones. It then ranks them by the number of pages that link to them, counting links as votes (the most popular sites get weighted votes, since they're more likely to be reliable). The pages that receive the most votes are listed first in the search results. In this way, Google says, it "uses the collective intelligence of the Web to determine a page's importance."
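Google's actual ranking system is far more elaborate, but the link-as-vote idea described above is the essence of PageRank and can be sketched on a toy, invented link graph: each page's score is spread among the pages it links to, so a link from a popular page counts as a heavier vote.

```python
# Toy sketch of link voting (the essence of PageRank) on an invented graph.
links = {                     # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
DAMPING = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):           # iterate until the scores settle
    new_rank = {page: (1.0 - DAMPING) / len(links) for page in links}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share    # pass on a weighted vote
    rank = new_rank

print(sorted(rank, key=rank.get, reverse=True))  # most "important" pages first
```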

Wikipedia, a free collaborative encyclopedia, has also proved to be a big success, with millions of articles in more than 200 languages about everything under the sun, each of which can be contributed by anyone or edited by anyone. "It's now possible for huge numbers of people to think together in ways we never imagined a few decades ago," says Thomas Malone of MIT's new Center for Collective Intelligence. "No single person knows everything that's needed to deal with problems we face as a society, such as health care or climate change, but collectively we know far more than we've been able to tap so far."

Such thoughts underline an important truth about collective intelligence: Crowds tend to be wise only if individual members act responsibly and make their own decisions. A group won't be smart if its members imitate one another, slavishly follow fads, or wait for someone to tell them what to do. When a group is being intelligent, whether it's made up of ants or attorneys, it relies on its members to do their own part. For those of us who sometimes wonder if it's really worth recycling that extra bottle to lighten our impact on the planet, the bottom line is that our actions matter, even if we don't see how.

Think about a honeybee as she walks around inside the hive. If a cold wind hits the hive, she'll shiver to generate heat and, in the process, help to warm the nearby brood. She has no idea that hundreds of workers in other parts of the hive are doing the same thing at the same time to the benefit of the next generation.

"A honeybee never sees the big picture any more than you or I do," says Thomas Seeley, the bee expert. "None of us knows what society as a whole needs, but we look around and say, oh, they need someone to volunteer at school, or mow the church lawn, or help in a political campaign."

If you're looking for a role model in a world of complexity, you could do worse than to imitate a bee.







