The results we present here are from a simulation of mobile agents that live in a network of interconnected nodes and work cooperatively to build a map of that network. We chose a network topology consistent with related work by our colleagues; the nodes in the system are modeled as radio-frequency transceivers distributed throughout a two-dimensional space. Because the nodes are relatively short-range transceivers, this type of network is generically similar to many other common types. In particular, the average packet in such a system requires multiple hops to travel from source to destination and, relatedly, resident mobile agents need to be quite peripatetic to gather connectivity data effectively.
The goal of the agents is to map the network they inhabit: to learn the connectivity of the nodes and build a model of the network topology. An agent can map the system by simply traveling around the network and learning about connectivity firsthand. Agents can also cooperate: when two agents travel to the same node, they can share information with each other about their maps of the network and thereby collaborate in the mapping task.
Our model is implemented with a simple discrete-event, time-step based simulation engine. At every step of simulated time, an agent does three things. First, the agent learns about all of the edges on the node it is on. This information, which the agent itself has experienced, is added to the agent's store of firsthand knowledge. Second, it learns everything it can from all of the other agents currently on the same node. This information, learned from another agent, is stored separately as ``rumored'' facts. Finally, the agent chooses another node and moves there.
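The three-phase step described above can be sketched in Java as follows. This is a minimal illustration under assumed data structures; the `Node` and `Agent` classes, the string edge keys, and the random movement placeholder are our own simplifications, not the authors' implementation.

```java
import java.util.*;

class Node {
    final int id;
    final List<Node> neighbors = new ArrayList<>();
    final List<Agent> occupants = new ArrayList<>();
    Node(int id) { this.id = id; }
}

class Agent {
    Node location;
    // Edges the agent has observed itself (firsthand knowledge).
    final Set<String> firsthand = new HashSet<>();
    // Edges learned from other agents, kept separately as "rumors".
    final Set<String> rumored = new HashSet<>();
    final Random rng = new Random();

    // Canonical key for an undirected edge, e.g. "3-17".
    static String edgeKey(Node a, Node b) {
        return Math.min(a.id, b.id) + "-" + Math.max(a.id, b.id);
    }

    void step() {
        // 1. Learn all edges incident to the current node, firsthand.
        for (Node n : location.neighbors)
            firsthand.add(edgeKey(location, n));
        // 2. Exchange knowledge with co-located agents; their facts are rumors.
        for (Agent other : location.occupants) {
            if (other == this) continue;
            rumored.addAll(other.firsthand);
            rumored.addAll(other.rumored);
        }
        rumored.removeAll(firsthand);
        // 3. Move to an adjacent node (random choice as a placeholder policy).
        List<Node> nbrs = location.neighbors;
        Node next = nbrs.get(rng.nextInt(nbrs.size()));
        location.occupants.remove(this);
        next.occupants.add(this);
        location = next;
    }
}
```

Keeping firsthand and rumored facts in separate sets matters below: the conscientious policy consults only the former, while the superconscientious policy consults both.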
Our experiments test three different algorithms governing the movement of agents. As a baseline we first implemented ``random'' agents, which simply move to a random adjacent node every update. ``Conscientious'' agents are more sophisticated, choosing at each time step to move to an adjacent node that they have never visited or have visited least recently. This algorithm is similar to a depth-first search of the network. Conscientious agents base their movement decisions only on firsthand knowledge, ignoring information learned from other agents when determining where to go. In contrast, the third type of agent, ``superconscientious'' agents, also move preferentially to nodes that have not been explored, but they use both their own experience and learned data from their peers in deciding which nodes to move to.
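The difference between the three policies can be made concrete with a small sketch. We assume each agent records the time step at which it last visited each node, with rumored visit times kept separately; all identifiers here are illustrative, not taken from the authors' code. Treating "never visited" as time -1 makes both the never-visited and least-recently-visited preferences fall out of a single minimum.

```java
import java.util.*;

class Mover {
    final Random rng = new Random();
    // Time step at which this agent last visited each node, firsthand.
    final Map<Integer, Integer> ownVisits = new HashMap<>();
    // Visit times learned from peers; used only by the superconscientious policy.
    final Map<Integer, Integer> rumoredVisits = new HashMap<>();

    // Random policy: any adjacent node, uniformly.
    int randomMove(List<Integer> neighbors) {
        return neighbors.get(rng.nextInt(neighbors.size()));
    }

    // Conscientious policy: prefer never-visited neighbors (time -1),
    // then the least recently visited, using firsthand times only.
    int conscientiousMove(List<Integer> neighbors) {
        return neighbors.stream()
            .min(Comparator.comparingInt(n -> ownVisits.getOrDefault(n, -1)))
            .orElseThrow();
    }

    // Superconscientious policy: same preference, but a node counts as
    // visited if either the agent or any informant has been there.
    int superconscientiousMove(List<Integer> neighbors) {
        return neighbors.stream()
            .min(Comparator.comparingInt(n -> Math.max(
                ownVisits.getOrDefault(n, -1),
                rumoredVisits.getOrDefault(n, -1))))
            .orElseThrow();
    }
}
```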
Our model makes several important simplifications and assumptions. Among the most significant are:
Our simulation system consists of 1200 lines of Java code implementing a discrete event scheduler, a graphical view, a data-collection system, and the simulated objects themselves: network nodes and mobile agents. In order to compare results across population sizes and algorithms, we chose a single connected network consisting of 250 nodes with 2522 edges (figure 1) for all experiments. Each run of the simulator begins with a population of randomly placed agents and continues until every agent in the system has perfect knowledge -- complete information about every node and edge in the communication graph. We require perfect knowledge of all agents, rather than just one, because that is the point at which the entire system has converged; we believe this is the appropriate measure for an inherently distributed system. The time at which all of the agents have acquired perfect knowledge is called the ``finishing time''. We examine in detail the finishing times and the accumulation of knowledge about the map over time for each of the three kinds of agents.
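The stopping criterion above amounts to a simple check: a run finishes only when every agent's combined firsthand and rumored knowledge covers the whole edge set. A minimal sketch, assuming edges are represented as canonical string keys (the method name and representation are our assumptions, not the authors' API):

```java
import java.util.*;

class Convergence {
    // True when each agent's known-edge set contains every edge in the
    // network, i.e. when the whole system has converged.
    static boolean allAgentsConverged(Set<String> allEdges,
                                      List<Set<String>> knownPerAgent) {
        for (Set<String> known : knownPerAgent) {
            if (!known.containsAll(allEdges)) return false;
        }
        return true;
    }
}
```

The earliest time step at which this predicate becomes true is the finishing time recorded for a run.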
Figure 1: Network used for all experiments