The world is full of rankings and orderings. They appear in tennis, as when the French Open ends with a final ranking of champion players. They appear in pandemics, as when public health authorities record new infections and use contact tracing to sketch networks of COVID-19 transmission. Systems of competition, conflict, and contagion all create hierarchies.
However, these hierarchies are observed after the fact, which makes the true ordering of the system difficult to know. Who was really the best player? Who infected whom? “We can’t go back in time to know exactly how this happened,” says George Cantwell, a postdoctoral fellow at the Santa Fe Institute. You could build a model of the network and compare all possible outcomes, but such a brute-force approach quickly becomes intractable. Try to rank a group of just 60 participants, for example, and the number of possible orderings exceeds the estimated number of particles in the observable universe.
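The combinatorics behind that comparison are easy to check: the number of possible orderings of n participants is n factorial, and for n = 60 that already dwarfs the roughly 10^80 particles estimated in the observable universe. A quick sketch:

```python
import math

# The number of possible orderings (permutations) of n participants is n!.
def num_orderings(n: int) -> int:
    return math.factorial(n)

# For 60 participants, n! is on the order of 10^81 -- already larger
# than the ~10^80 particles estimated in the observable universe.
print(num_orderings(60) > 10**80)  # True
```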
In a recent paper published in Physical Review E, Cantwell worked with SFI computer scientist and mathematician Cris Moore to describe a new way to evaluate rankings. Their goal was not to find one true hierarchy, but to calculate the spread of all possible hierarchies, with each hierarchy weighted by its probability.
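On a toy scale, the idea of weighting every hierarchy by its probability can be sketched directly. The model below is a hypothetical stand-in for the paper's actual method: it assumes a higher-ranked player wins each match with a fixed probability, enumerates every ordering of a three-player group, and normalizes the likelihoods into a distribution over hierarchies.

```python
from itertools import permutations

# Hypothetical toy model: a higher-ranked player beats a lower-ranked
# one with probability 1 - P_UPSET. (An assumption for illustration,
# not the paper's actual model.)
P_UPSET = 0.25

# Observed matches as (winner, loser) pairs among players A, B, C.
matches = [("A", "B"), ("B", "C"), ("C", "A")]

def likelihood(order, matches, p_upset=P_UPSET):
    """Probability of the observed results if `order` (best first) were true."""
    rank = {player: i for i, player in enumerate(order)}
    like = 1.0
    for winner, loser in matches:
        like *= (1 - p_upset) if rank[winner] < rank[loser] else p_upset
    return like

# Weight every possible hierarchy by its likelihood, then normalize.
weights = {order: likelihood(order, matches) for order in permutations("ABC")}
total = sum(weights.values())
posterior = {order: w / total for order, w in weights.items()}

for order, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(" > ".join(order), round(prob, 3))
```

With the cyclic results used here (A beats B, B beats C, C beats A), no single ordering dominates, which is exactly the kind of uncertainty a distribution over hierarchies can capture.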
“Rather than be exactly wrong, we wanted to have some sense of how good the answers were and be approximately right,” says Cantwell. The new algorithm is inspired by physics: ranks are modeled as interacting entities that can move up and down. Through that lens, the system behaves like a physical system that can be analyzed using methods from spin glass theory.
Shortly after the COVID-19 pandemic began, Cantwell and Moore started thinking about models of how disease spreads through a network. They quickly recognized it as an ordering problem, a matter of reconstructing the sequence in which events unfolded over time, not unlike the spread of memes on social media or the emergence of championship rankings in professional sports. “How do you establish an ordering when the information is incomplete?” asks Cantwell.
They started by imagining a function that could score a ranking's accuracy. For example, a good ranking might agree with the outcomes of 98% of matches; a poor ranking might agree with only 10% of them, a result worse than flipping a coin with no prior knowledge.
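Such a scoring function is simple to write down. The sketch below uses a plain agreement score, an assumption for illustration rather than the authors' exact function: it counts the fraction of observed matches in which the higher-ranked player won.

```python
def agreement(order, matches):
    """Fraction of observed matches where the higher-ranked player won."""
    rank = {player: i for i, player in enumerate(order)}
    hits = sum(1 for winner, loser in matches if rank[winner] < rank[loser])
    return hits / len(matches)

# Hypothetical results: A dominates, while B and C split their games.
matches = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B")]

print(agreement(("A", "B", "C"), matches))  # 0.75
print(agreement(("C", "B", "A"), matches))  # 0.25 -- worse than a coin flip
```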
One problem with rankings is that they are usually discrete, following the integers 1, 2, 3, and so on. Such an ordering implies that the “distance” between the first- and second-ranked members is the same as the “distance” between the second- and third-ranked members. But that is not the case, says Cantwell. The top players in games around the world are close to one another in skill, so the differences between top-ranked players may be smaller than they look.
“Lower-ranked players can often beat higher-ranked ones. The only way the model can make sense of this and fit the data is to squeeze all the ranks closer together,” says Cantwell.
Cantwell and Moore described a system for scoring rankings based on a continuous numbering scheme. A ranking can assign any real number (an integer, a fraction, an infinitely repeating decimal) to each player in the network. “Continuous numbers are manageable,” says Cantwell, and the continuous values can still be converted back into a discrete ranking.
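One common way to make continuous scores concrete (an illustrative assumption here, not necessarily the paper's formula) is a Bradley-Terry-style model: each player gets a real-valued score, the probability of winning grows smoothly with the score gap, and close scores mean near-coin-flip matches.

```python
import math

# Hypothetical continuous scores: A and B are nearly tied, C trails.
scores = {"A": 1.9, "B": 1.7, "C": 0.2}

def win_prob(a, b, scores):
    """Probability that player a beats player b, via a logistic curve
    on the score difference (a Bradley-Terry-style assumption)."""
    return 1.0 / (1.0 + math.exp(scores[b] - scores[a]))

print(round(win_prob("A", "B", scores), 3))  # A only slightly favored over B
print(round(win_prob("A", "C", scores), 3))  # A heavily favored over C

# The continuous scores still convert back to a discrete ranking:
discrete = sorted(scores, key=scores.get, reverse=True)
print(discrete)  # ['A', 'B', 'C']
```

Because A's and B's scores nearly coincide, the model treats their matches as close to a toss-up, capturing the idea that top-ranked players are closer in skill than integer ranks suggest.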
In addition, the new approach can be used to predict something about the future, such as the outcome of a tennis tournament, or to infer something about the past, such as how a disease spread. “These rankings give us the best-to-worst ordering of sports teams, but they can also tell us the order in which people in a community got sick,” Moore says. “Before his postdoc, George had been working on this problem as a way to improve contact tracing in epidemics. Just as you can predict which of two teams is likely to win a match, you can guess which of two people in contact infected the other.”
In future research, the researchers say they plan to investigate some of the deeper questions that have surfaced. For example, multiple rankings can each fit the data while disagreeing radically with one another. And a ranking that seems incorrect may simply be highly uncertain rather than inaccurate. Cantwell also wants to compare the model’s predictions with the outcomes of real competitions. Ultimately, he says, the model could be used to improve predictions for a wide range of systems that give rise to rankings, from infectious disease models to sports betting.
Cantwell, though, says he will hold on to his money for now. “I’m not ready to start betting on it,” he says.