There was just one problem: The theory was glued together with hopes and prayers. Only by using a technique dubbed “renormalization,” which involved carefully concealing infinite quantities, could researchers sidestep bogus predictions. The process worked, but even those developing the theory suspected it might be a house of cards resting on a tortured mathematical trick.

“It is what I would call a dippy process,” Richard Feynman later wrote. “Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent.”

Justification came decades later from a seemingly unrelated branch of physics. Researchers studying magnetization discovered that renormalization wasn’t about infinities at all. Instead, it spoke to the universe’s separation into kingdoms of independent sizes, a perspective that guides many corners of physics today.

Renormalization, writes David Tong, a theorist at the University of Cambridge, is “arguably the single most important advance in theoretical physics in the past 50 years.”

By some measures, field theories are the most successful theories in all of science. The theory of quantum electrodynamics (QED), which forms one pillar of the Standard Model of particle physics, has made theoretical predictions that match up with experimental results to an accuracy of one part in a billion.

But in the 1930s and 1940s, the theory’s future was far from assured. Approximating the complex behavior of fields often gave nonsensical, infinite answers that made some theorists think field theories might be a dead end.

Feynman and others sought whole new perspectives — perhaps even one that would return particles to center stage — but came back with a hack instead. The equations of QED made respectable predictions, they found, if patched with the inscrutable procedure of renormalization.

The exercise goes something like this. When a QED calculation leads to an infinite sum, cut it short. Stuff the part that wants to become infinite into a coefficient — a fixed number — in front of the sum. Replace that coefficient with a finite measurement from the lab. Finally, let the newly tamed sum go back to infinity.
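The bookkeeping can be caricatured in a few lines of code. In this toy one-loop-style model (our illustrative assumption, not actual QED — the parameters `b`, `mu0` and the logarithmic form are stand-ins), the cutoff-dependent "bare" coupling is chosen so that the effective coupling matches a lab measurement at a reference scale; after that, predictions at other scales barely depend on where the cutoff was placed:

```python
import math

def bare_coupling(g_measured, mu0, cutoff, b=1.0):
    # Solve g_measured = g0 + b*g0^2*ln(cutoff/mu0) for the bare coupling g0.
    # The ln(cutoff) term is the stand-in for the "infinity": it grows without
    # bound as the cutoff is removed, and gets absorbed into g0.
    L = math.log(cutoff / mu0)
    if L == 0:
        return g_measured
    return (-1 + math.sqrt(1 + 4 * b * L * g_measured)) / (2 * b * L)

def effective_coupling(mu, g_measured, mu0, cutoff, b=1.0):
    # Prediction at scale mu, using the bare coupling fixed by the measurement.
    g0 = bare_coupling(g_measured, mu0, cutoff, b)
    return g0 + b * g0**2 * math.log(cutoff / mu)

# Two wildly different cutoffs give nearly the same physical prediction:
print(effective_coupling(0.1, 0.01, 1.0, 1e6))
print(effective_coupling(0.1, 0.01, 1.0, 1e12))
```

The point of the sketch is that the cutoff never appears in the final answer to leading order: it is traded away for the measured value at the reference scale.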

To some, the prescription felt like a shell game. “This is just not sensible mathematics,” wrote Paul Dirac, a groundbreaking quantum theorist.

The core of the problem — and a seed of its eventual solution — can be seen in how physicists dealt with the charge of the electron.

In the scheme above, the electric charge comes from the coefficient — the value that swallows the infinity during the mathematical shuffling. To theorists puzzling over the physical meaning of renormalization, QED hinted that the electron had two charges: a theoretical charge, which was infinite, and the measured charge, which was not. Perhaps the core of the electron held infinite charge. But in practice, quantum field effects (which you might visualize as a virtual cloud of positive particles) cloaked the electron so that experimentalists measured only a modest net charge.

Two physicists, Murray Gell-Mann and Francis Low, fleshed out this idea in 1954. They connected the two electron charges with one “effective” charge that varied with distance. The closer you get (and the more you penetrate the electron’s positive cloak), the more charge you see.

Their work was the first to link renormalization with the idea of scale. It hinted that quantum physicists had hit on the right answer to the wrong question. Rather than fretting about infinities, they should have focused on connecting tiny with huge.

Renormalization is “the mathematical version of a microscope,” said Astrid Eichhorn, a physicist at the University of Southern Denmark who uses renormalization to search for theories of quantum gravity. “And conversely you can start with the microscopic system and zoom out. It’s a combination of a microscope and a telescope.”

A second clue emerged from the world of condensed matter, where physicists were puzzling over how a rough magnet model managed to nail the fine details of certain transformations. The Ising model consisted of little more than a grid of atomic arrows that could each point only up or down, yet it predicted the behaviors of real-life magnets with improbable perfection.

At low temperatures, most atoms align, magnetizing the material. At high temperatures they grow disordered and the lattice demagnetizes. But at a critical transition point, islands of aligned atoms of all sizes coexist. Crucially, the ways in which certain quantities vary around this “critical point” appeared identical in the Ising model, in real magnets of varying materials, and even in unrelated systems such as a high-pressure transition where water becomes indistinguishable from steam. The discovery of this phenomenon, which theorists called universality, was as bizarre as finding that elephants and egrets move at precisely the same top speed.

Physicists don’t usually deal with objects of different sizes at the same time. But the universal behavior around critical points forced them to reckon with all length scales at once.

Leo Kadanoff, a condensed matter researcher, figured out how to do so in 1966. He developed a “block spin” technique, breaking an Ising grid too complex to tackle head-on into modest blocks with a few arrows per side. He calculated the average orientation of a group of arrows and replaced the whole block with that value. Repeating the process, he smoothed the lattice’s fine details, zooming out to grok the system’s overall behavior.
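A single block-spin step can be sketched in a few lines of Python. This is a minimal majority-rule version; the block size and the handling of edges are illustrative choices, not Kadanoff's exact prescription:

```python
def block_spin(grid, b=3):
    # Coarse-grain an Ising configuration: replace each b-by-b block of
    # +1/-1 spins with the sign of its sum (majority rule). An odd number
    # of spins per block (b=3 gives 9) means there are no ties to break.
    n = len(grid)
    coarse = []
    for i in range(0, n - n % b, b):
        row = []
        for j in range(0, n - n % b, b):
            total = sum(grid[i + di][j + dj] for di in range(b) for dj in range(b))
            row.append(1 if total > 0 else -1)
        coarse.append(row)
    return coarse

# A mostly aligned 9x9 grid coarse-grains to a uniform 3x3 grid:
grid = [[1] * 9 for _ in range(9)]
grid[0][0] = grid[4][4] = -1  # a little disorder is averaged away
print(block_spin(grid))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

Iterating the function zooms out again and again, which is exactly the repeated smoothing described above.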

Finally, Ken Wilson — a former graduate student of Gell-Mann with feet in the worlds of both particle physics and condensed matter — united the ideas of Gell-Mann and Low with those of Kadanoff. His “renormalization group,” which he first described in 1971, justified QED’s tortured calculations and supplied a ladder to climb the scales of universal systems. The work earned Wilson a Nobel Prize and changed physics forever.

The best way to conceptualize Wilson’s renormalization group, said Paul Fendley, a condensed matter theorist at the University of Oxford, is as a “theory of theories” connecting the microscopic with the macroscopic.

Consider the magnetic grid. At the microscopic level, it’s easy to write an equation linking two neighboring arrows. But taking that simple formula and extrapolating it to trillions of particles is effectively impossible. You’re thinking at the wrong scale.

Wilson’s renormalization group describes a transformation from a theory of building blocks into a theory of structures. You start with a theory of small pieces, say the atoms in a billiard ball. Turn Wilson’s mathematical crank, and you get a related theory describing groups of those pieces — perhaps billiard ball molecules. As you keep cranking, you zoom out to increasingly larger groupings — clusters of billiard ball molecules, sectors of billiard balls, and so on. Eventually you’ll be able to calculate something interesting, such as the path of a whole billiard ball.

This is the magic of the renormalization group: It helps identify which big-picture quantities are useful to measure and which convoluted microscopic details can be ignored. A surfer cares about wave heights, not the jostling of water molecules. Similarly, in subatomic physics, renormalization tells physicists when they can deal with a relatively simple proton as opposed to its tangle of interior quarks.

Wilson’s renormalization group also suggested that the woes of Feynman and his contemporaries came from trying to understand the electron from infinitely close up. “We don’t expect [the theory] to be valid down to arbitrarily small scales,” said James Fraser, a philosopher of physics at Durham University in the U.K. Mathematically cutting the sums short and shuffling the infinity around, physicists now understand, is the right way to do a calculation when your theory has a built-in minimum grid size. “The cutoff is absorbing our ignorance of what’s going on” at lower levels, said Fraser.

In other words, QED and the Standard Model simply can’t say what the bare charge of the electron is from zero nanometers away. They are what physicists call “effective” theories. They work best over well-defined distance ranges. Finding out exactly what happens when particles get even cozier is a major goal of high-energy physics.

Today, Feynman’s “dippy process” has become as ubiquitous in physics as calculus, and its mechanics reveal the reasons for some of the discipline’s greatest successes and its current challenges. During renormalization, complicated submicroscopic capers tend to just disappear. They may be real, but they don’t affect the big picture. “Simplicity is a virtue,” Fendley said. “There is a god in this.”

That mathematical fact captures nature’s tendency to sort itself into essentially independent worlds. When engineers design a skyscraper, they ignore individual molecules in the steel. Chemists analyze molecular bonds but remain blissfully ignorant of quarks and gluons. The separation of phenomena by length, as quantified by the renormalization group, has allowed scientists to move gradually from big to small over the centuries, rather than cracking all scales at once.

Yet at the same time, renormalization’s hostility to microscopic details works against the efforts of modern physicists who are hungry for signs of the next realm down. The separation of scales suggests they’ll need to dig deep to overcome nature’s fondness for concealing its finer points from curious giants like us.

“Renormalization helps us simplify the problem,” said Nathan Seiberg, a theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey. But “it also hides what happens at short distances. You can’t have it both ways.”

For decades computer scientists had been trying to develop a fast algorithm for determining when it’s possible to add edges to a graph so that it remains “planar,” meaning none of its edges cross each other. But the field had been unable to improve on an algorithm published over 20 years ago.

Holm and Rotenberg were surprised to find that their paper contained the insight needed to do a lot better. It “solved one of the major stumbling blocks we had with actually getting a real algorithm,” said Holm, a computer scientist at the University of Copenhagen. “We might have given the whole thing away.”

The two rushed to draft a new paper. They presented it in June at the ACM Symposium on Theory of Computing, where they detailed an exponentially better method for checking whether a graph is planar.

“The new algorithm is a remarkable tour de force,” said Giuseppe Italiano, a computer scientist at Luiss University and a co-author of the 1996 paper describing what is now the second-fastest algorithm. “When I co-authored that paper, I didn’t think that this could happen.”

Graphs are collections of nodes connected by edges. They can be used to represent everything from a social network to road systems to the electrical connections on a circuit board. In circuit boards, if the graph isn’t planar, it means that two wires cross each other and short-circuit.

As early as 1913, planar graphs came up in a brainteaser called the three-utilities problem, published in *The Strand Magazine*. It asked readers to connect three houses to three utilities — water, gas and electricity — without crossing any of the connections. It doesn’t take long to see that it can’t be done.

But it’s not always immediately obvious whether more complicated graphs are planar. And it’s even harder to tell whether a complicated planar graph stays planar when you start adding edges as you might when planning a new stretch of highway.

Computer scientists have been searching for an algorithm that can quickly determine whether you can make the desired change while keeping the graph planar and without checking every single part of the graph when only one small part is affected. The 1996 algorithm required a number of computational steps that was roughly proportional to the square root of the number of nodes in the graph.

“[It’s] much better than just doing it from scratch each time, but it’s not really good,” said Holm.

The new algorithm checks planarity in a number of steps proportional to the cube of the logarithm of the number of nodes in the graph — an exponential improvement. Holm and Rotenberg, a computer scientist at the Technical University of Denmark, achieved the speedup by taking advantage of a special property of planar graphs that they discovered last year.
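To get a rough feel for those two bounds (constants and lower-order terms ignored, so this is only a growth-rate sketch), compare √n with log₂(n)³. The polylogarithmic bound overtakes the square root only once graphs get very large:

```python
import math

# Compare the old ~sqrt(n) step count with the new ~log2(n)^3 one.
# Constants are ignored, so these are only rough growth-rate comparisons.
for n in (10**6, 10**9, 10**12):
    old = math.sqrt(n)
    new = math.log2(n) ** 3
    print(f"n = {n:>13}: sqrt(n) ~ {old:,.0f}, log2(n)^3 ~ {new:,.0f}")
```

At a million nodes the cube of the logarithm is still the larger number; the asymptotic advantage only shows up for much bigger graphs.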

To understand their method, the first thing to notice is that the same planar graph can be drawn multiple ways. In these different drawings, the connections remain the same, but the edges might be in different positions relative to one another.

For example, you can change drawing A into drawing B by flipping the triangle made by nodes 1, 2 and 3 over the edge connecting nodes 2 and 3. The top section of drawing B can also be reflected over nodes 4 and 5 to produce drawing C. The drawings look different, but they’re the same graph.

Now imagine that you want to insert a new edge connecting two nodes in a planar graph, say nodes 1 and 6 in the example below. To do so, you’re going to perform a series of flips. From the starting position on the left it takes two flips to move node 1 into a space where it can be connected to node 6 without crossing any other edges.

In their 2019 paper Holm and Rotenberg found that some drawings provide a more advantageous starting position for inserting an edge than others. These “good” drawings are only a few flips away from accepting the edge without breaking planarity.

What they belatedly recognized in October was that a flip that brings you closer to being able to add a new edge also brings the graph closer to resembling one of the good drawings they’d already identified. By showing that a series of flips inevitably moves a graph toward a favorable drawing, the new algorithm puts a backstop on the number of flips you could possibly need to perform before finding a way to insert an edge (provided the insertion is possible at all).

“We very quickly realized that with this new analysis, a conceptually very, very simple algorithm will solve the problem,” said Holm.

The new algorithm performs flips one at a time, searching for a solution. Eventually, one of two things happens: Either the algorithm finds a way to insert the desired edge, or the next flip undoes the previous flip — at which point the algorithm concludes there’s no way to add the edge.

“We call this the lazy-greedy [approach],” Rotenberg explained. “It only does the changes necessary to accommodate the edge.”

Their new method approaches — but doesn’t quite achieve — the performance of the best possible algorithm (or lower bound) for this kind of problem. The new algorithm also has to work through too many steps for most real-world applications, where the relevant graphs are usually simple enough to check with brute-force methods.

But for Holm and Rotenberg, the speed of the algorithm is less important than the insights that accelerated it. “Out of that understanding comes something fast,” said Rotenberg.

And Italiano thinks it may eventually help with real-world applications. “[It’s] likely to have, sooner or later, an impact also outside computer science and mathematics,” he said.

As for when an even faster algorithm will come along, no one knows. It could require a whole new breakthrough, or the secret ingredient may already be out there, waiting in a stack of old research papers.

**Correction: September 17, 2020**

Recent analyses of global epidemiological data by several teams in the United States and in Israel found that in places with higher rates of bacillus Calmette-Guérin (BCG) tuberculosis vaccination, the spread of COVID-19 is slower and pandemic death rates are lower. And in a small study reported in a preprint on August 11, hospital workers who received a booster BCG vaccine in March had no cases of COVID-19 infection, while the infection rate was 8.6% in a comparable unvaccinated group.

Vaccines aren’t supposed to work like that, though, at least according to classical immunology. The tuberculosis bacterium and the SARS-CoV-2 pandemic virus are completely different pathogens, and vaccines are, by design, highly specific. Their specificity is related to their long-lasting effects, because vaccines engage the adaptive branch of the immune system — the B and T lymphocytes and antibodies that recognize a given pathogen. Some of these lymphocytes become “memory cells” that persist for months or years, equipping the body to mount faster, stronger responses if the pathogen ever returns.

“It was thought for a long time that this is the only way in which an immune response remembers an infection, by these memory lymphocytes,” said Mihai Netea, a clinician and infectious disease specialist at Radboud University in the Netherlands.

Netea is one of the scientists challenging that dogma. He has called attention to decades of evidence from epidemiological studies as well as laboratory research in mice, plants and invertebrates, all of which suggests that immunological memory can work in a way that he described in 2011 as “trained immunity.”

Trained immunity is a form of memory exhibited by the innate immune system — a less studied, much older branch of our defenses that evolved more than a half-billion years ago, before vertebrate animals and the adaptive immune system existed. In the past few years, researchers have begun to learn how the innate immune cells, which are fairly nonspecific and short-lived, remember old invaders. Recent work has also found evidence that pathological manifestations of trained immunity may be involved in some chronic inflammatory diseases and neurodegenerative disorders. And in an August 12 *Cell Host & Microbe* study, an international team that included Netea revealed how the BCG vaccine brings broader health benefits by triggering trained immunity.

Netea’s introduction to trained immunity came in 2010, when a student intern in his lab was studying how vaccines shape the immune response. Working with blood from volunteers collected before and after BCG shots, the student spiked the samples with the tuberculosis microbe, *Mycobacterium tuberculosis*. The samples from vaccinated people reacted positively, as expected. As a negative control, she also mixed some samples with the yeast *Candida albicans*, an irrelevant pathogen that the samples should have ignored.

Except they didn’t. Samples from the first five volunteers reacted to both tuberculosis and *Candida*. When Netea saw the indiscriminate responses of the first five samples, he told his student, “Maybe it’s a mistake. Just do the next five and take care not to put TB twice.”

But the same thing happened: The samples reacted to both pathogens. “This is crazy,” Netea recalls saying. “Something is wrong.”

Flummoxed, he scoured the scientific literature. To his surprise, he found quite a few reports describing this sort of immune cross-protection. Throughout the history of immunization, going back to the introduction of the smallpox vaccine in the 1800s, some scientists noted that immunizations seemed to guard against more than the disease they were designed for.

For example, in the 1920s it was relatively common for children in northern Sweden to die within their first few years. But among children who received the BCG vaccine at birth, the mortality rate was two-thirds lower — a curious outcome given that tuberculosis generally strikes later in life. The leader of the study, the physician Carl Näslund, speculated about this in a 1932 paper: “One is tempted to explain this very low mortality among vaccinated children by the idea that BCG vaccine provokes a nonspecific immunity,” he wrote.

That hunch found confirmation decades later. Starting in the 1970s and continuing into the early 2000s, epidemiological studies by the Danish researchers Peter Aaby and Christine Stabell Benn found that children vaccinated for measles in Guinea-Bissau and other developing countries had about 70% lower mortality than unvaccinated kids — even though measles itself didn’t cause more than 10-15% of deaths. Data gathered in West Africa and elsewhere during the 1990s also built a case that BCG vaccination, in addition to preventing tuberculosis, protected people from a broad set of infections.

By the late 1980s, researchers in Italy led by Antonio Cassone of the University of Perugia had started working out which cells were responsible for this cross-protection. Infecting mice with a weakened strain of yeast not only protected them against more pathogenic yeast but also helped them fight unrelated *Staphylococcus aureus* bacteria. Using drugs to selectively disable sets of immune cells in the animals, the researchers pinned the nonspecific protection to the white blood cells called macrophages. And that conclusion posed a real conundrum for immunologists.

Unlike B and T lymphocytes, which take weeks to deploy their high-precision weapons of adaptive immunity, macrophages are like shock troops that rush onto a battleground, waving clubs at all foes. Lymphocytes have receptors that respond to exquisite molecular details on specific pathogens, but macrophages, natural killer (NK) cells, neutrophils and other cells of the innate immune system rely on a blunter, more generic approach. They are equipped with sets of “pattern recognition receptors” that recognize molecular features common to many pathogens or damaged cells.

Because of these differences, innate immune cells can speedily pounce on unwelcome intruders and diseased tissue. This can buy time for B and T cells from the adaptive immune system to multiply into an army that can deliver a more precise and devastating assault if one is needed. Later, some of these lymphocytes stick around in the blood and lymph as memory cells, ready to renew the charge if the pathogen resurfaces months or years later. “This very strong memory is what we base vaccines on,” Netea said.

Because T and B cells exist only in vertebrates, scientists believed that immunological memory was unique to them too. It seemed that invertebrate species could get by with innate immune responses alone, since the animals generally didn’t live a long time and could breed rapidly enough to offset deaths from disease.

And there remained this mystery: If macrophages were undiscriminating cells that did little more than gobble up foreign material, how could they be responsible for the enduring and broadly protective effect that the Italian researchers were seeing in their experiments? It didn’t seem to make sense, especially since macrophages live for only a few days or weeks.

This riddle of immune memory that defied general perceptions sat unanswered in the scientific literature, Netea realized. “When we don’t understand something, we tend to forget it,” he said. “That’s why some of the studies were forgotten. But they were important.”

Netea also saw evidence that unorthodox immunological memory might pop up in even less likely places. The literature held reports of memory-like behavior in plants and invertebrates — organisms with no adaptive immune cells.

One of those reports was a landmark 2003 *Nature* paper by the evolutionary biologist Joachim Kurtz, then at the Max Planck Institute of Limnology in Germany, and his master’s student Karoline Franz. Kurtz and Franz found that tiny crustaceans called copepods got better at warding off parasitic tapeworm larvae with repeated exposure — but the results were inconsistent. The researchers realized that one variable was the source of the parasites. Could it be, Kurtz wondered, that copepods become more resistant to tapeworms from the same family?

The prevailing view at the time was that invertebrate immune systems were incapable of such discernment. Yet in a new round of experiments, copepods clearly resisted sibling tapeworms better than less related ones. “It was against the dogma,” said Kurtz, who now heads a research group at the University of Münster.

That 2003 paper, entitled “Evidence for Memory in Invertebrate Immunity,” irritated some immunologists. “They said ‘immune memory’ is only when you have an adaptive immune system, meaning that you have lymphocytes and antibodies,” said Kurtz. “We said, well, ‘memory’ is more like a broader term.”

Lewis Lanier, an immunologist at the University of California, San Francisco, can sympathize. His lab made headlines in 2009 by showing that in mice, NK cells can learn from past experience. Like Kurtz’s work, the UCSF paper turned heads by ascribing memory-like properties to simple immune cells that lack the diverse antigen receptors of B and T cells. Some researchers “would argue with me about the word ‘memory,’ but they were all convinced that the NK cell remembered its past and worked better when it encountered the virus a second time or third time,” Lanier said. “That they didn’t dispute.”

The seeming heresy of these reports of memory in invertebrate immune systems and mouse NK cells paved the way for Netea’s 2011 proposal in *Cell Host & Microbe* that the innate immune system exhibits trained immunity as a kind of memory of past infections. His paper in the *Proceedings of the National Academy of Sciences* the next year went further by showing that epigenetic changes are responsible for this training. When macrophages and other innate immune cells respond to pathogens, their DNA gets epigenetic modifications that make it easier to activate the genes that direct the cell to make pattern recognition receptors and disease-fighting proteins. The DNA alterations act like bookmarks that help cells to quickly retrieve those genomic instructions and carry them out — not only “for the infection you saw the first time but any infection,” Netea said.

So if the pathogen returns, the cell is already primed to respond faster. Moreover, when the innate immune cells divide, they pass on these epigenetic DNA bookmarks to their progeny. That is how trained memory can persist while relying on cells that seem so short-lived: The record of the pathogen-fighting experience is passed on from one generation of cells to the next.

Various kinds of immune memory, including some with mechanisms similar to trained immunity, likely also helped invertebrates to survive. And without the earlier studies in invertebrates, “people would probably not have looked for such effects of memory in the innate immune system,” said Kurtz. But researchers studying invertebrates “didn’t have the mechanisms. Vertebrate immunologists, once they realized there is such a phenomenon, have all the tools to study the mechanisms in far more detail than we could ever do it.”

Although trained immunity was originally proposed to describe how innate immune cells remember previous encounters with pathogens, the phenomenon is turning up in cells that aren’t traditionally seen as part of the immune system. In a 2017 mouse study, for example, wounds healed faster in animals that were previously exposed to an inflammatory stimulant. The protection was conferred by epithelial stem cells.

It’s also beginning to look as though trained immunity isn’t limited to offering purely generic protection to the body. This past June in *Science*, Martin Oberbarnscheidt and Fadi Lakkis of the University of Pittsburgh, Xian Li of the Houston Methodist Research Institute and their colleagues reported that macrophages and some other white blood cells can develop memories for infections keyed to specific major histocompatibility complex proteins, which the adaptive immune system uses to recognize the body’s own cells. The researchers proposed that trained immunity could be an overlooked factor in the rejection of transplanted tissues.

Their results and others point to a possible downside of trained immunity: Some scientists think this enhanced sensitivity in the innate immune system could raise an organism’s susceptibility to autoimmune and hyperproliferative disorders, such as cancer. (Netea, on the other hand, believes that BCG vaccine may offer some protection against cancer, so the jury is still out.) Other research suggests that trained immunity could also contribute to chronic inflammation associated with age-related neurodegeneration, and with chronic liver disease, Type 2 diabetes and other diseases linked to the Western diet.

The trained-immunity connection to possible COVID-19 protection through the BCG vaccine, however, is currently the real attention grabber. Last month, Netea and a team of researchers in Germany, Denmark, Australia and the Netherlands published the results of their research into how the BCG vaccine induces trained immunity. They found that the vaccination sets up epigenetic changes not only in white blood cells circulating through the body but in the progenitors of those cells in the bone marrow that churn out replacements.

What is still uncertain is whether this trained immunity from BCG (or other vaccines) can be harnessed to slow the COVID-19 pandemic. As Netea and Alberto Mantovani of Humanitas University noted in a commentary for *The New England Journal of Medicine* that appeared last week, it is still not recommended to use BCG vaccine to prevent or treat COVID-19 outside of clinical trials. Such trials are now in progress: Thousands of health care workers in the United States, the Netherlands, Australia and elsewhere are rolling up their sleeves to see if they become any less prone to catching the virus after getting the BCG vaccine. Those studies are scheduled for completion over about the next year and a half. By then, some vaccines specifically targeting the coronavirus may be available. But every bit of protection may still be valuable — and good to know about for future pandemics.

**Correction: September 15, 2020**

The studies in Italy during the 1980s by Antonio Cassone were originally misattributed to a different researcher.

People use the term “impossible” in a variety of ways. It can describe things that are merely improbable, like finding identical decks of shuffled cards. It can describe tasks that are practically impossible due to a lack of time, space or resources, such as copying all the books in the Library of Congress in longhand. Devices like perpetual-motion machines are physically impossible because their existence would contradict our understanding of physics.

Mathematical impossibility is different. We begin with unambiguous assumptions and use mathematical reasoning and logic to conclude that some outcome is impossible. No amount of luck, persistence, time or skill will make the task possible. The history of mathematics is rich in proofs of impossibility. Many are among the most celebrated results in mathematics. But it was not always so.

The punishment for what was perhaps the first proof of impossibility was severe. Historians believe that in the fifth century BCE, Hippasus of Metapontum, a follower of the cult leader Pythagoras, discovered that it is impossible to find a line segment that can be placed end-to-end to measure both the side and the diagonal of a regular pentagon. Today we say that the length of a diagonal of a regular pentagon with side length 1 — the golden ratio, $latex \phi = \frac{1}{2}(1 + \sqrt{5})$ — is “irrational.” Hippasus’ discovery flew in the face of the Pythagorean credo that “all is number,” so, according to legend, he was either drowned at sea or banished from the Pythagoreans.
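Hippasus’ conclusion can be reached with a short modern argument: the pentagon’s diagonal-to-side ratio satisfies a simple quadratic equation that no fraction in lowest terms can solve. A sketch:

```latex
\phi^2 = \phi + 1.
\text{Suppose } \phi = \tfrac{p}{q} \text{ in lowest terms, with integers } p, q > 0.
\text{Then } p^2 = pq + q^2, \text{ so } p(p - q) = q^2 \text{ and } p \mid q^2.
\text{Since } \gcd(p, q) = 1, \text{ this forces } p = 1, \text{ giving } 1 - q = q^2,
\text{which no positive integer } q \text{ satisfies. Hence } \phi \text{ is irrational.}
```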

More than a century later, Euclid elevated the line and the circle, considering them the fundamental curves in geometry. Thereafter, generations of geometers performed constructions — bisecting angles, drawing perpendicular bisectors, and so on — using only a compass and a straightedge. But certain seemingly simple constructions stymied the Greek geometers, eventually taking on a mythical status and vexing mathematicians for over 2,000 years: trisecting any given angle, producing the side of a cube with twice the volume of a given one, creating every regular polygon, and constructing a square with the same area as a given circle.

Although these problems are geometric in nature, the proofs of their impossibility are not. To show that they cannot be solved required new mathematics.

In the 17th century, René Descartes made a fundamental discovery: Assuming we restrict ourselves to the compass and straightedge, it’s impossible to construct segments of every length. If we begin with a segment of length 1, say, we can only construct a segment of another length if it can be expressed using the integers, addition, subtraction, multiplication, division and square roots (as the golden ratio can).

Thus, one strategy to prove that a geometric problem is impossible — that is, not constructible — is to show that the length of some segment in the final figure cannot be written in this way. But doing so rigorously required the nascent field of algebra.

Two centuries later, Descartes’ countryman Pierre Wantzel used polynomials (the sums of coefficients and variables raised to powers) and their roots (values that make the polynomials equal zero) to attack these classical problems. In the cube doubling problem, for example, the side length of a cube with twice the volume of the unit cube is ∛2, which is a root of the polynomial *x*³ − 2 because (∛2)³ − 2 = 0.

In 1837, Wantzel proved that if a number is constructible, it must be a root of a polynomial that cannot be factored and whose degree (the largest power of *x*) is a power of 2. For instance, the golden ratio is a root of the degree-two polynomial *x*² − *x* − 1. But *x*³ − 2 is a degree-three polynomial that cannot be factored, so ∛2 is not constructible. Thus, Wantzel concluded, it is impossible to double the cube.

In a similar way, he proved that it is impossible to use the classical tools to trisect every angle or to construct certain regular polygons, such as one with seven sides. Remarkably, all three impossibility proofs appeared on the same page. Just as Isaac Newton and Albert Einstein each had their *annus mirabilis*, or miraculous years, perhaps we should call this the *pagina mirabilis* — the miraculous page.

Proving the impossibility of the remaining problem, squaring the circle, required something new. In 1882, Ferdinand von Lindemann proved the key result — that π is not constructible — by proving it is transcendental; that is, π isn’t the root of any polynomial.

These classical problems could go down in infamy as sirens whose songs lured mathematicians to crash on the rocky shores of impossibility. But I see them as muses who inspired generations of creative thinkers.

The same holds true for a more recent impossible problem, which arises from the simple act of crossing a bridge. Imagine you live in Pittsburgh, the “city of bridges,” as many of my students do. An adventurous bicyclist might wonder if it is possible to start from home, ride exactly once across each of the 22 bridges spanning Pittsburgh’s major rivers, and end up back home.

In 1735, a Prussian mayor posed the same problem to Leonhard Euler about Königsberg (now Kaliningrad), a city with seven bridges joining three riverbanks and an island. At first, Euler dismissed the problem as nonmathematical: “This type of solution bears little relationship to mathematics, and I do not understand why you expect a mathematician to produce it, rather than anyone else.”

Yet Euler soon proved it was impossible, and in so doing he created a field he called the geometry of position, which we now call topology. He recognized that the exact details — the precise locations of the bridges, the shapes of the landmasses, and so on — were unimportant. All that mattered were the connections. Later mathematicians streamlined Euler’s arguments using what we now call graphs or networks. This idea of connectedness is central to the study of social networks, the internet, epidemiology, linguistics, optimal route planning and more.

Euler’s proof is surprisingly simple. He reasoned that each time we enter and leave a region, we must cross two bridges. So every landmass must have an even number of bridges. Because every landmass in Königsberg had an odd number of bridges, no such round trip was possible. Likewise, the three bridges to Herrs Island in the Allegheny River make a bicycle circuit of Pittsburgh mathematically impossible.
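Euler’s parity argument is simple enough to check by machine. The sketch below (in Python) uses the standard textbook layout of Königsberg — four landmasses joined by seven bridges; the letter labels are ours, chosen for illustration — and counts bridges touching each landmass:

```python
# Sketch of Euler's parity argument for the Königsberg bridges.
# Landmass labels are illustrative: N and S are the river banks,
# I is the island Kneiphof, E is the eastern landmass.
from collections import Counter

# Each bridge joins two landmasses (this is a multigraph:
# some pairs are joined by two bridges).
bridges = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]

degree = Counter()
for a, b in bridges:
    degree[a] += 1
    degree[b] += 1

# A round trip crossing every bridge exactly once requires every
# landmass to have an even number of bridges. Here none does.
odd = [land for land, d in degree.items() if d % 2 == 1]
print(odd)  # all four landmasses have odd degree, so no circuit exists
```

Since every landmass in Königsberg touches an odd number of bridges, the parity condition fails everywhere at once, not just in one spot.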

As this problem demonstrates, impossibility results are not confined to the realm of abstract mathematics. They can have real-world implications — sometimes even political ones.

Recently, mathematicians have turned their attention to gerrymandering. In the United States, after every census, states must redraw their congressional districts, but sometimes the ruling party divides the state into ridiculous shapes to maximize its own seats and thus its political power.

Many states require that districts be “compact,” a term with no fixed mathematical definition. In 1991, Daniel Polsby and Robert Popper proposed 4π*A*/*P*² as a way to measure the compactness of a district with area *A* and perimeter *P*. Values range from 1, for a circular district, down to nearly zero, for misshapen districts with long perimeters.
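The formula is a one-liner; this minimal sketch (the function name is ours, not from any standard library) shows why a circle scores 1 and a long, thin district scores near zero:

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness score: 4*pi*A / P**2."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius r scores exactly 1: A = pi*r^2, P = 2*pi*r.
r = 3.0
print(polsby_popper(math.pi * r**2, 2 * math.pi * r))  # ~1.0

# A 10-by-0.1 rectangle (a crude stand-in for a gerrymandered
# district) has area 1 but perimeter 20.2, so it scores near zero.
print(polsby_popper(10 * 0.1, 2 * (10 + 0.1)))
```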

Meanwhile, Nicholas Stephanopoulos and Eric McGhee introduced the “efficiency gap” in 2014 as a measure of the political fairness of a redistricting plan. Two gerrymandering strategies are to ensure that the opposition party stays below the 50% threshold in districts (called cracking), or near the 100% level (stacking). Either tactic forces the other party to waste votes on losing candidates or on winning candidates who don’t need the votes. The efficiency gap captures the relative numbers of wasted votes.
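The wasted-vote bookkeeping can be made concrete. The sketch below applies the standard definition — all of the loser’s votes, plus the winner’s votes beyond the 50% needed to win, count as wasted — to a hypothetical three-district state where party A is cracked twice and stacked once:

```python
def efficiency_gap(districts):
    """Efficiency gap from (votes_A, votes_B) pairs, one per district.

    Wasted votes: all votes cast for the loser, plus the winner's
    votes beyond the 50% threshold. The gap is the difference in
    the two parties' wasted votes divided by all votes cast.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        need = (a + b) / 2  # votes needed to win the district
        total += a + b
        if a > b:
            wasted_a += a - need  # winner's surplus
            wasted_b += b         # loser's entire vote
        else:
            wasted_b += b - need
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Party A is "cracked" to 49% in two districts and "stacked"
# at 90% in the third — it wastes far more votes than party B.
gap = efficiency_gap([(49, 51), (49, 51), (90, 10)])
print(gap)  # 0.42
```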

These are both useful measures for detecting gerrymandering. But in 2018, Boris Alexeev and Dustin Mixon proved that “sometimes, a small efficiency gap is only possible with bizarrely shaped districts.” That is, it is mathematically impossible to always draw districts that meet certain Polsby-Popper and efficiency-gap fairness targets.

But finding methods to detect and prevent partisan gerrymandering is an active scholarly area that’s attracting many talented researchers. As with the problems of antiquity and the Königsberg bridge problem, I’m sure the gerrymandering problem will inspire creativity and push mathematics forward.

Part of this problem’s long-standing allure stems from the simplicity of the underlying concept: A number is perfect if it is a positive integer, *n*, whose divisors add up to exactly twice the number itself, 2*n*. The first and simplest example is 6, since its divisors — 1, 2, 3 and 6 — add up to 12, or 2 times 6. Then comes 28, whose divisors of 1, 2, 4, 7, 14 and 28 add up to 56. The next examples are 496 and 8,128.
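The definition can be checked directly with a brute-force divisor sum — a sketch, not an efficient search method:

```python
def sigma(n):
    """Sum of the positive divisors of n, by trial division to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d  # the paired divisor
        d += 1
    return total

# A number n is perfect when its divisors (including n itself) sum to 2n.
perfects = [n for n in range(1, 10000) if sigma(n) == 2 * n]
print(perfects)  # [6, 28, 496, 8128]
```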

Leonhard Euler formalized this definition in the 1700s with the introduction of his sigma (σ) function, which sums the divisors of a number. Thus, for perfect numbers, σ(*n*) = 2*n*.

But Pythagoras was aware of perfect numbers back in 500 BCE, and two centuries later Euclid devised a formula for generating even perfect numbers. He showed that if *p* and 2^{*p*} − 1 are prime numbers (whose only divisors are 1 and themselves), then 2^{*p*−1} × (2^{*p*} − 1) is an even perfect number.
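Assuming the usual statement of Euclid’s recipe — if *p* and 2^{*p*} − 1 are both prime, then 2^{*p*−1} × (2^{*p*} − 1) is perfect — a short search reproduces the first even perfect numbers:

```python
def is_prime(n):
    """Trial-division primality check (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euclid: when p and 2**p - 1 are both prime,
# 2**(p - 1) * (2**p - 1) is an even perfect number.
evens = [2**(p - 1) * (2**p - 1)
         for p in range(2, 14)
         if is_prime(p) and is_prime(2**p - 1)]
print(evens)  # [6, 28, 496, 8128, 33550336]
```

Note that *p* = 11 drops out: 2¹¹ − 1 = 2,047 = 23 × 89 is not prime, so primality of *p* alone is not enough.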

Nielsen, now a professor at Brigham Young University (BYU), was ensnared by a related question: Do any odd perfect numbers (OPNs) exist? The Greek mathematician Nicomachus declared around 100 CE that all perfect numbers must be even, but no one has ever proved that claim.

Like many of his 21st-century peers, Nielsen thinks there probably aren’t any OPNs. And, also like his peers, he does not believe a proof is within immediate reach. But last June he hit upon a new way of approaching the problem that might lead to more progress. It involves the closest thing to OPNs yet discovered.

Nielsen first learned about perfect numbers during a high school math competition. He delved into the literature, coming across a 1974 paper by Carl Pomerance, a mathematician now at Dartmouth College, which proved that any OPN must have at least seven distinct prime factors.

“Seeing that progress could be made on this problem gave me hope, in my naiveté, that maybe I could do something,” Nielsen said. “That motivated me to study number theory in college and try to move things forward.” His first paper on OPNs, published in 2003, placed further restrictions on these hypothetical numbers. He showed not only that the number of OPNs with *k* distinct prime factors is finite, as had been established by Leonard Dickson in 1913, but that the size of the number must be smaller than 2^{4^{*k*}}.

These were neither the first nor the last restrictions established for the hypothetical OPNs. In 1888, for instance, James Sylvester proved that no OPN could be divisible by 105. In 1960, Karl K. Norton proved that if an OPN is not divisible by 3, 5 or 7, it must have at least 27 prime factors. Paul Jenkins, also at BYU, proved in 2003 that the largest prime factor of an OPN must exceed 10,000,000. Pascal Ochem and Michaël Rao have determined more recently that any OPN must be greater than 10^{1500} (and then later pushed that number to 10^{2000}). Nielsen, for his part, showed in 2015 that an OPN must have a minimum of 10 distinct prime factors.

Even in the 19th century, enough constraints were in place to prompt Sylvester to conclude that “the existence of [an odd perfect number] — its escape, so to say, from the complex web of conditions which hem it in on all sides — would be little short of a miracle.” After more than a century of similar developments, the existence of OPNs looks even more dubious.

“Proving that something exists is easy if you can find just one example,” said John Voight, a professor of mathematics at Dartmouth. “But proving that something does not exist can be really hard.”

The main approach so far has been to look at all the conditions placed upon OPNs to see if at least two are incompatible — to show, in other words, that no number can satisfy both restriction A and restriction B. “The patchwork of conditions established so far makes it extremely unlikely that [an odd perfect number] is out there,” Voight said, echoing Sylvester. “And Pace has, for a number of years, been adding to that list of conditions.”

Unfortunately, no incompatible properties have yet been found. So in addition to needing more restrictions on OPNs, mathematicians probably need new strategies, too.

To this end, Nielsen is already considering a new plan of attack based on a common tactic in mathematics: learning about one set of numbers by studying close relatives. With no OPNs to study directly, he and his team are instead analyzing “spoof” odd perfect numbers, which come very close to being OPNs but fall short in interesting ways.

The first spoof was found in 1638 by René Descartes — among the first prominent mathematicians to consider that OPNs might actually exist. “I believe that Descartes was trying to find an odd perfect number, and his calculations led him to the first spoof number,” said William Banks, a number theorist at the University of Missouri. Descartes apparently held out hope that the number he crafted could be modified to produce a genuine OPN.

But before we dive into Descartes’ spoof, it’s helpful to learn a little more about how mathematicians describe perfect numbers. A theorem dating back to Euclid states that any integer greater than 1 can be expressed as a product of prime factors, or bases, raised to the correct exponents. So we can write 1,260, for example, in terms of the following factorization: 1,260 = 2^{2} × 3^{2} × 5^{1} × 7^{1}, rather than listing all 36 individual divisors.

If a number takes this form, it becomes much easier to calculate Euler’s sigma function summing its divisors, thanks to two relationships also proved by Euler. First, he demonstrated that σ(*a* × *b*) = σ(*a*) × σ(*b*) whenever *a* and *b* are relatively prime (or coprime), meaning that they share no prime factors; for example, 14 (2 × 7) and 15 (3 × 5) are coprime. Second, he showed that for any prime number *p* with a positive integer exponent *a*, σ(*p*^{*a*}) = 1 + *p* + *p*^{2} + … + *p*^{*a*}.

So, returning to our previous example, σ(1,260) = σ(2^{2} × 3^{2} × 5^{1} × 7^{1}) = σ(2^{2}) × σ(3^{2}) × σ(5^{1}) × σ(7^{1}) = (1 + 2 + 2^{2})(1 + 3 + 3^{2})(1 + 5)(1 + 7) = 4,368. Note that σ(*n*), in this instance, is not 2*n*, which means 1,260 is not a perfect number.
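Euler’s two rules translate directly into code. This sketch (the function name is ours) rebuilds σ(1,260) from its prime factorization, given as a list of (base, exponent) pairs:

```python
from math import prod

def sigma_from_factorization(factors):
    """sigma(n) from n's prime factorization, via Euler's two rules:
    sigma is multiplicative across coprime parts, and
    sigma(p**a) = 1 + p + p**2 + ... + p**a."""
    return prod(sum(p**i for i in range(a + 1)) for p, a in factors)

# 1,260 = 2^2 * 3^2 * 5 * 7
s = sigma_from_factorization([(2, 2), (3, 2), (5, 1), (7, 1)])
print(s)  # 4368 — not 2 * 1260, so 1,260 is not perfect
```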

Now we can examine Descartes’ spoof number, which is 198,585,576,189, or 3^{2} × 7^{2} × 11^{2} × 13^{2} × 22,021^{1}. Repeating the above calculations, we find that σ(198,585,576,189) = σ(3^{2} × 7^{2} × 11^{2} × 13^{2} × 22,021^{1}) = (1 + 3 + 3^{2})(1 + 7 + 7^{2})(1 + 11 + 11^{2})(1 + 13 + 13^{2})(1 + 22,021^{1}) = 397,171,152,378. This happens to be twice the original number, which means it appears to be a real, live OPN — except for the fact that 22,021 is not actually prime.

That’s why Descartes’ number is a spoof: If we pretend that 22,021 is prime and apply Euler’s rules for the sigma function, Descartes’ number behaves just like a perfect number. But 22,021 is actually the product of 19^{2} and 61. If Descartes’ number were correctly written as 3^{2} × 7^{2} × 11^{2} × 13^{2} × 19^{2} × 61^{1}, then σ(*n*) would not equal 2*n*. By relaxing some of the normal rules, we end up with a number that appears to satisfy our requirements — and that’s the essence of a spoof.
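The whole spoof calculation fits in a few lines. This sketch first pretends 22,021 is prime, then redoes the sum with the true factorization and watches the perfection disappear:

```python
from math import prod

def spoof_sigma(factors):
    """Apply sigma(b**a) = 1 + b + ... + b**a to each listed base,
    *pretending* every base is prime."""
    return prod(sum(b**i for i in range(a + 1)) for b, a in factors)

# Descartes' number, with 22,021 treated as if it were prime:
factors = [(3, 2), (7, 2), (11, 2), (13, 2), (22021, 1)]
n = prod(b**a for b, a in factors)
print(n, spoof_sigma(factors) == 2 * n)  # 198585576189 True

# But 22,021 = 19^2 * 61, and with the correct factorization
# the sigma relation fails:
real = [(3, 2), (7, 2), (11, 2), (13, 2), (19, 2), (61, 1)]
print(spoof_sigma(real) == 2 * n)  # False
```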

It took 361 years for a second spoof OPN to come to light, this one thanks to Voight in 1999 (and published four years later). Why the long lag time? “Finding these spoof numbers is akin to finding odd perfect numbers; both are arithmetically complex in similar ways,” Banks said. Nor was it a priority for many mathematicians to look for them. But Voight was inspired by a passage in Richard Guy’s book *Unsolved Problems in Number Theory*, which sought more examples of spoofs. Voight gave it a try, eventually coming up with his spoof, 3^{4} × 7^{2} × 11^{2} × 19^{2} × (−127)^{1}, or −22,017,975,903.

Unlike in Descartes’ example, all the bases are prime numbers, but this time one of them is negative, which is what makes it a spoof rather than a true OPN.

After Voight gave a seminar at BYU in December 2016, he discussed this number with Nielsen, Jenkins and others. Shortly thereafter, the BYU team embarked on a systematic, computationally based search for more spoofs. They would choose the smallest base and exponent to start from, such as 3^{2}, and their computers would then sort through the options for any additional bases and exponents that would result in a spoof OPN. Nielsen assumed that the project would merely provide a stimulating research experience for students, but the analysis yielded more than he anticipated.

After employing 20 parallel processors for three years, the team found all possible spoof numbers with factorizations of six or fewer bases — 21 spoofs altogether, including the Descartes and Voight examples — along with two spoof factorizations with seven bases. Searching for spoofs with even more bases would have been impractical — and extremely time-consuming — from a computational standpoint. Nevertheless, the group amassed a sufficient sample to discover some previously unknown properties of spoofs.

The group observed that for any fixed number of bases, *k*, there is a finite number of spoofs, consistent with Dickson’s 1913 result for full-fledged OPNs. “But if you let *k* go to infinity, the number of spoofs goes to infinity too,” Nielsen said. That was a surprise, he added, given that he didn’t know going into the project that it would turn up a single new odd spoof — let alone show that the number of them is infinite.

Another surprise stemmed from a result first proved by Euler, showing that all the prime bases of an OPN are raised to an even power except for one — called the Euler power — which has an odd exponent. Most mathematicians believe that the Euler power for OPNs is always 1, but the BYU team showed it can be arbitrarily large for spoofs.

Some of the “bounty” obtained by this team came from relaxing the definition of a spoof, as there are no ironclad mathematical rules defining them, except that they must satisfy the Euler relation, σ(*n*) = 2*n*. The BYU researchers allowed non-prime bases (as with the Descartes example) and negative bases (as with the Voight example). But they also bent the rules in other ways, concocting spoofs whose bases share prime factors: One base could be 7^{2}, for instance, and another 7^{3}, which are written separately rather than combined as 7^{5}. Or they had bases that repeat, as occurs in the spoof 3^{2} × 7^{2} × 7^{2} × 13^{1} × (−19)^{2}. The 7^{2} × 7^{2} term could have been written as 7^{4}, but the latter would not have resulted in a spoof because the expansions of the modified sigma function are different.
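The repeated-base example can be verified with the same modified sigma expansion, treating each listed base-exponent pair separately, as the article describes:

```python
from math import prod

# The repeated-base spoof 3^2 * 7^2 * 7^2 * 13 * (-19)^2, with each
# listed base^exponent expanded on its own in the modified sigma.
factors = [(3, 2), (7, 2), (7, 2), (13, 1), (-19, 2)]

n = prod(b**a for b, a in factors)
sig = prod(sum(b**i for i in range(a + 1)) for b, a in factors)
print(sig == 2 * n)  # True: the spoof relation holds

# Merging 7^2 * 7^2 into 7^4 changes the expansion
# (1+7+49)*(1+7+49) versus (1+7+49+343+2401) — and breaks it:
merged = [(3, 2), (7, 4), (13, 1), (-19, 2)]
sig_merged = prod(sum(b**i for i in range(a + 1)) for b, a in merged)
print(sig_merged == 2 * n)  # False
```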

Given the significant deviations between spoofs and OPNs, one might reasonably ask: How could the former prove helpful in the search for the latter?

In essence, spoof OPNs are generalizations of OPNs, Nielsen said. OPNs are a subset sitting within a broader family that includes spoofs, so an OPN must share every property of a spoof, while possessing additional properties that are even more restrictive (such as the stipulation that all bases must be prime).

“Any behavior of the larger set has to hold for the smaller subset,” Nielsen said. “So if we find any behaviors of spoofs that do not apply to the more restricted class, we can automatically rule out the possibility of an OPN.” If one could show, for instance, that spoofs must be divisible by 105 — which can’t be true for OPNs (as Sylvester demonstrated in 1888) — then that would be it. Problem solved.

So far, though, they’ve had no such luck. “We’ve discovered new facts about spoofs, but none of them undercut the existence of OPNs,” Nielsen said, “although that possibility still remains.” Through further analysis of currently known spoofs, and perhaps by adding to that list in the future — both avenues of research established by his work — Nielsen and other mathematicians might uncover new properties of spoofs.

Banks thinks this approach is worth pursuing. “Investigating odd spoof numbers could be useful in understanding the structure of odd perfect numbers, if they exist,” he said. “And if odd perfect numbers don’t exist, the study of odd spoof numbers might lead to a proof of their nonexistence.”

Other OPN experts, including Voight and Jenkins, are less sanguine. The BYU team did “a great job,” Voight said, “but I’m not sure we’re any closer to having a line of attack on the OPN problem. It is indeed a problem for the ages, perhaps it will remain so.”

Paul Pollack, a mathematician at the University of Georgia, is also cautious: “It would be great if we could stare at the list of spoofs and see some property and somehow prove there are no OPNs with that property. That would be a beautiful dream if it works, but it seems too good to be true.”

It is a long shot, Nielsen conceded, but if mathematicians are ever going to solve this ancient problem, they need to try everything. Besides, he said, the concerted study of spoofs is just getting started. His group took some early steps, and they already discovered unexpected properties of these numbers. That makes him optimistic about uncovering even more “hidden structure” within spoofs.

Already, Nielsen has identified one possible tactic, based on the fact that every spoof found to date, except for Descartes’ original example, has at least one negative base. Proving that all other spoofs must have a negative base would in turn prove that no OPNs exist — since the bases of OPNs, by definition, must be both positive and prime.

“That sounds like a harder problem to solve,” Nielsen said, because it pertains to a larger, more general category of numbers. “But sometimes when you convert a problem to a seemingly more difficult one, you can see a path to a solution.”

Patience is required in number theory, where the questions are often easy to state but difficult to solve. “You have to think about the problem, maybe for a long while, and care about it,” Nielsen said. “We are making progress. We’re chipping away at the mountain. And the hope is that if you keep chipping away, you might eventually find a diamond.”

*This article was reprinted on Wired.com.*