They do not live long. The shaken flies and the engineered flies both die swiftly — in fact, the engineered ones survive only half as long as well-rested controls. After days of sleeplessness, the flies’ numbers tumble, then crash. The tubes empty out. The lights shine on.

We all know that we need sleep to be at our best. But profound sleep loss has more serious and immediate effects: Animals completely deprived of sleep die. Yet scientists have found it oddly hard to say exactly why sleep loss is lethal.

Sleep is primarily seen as a neurological phenomenon, and yet when deprived creatures die, they have a puzzlingly diverse set of failures in the body outside the nervous system. Insufficient sleep in humans and lab animals, if chronic, sets up health problems that surface over time, such as heart disease, high blood pressure, obesity and diabetes. But those conditions are not what kills creatures that are totally deprived of sleep within days or weeks.

What does sleep do that makes it deadly to go without? Could answering that question explain why we need sleep in the first place? Under the pale light of the incubators in Dragana Rogulja’s lab at Harvard Medical School, sleepless flies have been living and dying as she pursues the answers.

On a cold morning this winter, Rogulja leaned over a tablet in her office, her close-cropped dark hair framing a face of elfin intensity, and flicked through figures to explain some of her conclusions. Rogulja is a developmental neuroscientist by training, but she is not convinced that the most fundamental effect of sleep deprivation starts in the brain. “It could come from anywhere,” she said, and it might not look like what most people expect.

She has findings to back up that intuition. Publishing today in the journal *Cell*, she and her colleagues offer evidence that when flies die of sleeplessness, lethal changes occur not in the brain but in the gut. The indigo labyrinths of the flies’ small intestines light up with fiery fuchsia in micrographs, betraying an ominous buildup of molecules that destroy DNA and cause cellular damage. The molecules appear soon after sleep deprivation starts, before any other warning signs; if the flies are allowed to sleep again, the rosy bloom fades away. Strikingly, if the flies are fed antioxidants that neutralize these molecules, it does not matter if they never sleep again. They live as long as their rested brethren.

The results suggest that one very fundamental job of sleep — perhaps underlying a network of other effects — is to regulate the ancient biochemical process of oxidation, by which individual electrons are snapped on and off molecules in service to everything from respiration to metabolism. Sleep, the researchers imply, is not solely the province of neuroscience, but something more deeply threaded into the biochemistry that knits together the animal kingdom.

The first studies to investigate total sleep deprivation had a maniacal quality to them. In Rome in 1894, Maria Mikhailovna Manaseina, a Russian biochemist, made a presentation at the International Congress of Medicine about her experiments on 10 puppies. She and her lab assistants had kept the dogs awake and in constant motion 24 hours a day; within about five days, all the puppies had died. Sleep deprivation seemed to kill puppies much more quickly than starvation, she reported: “The total absence of sleep is more fatal for the animals than the total absence of food.”

Autopsies revealed that the puppies’ tissues were in bad repair, particularly in the brain, which was rife with hemorrhages, damaged blood vessels and other gruesome features. Sleep, Manaseina concluded, is not a useless habit. It does something profound for brain health.

More all-day, all-night dog walking followed. In 1898 Lamberto Daddi, an Italian researcher, published detailed drawings of the brains of dogs that had been sleep-deprived; he reported apparent degenerative damage in the brain, similar to that seen in dogs that had faced other stressors. Around the same time, the psychiatrist Cesar Agostini kept dogs in cages rigged with bells that jangled horribly whenever they tried to lie down and sleep, and in the 1920s researchers in Japan did something similar with cages studded with nails.

The studies, aside from their consistent cruelty, had a similar weakness: They had no valid controls. The dogs had died and their tissues looked abnormal — but was that truly because they had not slept? Or was it because nonstop walks and stimulation are inherently stressful? Separating the effects of sleeplessness from being kept on your feet until it killed you seemed impossible.

It took decades for scientists to return to the question in a serious way. In the 1980s, Allan Rechtschaffen, a sleep researcher at the University of Chicago celebrated for his pioneering work on narcolepsy, began to design experiments that could separate the effects of overstimulation from those of sleeplessness. He devised a rat cage in the form of a turntable suspended over water. A divider ran down the middle, so animals could live on either side while the turntable floor beneath them spun freely. Into the device the experimenters put pairs of rats, one of which was destined to be denied sleep. Whenever that rat tried to rest, the scientists spun the table, knocking both rats into the water.

This setup ensured that although both rats fell into the water equally often, the control rat could still catch some winks whenever the sleep-deprived rat was active. In fact, control rats managed to sleep about 70% as much as they normally would, suffering only mild sleep deprivation. The unluckier experimental rats got less than 9%, almost total sleep loss.

Both sets of rats were disturbed the same number of times. Both suffered the stress of falling into the water and having to clamber back out, dripping. But only the severely sleep-deprived rats began to decline. Their fur grew rough and disheveled, and it went from white to a mangy yellow. They developed lesions on their skin. They lost weight. After around 15 days on average, they died. Rechtschaffen had discovered a way to show that sleep loss itself really did kill.

For the graduate students running these experiments, the days were long. “The lab was in an apartment building, so you’d have a bedroom next to an animal testing room,” said Ruth Benca, a professor of psychiatry at the University of California, Irvine who worked with Rechtschaffen for some years. “They had bedrooms next to the rooms where their animals were being deprived so they could monitor around the clock.”

The work was challenging in other ways as well. “They were tough, tough experiments to do, psychologically, to put an animal through that,” said Paul Shaw, one of Rechtschaffen’s later graduate students and now a professor of neuroscience at Washington University in St. Louis. “The last seven days of the experiment, you’re working with this cloud over your head.” When his rats were just a day or two from death, the experimental protocol called for him to let them sleep and observe their electroencephalograms, or EEGs. Shaw recalls that as the monitor exploded with life, announcing the animals’ long-awaited slumber, he felt a weight fall from his shoulders. “To this day I can see it,” he said, speaking of the EEG readout. “I could put it in a frame up on my wall, and it could make me happy every time.”

But the work was also thrilling. “You have to believe in the outcome to do it. There’s no other way,” Shaw said. He arrived at the lab after students who had pioneered these experiments received their degrees and left, but he still heard their stories at meetings, where they reminisced about the excitement. “No one wanted to get their Ph.D.,” he recalled, because if they could stay, “they all thought that tomorrow, they’d discover the function of sleep.”

Rechtschaffen’s experimental successes should have finally enabled scientists to see how insufficient sleep kills, which might have led to bigger insights into what makes sleep so indispensable. But when the researchers performed autopsies on the animals, what they found mostly just added to the confusion. There were few consistent differences between the control rats and those that died from lack of sleep, and no sign of what had killed them. The deprived rats were thin and had enlarged adrenal glands, but that was about it. “No anatomical cause of death was identified,” the researchers concluded.

Observations of the animals’ behavior showed something more interesting. “Animals sleep-deprived under these carefully controlled conditions would increase their food intake two and three times normal amounts, and lose weight,” said Carol Everson, a professor of medicine and neurobiology at the Medical College of Wisconsin who was one of Rechtschaffen’s graduate students. “We did all sorts of metabolic studies to try to find out if there was an impairment we could detect.”

There was a strong feeling in the sleep field, however, that answers about sleep’s most basic functions would be found in the brain. John Allan Hobson, a prominent Harvard Medical School sleep researcher, had just published a paper in *Nature* with the title “Sleep is of the brain, by the brain and for the brain.” As Shaw recalled, “This captured the zeitgeist of the entire sleep community.”

Indeed, the vast preponderance of sleep research today still centers on the brain and on subjects like cognitive impairment. Sleep loss does alter metabolism in humans — there are connections to diabetes and metabolic syndrome — but public health researchers are often the only ones who concern themselves with it. Those looking to understand the fundamental purpose of sleep rarely seek answers in metabolism or other chemical processes.

The neurons involved in regulating sleep are a focus of Rogulja’s work. But the fact that sleep loss impairs circulation, digestion, the immune system and metabolism made her curious about whether these were downstream effects of neurological problems, or if they were independent. “It seems like it can’t be all about the brain,” she said.

She knew about the Rechtschaffen experiments — “real classics” — and that there had been few follow-ups. Once it was established that total sleep loss kills, using deprivation to study sleep’s purpose had fallen by the wayside. In the intervening decades, however, fruit flies had become a major model organism in the sleep field: Their genetics are well understood and easy to manipulate, and they are inexpensive to keep in the lab. Many sleep discoveries first made in flies have been verified in mammals. With flies established as proven test subjects, terminal sleep deprivation once again seemed like a plausible thing to study when Rogulja became curious about it.

When the postdoctoral researcher Alexandra Vaccaro arrived at Rogulja’s lab in 2016, the two came up with a plan. First, from other laboratories they obtained flies genetically engineered to have temperature-sensitive channels in certain neurons. Above 28 degrees Celsius, the channels opened and stayed open, keeping the neurons activated and the flies awake. With the channels closed, the flies enjoyed normal life spans. With the channels open, they started dying of total sleep deprivation after only 10 days or so, and they were all dead within 20 days.

Intriguing patterns emerged as Vaccaro performed tests. If she closed the channels and allowed the flies to sleep on day 10, they recovered and lived as long as controls. But if she deprived them again five or 10 days later, they died: Whatever damage had accrued during their initial sleeplessness had apparently not yet been repaired. It took a full 15 days of sleeping normally before they could be sleep-deprived again without immediately dying.

When Vaccaro dissected flies at various levels of deprivation, their tissues all seemed unharmed, with one very marked exception: Their guts were thick with reactive oxygen species (ROS), molecules with an oxygen atom that bears an unpaired electron. Some ROS are produced in the normal course of organisms’ respiration, metabolism and immunological defense, sometimes for specific functions and sometimes as byproducts. But if ROS are not swept up by antioxidant enzymes, they become extremely dangerous, because that unbalanced oxygen rips electrons away from DNA, proteins and lipids. Indeed, after ROS appeared a week into the flies’ sleep deprivation, markers of oxidative damage soared — a sign that cells were in crisis.

ROS levels peaked on the 10th day of deprivation. When flies were allowed to start sleeping normally, it took about 15 days for their ROS levels to get close to baseline again — the same time it took for flies to be able to withstand renewed deprivation.

Rogulja and Vaccaro had not expected such a clear result within mere months of starting the project. It was so easy to see that it made them instantly skeptical. When Rogulja showed preliminary data at a meeting of Pew Biomedical Scholars, their excitement unnerved her a little. “It’s never like that,” she said, preferring to be cautious about the findings.

As a result, over the last three years Vaccaro and Rogulja, along with the postdoctoral researcher Yosef Kaplan Dor, have been working to poke holes in this apparent connection between oxidation and sleep loss. They deprived flies of sleep by a more traditional method — shaking the tube containing them every two seconds — and checked to see whether levels of ROS correlated with levels of sleep loss; they did. The team looked at flies with mutations that promoted sleep or wakefulness; the sleep-deprived flies had ROS in their guts. Conversely, no ROS showed up in the guts of a strain of mutant flies known to tolerate a lack of sleep.

The strangest, most exciting period of the project may have been when the researchers decided that if oxidation from ROS was killing the flies, perhaps they should give the flies antioxidants. It sounded like a zany health food experiment, but Vaccaro searched out antioxidants known to work in flies, then fed them to the insects. To the researchers’ surprise, the lethally sleep-deprived flies reached a normal fly life span. The same thing happened when they raised levels of antioxidant enzymes in the gut (but, tellingly, not when they did it in the nervous system).

“I cannot imagine having more fun in science,” said Rogulja of that summer. “My whole family, and the whole lab, we would all gather around in the morning, once we started giving them these antioxidants: ‘They’re alive!’ And not only were they alive, they looked good.”

Vaccaro and a technician in the lab, Keishi Nambara, along with collaborators in the laboratory of Michael Greenberg at Harvard, performed a pared-down version of the fly experiment with mice. They kept the mice awake for up to five days in a cage with a rotating bar that gently pushed the animals to make them move. In the animals’ guts, the telltale glow of ROS appeared.

For Shaw, the team’s new paper is very interesting. “It’s super exciting to see they’ve harnessed the power of genetics,” he said. “We gave up on the whole project of sleep-depriving flies till they die because they’re long, hard experiments to do,” and it’s difficult to control for stress. Because the study uses both genetic and mechanical means of sleep deprivation, it sidesteps that issue. “It’s fantastic, fantastic. … I was very impressed,” he said. “I thought it was very well controlled.”

Just what the findings mean still needs to be explored. They suggest that sleep is vitally important to the body’s regulation of oxidation, particularly in the gut, and that this is likely to have widespread consequences in the body. As Rogulja and Vaccaro write in their new paper: “Prevention of death by a single means would argue that the gradual collapse of nearly all major bodily functions derives from a common origin.” In the flies they studied, antioxidants were the single means.

Their findings dovetail with a stream of previous reports that have linked oxidation and insufficient sleep, in particular those of Everson, who grew interested in metabolism while in Rechtschaffen’s lab. Everson felt early on that while the brain is a regulator of sleep, there’s more to sleep than neurology. In sleep-deprived rats, she observed signs of immunological failures and bacteria in tissues that should have been sterile. Then in 2016, she and her colleagues reported that they had found oxidation in the livers, lungs and small intestines of sleep-deprived rats. Markers of inflammation are often found floating around in tissues after sleep deprivation, Everson said, but their source has never been clear. If oxidation is out of control somewhere in the body, the resulting crisis of cellular damage could cause that boost.

Everson also found that the guts of sleep-deprived rats grew leaky, releasing bacteria into the animals’ bloodstreams. But from what Rogulja and her colleagues have seen, the flies’ guts do not seem to leak. ROS also did not seem to be rising in any of the other tissues they examined. And although the flies sometimes ate more when they were sleep-deprived, the ROS level in their guts looked the same regardless.

It’s unclear how all these puzzle pieces concerning oxidation in rats and flies might fit together, and Giorgio Gilestro, a sleep researcher at Imperial College London, notes that while these experiments make it clear that the ROS are killing the flies, that doesn’t necessarily mean the same thing killed the rats. A small study of humans who lost sleep showed that the makeup of their gut microbiomes, the bacteria that live in the intestines, shifted after insufficient sleep, an intriguing if preliminary finding drawing another link between sleep and the gut.

Still, perhaps the most pressing issue is that no one knows where the ROS are coming from, and why they accrue in the gut. What process — metabolic or otherwise — is generating them? Does sleep deprivation cause ROS to be overproduced? Or does it interfere with some process that normally clears them away? And why would ROS be linked to sleep anyway? Rogulja is planning experiments to explore some aspects of these questions.

Behind all this is the astonishing, baffling breadth of what sleep does for the body. The fact that learning, metabolism, memory, and myriad other functions and systems are affected makes an alteration as basic as the presence of ROS quite interesting. But even if ROS are behind the lethality of sleep loss, there is no evidence yet that sleep’s cognitive effects, for instance, come from the same source. And even if antioxidants prevent premature death in flies, they may not affect sleep’s other functions, or if they do, it may be for different reasons.

The flies that never sleep and their glowing guts remind us that sleep is profoundly a full-body experience, not merely a function of the mind and brain. In their deaths may lie some answers as to why sleeplessness kills and — potentially, tantalizingly — what sleep does to link disparate systems throughout the body. Shaw, for one, is interested to see what happens next in Rogulja’s lab. “It’s a super important question,” he said, “and they’ve come up with a way to address it.”

What changed his mind? Geometric deep learning: an emerging subfield of artificial intelligence that can learn patterns on curved surfaces.

Proteins interact by fitting their bumpy, irregular shapes together like three-dimensional puzzle pieces. Researchers have spent decades trying to figure out how they do so. The well-known protein folding problem, which has challenged scientists since the mid-20th century, seeks to decode the link between a protein’s constituent amino acids and its final 3D shape. In 1999, IBM began developing its line of Blue Gene supercomputers to tackle the folding problem; 20 years later, DeepMind applied state-of-the-art deep learning algorithms to it.

Correia’s system, called MaSIF (short for molecular surface interaction fingerprinting), avoids the inherent complexity of a protein’s 3D shape by ignoring the molecules’ internal structure. Instead, the system scans the protein’s 2D surface for what the researchers call interaction fingerprints: features learned by a neural network that indicate that another protein could bind there. “The idea is that when any two molecules come together, what they’re essentially presenting to one another is that surface. So that’s all you need,” said Mohammed AlQuraishi, a protein researcher at Harvard Medical School who also uses deep learning. “It’s very, very innovative.”

MaSIF’s surface-focused framework for predicting protein interactions could help accelerate so-called de novo protein design, which tries to synthesize useful proteins from scratch rather than relying on the naturally occurring variety. But it could also be used for basic biology, said Michael Bronstein, a geometric deep learning expert at Imperial College London who helped develop the system. “How does cancer affect protein properties?” he said. “You can ask whether mutations as a result of cancer destroy something in the protein that makes them work in a different way, by not binding to what they are supposed to. [MaSIF] could answer fundamental questions.”

If you want to understand how deep learning can create protein fingerprints, Bronstein suggests looking at digital cameras from the early 2000s. Those models had face detection algorithms that did a relatively simple job. “You just need to detect that there is a face” — eyes, a nose, a mouth — “regardless of whether it has a long nose or a short nose, fat lips or thin lips,” he explained.

Modern cameras are more versatile. They can identify a particular person, allowing you to quickly search through your photo library to find all the photos they’re in.

This advance was made possible by deep neural networks, which gave computers a way to learn an individual’s subtle features from training data. The process involves feeding many instances of a particular face to the network and labeling them all as the same person. You don’t have to tell the computer in advance which exact mixture of attributes — green eyes, wide-set eyebrows, black hair — somehow adds up to your own face rather than another person’s. Instead, with enough properly labeled examples, the network learns the distinction itself.

MaSIF does the same thing for proteins. Previous approaches to interaction fingerprinting were like the basic face detection algorithms. They required researchers to define certain geometric patterns in advance — say, a bumpy patch on the surface of a protein with a specific shape and size — and then search for matches. MaSIF, by contrast, starts with a handful of basic surface features known to be associated with protein interactions: for instance, the surface’s physical curvature (into a knob or pocket), its electrical charge, and whether it repels or attracts water. Then, during training, the network learns how to combine these features into fingerprints that detect different higher-level patterns.

Until recently, this kind of machine learning couldn’t be used on the curved, irregular surfaces of proteins. The rise of geometric deep learning opened up the possibility. Correia credits Bronstein with bringing the method to his attention during a two-week collaboration at Bronstein’s home in February 2018. “It was totally him,” said Correia, who’s based at the École Polytechnique Fédérale de Lausanne. “Our handcrafted descriptors were going nowhere.”

One version of the system, called MaSIF-site, can examine the whole surface of a protein and predict where another protein is most likely to bind, an approach similar to painting a target on a curved canvas. “It’s what we like to call the one-body problem,” Correia said. “You can think about this as a way to understand where the functional sites on a particular protein are.” MaSIF-site performed roughly 25% better at this task than two leading site-interaction predictors.

Another version of the system, called MaSIF-search, tackles what Correia calls the many-to-many problem: Instead of predicting how one protein will fit together with one target molecule (as typically happens in docking simulations), the system compares the interaction fingerprints of many proteins to many others, looking for fits. (“In a cell you have 10,000 proteins, and many of them are bumping into each other all the time,” explained Correia.) On this task, MaSIF didn’t outperform a leading molecular-docking predictor; it found roughly half as many potential fits within a random set of 100 proteins. But the docking predictor needed nearly 100 days’ worth of computing time to perform its search. MaSIF took four minutes.

That massive speedup “opens interesting possibilities” for basic research, said Bronstein. After all, in the human body, proteins form functional networks comprising tens of thousands of interactions. “Constructing these graphs takes a lot of time,” Bronstein said. “With methods [like MaSIF], it may only be an approximation, but it allows you to at least build some rough version of these protein-to-protein networks for any organism.”

AlQuraishi noted that while MaSIF’s skin-deep approach to predicting protein interactions made sense, it wasn’t able to capture a phenomenon called induced fit: the way molecular surfaces change shape (and chemistry) when they get close to each other. In other words, the surfaces of two proteins may not exhibit complementary fingerprints until they’re already almost touching — a factor MaSIF will miss, since induced fit depends on the structure beneath a protein’s surface. “What evolution is probably optimizing for is precisely this induced fit,” said AlQuraishi. “What’s surprising about [MaSIF] is that even with this caveat, it still works pretty well.”

Incorporating induced fit and other surface dynamics into MaSIF is something Correia plans to explore. “To me it’s the last frontier of understanding function,” he said. “That’s probably how I’m going to be spending my next 10 years.” But at the moment he has other pressing business: using MaSIF to scan the spike-shaped proteins that stud the surface of SARS-CoV-2, the virus that causes COVID-19. “We are trying to see what fingerprints are in that virus,” he said. “It does seem like the virus has some places where we could try to attack it, besides the ones that we already knew.” Correia is already using this information about SARS-CoV-2 to synthesize antiviral proteins from scratch; he hopes to publish results this year. “If we could design new proteins based on the surface fingerprints of the viral protein in order to inhibit the way the virus invades host cells, that would be pretty exciting,” he said. “That’s what gets me out of bed.”

As the name suggests, an invariant is an attribute that doesn’t vary as you change an object’s inessential features (where “inessential” means whatever you need it to in a particular context). An invariant is a distillation of some innate quality of the object, often in the form of a single number.

To take an example from topology, imagine covering a ball with stretchy netting that partitions the surface into shapes such as triangles and rectangles. The number of shapes will, of course, depend on the netting you use, as will the numbers of edges and corners. But mathematicians figured out centuries ago that a certain combination of these three numbers always comes out the same: the number of shapes plus the number of corners minus the number of edges.

If, for example, your netting partitions the sphere into a puffed-out tetrahedron (with four triangles, four corners and six edges), this number works out to 4 + 4 − 6 = 2. If your netting instead forms the pattern of a soccer ball (with a total of 32 hexagons and pentagons, 60 corners, and 90 edges), you again get 32 + 60 − 90 = 2. In some sense, the number 2 is an intrinsic feature of sphere-ness. This number (called the sphere’s Euler characteristic) doesn’t change if you stretch or distort the sphere, so it is what mathematicians call a topological invariant.

If you wrap a netting around a doughnut surface instead, you always get an Euler characteristic of 0. On a two-holed doughnut, you get −2. The Euler characteristic for surfaces belongs to a series of invariants that allow mathematicians to explore shapes in higher dimensions as well. It can help topologists distinguish between two shapes that are hard to visualize, since if they have different Euler characteristics, they cannot be the same topological shape.
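The arithmetic above is simple enough to check mechanically. Here is a minimal Python sketch (the function name is illustrative, not from the text):

```python
def euler_characteristic(shapes, corners, edges):
    """Euler characteristic: shapes (faces) + corners (vertices) - edges."""
    return shapes + corners - edges

# A netting that forms a puffed-out tetrahedron on a sphere:
# 4 triangles, 4 corners, 6 edges.
print(euler_characteristic(4, 4, 6))     # 2
# A soccer-ball netting: 32 hexagons and pentagons, 60 corners, 90 edges.
print(euler_characteristic(32, 60, 90))  # 2
# A doughnut (torus) wrapped in a 3-by-3 grid of quadrilaterals:
# 9 shapes, 9 corners, 18 edges.
print(euler_characteristic(9, 9, 18))    # 0
```

Whatever netting you choose, the sphere gives 2 and the doughnut gives 0 — which is exactly what makes the number an invariant.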

Invariants are also used to study the 15-puzzle, a classic toy consisting of square tiles numbered 1 through 15 that you slide around in a 4-by-4 grid. The goal is to put a mixed-up arrangement of tiles in numerical order from left to right, starting from the top row. If you’d like to know whether a particular arrangement is solvable, there’s an invariant that gives you the answer. It outputs either “even” or “odd” depending on the sum of two numbers: the number of slides required to carry the blank square to the bottom right corner and the number of tile pairings that are in reverse numerical order (with the blank square representing tile 16).

Whenever you slide a tile into the empty square, both these numbers switch parity (evenness or oddness). So the parity of their sum never changes, meaning that it is an invariant of the sliding process. For the solved configuration this invariant is even, since both numbers are zero. So any configuration with an odd invariant is utterly hopeless.
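That parity argument translates directly into a solvability test. The sketch below assumes a flat, row-by-row list of 16 entries with the blank written as tile 16; the function name `is_solvable` is illustrative:

```python
def is_solvable(grid):
    """15-puzzle solvability via the parity invariant described above.

    `grid` is a flat list of 16 entries, row by row, with 16 for the blank.
    """
    blank = grid.index(16)
    row, col = divmod(blank, 4)
    # Minimum slides to carry the blank to the bottom-right corner
    # (its taxicab distance from that corner).
    slides = (3 - row) + (3 - col)
    # Pairs of tiles in reverse numerical order (inversions),
    # counting the blank as tile 16.
    inversions = sum(1 for i in range(16) for j in range(i + 1, 16)
                     if grid[i] > grid[j])
    return (slides + inversions) % 2 == 0

solved = list(range(1, 17))
print(is_solvable(solved))  # True: both numbers are zero
# Swapping just tiles 14 and 15 creates one inversion, flipping the parity.
swapped = solved[:]
swapped[13], swapped[14] = swapped[14], swapped[13]
print(is_solvable(swapped))  # False: utterly hopeless
```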

When it comes to knot theory, distinguishing between knots is a tricky business, since you can make a knot unrecognizable just by moving the strands of the loop around (mathematicians think of knots as occurring in closed loops rather than open strings, so they can’t be undone). Here, invariants are indispensable, and mathematicians have come up with dozens that distill different features of knots. But these invariants tend to have blind spots.

Take, for example, an invariant called tricolorability. A knot diagram is tricolorable if there’s a way to color its strands red, blue and green, using at least two of the colors, so that at every crossing, the three strands that meet are either all the same color or all different colors. Mathematicians have shown that even when you move the strands of a knot around, its tricolorability (or lack thereof) is unchanged. In other words, tricolorability is an innate feature of a knot.

The three-crossing knot known as the trefoil is tricolorable. But the “unknot” (a loop that has no actual knots, even if it appears tangled) is not tricolorable, providing an instant proof that the trefoil is not just the unknot in disguise. But while tricolorability enables us to distinguish some knots from the unknot, it’s not a perfect tool for this purpose: Knots that are tricolorable are definitely knotted, but knots that aren’t tricolorable aren’t definitely unknotted. For instance, the figure-eight knot is not tricolorable, but it is genuinely knotted. This knot falls into tricolorability’s blind spot — it’s as if the invariant is saying, “The figure-eight knot is unknotted as far as I can tell.”
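With only finitely many colorings to try, tricolorability can be checked by brute force. In the sketch below, a diagram is encoded as a number of strands (arcs) plus, for each crossing, the three arc indices that meet there; this encoding and the function name are illustrative assumptions, not notation from the text:

```python
from itertools import product

def tricolorable(num_arcs, crossings):
    """Brute-force tricolorability check for a knot diagram.

    `crossings` lists, for each crossing, the indices of the three
    strands that meet there.
    """
    for coloring in product(range(3), repeat=num_arcs):
        if len(set(coloring)) < 2:
            continue  # a valid tricoloring must use at least two colors
        # At every crossing the three strands must be all the same color
        # (1 distinct color) or all different (3) -- never exactly 2.
        if all(len({coloring[a], coloring[b], coloring[c]}) != 2
               for a, b, c in crossings):
            return True
    return False

# Trefoil: three strands, three crossings, all three strands meeting at each.
print(tricolorable(3, [(0, 1, 2), (1, 2, 0), (2, 0, 1)]))  # True
# An unknot drawn with a single kink: two arcs, one crossing where
# the loop crosses over itself.
print(tricolorable(2, [(0, 0, 1)]))  # False
```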

The Conway knot, an 11-crossing knot discovered by John Horton Conway more than 50 years ago, is extraordinarily skilled at fooling knot invariants — especially the ones designed to detect the quality Piccirillo was interested in, called sliceness. Sliceness means that the knot is a slice of some smooth but knotted sphere in four-dimensional space.

“Every time there’s a new invariant, people look to see what happens on the Conway knot,” said Shelly Harvey of Rice University. So far, the Conway knot has fallen in the blind spot of every invariant mathematicians have come up with to study sliceness.

When Piccirillo finally succeeded in showing that the Conway knot is not “slice,” she did so not by devising a new invariant but by finding a clever way to leverage an existing one called Rasmussen’s *s*-invariant. The Conway knot fools this invariant along with all the others. But in her paper, Piccirillo came up with a different knot that she could prove to have the same slice status as the Conway knot. For this new knot, Rasmussen’s *s*-invariant proves that it is not slice. Therefore, the Conway knot cannot be slice either.

Rasmussen’s *s*-invariant is one of a collection of physics-related knot invariants discovered in recent decades. It has taken mathematicians a while to absorb what these invariants have to offer, said Elisenda Grigsby of Boston College.

Piccirillo is part of a "new guard of low-dimensional topologists" who, as Grigsby put it, have grown up knowing these modern invariants "in their bones." "To me, that's what's exciting about this paper."

Mathematicians are often in the same situation as da Vinci: They have big dreams, but mathematical knowledge may not be advanced enough to fulfill them.

Depending on who you ask, for example, present-day mathematicians have nearly as much chance of solving the Riemann hypothesis — the most famous unsolved problem in math — as da Vinci had of building a machine that could actually fly.

“As of yet there’s not been a proposed strategy for handling the Riemann hypothesis that’s even semi-plausible,” said Jacob Tsimerman of the University of Toronto.

But while it may have been obvious in da Vinci’s time that a functional version of the aerial screw would have to wait, often in math it’s not clear what’s possible and what’s not.

Sometimes a problem can seem hopeless, only for a mathematician to realize that the ingredients of a solution have been hiding in plain sight. This is what happened with Vesselin Dimitrov’s recent proof of a problem called the Schinzel-Zassenhaus conjecture, which *Quanta* covered in our article “Mathematician Measures the Repulsive Force Within Polynomials.”

Mathematicians had long failed to prove the conjecture, and many believed that it would take a new mathematical invention to get there. But Dimitrov cracked the problem by finding a novel way of combining techniques that have been around for more than 40 years.

“Mathematicians are sometimes too quick to dismiss the possibility that we can solve something,” Tsimerman said. “Math is really hard, and people sometimes overlook things.”

So how do mathematicians know if a problem is currently impossible or just really hard? There's no sure way to tell, so they have to rely on clues. And the biggest hint that a problem is out of reach is simply that lots of people have failed to solve it.

Another way to tell is to see whether a problem resembles another. If mathematicians have solved one problem, it boosts their confidence that they can solve another that looks kind of like it.

“Some problems are naturally linked to one another, and you have techniques that can pass between them,” said James Maynard of the University of Oxford. If you’ve figured out how to build a table, you might reasonably suspect you can build a chair.

But some problems look entirely unlike any solved problems. For example, two of the biggest open problems in the field of number theory are the twin primes conjecture and the Goldbach conjecture. They look a lot like each other, but they’re also distinct from anything else mathematicians have managed to prove.

Maynard thinks of them as a pair of islands — a remote archipelago. Their distance from the shores of mathematical knowledge implies that it’s going to take a big discovery to get there.

“You need a much more developed idea to cross an ocean,” he said.

But the resemblance between Goldbach and twin primes suggests they might both yield to the same idea. “It’s my belief that we might solve both at the same time, even if they seem to be quite far from any island I know how to reach with my math techniques,” Maynard said.

Sometimes, mathematicians have learned enough to know what they don’t know. The twin primes and Goldbach conjectures are both questions about prime numbers. Currently, mathematicians lack an all-purpose method for determining whether a whole number has an odd or even number of prime factors. If they can’t distinguish between numbers with an even number of prime factors and ones with an odd number, then they can’t reliably identify the primes themselves — because all primes have an odd number of prime factors. This is known as the parity problem.
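The parity in question is easy to compute for any single number, even though mathematicians lack an all-purpose method for reasoning about it in bulk. A minimal illustration (the function names here are our own):

```python
def big_omega(n):
    """Count the prime factors of n, with multiplicity (e.g. 12 = 2*2*3 has 3)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1  # whatever remains is itself prime
    return count

def liouville(n):
    """The Liouville function: +1 if n has an even number of prime
    factors, -1 if odd. This is the parity the text describes."""
    return (-1) ** big_omega(n)

# Every prime has exactly one prime factor -- an odd count -- so the
# parity function flags each of them with -1:
print([liouville(p) for p in (2, 3, 5, 7, 11)])  # [-1, -1, -1, -1, -1]
```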

“You can be wrong on these things, but I really view the parity problem as the main obstruction to Goldbach or twin primes. If you can do that, I think you’re well on your way to solving” both problems, Maynard said.

Other times, though, it’s not even clear what it would take to solve a problem — it’s only evident mathematicians can’t do it. The Riemann hypothesis is like this. It’s a problem about the distribution of prime numbers, and it’s entirely mysterious.

“It’s hard for me to speculate on how the Riemann hypothesis will be solved, but I think it’s important to acknowledge that we don’t know,” said Curtis McMullen of Harvard University.

When problems are so far off the map that mathematicians can’t even imagine how to reach them, the challenge is more than coming up with a better boat — it’s coming up with a better map. If you don’t know where an island is located, no amount of ingenuity will get you there. But once you’ve located it, you might find a surprising route that will bring you to its shores.

This was the case with the most celebrated mathematical result of the 21st century — Grigori Perelman’s 2003 proof of the Poincaré conjecture, a problem about determining when a three-dimensional shape is equivalent to the three-dimensional sphere. The problem had stymied mathematicians for a century. Then in the early 1980s, William Thurston placed the Poincaré conjecture in a broader theoretical landscape — and from there, mathematicians began to discover new ways to approach it.

“I think one of the reasons we were stonewalled was not because we didn’t have the right techniques, but because the problem wasn’t put in the right conceptual framework,” McMullen said. “The changed question suggested the changed techniques.”

In other words, if a new map reveals a surprising sea route to your destination, it might occur to you to build a ship.

Yet there’s also no guarantee that a problem can be solved at all. For example, a certain conjecture suggests that the digits of pi are uniformly distributed, so the numbers 0-9 each appear with the same frequency overall. Experimental computations back up the conjecture, but mathematicians have no idea how to prove it — and they may never.
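Those experimental computations are easy to reproduce on a small scale. Here is a sketch that generates digits of pi with Gibbons' unbounded spigot algorithm and tallies them — a thousand digits is of course evidence, not proof:

```python
from collections import Counter
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields the decimal
    digits of pi one at a time, using only integer arithmetic."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n
            q, r, n = 10*q, 10*(r - n*t), 10*(3*q + r)//t - 10*n
        else:
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l)//(t*l), l + 2)

digits = list(islice(pi_digits(), 1000))
print(digits[:10])      # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(Counter(digits))  # each digit shows up roughly 100 times
```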

“There’s a very high probability that the conjecture is true, but its truth might be an accident that’s very hard to access by pure logic,” McMullen said.

Unfortunately for mathematicians, many conjectures concerning basic mathematical phenomena — including the fundamental behavior of prime numbers — may not be resolvable.

“There’s this vast collection of problems that are just true because they probably should be, and we may never know the answers to them because the phenomena we’re seeing don’t have a logical explanation,” McMullen said. “It’s almost like a secret from the public, that we can easily write down hundreds of mathematical problems that will almost certainly never be solved in the next thousand years.”

Given all this uncertainty, mathematicians work to develop a sense for what kinds of problems they have a chance of solving using the centuries of techniques available to them.

“It’s important for you to have a highly developed intuition about how these ideas fit together and how you can use some combination of existing techniques,” Maynard said.

One counterintuitive way Maynard does this is by setting aside time to remind himself why existing techniques haven’t worked against math’s biggest open problems.

“I often spend Friday afternoons just thinking about trying to directly attack some famous problem,” he said. “This is much less because I think there’s a realistic way of solving the problem, but more because I think it’s important for me to understand where plausible techniques fail.”

Of course, even the most carefully developed intuition about what’s possible in mathematics will miss things — maybe many things. The best evidence for this is that it’s not uncommon to have proofs like Dimitrov’s that unexpectedly settle hard questions using older mathematical tools.

And to mathematicians, this is sometimes the best kind of result of all. After all, it was surely an achievement in the early 20th century when human beings finally figured out how to build a helicopter. But imagine how poetic it would have been if the technology for constructing such a machine had been available to da Vinci all along.

“Sometimes this can be more exciting, because techniques the mathematical community understood pretty well end up being maybe more powerful than was appreciated,” Maynard said.

For decades, black holes have played the headlining role in the thought experiments that physicists use to probe nature’s extremes. These invisible spheres form when matter becomes so concentrated that everything within a certain distance, even light, gets trapped by its gravity. Albert Einstein equated the force of gravity with curves in the space-time continuum, but the curvature grows so extreme near a black hole’s center that Einstein’s equations break down. Thus generations of physicists have looked to black holes for clues about the true, quantum origin of gravity, which must fully reveal itself in their hearts and match Einstein’s approximate picture everywhere else.

Plumbing black holes for knowledge of quantum gravity originated with Stephen Hawking. In 1974, the British physicist calculated that quantum jitter at the surfaces of black holes causes them to evaporate, slowly shrinking as they radiate heat. Black hole evaporation has informed quantum gravity research ever since.

More recently, physicists have considered the extreme of the extreme — entities called extremal black holes — and found a fruitful new problem.

Black holes become electrically charged when charged stuff falls into them. Physicists calculate that black holes have an “extremal limit,” a saturation point where they store as much electric charge as possible for their size. When a charged black hole evaporates and shrinks in the manner described by Hawking, it will eventually reach this extremal limit. It’s then as small as it can get, given how charged it is. It can’t evaporate further.

But the idea that an extremal black hole “stops radiating and just sits there” is implausible, said Grant Remmen, a physicist at the University of California, Berkeley. In that case, the universe of the far future will be littered with tiny, indestructible black hole remnants — the remains of any black holes that carry even a touch of charge, since they’ll all become extremal after evaporating enough. There’s no fundamental principle protecting these black holes, so physicists don’t think they should last forever.

So “there is a question,” said Sera Cremonini of Lehigh University: “What happens to all these extremal black holes?”

Physicists strongly suspect that extremal black holes must decay, resolving the paradox, but by some other route than Hawking evaporation. Investigating the possibilities has led researchers in recent years to major clues about quantum gravity.

Four physicists realized in 2006 that if extremal black holes can decay, this implies that gravity must be the weakest force in any possible universe, a powerful statement about quantum gravity’s relationship to the other quantum forces. This conclusion brought greater scrutiny to extremal black holes’ fates.

Then, two years ago, Remmen and collaborators Clifford Cheung and Junyu Liu of the California Institute of Technology discovered that whether extremal black holes can decay depends directly on another key property of black holes: their entropy — a measure of how many different ways an object’s constituent parts can be rearranged. Entropy is one of the most studied features of black holes, but it wasn’t thought to have anything to do with their extremal limit. “It’s like, wow, OK, two very cool things are connected,” Cheung said.

In the latest surprise, that link turns out to exemplify a general fact about nature. In a paper published in March in *Physical Review Letters*, Garrett Goon and Riccardo Penco broadened the lessons of the earlier work by proving a simple, universal formula relating energy and entropy. The newfound formula applies to a system such as a gas as well as a black hole.

With the recent calculations, “you really are learning about quantum gravity,” Goon said. “But maybe even more interesting, you’re learning something about more everyday stuff.”

It’s easy for physicists to see that charged black holes reach an extremal limit. When they combine Einstein’s gravity equations and the equations of electromagnetism, they calculate that a black hole’s charge, *Q*, can never surpass its mass, *M*, when both are converted into the same fundamental units. Together, the black hole’s mass and charge determine its size — the radius of the event horizon. Meanwhile, the black hole’s charge also creates a second, “inner” horizon, hidden behind the event horizon. As *Q* increases, the black hole’s inner horizon expands while the event horizon contracts until, at *Q* = *M*, the two horizons coincide.

If *Q* increased further, the radius of the event horizon would become a complex number (involving the square root of a negative number), rather than a real one. This is unphysical. So, according to a simple mashup of James Clerk Maxwell’s 19th-century theory of electromagnetism and Einsteinian gravity, *Q* = *M* must be the limit.
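In geometric units (G = c = 1), this is the standard Reissner-Nordström result: the two horizon radii are M ± √(M² − Q²), which coincide at Q = M and turn complex beyond it. A minimal numerical sketch:

```python
import math

def horizons(M, Q):
    """Event (outer) and inner horizon radii of a charged black hole
    in geometric units (G = c = 1), from the Reissner-Nordstrom metric:
    r = M +/- sqrt(M^2 - Q^2)."""
    disc = M*M - Q*Q
    if disc < 0:
        # Q > M: the square root goes imaginary -- no real horizons,
        # which is the unphysical regime described in the text.
        raise ValueError("Q > M: no real horizons in Einstein-Maxwell")
    root = math.sqrt(disc)
    return M + root, M - root  # (event horizon, inner horizon)

print(horizons(1.0, 0.0))  # (2.0, 0.0): uncharged, Schwarzschild radius 2M
print(horizons(1.0, 0.6))  # charged: the two horizons approach each other
print(horizons(1.0, 1.0))  # (1.0, 1.0): extremal Q = M, horizons coincide
```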

When a black hole hits this point, a simple option for further decay would be to split into two smaller black holes. Yet in order for such splitting to happen, the laws of conservation of energy and conservation of charge require that one of the daughter objects must end up with more charge than mass. This, according to Einstein-Maxwell, is impossible.

But there might be a way for extremal black holes to split in two after all, as Nima Arkani-Hamed, Lubos Motl, Alberto Nicolis and Cumrun Vafa pointed out in 2006. They noted that the combined equations of Einstein and Maxwell don’t work well for small, strongly curved black holes. At smaller scales, additional details related to the quantum mechanical properties of gravity become more important. These details contribute corrections to the Einstein-Maxwell equations, changing the prediction of the extremal limit. The four physicists showed that the smaller the black hole, the more important the corrections become, causing the extremal limit to move farther and farther away from *Q* = *M*.

The researchers also pointed out that if the corrections have the right sign — positive rather than negative — then small black holes can pack more charge than mass. For them, *Q* > *M*, which is exactly what’s needed for big extremal black holes to decay.

If this is the case, then not only can black holes decay, but Arkani-Hamed, Motl, Nicolis and Vafa showed that another fact about nature also follows: Gravity must be the weakest force. An object’s charge, *Q*, is its sensitivity to any force other than gravity. Its mass, *M*, is its sensitivity to gravity. So *Q* > *M* means gravity is the weaker of the two.

From their assumption that black holes ought to be able to decay, the four physicists made a more sweeping conjecture that gravity must be the weakest force in any viable universe. In other words, objects with *Q* > *M* will always exist, for any kind of charge *Q*, whether the objects are particles like electrons (which, indeed, have far more electric charge than mass) or small black holes.

This “weak gravity conjecture” has become hugely influential, lending support to a number of other ideas about quantum gravity. But Arkani-Hamed, Motl, Nicolis and Vafa didn’t prove that *Q* > *M*, or that extremal black holes can decay. The quantum gravity corrections to the extremal limit might be negative, in which case small black holes can carry even less charge per unit mass than large ones. Extremal black holes wouldn’t decay, and the weak gravity conjecture wouldn’t hold.

This all meant that researchers needed to figure out what the sign of the quantum gravity corrections actually is.

The issue of quantum gravity corrections has come up before, in another, seemingly unrelated line of black hole study.

Almost 50 years ago, the late physicists Jacob Bekenstein and Stephen Hawking independently discovered that a black hole’s entropy is directly proportional to its surface area. Entropy, commonly thought of as a measure of disorder, counts the number of ways an object’s internal parts can be rearranged without any change to its overall state. (If a room is messy, or high entropy, for instance, you can move objects around at random and it will stay messy; by contrast, if a room is tidy, or low entropy, moving things around will make it less tidy.) By building a bridge between a black hole’s entropy, which concerns its inner microscopic ingredients, and its geometric surface area, Bekenstein and Hawking’s entropy-area law has become one of physicists’ strongest footholds for studying black holes and quantum gravity.
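Plugging in numbers shows just how large this entropy is. A sketch of the Bekenstein-Hawking formula, S = k_B A / (4 l_p²), in SI units, applied to a black hole of one solar mass (constant values rounded to four figures):

```python
import math

# Physical constants, SI units
G    = 6.674e-11   # gravitational constant
c    = 2.998e8     # speed of light
hbar = 1.055e-34   # reduced Planck constant
k_B  = 1.381e-23   # Boltzmann constant

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = k_B * A / (4 * l_p^2), where A is
    the horizon area and l_p^2 = G*hbar/c^3 is the Planck length squared."""
    r_s  = 2 * G * mass_kg / c**2   # Schwarzschild radius
    area = 4 * math.pi * r_s**2     # horizon surface area
    l_p2 = G * hbar / c**3          # Planck length squared
    return k_B * area / (4 * l_p2)

# A solar-mass black hole: about 1.4e54 J/K, i.e. roughly 1e77 in units
# of Boltzmann's constant -- vastly more entropy than the sun itself.
print(bh_entropy(1.989e30))
```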

Bekenstein and Hawking deduced their law by applying Einstein’s gravity equations (together with the laws of thermodynamics) to the black hole’s surface. They treated this surface as smooth and ignored any structure that exists over short distances.

In 1993, the physicist Robert Wald of the University of Chicago showed that it’s possible to do better. Wald found clever tricks for inferring the small effects that emanate from more microscopic levels of reality, without knowing the complete description of that deeper level. His tactic, pioneered in a different context by the condensed matter physicist Kenneth Wilson, was to write down every possible physical effect. To Einstein’s equations, Wald showed how to add a series of extra terms — any terms that have the right dimensions and units, constructed of all physically relevant variables — that might describe the unknown short-distance properties of a black hole’s surface. “You can write down the most general set of terms that you could have in principle that describe curvatures of a certain size,” said Cremonini.

Fortunately, the series can be truncated after the first several terms, since increasingly complicated composites of many variables contribute little to the final answer. Even many of the leading terms in the series can be crossed out because they have the wrong symmetries or violate consistency conditions. This leaves just a few terms of any significance that modify Einstein’s gravity equations. Solving these new, more complicated equations yields more exact black hole properties.

Wald went through these steps in 1993, calculating how short-distance quantum gravitational effects correct the Bekenstein-Hawking entropy-area law. These corrections shift a black hole’s entropy so that it’s not exactly proportional to area. And while it’s not possible to calculate the entropy shift outright — variables with unknown values are involved — what’s clear is that the corrections grow more significant the smaller the black hole, and therefore so does the entropy shift.

Three years ago, Cheung, Liu and Remmen applied Wald’s same basic approach to the study of charged black holes and the extremal limit. They modified the Einstein-Maxwell equations with a series of extra terms coming from short-distance effects, and they solved the new equations to calculate the new, corrected extremal limit. To their surprise, they recognized the answer: The corrections to the extremal limit of a charged black hole exactly match the corrections to its entropy, as calculated from Wald’s formula; quantum gravity unexpectedly shifts both quantities in the same way.

Remmen remembers the date when they completed the calculation — November 30, 2017 — “because it was that exciting,” he said. “That’s a very deep and exciting thing that we proved, that these terms give a shift in entropy and extremality that are equal to each other.”

But do the matching shifts go in the right direction? Both corrections depend on undetermined variables, so they could in principle be either positive or negative. In their 2018 paper, Cheung and company calculated that the entropy shift is positive in a large class of scenarios and models of quantum gravity. They argue that it also makes intuitive sense that the entropy shift should be positive. Recall that entropy measures all the different possible internal states of a black hole. It seems reasonable that accounting for more microscopic details of a black hole’s surface would reveal new possible states and thus lead to more entropy rather than less. “The truer theory will have more microstates,” Remmen said.

If so, then the shift in the extremal limit is also positive, allowing smaller black holes to store more charge per mass. In that case, “black holes can always decay to lighter ones,” Cheung said, and “the weak gravity conjecture is true.”

But other researchers stress that these findings do not constitute an outright proof of the weak gravity conjecture. Gary Shiu, a theoretical physicist at the University of Wisconsin, Madison, said the belief that entropy should always increase when you take quantum gravity into account is “an intuition that some might have, but it’s not always true.”

Shiu has identified counterexamples: unrealistic models of quantum gravity in which, through cancellations, short-distance effects decrease black holes’ entropy. These models violate causality or other principles, but the point, according to Shiu, is that the newfound connection to entropy doesn’t prove all by itself that extremal black holes can always decay, or that gravity is always the weakest force.

“To be able to prove would be fantastic,” Shiu said. “That’s a lot of why we’re still thinking about this problem.”

Gravity is the weakest of the four fundamental forces in our universe. The weak gravity conjecture says it couldn’t have been otherwise. Aside from our universe, the conjecture also appears to hold in all possible theoretical universes derived from string theory. A candidate for the quantum theory of gravity, string theory posits that particles aren’t points but rather extended objects (nicknamed strings), and that space-time, close-up, also has extra dimensions. When string theorists write down different sets of strings that might define a universe, they invariably find that gravity — which arises from a type of string — is the weakest force in these model universes. “Seeing how this ends up panning out in case after case after case after case is very striking,” said Jorge Santos, a physicist at the Institute for Advanced Study in Princeton, New Jersey, and the University of Cambridge.

The weak gravity conjecture is one of the most important in a network of “swampland conjectures” posed by physicists in the last two decades — speculative statements, based on thought experiments and examples, about what kinds of universes are and are not possible. By ruling out possibilities (putting impossible universes in a no-go “swampland”), swampland theorists aim to clarify why our universe is the way it is.

If researchers could prove that gravity is inevitably weakest (and that black holes can always decay), the most important implication, according to Santos, is that it means quantum gravity “has to be a theory of unification.” That is, if *Q* and *M* must have a fixed ratio, their associated forces must be part of the same unified mathematical framework. Santos noted that “the only theory out there” that unifies the fundamental forces in a single framework is string theory. Rival approaches such as loop quantum gravity attempt to quantize gravity by dividing space-time into pieces, without connecting gravity with the other forces. “If the weak gravity conjecture is correct, things like loop quantum gravity are dead,” said Santos.

Jorge Pullin, a loop quantum gravity theorist at Louisiana State University, sees “dead” as far too strong a word. The approach could itself be part of a larger unified theory, he said: “Loop quantum gravity doesn’t rule out a unification structure, but we haven’t pursued it yet.”

The weak gravity conjecture also mutually reinforces several other swampland conjectures, including statements about the roles of symmetry and distance in quantum gravity. According to Shiu, the logical connection between these conjectures “gives us some confidence that even though these statements are made on a conjectural basis, there may be universal truth behind them.”

Shiu compared our current, conjectural understanding of quantum gravity to the early days of quantum mechanics. “There were a lot of conjectures, a lot of leaps of faith about what is the right theory of the subatomic world,” he said. “Eventually many of these guesses turned out to be part of this much bigger picture.”

The recent research might have implications beyond black holes and quantum gravity.

In their March paper, Goon and Penco redid the calculation of the black hole entropy and extremality corrections. Rather than using the language of gravity and black hole surface geometry, they calculated the corrections purely in terms of universal thermodynamic quantities like energy and temperature. This allowed them to discover a thermodynamic relation between energy and entropy that applies generally in nature.

“It’s a beautiful relation,” said Santos.

In the case of black holes, the duo’s formula says what Cheung, Remmen and Liu already proved: that quantum gravity shifts the extremal limit of black holes (allowing them to store more charge per mass), and it shifts their entropy by a proportional amount. Another way of describing the extra storage capacity coming from quantum gravity is that a black hole of fixed charge can have less mass. Mass is a form of energy, and so this drop in mass can be thought of more generally as a shift in energy — one that is proportional, with opposite sign, to the shift in entropy.

Whereas for a black hole, the equal and opposite shifts in energy and entropy come from unknown details of quantum gravity, an equivalent situation exists for any physical system near its extremal limit.
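Schematically — and only as a paraphrase of the relation described here, not the paper's exact statement — the leading corrections are tied together as

$$\Delta E \approx -T\,\Delta S,$$

where $T$ is the system's temperature and the two shifts are the corrections to energy and entropy induced by the microscopic details: whatever raises the entropy lowers the energy by a proportional amount, which for a black hole of fixed charge is exactly the drop in mass described above.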

A gas, for instance, becomes extremal when cooled to absolute zero. Goon and Penco’s thermodynamic formula says that any changes to the microscopic physics of the gas, such as the type of atoms that comprise it, produce equal and opposite shifts in its energy and entropy. Goon speculated that the relation between energy and entropy might be useful in studies of ultracold gases and other cryogenic experiments, “because sometimes one is easier to calculate than the other.”

Whether this entropy-energy relation ever proves useful in earthly domains of physics, researchers still have plenty more work to do to explore the newfound link in the context of black holes and what it means for the nature of gravity.

“Being able to answer, ‘Why is gravity weak?’” Cheung said. “The fact that that question is even on the board, the fact that that’s a question that one can legitimately answer outside the realm of philosophy, and the fact that it’s connected through this long path to entropy, which is like the tried-and-true, most fascinating thing about black holes, … seems crazy.”
