For decades, black holes have played the headlining role in the thought experiments that physicists use to probe nature’s extremes. These invisible spheres form when matter becomes so concentrated that everything within a certain distance, even light, gets trapped by gravity. Albert Einstein equated the force of gravity with curves in the space-time continuum, but the curvature grows so extreme near a black hole’s center that Einstein’s equations break down. Thus generations of physicists have looked to black holes for clues about the true, quantum origin of gravity, which must fully reveal itself in their hearts and match Einstein’s approximate picture everywhere else.

Plumbing black holes for knowledge of quantum gravity originated with Stephen Hawking. In 1974, the British physicist calculated that quantum jitters at the surfaces of black holes cause them to evaporate, slowly shrinking as they radiate heat. Black hole evaporation has informed quantum gravity research ever since.

More recently, physicists have considered the extreme of the extreme — entities called extremal black holes — and found a fruitful new problem.

Black holes become electrically charged when charged stuff falls into them. Physicists calculate that black holes have an “extremal limit,” a saturation point where they store as much electric charge as possible for their size. When a charged black hole evaporates and shrinks in the manner described by Hawking, it will eventually reach this extremal limit. It’s then as small as it can get, given how charged it is. It can’t evaporate further.

But the idea that an extremal black hole “stops radiating and just sits there” is implausible, said Grant Remmen, a physicist at the University of California, Berkeley. In that case, the universe of the far future will be littered with tiny, indestructible black hole remnants — the remains of any black holes that carry even a touch of charge, since they’ll all become extremal after evaporating enough. There’s no fundamental principle protecting these black holes, so physicists don’t think they should last forever.

So “there is a question,” said Sera Cremonini of Lehigh University: “What happens to all these extremal black holes?”

Physicists strongly suspect that extremal black holes must decay, resolving the paradox, but by some other route than Hawking evaporation. Investigating the possibilities has led researchers in recent years to major clues about quantum gravity.

Four physicists realized in 2006 that if extremal black holes can decay, this implies that gravity must be the weakest force in any possible universe, a powerful statement about quantum gravity’s relationship to the other quantum forces. This conclusion brought greater scrutiny to extremal black holes’ fates.

Then, two years ago, Remmen and collaborators Clifford Cheung and Junyu Liu of the California Institute of Technology discovered that whether extremal black holes can decay depends directly on another key property of black holes: their entropy — a measure of how many different ways an object’s constituent parts can be rearranged. Entropy is one of the most studied features of black holes, but it wasn’t thought to have anything to do with their extremal limit. “It’s like, wow, OK, two very cool things are connected,” Cheung said.

In the latest surprise, that link turns out to exemplify a general fact about nature. In a paper published in March in *Physical Review Letters*, Garrett Goon and Riccardo Penco broadened the lessons of the earlier work by proving a simple, universal formula relating energy and entropy. The newfound formula applies to a system as ordinary as a gas just as well as to a black hole.

With the recent calculations, “you really are learning about quantum gravity,” Goon said. “But maybe even more interesting, you’re learning something about more everyday stuff.”

Physicists see very easily that charged black holes reach an extremal limit. When they combine Einstein’s gravity equations and the equations of electromagnetism, they calculate that a black hole’s charge, *Q*, can never surpass its mass, *M*, when both are converted into the same fundamental units. Together, the black hole’s mass and charge determine its size — the radius of the event horizon. Meanwhile, the black hole’s charge also creates a second, “inner” horizon, hidden behind the event horizon. As *Q* increases, the black hole’s inner horizon expands while the event horizon contracts until, at *Q* = *M*, the two horizons coincide.

If *Q* increased further, the radius of the event horizon would become a complex number (involving the square root of a negative number), rather than a real one. This is unphysical. So, according to a simple mashup of James Clerk Maxwell’s 19th-century theory of electromagnetism and Einsteinian gravity, *Q* = *M* must be the limit.
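Concretely, the two horizons come from the Reissner-Nordström solution of the combined Einstein-Maxwell equations: in geometrized units (G = c = 1) their radii are r± = M ± √(M² − Q²). A minimal numerical sketch of that standard formula (the function name is illustrative, not from any library):

```python
import math

def horizon_radii(M, Q):
    """Outer (event) and inner horizon radii of a charged black hole,
    from the Reissner-Nordstrom solution in geometrized units (G = c = 1):
    r = M +/- sqrt(M^2 - Q^2)."""
    disc = M ** 2 - Q ** 2
    if disc < 0:
        # Q > M: the square root turns imaginary, so no real horizon exists
        raise ValueError("Q > M is unphysical in Einstein-Maxwell theory")
    root = math.sqrt(disc)
    return M + root, M - root

# As Q grows toward M, the event horizon shrinks and the inner horizon
# expands until the two coincide at the extremal limit Q = M.
print(horizon_radii(1.0, 0.0))  # (2.0, 0.0): uncharged (Schwarzschild) case
print(horizon_radii(1.0, 0.8))  # outer horizon below 2, inner above 0
print(horizon_radii(1.0, 1.0))  # (1.0, 1.0): extremal, horizons coincide
```

Passing Q greater than M raises an error here, mirroring the unphysical complex radius just described.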

When a black hole hits this point, a simple option for further decay would be to split into two smaller black holes. Yet in order for such splitting to happen, the laws of conservation of energy and conservation of charge require that one of the daughter objects must end up with more charge than mass. This, according to Einstein-Maxwell, is impossible.

But there might be a way for extremal black holes to split in two after all, as Nima Arkani-Hamed, Lubos Motl, Alberto Nicolis and Cumrun Vafa pointed out in 2006. They noted that the combined equations of Einstein and Maxwell don’t work well for small, strongly curved black holes. At smaller scales, additional details related to the quantum mechanical properties of gravity become more important. These details contribute corrections to the Einstein-Maxwell equations, changing the prediction of the extremal limit. The four physicists showed that the smaller the black hole, the more important the corrections become, causing the extremal limit to move farther and farther away from *Q* = *M*.

The researchers also pointed out that if the corrections have the right sign — positive rather than negative — then small black holes can pack more charge than mass. For them, *Q* > *M*, which is exactly what’s needed for big extremal black holes to decay.

If this is the case, then not only can black holes decay, but Arkani-Hamed, Motl, Nicolis and Vafa showed that another fact about nature also follows: Gravity must be the weakest force. An object’s charge, *Q*, is its sensitivity to any force other than gravity. Its mass, *M*, is its sensitivity to gravity. So *Q* > *M* means gravity is the weaker of the two.

From their assumption that black holes ought to be able to decay, the four physicists made a more sweeping conjecture that gravity must be the weakest force in any viable universe. In other words, objects with *Q* > *M* will always exist, for any kind of charge *Q*, whether the objects are particles like electrons (which, indeed, have far more electric charge than mass) or small black holes.

This “weak gravity conjecture” has become hugely influential, lending support to a number of other ideas about quantum gravity. But Arkani-Hamed, Motl, Nicolis and Vafa didn’t prove that *Q* > *M*, or that extremal black holes can decay. The quantum gravity corrections to the extremal limit might be negative, in which case small black holes can carry even less charge per unit mass than large ones. Extremal black holes wouldn’t decay, and the weak gravity conjecture wouldn’t hold.

This all meant that researchers needed to figure out what the sign of the quantum gravity corrections actually is.

The issue of quantum gravity corrections has come up before, in another, seemingly unrelated line of black hole study.

Almost 50 years ago, the late physicists Jacob Bekenstein and Stephen Hawking independently discovered that a black hole’s entropy is directly proportional to its surface area. Entropy, commonly thought of as a measure of disorder, counts the number of ways an object’s internal parts can be rearranged without any change to its overall state. (If a room is messy, or high entropy, for instance, you can move objects around at random and it will stay messy; by contrast, if a room is tidy, or low entropy, moving things around will make it less tidy.) By building a bridge between a black hole’s entropy, which concerns its inner microscopic ingredients, and its geometric surface area, Bekenstein and Hawking’s entropy area law has become one of physicists’ strongest footholds for studying black holes and quantum gravity.
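In Planck units the law takes the strikingly simple form S = A/4, where A is the surface area of the event horizon. A short sketch for the uncharged case, whose horizon radius in these units is 2M, makes the scaling concrete: entropy tracks surface area, growing as M² rather than as a volume would:

```python
import math

def bh_entropy(M):
    """Bekenstein-Hawking entropy S = A / 4 in Planck units, for an
    uncharged black hole of mass M (horizon radius 2M)."""
    area = 4 * math.pi * (2 * M) ** 2  # surface area of the spherical horizon
    return area / 4                    # the entropy-area law: S = 4*pi*M^2

# Doubling the mass quadruples the entropy: it scales with area, not volume.
print(bh_entropy(2.0) / bh_entropy(1.0))  # 4.0
```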

Bekenstein and Hawking deduced their law by applying Einstein’s gravity equations (together with the laws of thermodynamics) to the black hole’s surface. They treated this surface as smooth and ignored any structure that exists over short distances.

In 1993, the physicist Robert Wald of the University of Chicago showed that it’s possible to do better. Wald found clever tricks for inferring the small effects that emanate from more microscopic levels of reality, without knowing what the complete description of that deeper level of reality is. His tactic, pioneered in a different context by the condensed matter physicist Kenneth Wilson, was to write down every possible physical effect. To Einstein’s equations, Wald showed how to add a series of extra terms — any terms that have the right dimensions and units, constructed of all physically relevant variables — that might describe the unknown short-distance properties of a black hole’s surface. “You can write down the most general set of terms that you could have in principle that describe curvatures of a certain size,” said Cremonini.

Fortunately, the series can be truncated after the first several terms, since increasingly complicated composites of many variables contribute little to the final answer. Even many of the leading terms in the series can be crossed out because they have the wrong symmetries or violate consistency conditions. This leaves just a few terms of any significance that modify Einstein’s gravity equations. Solving these new, more complicated equations yields more exact black hole properties.

Wald went through these steps in 1993, calculating how short-distance quantum gravitational effects correct the Bekenstein-Hawking entropy area law. These corrections shift a black hole’s entropy so that it’s not exactly proportional to area. And while it’s not possible to calculate the entropy shift outright — variables with unknown values are involved — what’s clear is that the corrections grow more significant the smaller the black hole, and therefore so does the entropy shift.

Three years ago, Cheung, Liu and Remmen applied Wald’s same basic approach to the study of charged black holes and the extremal limit. They modified the Einstein-Maxwell equations with a series of extra terms coming from short-distance effects, and they solved the new equations to calculate the new, corrected extremal limit. To their surprise, they recognized the answer: The corrections to the extremal limit of a charged black hole exactly match the corrections to its entropy, as calculated from Wald’s formula; quantum gravity unexpectedly shifts both quantities in the same way.

Remmen remembers the date when they completed the calculation — November 30, 2017 — “because it was that exciting,” he said. “That’s a very deep and exciting thing that we proved, that these terms give a shift in entropy and extremality that are equal to each other.”

But do the matching shifts go in the right direction? Both corrections depend on undetermined variables, so they could in principle be either positive or negative. In their 2018 paper, Cheung and company calculated that the entropy shift is positive in a large class of scenarios and models of quantum gravity. They argue that it also makes intuitive sense that the entropy shift should be positive. Recall that entropy measures all the different possible internal states of a black hole. It seems reasonable that accounting for more microscopic details of a black hole’s surface would reveal new possible states and thus lead to more entropy rather than less. “The truer theory will have more microstates,” Remmen said.

If so, then the shift in the extremal limit is also positive, allowing smaller black holes to store more charge per mass. In that case, “black holes can always decay to lighter ones,” Cheung said, and “the weak gravity conjecture is true.”

But other researchers stress that these findings do not constitute an outright proof of the weak gravity conjecture. Gary Shiu, a theoretical physicist at the University of Wisconsin, Madison, said the belief that entropy should always increase when you take quantum gravity into account is “an intuition that some might have, but it’s not always true.”

Shiu has identified counterexamples: unrealistic models of quantum gravity in which, through cancellations, short-distance effects decrease black holes’ entropy. These models violate causality or other principles, but the point, according to Shiu, is that the newfound connection to entropy doesn’t prove all by itself that extremal black holes can always decay, or that gravity is always the weakest force.

“To be able to prove would be fantastic,” Shiu said. “That’s a lot of why we’re still thinking about this problem.”

Gravity is the weakest of the four fundamental forces in our universe. The weak gravity conjecture says it couldn’t have been otherwise. Aside from our universe, the conjecture also appears to hold in all possible theoretical universes derived from string theory. A candidate for the quantum theory of gravity, string theory posits that particles aren’t points but rather extended objects (nicknamed strings), and that space-time, close-up, also has extra dimensions. When string theorists write down different sets of strings that might define a universe, they invariably find that gravity — which arises from a type of string — is the weakest force in these model universes. “Seeing how this ends up panning out in case after case after case after case is very striking,” said Jorge Santos, a physicist at the Institute for Advanced Study in Princeton, New Jersey, and the University of Cambridge.

The weak gravity conjecture is one of the most important in a network of “swampland conjectures” posed by physicists in the last two decades — speculative statements, based on thought experiments and examples, about what kinds of universes are and are not possible. By ruling out possibilities (putting impossible universes in a no-go “swampland”), swampland theorists aim to clarify why our universe is the way it is.

If researchers could prove that gravity is inevitably weakest (and that black holes can always decay), the most important implication, according to Santos, is that it means quantum gravity “has to be a theory of unification.” That is, if *Q* and *M* must have a fixed ratio, their associated forces must be part of the same unified mathematical framework. Santos noted that “the only theory out there” that unifies the fundamental forces in a single framework is string theory. Rival approaches such as loop quantum gravity attempt to quantize gravity by dividing space-time into pieces, without connecting gravity with the other forces. “If the weak gravity conjecture is correct, things like loop quantum gravity are dead,” said Santos.

Jorge Pullin, a loop quantum gravity theorist at Louisiana State University, sees “dead” as far too strong a word. The approach could itself be part of a larger unified theory, he said: “Loop quantum gravity doesn’t rule out a unification structure, but we haven’t pursued it yet.”

The weak gravity conjecture also mutually reinforces several other swampland conjectures, including statements about the roles of symmetry and distance in quantum gravity. According to Shiu, the logical connection between these conjectures “gives us some confidence that even though these statements are made on a conjectural sense, there may be universal truth behind them.”

Shiu compared our current, conjectural understanding of quantum gravity to the early days of quantum mechanics. “There were a lot of conjectures, a lot of leaps of faith about what is the right theory of the subatomic world,” he said. “Eventually many of these guesses turned out to be part of this much bigger picture.”

The recent research might have implications beyond black holes and quantum gravity.

In their March paper, Goon and Penco redid the calculation of the black hole entropy and extremality corrections. Rather than using the language of gravity and black hole surface geometry, they calculated the corrections purely in terms of universal thermodynamic quantities like energy and temperature. This allowed them to discover a thermodynamic relation between energy and entropy that applies generally in nature.

“It’s a beautiful relation,” said Santos.

In the case of black holes, the duo’s formula says what Cheung, Remmen and Liu already proved: that quantum gravity shifts the extremal limit of black holes (allowing them to store more charge per mass), and it shifts their entropy by a proportional amount. Another way of describing the extra storage capacity coming from quantum gravity is that a black hole of fixed charge can have less mass. Mass is a form of energy, and so this drop in mass can be thought of more generally as a shift in energy — one that is proportional in size, and opposite in sign, to the shift in entropy.

Whereas for a black hole, the equal and opposite shifts in energy and entropy come from unknown details of quantum gravity, an equivalent situation exists for any physical system near its extremal limit.
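Schematically, the universal relation can be written as follows. This is a paraphrase of the formula in Goon and Penco’s paper, with ε the small parameter controlling the short-distance corrections, M_ext the corrected extremal mass at fixed charges, and T the black hole’s temperature:

$$\frac{\partial M_{\rm ext}(\vec{Q}, \epsilon)}{\partial \epsilon} = \lim_{M \to M_{\rm ext}} \left( -T \, \frac{\partial S(M, \vec{Q}, \epsilon)}{\partial \epsilon} \right) \Bigg|_{M, \vec{Q}}$$

The sign is the point: a positive shift in entropy corresponds to a negative shift in the extremal mass, meaning a black hole of fixed charge can weigh less, which is exactly the condition that lets extremal black holes decay.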

A gas, for instance, becomes extremal when cooled to absolute zero. Goon and Penco’s thermodynamic formula says that any changes to the microscopic physics of the gas, such as the type of atoms that comprise it, produce equal and opposite shifts in its energy and entropy. Goon speculated that the relation between energy and entropy might be useful in studies of ultracold gases and other cryogenic experiments, “because sometimes one is easier to calculate than the other.”

Whether or not this entropy-energy relation proves useful in earthly domains of physics, researchers still have plenty more work to do exploring the newfound link in the context of black holes and what it means for the nature of gravity.

“Being able to answer, ‘Why is gravity weak?’” Cheung said. “The fact that that question is even on the board, the fact that that’s a question that one can legitimately answer outside the realm of philosophy, and the fact that it’s connected through this long path to entropy, which is like the tried-and-true, most fascinating thing about black holes, … seems crazy.”

In 1887, Richard Caton announced his discovery of brain waves at a scientific meeting. “Read my paper on the electrical currents of the brain,” he wrote in his personal diary. “It was well received but not understood by most of the audience.” Even though Caton’s observations of brain waves were correct, his thinking was too unorthodox for others to take seriously. Faced with such a lack of interest, he abandoned his research and the discovery was forgotten for decades.

Flash forward to October 2019. At a gathering of scientists that I helped organize at the annual meeting of the Society for Neuroscience in Chicago, I asked if anyone knew of recent research by neuroscientists at the Massachusetts Institute of Technology who had found a new way to treat Alzheimer’s disease by manipulating microglia and brain waves. No one replied.

I understood: Scientists must specialize to succeed. Biologists studying microglia don’t tend to read papers about brain waves, and brain wave researchers are generally unaware of glial research. A study that bridges these two traditionally separate disciplines may fail to gain traction. But this study needed attention: Incredible as it may sound, the researchers improved the brains of animals with Alzheimer’s simply by using LED lights that flashed 40 times a second. Even sound played at this charmed frequency, 40 hertz, had a similar effect.

Today, brain waves are a vital part of neuroscience research and medical diagnosis, though doctors have never manipulated them to treat degenerative disease before now. These oscillating electromagnetic fields are produced by neurons in the cerebral cortex firing electrical impulses as they process information. Much as people clapping their hands in synchrony generate thunderous rhythmic applause, the combined activity of thousands of neurons firing together produces brain waves.

These waves come in various forms and in many different frequencies. Alpha waves, for example, oscillate at frequencies of 8 to 12 hertz. They surge when we close our eyes and shut out external stimulation that energizes higher-frequency brain wave activity. Rapidly oscillating gamma waves, which reverberate at frequencies of 30 to 120 hertz, are of particular interest in Alzheimer’s research, because their period of oscillation is well matched to the hundredth-of-a-second time frame of synaptic signaling in neural circuits.

Brain waves are important in information processing because they can influence neuronal firing. Neurons fire an electrical impulse when the voltage difference between the inside and outside of the neuron reaches a certain trigger point. The peaks and troughs of voltage oscillations in brain waves nudge the neuron closer to the trigger point or farther away from it, thereby boosting or inhibiting its tendency to fire. The rhythmic voltage surging also groups neurons together, making them fire in synchrony as they “ride” on different frequencies of brain waves.

I already knew that much, so to better understand the new work and its origins, I sought out Li-Huei Tsai, a neuroscientist at MIT. She said the idea of using one of these frequencies to treat Alzheimer’s came from a curious observation. “We had noticed in our own data, and in that of other groups, that 40-hertz rhythm power and synchrony are reduced in mouse models of Alzheimer’s disease,” she said, as well as in patients with the disease. Apparently, if you have Alzheimer’s, your brain doesn’t produce strong brain waves in that particular frequency. In 2016, her graduate student Hannah Iaccarino reasoned that perhaps boosting the power of these weakened gamma waves would be helpful in treating this severe and irreversible dementia.

To increase gamma wave power, the team turned to optogenetic stimulation, a novel technique that allows researchers to control how and when individual neurons fire by shining lasers directly into them, via fiber-optic cables implanted in the brain. Tsai’s team stimulated neurons in the visual cortex of mice with Alzheimer’s, making them fire impulses at 40 hertz. The results, published in 2016 in *Nature*, showed a marked reduction in amyloid plaques, a hallmark of the disease.

It was a good indication that these brain waves might help, but Tsai’s team knew that an optogenetic approach wasn’t an option for humans with the disease, because of ethical concerns. They began to look for other ways of increasing the brain’s gamma wave activity. Tsai’s MIT colleague Emery Brown pointed her to an older paper showing that you can boost the power of gamma waves in a cat’s brain simply by having it stare at a screen illuminated by a strobe light flickering at certain frequencies, which included 40 hertz. “Hannah and our collaborators built a system to try that sensory stimulation in mice, and it worked,” Tsai told me. The thinking is that the flashing lights whip up gamma waves because the rhythmic sensory input sets neural circuits “rocking” at this frequency, like when people rock a stuck car out of a rut by pushing together in rhythm.

In fact, the strobe lights had an additional effect on mice: They also cleared out amyloid plaques. But it wasn’t clear exactly how the optogenetic stimulation or the flashing-light therapy could do that.

Following a clue from Alois Alzheimer himself, the researchers quickly shifted their attention from neurons to microglia. In Alzheimer’s first description of brain tissue taken from patients with “presenile dementia,” which he examined under a microscope near the turn of the 20th century, he noted that the deposits of amyloid plaques were surrounded by these immune cells. Subsequent research confirmed that microglia engulf the plaques pockmarking these patients’ brains.

Tsai and colleagues decided to check out these immune cells in the animals whose brain waves they’d boosted. They observed that microglia in all the treated animals had bulked up in size, and more of them were digesting amyloid plaques.

How did these cells know to do this? Unlike immune cells in the bloodstream, which are unaware of neuronal transmissions, the brain’s microglia are tuned in to the rhythms of electrical activity in the brain. While immune cells in the bloodstream and microglia in the brain both have cellular sensors to detect disease and injury, microglia can also detect neurons firing electrical impulses. That’s because they have the same neurotransmitter receptors that neurons use to transmit signals through synapses. This gives microglia the ability to “listen in” on information flowing through neural networks and, when those transmissions are disturbed, to take action to repair the circuitry. Thus, the right brain waves can drive microglia to consume the toxic protein deposits.

“I find this intersection to be one of the most exciting and intriguing results of our work,” Tsai told me. Her team reported last year in *Neuron* that prolonging the LED strobe-light flashing for three to six weeks not only cleared out the toxic plaques in mice brains but also prevented neurons from dying and even preserved synapses, which dementia can destroy.

The team wanted to know if other types of rhythmic sensory input could also rock the neural circuits like a stuck car, producing gamma waves that resulted in fewer amyloid plaques. In an expanded study in *Cell*, they reported that just as seeing flashes at 40 hertz resulted in fewer plaques in the visual cortex, sound stimulation at 40 hertz reduced amyloid protein in the auditory cortex. Other regions were similarly affected, including the hippocampus — crucial for learning and memory — and the treated mice performed better on memory tests. Exposing the mice to both stimuli, a light show synchronized with pulsating sound, had an even more powerful effect, reducing amyloid plaques in regions throughout the cerebral cortex, including the prefrontal region, which carries out higher-level executive functions that are impaired in Alzheimer’s.

I was amazed, so just to make sure I wasn’t getting unduly excited about the possibility of using flashing lights and sounds to treat humans, I talked to Hiroaki Wake, a neuroscientist at Kobe University in Japan who was not involved with the work. “It would be fantastic!” he said. “The treatment may also be effective for a number of neurodegenerative disorders like Parkinson’s disease and ALS,” where microglia also play a role. He noted, however, that while the link between microglia and brain oscillations is well founded, the biological mechanism by which 40-hertz stimulation prods microglia into removing the plaques and rescuing neurons from destruction remains unknown.

Tsai said the mystery may be solved soon. A team of researchers at the Georgia Institute of Technology, including Tsai lab veteran Annabelle Singer, laid out a possibility in a February paper. They reported that in normal mice, gamma stimulation with LED lights rapidly induced microglia to generate cytokines, proteins that neurons (and immune cells generally) use to signal one another. They’re one of the main regulators of neuroinflammation in response to brain injury and disease, and the microglia released them surprisingly quickly, within just 15 to 60 minutes of the stimulation. “These effects are faster than you see with many drugs that target immune signaling or inflammation,” Singer said.

Cytokines come in many forms, and the study found that getting the microglia to produce different kinds required specific frequencies. “Neural stimulation doesn’t just turn immune signaling on,” Singer said. It took a particular rhythm to produce these particular proteins. “Different types of stimulation could be used to tune immune signaling as desired.”

That means doctors could potentially treat different diseases just by varying the light and sound rhythms they use. The different stimuli would rock the neurons into producing appropriate brain wave frequencies, causing nearby microglia to release specific types of cytokines, which tell microglia in general how to go to work repairing the brain.

Of course, it may still be a while before such treatments are available for patients. And even then, there may be side effects. “Rhythmic sensory stimulation likely affects many types of cells in brain tissue,” Tsai said. “How each of them senses and responds to gamma oscillations is unknown.” Wake also pointed out that rhythmic stimulation could do more harm than good, because such stimuli could induce seizures, common in many psychiatric and neurodegenerative disorders.

Still, the potential benefits are great. Tsai’s team has just begun assessing their strobe-light method on patients, and they’re sure to be joined by others as more researchers learn of this promising work. (Most experts I talked to were not aware of this research until I asked.)

Just as new species spring up at the boundaries between ecosystems, new science can flourish at the interface between disciplines. It takes a sharp eye to spot it, but as Richard Caton found, it can also require a bit of persuasion to convince others.


All eyes are on the B meson, a yoked pair of quark particles. Having caught whiffs of unexpected B meson behavior before, researchers with the Large Hadron Collider beauty experiment (LHCb) have spent years documenting rare collision events featuring the particles, in hopes of conclusively proving that some novel fundamental particle or effect is meddling with them.

In their latest analysis, first presented at a seminar in March, the LHCb physicists found that several measurements involving the decay of B mesons conflict slightly with the predictions of the Standard Model of particle physics — the reigning set of equations describing the subatomic world. Taken alone, each oddity looks like a statistical fluctuation, and they may all evaporate with additional data, as has happened before. But their collective drift suggests that the aberrations may be breadcrumbs leading beyond the Standard Model to a more complete theory.

“For the first time in certainly my working life, there are a confluence of different decays that are showing anomalies that match up,” said Mitesh Patel, a particle physicist at Imperial College London who is part of LHCb.

The B meson is so named because it contains a bottom quark, one of six fundamental quark particles that account for most of the universe’s visible matter. For unknown reasons, the quarks fall into three generations: heavy, medium and light, each containing two quarks with different electric charges. Heavier quarks decay into their lighter variations, almost always switching their charge, too. For instance, when the negatively charged heavy bottom quark in a B meson drops a generation, it usually becomes a middleweight, positively charged “charm” quark.

The LHCb collaboration scours the wreckage of particle pileups for exceptions to this rule. For every million B meson decays they see, one fringe event showcases a rebellious bottom quark metamorphosing into a “strange” quark instead, dropping a generation but keeping its negative charge. The Standard Model predicts the exceedingly low rate of these events and how they will play out. But because they are so rare, any tweaks coming from undiscovered particles or effects should be obvious.

LHCb’s new analysis covered about 4,500 rare B meson decays, roughly doubling the data from their previous study in 2015. Each transformation ends with four outbound particles hitting a ring-shaped detector. When experimentalists compared the various angles between the particles with the angles predicted by the Standard Model, they found a deviation from the expected pattern. The collective significance of the anomalous angles grew slightly since the last analysis, and researchers say the new measurements also tell a more unified story. “Suddenly the consistency between the different angular observables got much better,” said Felix Kress, an LHCb researcher who helped crunch the numbers.

Statistically, the deviation in the angular pattern is equivalent to flipping a coin 100 times and getting 66 heads, rather than the usual 50 or so. For a fair coin, the odds of such a deviation are about 1 in 1,000.

But amid oodles of particle collisions, statistical fluctuations are bound to arise, so a 1-in-1,000 deviation doesn’t count as hard proof of a break with the Standard Model. For that, the physicists will need to accumulate enough B meson decays to demonstrate a deviation of 1 in 1.7 million, akin to flipping 75 heads. “If this is new physics,” Jure Zupan, a theoretical physicist at the University of Cincinnati, said of the current update, “it’s not significant enough.”
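The coin-flip arithmetic above can be checked with exact binomial tails. (A quick sketch; note that the quoted 1-in-1.7-million figure for a discovery follows particle physicists' five-sigma convention, so the raw one-sided tail for 75 heads comes out somewhat different, though in the same millions-to-one territory.)

```python
from math import comb

def tail_prob(n, k):
    """Exact probability of getting at least k heads in n fair coin flips."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Roughly 1 in 1,000, matching the quoted odds for the current deviation.
print(tail_prob(100, 66))
# Millions-to-one, the scale of evidence demanded for a discovery claim.
print(tail_prob(100, 75))
```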

Still, the observed pattern hints that something is off with B meson decay products in the lepton family, the other category of matter particles aside from quarks. Like quarks, leptons come in heavy, medium and light generations (called tau particles, muons and electrons, respectively); the Standard Model says they’re all identical except for their mass. Each B meson decay ends by shooting off a twin pair of any of the three types of leptons. LHCb’s latest update focused on the anomalous angular pattern produced by muon events, which are easiest to detect.

The experiment also logs a smaller number of B meson decays ending with electrons. The Standard Model demands that both types of decays play out in exactly the same way, but a 2014 analysis by the LHCb team uncovered a possible difference between the muon events and the electron events. Taken together, the anomalies suggest that the novelty may lie not only with muons, but with electrons as well.

Patel’s group is currently working on an update to the electron-versus-muon measurement, which he said makes for a much “cleaner,” unambiguous observation than the muon angle measurements alone. “This is a Standard Model killer,” he said.

If the B meson anomalies are real, physicists have two leading theories to explain them.

A new, hypothetical force-carrying particle called the Zʹ boson would resemble the standard weak force that turns one matter particle into another, except that it would influence electrons and muons differently. As a bonus, the Zʹ boson would also imply the existence of an additional massive particle that could make up the universe’s missing dark matter. “We are moving to the next step, which is trying not just to explain the anomaly, but to connect the anomaly to other problems,” said Joaquim Matias, a theoretical physicist at the Autonomous University of Barcelona.

The more exotic possibility is that LHCb researchers are detecting hints of a fabled particle — the leptoquark — that can turn a quark into a lepton and vice versa. Theorists have long contemplated the possibility of leptoquarks, but the idea has grown less popular as experiments have ruled out the simplest kinds. Still, the three-generation quark family tree looks suspiciously like the lepton family tree, and neither pattern is well understood. Decaying B mesons may be revealing a leptoquark link between them. “That’s the dream,” said Zupan.

As theorists consider these possibilities, the LHCb team will have to see if they can flip enough heads to prove that their coin is definitely not standard — an endeavor that may take the rest of the decade.

Ultimately, however, the particle physics community will hold out for confirmation from a different apparatus, such as the Belle II experiment in Japan, or one of the LHC’s two main detectors. Either proving or eliminating the B meson anomalies will be a herculean endeavor, but researchers have all the tools they need. “With four experiments that can chip in,” Zupan said, “the future is bright.”

The first problem described an object that I claimed could produce energy forever, in defiance of the second law of thermodynamics. The object is sketched and briefly described below:

*E*_{1} and *E*_{2} are sections of two concentric ellipses with foci located at points *A* and *B*. *S*_{1} and *S*_{2} are arcs of a circle with center *B*. Since they are arcs of the same circle, every straight line from *B* to *S*_{1} or *S*_{2} is a radius and therefore normal to the inner surface. This entire figure is a cross section of a hollow object that is created by the solid of revolution of the plane figure. The inside surface of this object is silvered and 100% reflective (or as close to it as practically possible). At *A* and *B* are small spherical blackbodies made of thermoelectric materials. They each have thin wires leading to battery terminals outside. The whole structure is completely sealed.

For more details on how this object is supposed to work, check out the original puzzle column. In brief, the geometry dictates that 100% of the radiation originating from the blackbody at *A* will land on and be absorbed by the blackbody at *B* because both *A* and *B* are the foci of ellipses *E*_{1} and *E*_{2}. However, a significant proportion of rays that start from *B* will land on *S*_{1} or *S*_{2} and, since those arcs are normal to the radii from *B*, be reflected straight back to *B*. So *B* ostensibly receives more energy than it sends to *A*, creating a perpetual temperature difference that the thermoelectric elements can exploit.

This “ellipsoid paradox,” which can take the form of various ellipsoidal constructions, has been known since 1959. It doesn’t work, of course, but despite several articles debunking it through the years, it has occasionally been taken seriously.

Paradoxes illustrate faulty habits of thought, and that’s a good place to start here. I believe the key mental error in this case is our implicit trust in abstract geometry to solve real-world problems. Though it works most of the time, geometry, like the rest of mathematics, involves idealizations like the concept of a dimensionless point, which makes its way into physics in the form of “point particles” or “point masses.” These abstract constructs have proved very powerful — in the theory of gravitation, for instance. We know that real-world masses don’t behave exactly like point masses, but they approximately do. Real-world projectiles do not trace out perfect parabolas, but for all practical purposes, they come close enough. We have been spoiled by these “almost exact” successes, and so it seems paradoxical when we encounter rare instances where the approximations don’t work and the idealizations give us results that are qualitatively wrong, as in this case. The ellipsoid object we described works perfectly well in the universe of point particles, where rays from one focus of an ellipsoid are reflected precisely to the other. But the moment you replace the point particle with a finite real-world object, no matter how small, it no longer works — the approximation fails. There is a yawning discontinuity between zero size and any finite object. To see why, look at the illustration below.
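The discontinuity between a point source and a finite one can be seen numerically. The sketch below illustrates the focal-reflection property in general (it is not the article's specific construction, and the ellipse axes are arbitrary): a ray fired from one focus reflects exactly through the other, while the same ray fired from a point just 0.05 units off the focus misses by a distance of roughly the same order as the offset.

```python
import math

# Ellipse x^2/a^2 + y^2/b^2 = 1 with foci at (-c, 0) and (+c, 0).
a, b = 2.0, 1.0
c = math.sqrt(a * a - b * b)

def miss_distance(p, d, target):
    """Fire a ray from interior point p in direction d, reflect it once
    off the ellipse, and return how far the reflected ray passes from
    `target`."""
    px, py = p
    dx, dy = d
    # Forward intersection of p + t*d with the ellipse (quadratic in t).
    A = dx * dx / a**2 + dy * dy / b**2
    B = 2 * (px * dx / a**2 + py * dy / b**2)
    C = px * px / a**2 + py * py / b**2 - 1
    t = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
    qx, qy = px + t * dx, py + t * dy
    # Unit normal at the hit point, then mirror-reflect the direction.
    nx, ny = qx / a**2, qy / b**2
    n = math.hypot(nx, ny)
    nx, ny = nx / n, ny / n
    dot = dx * nx + dy * ny
    rx, ry = dx - 2 * dot * nx, dy - 2 * dot * ny
    # Perpendicular distance from target to the reflected ray's line.
    tx, ty = target[0] - qx, target[1] - qy
    return abs(tx * ry - ty * rx) / math.hypot(rx, ry)

focus_B = (c, 0.0)
exact = miss_distance((-c, 0.0), (0.6, 0.8), focus_B)    # ray from the focus
offset = miss_distance((-c, 0.05), (0.6, 0.8), focus_B)  # start 0.05 off-focus
print(exact, offset)  # ~0 versus a miss of roughly the offset's order
```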

At *A* and *B* (the point foci of the two ellipses) are small round objects of the same size. The dashed lines *AP* and *BP* join point *P* on *E*_{1} to the two point foci. Now consider a ray that leaves the surface of the object at *A* from a point slightly off the focus: after reflecting at *P*, it no longer passes exactly through *B*, and it can miss the finite object there entirely.

OK, many of the rays from the two objects miss each other, but what happens to them? They bounce about in the interior of the object, and most eventually end up on either *A* or *B*, though a few might even keep bouncing around forever, something that Spotted Snifflegrub alluded to. If both objects are at the same temperature, there will be exactly as many rays going from *A* to *B* as there are going from *B* to *A*. How do we know this? We can construct a computer simulation and count a large number of rays, as was done in this paper, and show that the numbers actually even out.

What mandates such a result? One simple principle is reversibility. If a ray starts at *A*, reflects back and forth, say, 100 times, and lands on *B* at a certain angle, then a ray emitted from that same spot on *B* at the same angle will follow the same pattern in reverse and land on *A*. Therefore, if radiation and absorption are taking place in all directions, *A* and *B* will exchange the same number of rays on average. We do not have to worry about rays that bounce off *S*_{1} or *S*_{2} and return to *B*, or rays that keep bouncing off the walls forever. It is ironic that the integrity of the second law of thermodynamics, which is responsible for irreversibility in the world, is safeguarded at the micro level by this reversibility.

Several readers correctly pointed out that the paradox does not work for finite objects, including FosterBoondoggle, Laurence Cox, James Ough and Manuel Fortin. Manuel Fortin also used a diagram to show how some rays from *A* would miss *B* if *A* and *B* were spherical objects.

Some readers pointed out that even if the device worked, both objects would cool down over time and the temperature difference would decrease to zero. If the device worked, that would be true. But then you could construct the device using two locking halves that you could take apart, allow to reheat to room temperature, and then reuse perpetually.

Other readers mentioned quantum effects and the impossibility of having perfect mirrors. Both of these issues would perhaps reduce the efficiency of the device, but they are not the reason it doesn’t work.

Consider the following stipulations:

- The solution to this *Insights* column will definitely be published in *Quanta* in May.
- If, for the purposes of this problem, we define a week as starting on a Monday and ending on a Sunday, then every day in May will fall in one of five separate weeks (one partial week followed by four full weeks). I declare with certainty that you will not be able to predict in which of these five separate weeks the solution column will be published.

Now, as we know, *Quanta* readers are a brilliant bunch. Suppose a reader reasons as follows: “If the column is not published by the end of the fourth week, I will be able to predict with certainty that it will be published in the fifth week. Therefore, it cannot be published in the fifth week. But if it cannot be published in the fifth week, then, if it is still unpublished at the end of the third week, I can be certain that it will be published in the fourth week. Therefore, it cannot be published in the fourth week. Now I can apply the same serial logic to prove that it cannot be published in the third, second or first weeks. Therefore, the solution cannot be published at all!”

OMG, this puts me in a terrible bind! My editor will certainly not be pleased. Is this reasoning valid? Why or why not? What if there is a tiny probability (say 0.001) that the column will not be published at all (we all get COVID-19, or the financial system collapses and no business activities can take place)? Assume that “certainty” now means “99% or greater probability of being right.” Does that change the conclusion?

This is a version of a classic paradox variously called the unexpected hanging or the surprise test, both of which are about a future event whose exact date the protagonist cannot know. The language of these paradoxes has been criticized as being ambiguous, so I thought I would remedy the situation by introducing the idea of certainty.

Unfortunately, I don’t think I was successful. Several readers still found the statement ambiguous, and this ambiguity was explored in excruciating detail by Steve Taylor. Mea culpa. I think a better statement of what I was getting at would have been “I declare that you will not be able to predict correctly with logical certainty which of these five separate weeks the solution column will be published in.” Some readers also seized on my failure to specify that the publication would be in May 2020. Well, while this was not explicitly stated, the next year with the pattern of weeks I described would be 2026. Surely a long time to wait for the solution of a puzzle, as Tommy pointed out!

Neither of these objections invalidates the problem, though. There are always ambiguities that need to be clarified with these kinds of problems, not just because of the language used but also because of assumptions that are not spelled out. If every possible valid and invalid interpretation has to be explicitly specified, a puzzle statement will look like a legal document. That said, here are two explicit clarifications, without which the problem loses meaning.

- By “certainty,” we mean objective, logical certainty — an inference based on an assumption that all declarations are true, and not just a “feeling of certainty.”
- You only get one chance to express your certainty — it has to be about one particular week. If you express certainty that I will post the article in a particular week, and your prediction fails, then the game is over. You cannot again express certainty for a different week. Without this stipulation, the problem becomes absurd.

There is another implicit assumption in the problem’s classical versions, the unexpected hanging and the surprise test: that there is a “neutral block of time” at the end of the day during which the hanging or test cannot take place. This is implied by the nature of business or school hours. There is no such limitation in our problem as *Quanta* publishes online, but let us assume for a moment that there’s a cutoff time, say 8 p.m., after which publication will not occur that day. This gives a reader time to make a prediction about the next week, without fear that the publication will happen at the same time as the prediction.

In this version of the paradox, which is essentially the classical one, the general consensus is that the first inference is correct: If the solution is not published by 8 p.m. at the end of the fourth week, a reader can be certain that it will be published in the fifth week. However, the follow-up inductive inferences for the third and earlier weeks are not sound. You cannot make an inference about the past based on what might be conditionally true in the future. In other words, while it’s true that if the puzzle solution hasn’t been published by the end of the fourth week, it has to be published in the fifth week, that knowledge has absolutely no bearing on what you can know at the start of the month or even after the first, second or third weeks. You can only be certain it hasn’t been published in the first four weeks after the end of the fourth week. So all the author has to do to avert the paradox is to choose a publication week using some random algorithm that the readers do not know about. That’s exactly what I did. Of course I can’t tell you what it was!
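The asymmetry of the inference can be made concrete. The toy enumeration below simply lists which weeks remain consistent with the declarations after each empty week; logical certainty (a single remaining possibility) arises only once four weeks have passed, which is why the backward induction has nothing to stand on earlier.

```python
def possible_weeks(weeks_passed_empty):
    """Weeks still consistent with 'publication happens in one of weeks
    1-5' after the first `weeks_passed_empty` weeks pass with no column."""
    return list(range(weeks_passed_empty + 1, 6))

for k in range(5):
    remaining = possible_weeks(k)
    # Certainty means exactly one possibility is left.
    print(f"after week {k}: possible = {remaining}, "
          f"certain = {len(remaining) == 1}")
```

Running this shows `certain = True` only after week 4; at every earlier point at least two weeks remain open, so no earlier prediction can be logically certain.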

Gert-Jan Schouten and Tommy considered a version of the problem in which there are no constraints on the time of the publication, and they came up with a creative solution. I think they have a valid argument, with one caveat. Their point is that the article can be published at the last instant of the fourth week, or the first instant of the fifth week. In the first case, the reader would be wrong to make a prediction of a fifth-week publication no matter how late the prediction is made in the fourth week. Similarly, a prediction for a fifth-week publication made on the first instant of the fifth week would not be valid as a prediction because the publication will have been completed at the same time. Very clever! The caveat is that there must be an agreed-upon last and first instant of time for every week — a kind of discrete time “bin” similar to that of a computer clock — whether it be a minute, a second, a millisecond or, as Gert-Jan Schouten suggested, a nanosecond. We cannot treat time as continuous, because if we do, then we have the prospect of a midnight shootout between the reader and the author, similar to a Wild West gun duel, with both parties attempting to complete their job (the prediction or the publication) closer to midnight than the other party. In fact, the shootout at the end of the fourth week will be less of a gun duel and more of a game of chicken, with each participant waiting to be as agonizingly close to midnight as possible. It’s fun to visualize, but thankfully it didn’t have to materialize!

(Note: Tucker the third raised the objection that if the article is published exactly at midnight between Sunday and Monday, then the week of publication is ambiguous. This is incorrect: by convention, midnight is the start of the subsequent day. Also, time zone differences are immaterial because *Quanta* gives the day of publication in its own time zone only.)

As for the last part of the problem about certainty being taken as greater than 99% probability, well, I added that because in real life we know that nothing can be 100% certain. For practical purposes, we ignore the possibility of unexpected low-probability events when we say we are certain. In this case, it doesn’t make a difference because the probability I gave of the solution not being published — 0.001 — is much too small. The probability of publication was still clearly above the given certainty threshold.
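Assuming, purely for illustration, that the publication week is chosen uniformly among the five (the actual random algorithm was not disclosed), the conditional probability after four empty weeks bears this out:

```python
p_never = 0.001              # given chance the column is never published
p_week = (1 - p_never) / 5   # assumed uniform chance for each week

# After four empty weeks, only "week 5" and "never" remain possible.
p_week5 = p_week / (p_week + p_never)
print(p_week5)  # about 0.995, comfortably above the 99% threshold
```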

The prize for this month goes to Manuel Fortin. Congratulations! Thank you to all who contributed. See you next month for new *Insights*!

Yet there are always individuals that don’t participate in the collective behavior — the odd bird or insect or mammal that remains just a little out of sync with the rest; the stray cell or bacterium that seems to have missed some call to arms. Researchers usually pay them little heed, dismissing them as insignificant outliers.

But a handful of scientists have started to suspect otherwise. Their hunch is that these individuals are signs of something deeper, a broader evolutionary strategy at work. Now, new research validating that hypothesis has opened up a very different way of thinking about the study of collective behavior.

Early clues emerged after Corina Tarnita, a mathematical biologist at Princeton University, and her colleagues turned their attention to the cellular slime mold *Dictyostelium discoideum*. Typically, it lives as a collection of solitary amoebas, with each cell eating and dividing on its own. But when threatened with starvation, up to a million of those cells can coalesce into a mushroom-like tower. Around 20% of them create a stalk, sacrificing themselves so that the rest can move to the top of the structure and form spores, which can last for months without food. Ultimately, water and wind disperse those spores to new and potentially more nutrient-rich environments.

Scientists have used slime molds to experimentally investigate the emergence and maintenance of social behavior, identifying mechanisms that ensure cooperation among the amoebas. But they’ve always focused on the aggregated cells. Tarnita and her team wanted to investigate whether the cells that stayed behind — the “loners,” as they called them — also played an important role.

As they reported in the *Proceedings of the National Academy of Sciences* in 2015, those loners turned out to be perfectly functional, eating and dividing regularly in the presence of nutrients. Their offspring could aggregate normally when starved — and they always left behind some loners of their own. Their presence seemed to be a consistent aspect of slime mold behavior.

That left the possibility that the loners were simply the inevitable stragglers falling behind in a synchronization process involving hundreds of thousands of cells. In that case, the researchers expected the number of non-aggregating cells to vary randomly from one experiment to another.

But because the irregular shapes of the individual amoebas made them extremely difficult to count, it would take a few more years for Tarnita and her colleagues to test this — after graduate student Fernando Rossine joined her lab and figured out a way to precisely enumerate the cells. “That was a game changer,” Tarnita said.

And it immediately led to surprises.

One shock was that the loners constituted up to 30% of the original population, sometimes exceeding the number of cells in the aggregate’s stalk.

But that wasn’t all. The researchers had predicted that a constant fraction of cells would stay behind in each test. That would have meant that each cell was in effect independently flipping a (weighted) coin about whether to participate in the collective behavior. “We totally thought it was going to be a coin flip,” Rossine said. “We were convinced.”

As the scientists reported in March in *PLOS Biology*, however, instead of a constant fraction of loners, they found a constant number of them. “There is some sort of a set point that the cells have memorized,” said Thomas Gregor, a biophysicist at Princeton and one of the study’s co-authors. Different strains of the slime mold had different set points. As Tarnita put it, “Some of them seemed like extraordinarily good aggregators, leaving behind some 10,000 loners. Others were so bad at aggregating that they could leave behind 50,000 or 100,000 loners.”
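The statistical signature distinguishing the two hypotheses is easy to sketch. Under the rejected coin-flip model (a toy simulation with an arbitrary 20% loner probability, not the study's actual parameters), the loner *fraction* stays fixed, so the loner *count* scales with population size; the experiments instead found a fixed count per strain.

```python
import random

random.seed(7)  # deterministic toy run

def coin_flip_loners(population, p_loner=0.2):
    """Independent-decision model: each cell stays behind with a fixed,
    private probability, regardless of what other cells do."""
    return sum(random.random() < p_loner for _ in range(population))

small = coin_flip_loners(50_000)
large = coin_flip_loners(200_000)
# Fractions match (~0.2) while counts differ roughly fourfold --
# the opposite of the constant-number "set point" actually observed.
print(small / 50_000, large / 200_000)
```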

That natural variation between strains means that loner behavior is a heritable trait that natural selection can act on. Further experiments and simulations showed that this number is also influenced by environmental factors, which affect how the cells’ chemical signals diffuse and interact to facilitate — or impede — aggregation.

That’s because when cells start to starve, they send chemical signals to their neighbors; when enough cells sound the alarm, the aggregation process begins. Tarnita and her colleagues built a simple model to show that the observed loner patterns could be explained by individual cells transforming into their actively aggregating form at different rates. As more cells aggregated, their starvation signals degraded. At a certain point, “there will be a few cells left behind that just don’t hear anyone screaming ‘danger’ anymore, because everyone who had been screaming ‘danger’ has already left,” Tarnita said. Those remaining cells are the loners.

“But what if that inability to fully synchronize can actually be harnessed by evolution to turn it into an interesting strategy?” Tarnita asked. Since evolution could potentially act on that process, “that might actually be really meaningful as a behavior.”

She and her colleagues posit that it’s a form of bet hedging. The aggregated cell body comes with its own risks: It could get eaten by a predator or be overrun by “cheater” cells that take advantage of the slime mold’s collective behavior for their own selfish gain. And if nutrients return abruptly to the environment, the amoebas can’t reverse the aggregation process to access that food.

The loner cells might therefore serve as a form of insurance in case any of those situations transpire. By staying out of the group, “you leave behind these seeds,” Tarnita said — seeds that could regenerate the population and its multicellular dynamics all on their own.

Bet hedging is not a new concept. Noisy gene expression can create diversity among genetically identical organisms, for instance — enabling small numbers of bacteria to resist antibiotics or certain individuals among clonal fish to be more aggressive. But those forms of bet hedging happen on an individual level.

Tarnita and her team suspect that the bet hedging they observed occurs at the level of the collective. “Each cell is not making the decision to become a loner in isolation,” she said. “It’s actually a social decision in some sense. It’s a decision that depends on the rest of the world” — on the chatter of surrounding cells and the physical nature of the environment.

The scientists still need to definitively prove that the slime mold gains a fitness benefit from this bet-hedging strategy. But previous work in game theory has shown that when individuals can “opt out” of a collective activity for a few rounds, it can help maintain cooperation and diversity in a population and protect the group against parasitic individuals. Ongoing modeling work in Tarnita’s lab has hinted at something similar.

“If this is what’s happening, then it’s a really interesting strategy — to preserve not any particular aggregate but to preserve the social behavior itself,” Tarnita said.

Moreover, the fact that “even this asocial part has a social component,” Rossine said, could help illuminate organisms’ evolutionary transition to multicellularity and social cooperation.

The researchers hope to pin down what’s happening at the molecular level to enable this strategy in the slime molds. But they’re most excited by the prospect of studying loners in other systems. “The theoretical idea of the loner as something that stabilizes the existence of the group is a very powerful one,” Rossine said. Perhaps it could also apply to the migrating wildebeests or flowering bamboo — or even to humans (a direction some members of Tarnita’s lab are now pursuing).

Although the scientists caution against stretching the analogy too far without further experimental work, Iain Couzin, a director of the Max Planck Institute of Animal Behavior in Germany, noted direct parallels to another system he has studied: locusts, which, like slime molds, have to deal with wildly fluctuating environments and sudden changes in food availability. Locusts also transition from a solitary state to an aggregated swarm when times get hard — and they seem to leave loners behind, too. Tarnita’s work “has made me think about how this type of process might be occurring at other scales of biological organization,” Couzin said.

There are other contexts in which loner behavior might prove evolutionarily crucial as well. Couzin and others have found, for instance, that some forms of loner behavior can lead to the emergence of leaders in groups. “Are these differences predetermined?” Couzin said. Or are they products of “a decision-making strategy that depends on both the physical and the biotic environment around the animals?”

Finding answers to these questions will be difficult. But in the meantime, the work demonstrates that to truly understand how collective and cooperative behaviors evolved, and how they continue to operate, researchers may need to study the seeming misfits that don’t participate.

“We’ve spent a long time trying to understand how things synchronize,” Tarnita said. “No one has really been interested in the single cells that don’t seem to do anything, or the lazy ants, or the wildebeests that for some reason decide not to migrate, or the locusts that peel off. We’ve just never really paid attention.”

Maybe now we are.

**Correction:** May 21, 2020

*A bracketed word was inserted into the final quotation from Tarnita to clarify her meaning.*

*This article was reprinted on **TheAtlantic.com**.*