If, for example, you cut open a pressurized scuba tank, the air molecules inside will spew out and spread throughout the room. Place an ice cube in hot water and the water molecules frozen in the ordered, crystalline lattice will break their bonds and disperse. In mixing and spreading, a system strives toward equilibrium with its environment, a process called thermalization.

It’s common and intuitive, and precisely what a team of physicists expected to see when they lined up 51 rubidium atoms in a row, holding them in place with lasers. The atoms started in an orderly pattern, alternating between the lowest-energy “ground” state and an excited energy state. The researchers assumed the system would quickly thermalize: The pattern of ground and excited states would settle almost immediately into a jumbled sequence.

And at first, the pattern did jumble. But then, shockingly, it reverted to the original alternating sequence. After some more mixing, it returned yet again to that initial configuration. Back and forth it went, oscillating a few times in under a microsecond — long after it should have thermalized.

It was as if you dropped an ice cube in hot water and it didn’t just melt away, said Mikhail Lukin, a physicist at Harvard University and a leader of the group. “What you see is the ice melts and crystallizes, melts and crystallizes,” he said. “It’s something really unusual.”

Physicists have dubbed this bizarre behavior “quantum many-body scarring.” As if scarred, the atoms seem to bear an imprint of the past that draws them back to their original configuration over and over again.

In the 16 months since the result was published in *Nature*, several groups of physicists have tried to understand the nature of these quantum scars. Some believe that the discovery might herald a new category of how quantum particles interact and behave — one that defies physicists’ assumption that such systems follow an inexorable march toward thermalization. In addition, this scarring effect could lead to new kinds of longer-lasting quantum bits, key ingredients in any future quantum computer.

Indeed, when the physicists built their 51-atom system, they had their eye on quantum computing. The system was, in fact, a quantum simulator, a machine designed to simulate quantum processes that are otherwise impossible to investigate with a classical computer. At the time, the machine was the largest quantum simulator ever built.

The atoms in the Harvard machine serve as qubits, their on-off states being either the ground state or an excited state called a Rydberg state. The system allows researchers to tune it as they wish, for example by adjusting how strongly the atoms interact with one another.

The researchers prepared several initial configurations of ground and excited states. Because the atoms strongly interact with one another, they should thermalize. Instead of intermingling like gas molecules, though, the atoms in this kind of quantum system develop deep quantum connections with one another, called entanglement. “Then entanglement will just spread,” Lukin said. “That’s how thermalization occurs.”

Usually, entanglement did grow in the simulator. Yet when the researchers started the experiment in a configuration of alternating excited and ground states, the particles became entangled and then disentangled, oscillating as they moved in and out of their original configuration.
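The essence of these revivals can be reproduced in a small numerical toy model. The sketch below is an assumption, not the Harvard setup: it uses just 8 atoms instead of 51, and idealizes the Rydberg blockade as the so-called "PXP" constraint, in which an atom can flip only when its neighbors are in the ground state. Evolving the alternating initial state and tracking the overlap with it shows the fidelity collapse and then revive, over and over:

```python
import numpy as np

N = 8                       # toy chain; the actual experiment used 51 atoms
dim = 2 ** N
H = np.zeros((dim, dim))
for s in range(dim):
    for i in range(N):
        # Idealized blockade: site i may flip only if its neighbors are in the ground state.
        left = (s >> (i - 1)) & 1 if i > 0 else 0
        right = (s >> (i + 1)) & 1 if i < N - 1 else 0
        if left == 0 and right == 0:
            H[s ^ (1 << i), s] += 1.0

# Alternating initial state |10101010>: excited atoms on even sites.
z2 = sum(1 << i for i in range(0, N, 2))
psi0 = np.zeros(dim)
psi0[z2] = 1.0

E, V = np.linalg.eigh(H)    # exact diagonalization of the toy Hamiltonian
c = V.T @ psi0
times = np.linspace(0.0, 10.0, 201)
# Fidelity: probability of finding the system back in its initial configuration.
fids = [float(abs((V @ (np.exp(-1j * E * t) * c))[z2]) ** 2) for t in times]
```

The fidelity starts at 1, plunges as entanglement spreads, then climbs back up — the numerical signature of the oscillations the researchers saw.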

The behavior seemed unlikely to the point of being impossible. Once the atoms start interacting, their alternating pattern should quickly be forgotten, since the atoms can transition into an enormous number of possible sequences of excited and ground states. It’s like the case of the scuba tank, in which the air molecules escape their initial configuration inside the tank and disperse through the room. There are so many possible places for the molecules to explore that the probability that all of them will spontaneously squeeze back into the tank is effectively zero.

“The available quantum system can exist in so many possible states that it would be extremely hard for them to come back and find where they came from,” said Zlatko Papić, a physicist at the University of Leeds in England.

Yet that’s exactly what Lukin said they observed. The system seems to be imbued with some special physics that allows it to retrace its path, Papić said. “It leaves bread crumbs and goes back to where it came from.”

“This was the first real discovery which was made with a quantum machine,” Lukin said.

Lukin and colleagues started to write up the experiment, but before the paper was published, Lukin described it at a conference in Trieste, Italy, in July 2017. “We didn’t know what to make of it,” said Papić, who was in the audience that day. “I don’t think anyone in the audience had an idea for what could be the reason for this.”

Soon enough, however, Papić and colleagues realized this behavior was reminiscent of a phenomenon discovered roughly 30 years ago.

In the 1980s, the physicist Eric Heller of Harvard was exploring quantum chaos: What happens when you apply quantum mechanics to chaotic systems? In particular, Heller considered how a billiard ball bounces inside a “Bunimovich stadium” — a rectangle capped by semicircles. The system is chaotic; given enough time, the ball will cover every possible trajectory inside the stadium. But if you start the ball at a certain angle, it will instead retrace the same path forever.
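Both behaviors show up in a crude simulation. The sketch below is an illustrative assumption (a stadium with straight walls of half-length 2 and caps of radius 1, advanced in small time steps with approximate specular reflections): a ball launched straight up from the center retraces the same vertical bounce forever, while two nearly identical oblique launches separate rapidly — the hallmark of chaos.

```python
import math

def step(pos, vel, dt=0.01, L=2.0, R=1.0):
    """Advance one small step inside a Bunimovich stadium: straight walls at
    y = +-R for |x| <= L, plus semicircular caps of radius R centered at
    (+-L, 0). On crossing the boundary, reflect the velocity and snap back."""
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    if abs(x) <= L:
        if abs(y) > R:                      # straight wall: flip vertical velocity
            vy = -vy
            y = math.copysign(R, y)
    else:
        cx = math.copysign(L, x)            # center of the nearer cap
        dx, dy = x - cx, y
        r = math.hypot(dx, dy)
        if r > R:                           # circular cap: reflect about the normal
            nx, ny = dx / r, dy / r
            dot = vx * nx + vy * ny
            vx, vy = vx - 2 * dot * nx, vy - 2 * dot * ny
            x, y = cx + nx * R, ny * R
    return (x, y), (vx, vy)

# Special launch: straight up from the center. The ball bounces between the
# flat walls along the same vertical segment forever.
pos, vel = (0.0, 0.0), (0.0, 1.0)
for _ in range(10000):
    pos, vel = step(pos, vel)
drift = abs(pos[0])                         # stays on the periodic orbit

# Generic launch: two trajectories whose angles differ by 1e-7 radians.
a = 0.7
p1, v1 = (0.5, 0.2), (math.cos(a), math.sin(a))
p2, v2 = (0.5, 0.2), (math.cos(a + 1e-7), math.sin(a + 1e-7))
max_sep = 0.0
for _ in range(30000):
    p1, v1 = step(p1, v1)
    p2, v2 = step(p2, v2)
    max_sep = max(max_sep, math.hypot(p1[0] - p2[0], p1[1] - p2[1]))
```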

In a thought experiment, Heller replaced the ball with a quantum particle. “The naive expectation is that if your classical system is already chaotic,” Papić said, when you add the rules of quantum mechanics “you should expect even more chaotic behavior.” The particle’s wave function — the abstract mathematical encapsulation of its quantum properties — should get smeared across the stadium just as water waves undulate throughout a pond. The likelihood that you’ll find the particle in a given place should be the same everywhere in the stadium.

But Heller discovered that instead of spreading out, the particle’s wave function congregates along the same paths as in the special classical case where the ball retraces itself. It’s as if the waves develop a memory of this special trajectory. “It’s like going home for them,” Heller said. “They really want to go back to where they’re born. It’s that simple.”

While hanging out along this trajectory, the particle’s wave constructively interferes with itself — crests add to crests and troughs add to troughs. As a result, the particle is most likely to reside somewhere along this path. On a graph, the particle’s probability distribution resembles fuzzier versions of those classical periodic trajectories. “They look to me like scars,” Heller said. So, in his 1984 paper, that’s what he called them.

Maybe a similar phenomenon could explain why the 51-atom system kept coming back to its initial configuration, Papić thought. Maybe it, too, was homesick.

To find out, Papić and his colleagues analyzed the quantum states of a model of the 51-atom system. Its weird oscillating behavior, they found, did in fact seem similar to Heller’s quantum scarring. They identified states that resembled the special states that correspond to scarred trajectories. By periodically returning to those states, the system could avoid thermalization. The connection to quantum scarring was suggestive enough that, in a paper last year in *Nature Physics*, they dubbed the phenomenon “quantum many-body scarring.”

Despite some initial skepticism of Papić’s analysis, Lukin, along with Wen Wei Ho, a physicist at Harvard, and others, then made the link to quantum scarring more explicit in a paper published in January. They identified a classical way to describe the state of the 51-atom system as a point in abstract space. As the system’s state changes, the point moves around. The researchers found that when the system undergoes its weird oscillations, the point sloshes back and forth in a manner akin to a ball’s special periodic paths across the stadium billiard table.

By finding a classical analogy, the researchers strengthened the case that Heller’s single-body phenomenon does in fact apply to a many-body system. “These guys are on to something,” Heller said. “They really are.”

What’s clear is that the experiment has triggered the interest of researchers around the world. One group at the California Institute of Technology has identified the mathematical expressions that represent some of the special scarring states of the 51-atom system. Another at Princeton University has suggested that scars could be a broader phenomenon with applications in different areas of condensed matter physics. “We kind of think we understand what’s happening in this system,” Ho said. “But we still don’t have a general recipe for when you can find other scarred trajectories.”

And deeper questions remain. “‘Scars’ is a useful description of the problem,” said Vedika Khemani, a physicist at Harvard who is not involved with the experiment. “But I still don’t think we have any understanding of what causes the scar.”

Despite these unknowns, many-body scarring excites physicists because it could represent a new class of quantum system.

Over the past few years, physicists have been exploring another such class, called many-body localization, in which random impurities prevent a system from thermalizing. As an analogy, consider a herd of cows wandering around a flat landscape. The cows should eventually spread out — call it bovine thermalization. But if the landscape features random hills, the cows will instead find themselves stuck in the valleys.

Similarly, the quantum many-body scarred system isn’t a chaotic, thermalizing system. Yet it doesn’t have anything like hills. “This work suggests there’s a new class of system that’s in between these two things,” Papić said.

To explain the scarring effect, Khemani’s recent analysis suggests that the 51-atom system might be (at least close to) what’s called an integrable system. Such a system is a special, isolated case, with many constraints and features that are tuned in a way that prevents it from ever thermalizing. So if the scarring system is integrable, it would be a special instance of an already familiar class of phenomena rather than something new.

Physicists have been studying integrable systems for decades, and if this system is integrable, Papić said, the implications are less compelling than if it is a unique quantum system. Papić, Ho and Lukin have written a paper that argues against this possibility.

Regardless of whether many-body scarring is indeed a new class of quantum behavior, the discovery points to the tantalizing prospect that it might eventually be used to improve quantum computers.

One of the challenges of building a quantum computer is the need to protect its fragile qubits. Any disturbance or perturbation from the outside environment can cause the qubits to thermalize, erasing any stored information and rendering the computer useless. “If you can find a generic way to induce scarring in other systems, then maybe you can protect quantum information for a long time,” Ho said.

Scarring, then, might offer a way for the computer to cling onto memory, preserving the past before the chaos of thermalization wipes it away.

“There is some beautiful structure that somehow coexists with a totally random environment,” Papić said. “What kind of physics allows this to happen? This is a kind of deep and profound question that runs through many areas of physics, and I think this is another incarnation.”

Uhlenbeck’s work has “led to some of the most dramatic advances in mathematics in the last 40 years,” the prize citation reads.

Her research “inspired a generation of mathematicians,” said François Labourie of the University of Côte d’Azur in France. “She wanders around and finds new things that nobody has found before.”

In a phone interview today, Uhlenbeck said that she’s “a bit overwhelmed,” adding that after she learned about the prize on Sunday, she thought, “I hope I can hold myself together for this.”

Uhlenbeck, who was born in 1942 in Cleveland, was a voracious reader as a child, but she didn’t become deeply interested in mathematics until she enrolled in the freshman honors math course at the University of Michigan. “The structure, elegance and beauty of mathematics struck me immediately, and I lost my heart to it,” she wrote in the book *Mathematicians: An Outer View of the Inner World*.

Mathematics research had another feature that appealed to her at the time: It is something you can work on in solitude, if you wish. In her early life, she said in 1997, “I regarded anything to do with people as being sort of a horrible profession.”

In the mid-1960s Uhlenbeck attended graduate school at Brandeis University, where she chose Richard Palais for her adviser. Palais was exploring what was then mostly uncharted territory lying between analysis (a generalization of calculus) and topology and geometry (which study the structure of shapes). “I was attracted to that — the area in between things,” Uhlenbeck said in an interview last year in *Celebratio Mathematica*. “It was like jumping off a deck where you didn’t know what was going to happen.”

Palais and the mathematician Stephen Smale (who won a Fields Medal for his topology research soon after) had just made an advance regarding “harmonic maps” that would form the springboard for some of Uhlenbeck’s own most important results. The study of harmonic maps can be traced back to a centuries-old field of mathematics called the calculus of variations, which looks for shapes that are in equilibrium with respect to some natural physical measurement, such as energy, length or area. For instance, one of the oldest and most famous problems in the calculus of variations is the “brachistochrone” problem, posed by Johann Bernoulli in 1696, which asks for the curve down which a ball will roll most quickly from one point to another.
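Bernoulli’s answer to the brachistochrone problem is an arc of a cycloid, and its advantage over a straight ramp can be checked with standard closed-form descent times. The numbers below are an illustrative assumption, not from the source: a frictionless bead released from rest, g = 9.81 m/s², and an endpoint chosen to sit at the bottom of a unit cycloid arch.

```python
import math

g = 9.81            # gravitational acceleration, m/s^2
r = 1.0             # cycloid: x = r(t - sin t), y = r(1 - cos t)

# Endpoint at the bottom of the cycloid arch (parameter t = pi):
a, b = math.pi * r, 2.0 * r              # horizontal run and vertical drop

# Descent time along the cycloid from rest: t_f * sqrt(r/g), with t_f = pi.
t_cycloid = math.pi * math.sqrt(r / g)

# Descent time down a straight frictionless ramp between the same points:
# length L, acceleration g*b/L along the ramp, so T = sqrt(2*L^2 / (g*b)).
L = math.hypot(a, b)
t_line = math.sqrt(2.0 * L**2 / (g * b))
```

With these numbers the cycloid wins by roughly a fifth of a second (about 1.00 s versus 1.19 s), even though its path is longer.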

To understand what it means for a map to be harmonic, imagine some compact shape made of rubber — a rubber band, say, or a rubbery sphere. Next, choose a particular way to situate this shape inside a given space (such as an infinite three-dimensional space or a three-dimensional doughnut shape). This positioning of the shape is called a harmonic map if, roughly speaking, it puts the shape in equilibrium, meaning that the rubber won’t snap into some different configuration that has lower elastic potential energy (what mathematicians call Dirichlet energy).

When the space you’re mapping the rubbery shape into is a complicated object with holes (such as a doughnut surface or its higher-dimensional counterparts), a variety of harmonic maps may emerge. For instance, if you wrap a rubber band around the central hole of a doughnut surface, the band cannot shrink all the way down to a point without leaving the surface of the doughnut — instead it will contract down to the shortest route around the hole.

When the rubbery shape or the target space is some complicated, possibly high-dimensional object, figuring out the range of possibilities for harmonic maps can be tricky, since we can’t simply build a physical model and then see what the rubber does. Intuitively, we might try to build a harmonic map by starting with any map and then looking for ways to deform it, little by little, to bring it closer to equilibrium. But it’s not always clear whether such a process will eventually converge and reach equilibrium.
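The “deform it little by little” idea can be sketched in the simplest possible setting: a discrete rubber band of points winding once around a circle, relaxed by gradient descent on a discrete version of the Dirichlet energy. This toy is an assumption for illustration, not Uhlenbeck’s construction; here the process does converge, and the band settles into equal spacing — the discrete analogue of contracting to the shortest route around the hole.

```python
import numpy as np

n = 40
rng = np.random.default_rng(0)
# Lifted angles of a loop winding once around the circle, jiggled away
# from equal spacing.
u = 2 * np.pi * np.arange(n) / n + rng.normal(0, 0.2, n)

def gaps(u):
    """Angle gaps between consecutive points, accounting for the full turn."""
    d = np.roll(u, -1) - u
    d[-1] += 2 * np.pi          # the loop closes after one full winding
    return d

def energy(u):
    """Discrete Dirichlet (elastic) energy: sum of squared gaps."""
    return float(np.sum(gaps(u) ** 2))

e0 = energy(u)
eta = 0.2                       # step size, small enough for stability
for _ in range(5000):
    d = gaps(u)
    lap = d - np.roll(d, 1)     # discrete Laplacian: u_{i+1} - 2u_i + u_{i-1}
    u = u + eta * lap           # move each point toward its neighbors' average
```

After relaxation every gap is 2π/n, the equilibrium (harmonic) configuration; the winding number is preserved throughout, which is why the loop cannot simply shrink to a point.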

This question makes sense for other energy-like measures besides the Dirichlet energy, and Palais and Smale came up with a condition on energy measures which, when satisfied, guarantees that at least some of these deformation processes will indeed converge. The Palais-Smale condition was perfect for the one-dimensional case of harmonic maps, where we’re mapping an (infinitely narrow) rubber band into some compact space like a sphere or doughnut surface. But when the rubbery shape has dimension greater than one — if it’s a surface, for instance, or some higher-dimensional object — the Dirichlet energy does not always satisfy Palais and Smale’s condition, meaning that the process of gradually deforming a mapping to reduce its Dirichlet energy may sometimes fail to converge to a harmonic map.

In the mid-1970s, while a professor at the University of Illinois, Urbana-Champaign, Uhlenbeck set out to understand what this failure to converge can look like. Her five years at Urbana-Champaign were not especially happy ones — she and her then husband were both professors there, and she felt as if she was seen primarily as a “faculty wife” — but while there she met a postdoctoral fellow named Jonathan Sacks. Together, they explored a sequence of different energy-like measures on two-dimensional surfaces that each satisfies the Palais-Smale condition, and that approach the Dirichlet energy. In each of these alternative energy measures, the Palais-Smale condition guarantees that there is an energy-minimizing map. As the energy in question gets closer and closer to the Dirichlet energy, Uhlenbeck and Sacks asked, do these maps converge to a harmonic map?

The answer, they showed in the late 1970s and early 1980s, is “almost.” At nearly every point of the surface, these maps do converge to a harmonic map. But at a finite collection of points on the surface, the maps may start to form a very specific kind of singularity called a bubble where there’s no way to make sense of a mapping.

To envision a bubble singularity, imagine that you’re chewing gum, and you blow a bubble but then gradually pull more and more of the gum inside your mouth, while still maintaining the bubble at the same size. The gum forming the bubble will get stretched thinner and thinner, but the bubble will remain viable throughout this process (at least in an idealized setting in which the gum is infinitely stretchable). But at the end of this process, the bubble will pop, since you’ve pulled essentially all the gum inside your mouth.

In a similar way, Sacks and Uhlenbeck showed, the maps that minimize the alternative forms of energy converge to a harmonic map nearly everywhere, but near a handful of points on the surface they start to form bubbles. As the energy gets closer and closer to the Dirichlet energy, these bubbles will be built from smaller and smaller patches of the surface. At the end of the process, when we reach the actual Dirichlet energy, the map will want to make an entire bubble out of only a single point, so we’ll hit a singularity. Since the compact space the surface is getting mapped into — something like a sphere or a doughnut surface — has only a finite number of holes around which bubbles can form, there are only finitely many of these bubbling singularities.

Uhlenbeck and Sacks’ work showed that the topology of this space informs what singularities a harmonic map can have, since bubbles can form only around holes. And conversely, the existence of harmonic maps can illuminate the geometry and topology of the space. Their work was instrumental to the birth of a new field of mathematics: modern “geometric analysis.”

The bubbling analysis “has been revolutionary, in a sense,” Labourie said. “There was before the Sacks-Uhlenbeck paper, and after.”

Since that time, bubbling phenomena have been discovered in a wide range of settings in mathematics and physics, said Sun-Yung Alice Chang, a mathematician at Princeton University. “[Her] influence crosses the different branches of mathematics,” she said.

Uhlenbeck has called her early, isolationist approach to mathematics an advantage when it came to her work with Sacks. “When Jonathan Sacks taught me the problem, I didn’t have any built-in machinery to think about it,” she said in the *Celebratio Mathematica* interview. “So I was able to think about it on my own.”

Her work with Sacks was transformative, but Uhlenbeck was just getting started. In the early 1980s (by which time she was a professor at the University of Illinois, Chicago), she grew interested in gauge theory, an outgrowth of the theory of electromagnetism that provides the mathematical foundation for many physical theories, including the Standard Model of particle physics. As with harmonic maps, a major aspect of gauge theory involves finding objects that are in equilibrium with regard to a certain definition of energy. In this case, the objects in question are solutions to the “Yang-Mills” equations, which are analogous to Maxwell’s equations for electromagnetism.

In what Labourie called a “quantum leap,” Uhlenbeck attacked these equations from an analytic point of view. She identified a new coordinate system in which the equations could be studied more easily, and then proved her celebrated “removable singularities” theorem, which showed that for four-dimensional shapes, bubbling cannot occur around isolated points. In that setting, she showed, any finite-energy solution to the Yang-Mills equations that is well-defined in the neighborhood of a point will also extend smoothly to the point itself.

Uhlenbeck’s gauge-theory results “underpin most subsequent work in this area,” wrote Simon Donaldson of Imperial College London in a survey of her work earlier this month. Donaldson won a Fields Medal in 1986 for work that built on Uhlenbeck’s.

Uhlenbeck is the first woman to receive the Abel Prize in the award’s 17-year history. It’s far from the first time she has broken through a glass ceiling; in 1990, for instance, she was only the second woman ever to give a plenary lecture at the International Congress of Mathematicians, ending a 58-year dry spell. (The first woman to give such a lecture was Emmy Noether in 1932.) Over the years she has become an aspirational figure for a generation of female mathematicians. “We were all inspired by her,” said Chang, who earned her doctorate six years after Uhlenbeck.

As her career progressed, Uhlenbeck — who as a young mathematician had been drawn to the solitude of mathematical thought — embraced her role as a model for female mathematicians. In the early 1990s, she started co-leading a mentoring program for women in mathematics at the Institute for Advanced Study in Princeton, New Jersey. Being a role model is challenging, she wrote in 1996, because “what you really need to do is show students how imperfect people can be and still succeed. … I may be a wonderful mathematician and famous because of it, but I’m also very human.”

In 2007, as a professor at the University of Texas, Austin (where she is now a professor emeritus), Uhlenbeck reflected on her long career. “All in all, I have found great delight and pleasure in the pursuit of mathematics,” she wrote in accepting the Leroy P. Steele Prize from the American Mathematical Society. “Along the way I have made great friends and worked with a number of creative and interesting people. I have been saved from boredom, dourness, and self-absorption. One cannot ask for more.”

*This article was updated on March 19, 2019, with additional quotes from the prize winner.*

But that simple picture of mitochondria is turning out to be shockingly incomplete.

Mitochondria may look static and uniform in textbooks, but as researchers recognized early on, in reality the organelles change shape constantly through cycles of fusion (in which they combine and elongate) and fission (in which they split and shrink). They form highly dynamic, short-lived tubular networks threading throughout a cell. Recently, it has become clear that mitochondria also perform signaling and regulatory functions that are only indirectly related to their job as energy providers. In the past few years, research has revealed that one of their key roles is in controlling the development and ultimate role of stem cells.

Now scientists at the University of Ottawa in Canada have provided evidence that the morphing shapes of mitochondria powerfully influence neurogenesis, the development of neurons. In making this discovery, the scientists have pieced together a connection between the organelle’s shape transitions and how it carries out its signaling functions.

The first hints that mitochondria had a broader repertoire emerged in the mid-1990s. In one early study, researchers at Emory University and the University of Minnesota investigated apoptosis, the process of programmed cell death that eliminates cells from tissues as a normal part of growth and development. They found that cytochrome c — a protein essential to ATP production — was crucial to this process. Their work also indicated that, at least in principle, mitochondria might be able to trigger cell death by releasing the cytochrome c they housed into the surrounding cytoplasm.

According to Navdeep Chandel, a professor of biochemistry and molecular biology at Northwestern University, this was an aha! moment for mitochondrial biology, because it suggested that the organelles could generate signals to control other cellular processes.

The study propelled Chandel, then at the University of Chicago, and his colleagues to examine whether mitochondria could release other signals as well. Those investigations led to a discovery a couple of years later involving the reactive oxygen species (ROS) — unstable molecules containing oxygen, such as peroxides, singlet oxygen and hydroxyl radicals — that mitochondria release while making ATP. Under oxygen-deficient conditions, they observed, mitochondria produced higher levels of ROS, and the excess molecules exited into the cytoplasm, where they promoted the expression of proteins that helped the cells survive.

Since then, Chandel and others have shown that mitochondrial ROS signaling is important in diverse processes. One crucial role that has emerged is in promoting the differentiation of various types of stem cell, including those for blood and fat cells — and, most recently, for neurons.

For stem cells, the primary means of producing energy is glycolysis, a process that generates ATP in the cytoplasm, rather than oxidative phosphorylation, the mitochondria-dependent method preferred by most mature, specialized cells. Why the cells differ in this way is not known: It may have something to do with the rate or byproducts of each process. But whatever the reasons, for a long time that difference obscured the role of mitochondria in stem cells, says Mireille Khacho, a cell biologist at the University of Ottawa.

Stem cells can perpetually “self-renew” or make younger replacements for themselves. But if they instead differentiate into specific lineages, they shift their primary source of fuel from glycolysis to oxidative phosphorylation. Because the latter process generates more ATP, scientists initially believed that the cellular transformation must have high energy requirements that mandate the transition.

This thinking began to change in the early 2010s, however, when findings from a handful of papers suggested that the mode of metabolism can directly influence decisions about cell fate.

In one key paper from 2011, researchers studied how to reprogram adult cells to become induced pluripotent stem cells, which, like embryonic stem cells, can proliferate and mature into almost any cell type. They revealed that for this transformation to occur, the cells had to shift from oxidative phosphorylation to glycolysis. Moreover, they observed that the expression of proteins involved in mitochondrial energy production decreased before the expression of those involved in pluripotency increased — an indication that the metabolic switch might be what initiates the cells’ transformation.

Until that revelation, most stem cell biologists had been focused on the genetic and epigenetic modifications that control the cell identity transitions, says Clifford Folmes, a mitochondrial researcher at the Mayo Clinic in Phoenix, Arizona, who was one of the co-authors of that study. But that paper and others like it have made a case that changes in mitochondrial function may actually be key drivers of the process.

The discovery that mitochondria might control the reprogramming of cells drove Khacho and Ruth Slack, her postdoctoral adviser at the University of Ottawa, to further investigate the role of the organelles in neuronal stem cells.

Ample evidence already suggested that mitochondria are important for brain function: Not only are neurodevelopmental problems common consequences of many mitochondrial disorders, but several studies in both humans and animals have linked defects in mitochondrial fusion and fission with neurodevelopmental disorders, such as autism, and with neurodegenerative diseases, such as Alzheimer’s and amyotrophic lateral sclerosis (ALS).

In 2016, Slack, Khacho and their colleagues reported the first evidence that mitochondrial shape-shifting is a key regulator of neural stem cell fate, the decision to self-renew or differentiate. By deleting genes that encoded key proteins for the fusion and fission machinery in mice, they discovered that a deficiency in fusion proteins reduced neural stem cells’ capacity to replenish themselves and encouraged the cells to become neurons. A loss of fission proteins, on the other hand, stimulated the stem cells to self-renew.

Their work showed that changes in the shape and architecture of mitochondria are among the earliest, most “upstream” signals to determine which way neural stem cells will go.

Given the previously established link between alterations in the fission and fusion machinery and neurodegenerative disorders, the team also investigated whether disrupting mitochondrial dynamics could alter the production of new neurons. When they knocked out fusion proteins in the brains of fully grown mice, they discovered that this disruption of the shape-shifting process reduced the number of new neurons produced in the animals’ brains and led to impairments in memory and learning.

Genetic defects are also known to alter mitochondrial fission and fusion in humans, but the idea that they might particularly influence stem cells hasn’t really been explored yet, Slack said. “What we’re working on now is trying to find new ways, through dietary or pharmacological means, to improve mitochondrial function in stem cells so we can maintain optimum learning and memory for as long as we can.”

Alessandro Prigione, a stem cell scientist at the Max Delbrück Center for Molecular Medicine in Germany, acknowledges that several studies — including his own — point to the importance of mitochondria in neuronal cell fate. However, he adds, it’s too early to tell exactly how mitochondrial shape controls neurogenesis. “I think fission and fusion matter,” he said, but mitochondrial morphology is just “one piece of the puzzle.”

Prigione also advises caution in drawing conclusions about humans based on results from studies of neurogenesis in rodents. This is a particularly important consideration in studies conducted in mature animals, he says, because the question of whether the adult human brain generates new neurons at all is still a matter of debate.

Other research groups have also found that mitochondrial shape-shifting controls the fate of stem cells, but there seem to be notable dissimilarities across the array of stem cell varieties and experimental conditions. Studies on most types of stem cells show that their mitochondria are sparse and fragmented, but that they progressively elongate as the cells differentiate. Prigione’s experiments, for example, found this to be the case with human neuronal cells in culture. But Slack and Khacho saw the opposite in neural stem cells from rodents: In their work, mitochondria start off elongated in the stem cells, then become fragmented in progenitor cells (which are more committed to a specific cell fate) before becoming elongated again as they differentiate into neurons.

The real significance of Slack and Khacho’s work in neural stem cells might be that the mitochondria’s role in neurogenesis relates to something more dynamic than shape alone. According to Khacho, it’s likely that what matters isn’t the organelles’ form in a cell at a given moment, but rather their ability to morph through fission and fusion. Fission and fusion are happening all the time, and so far, scientists have only been looking at snapshots of this process. “Perhaps it’s the plasticity, the ability to change,” Khacho said. “That’s the important thing.”

Mitochondrial dynamics are clearly important for stem cell function in general, according to David Chan, who leads a lab that studies them at the California Institute of Technology — but the dynamics are especially complicated in neural stem cells. “I guess right now, the simple answer would be that neuronal cells are just different,” he said.

Exactly how mitochondrial shape-shifting can control decisions about cell fate is an open question.

Findings from Slack, Khacho and their colleagues suggest that changes in mitochondrial structure could modify the amount of ROS in cells. They’ve shown that fission and fusion can control levels of ROS, which can in turn regulate the decisions of stem cells to proliferate or differentiate.

“What they found is something interesting,” Chandel said. “The same ROS signaling that we’ve been talking about for 20 years happens in neurons, and mitochondrial dynamics can control that.”

But ROS is probably only part of the answer. Mitochondria can communicate with the cell in many ways, such as through the generation of other metabolites, the release and uptake of calcium, and changes in membrane potential. “Any signaling molecules that result from metabolic changes — and there are many, many molecules — could be important,” Slack said.

Moreover, it’s unlikely that the same mitochondrial signals control the fate of different stem cell types. “We know that mitochondria participate in a number of differentiation processes,” said Luca Scorrano, a biochemist at the University of Padua in Italy. But “as soon as we look into the specificity of the mitochondrial participation … we see that the signaling cascades which are regulated by mitochondrial dynamics are not necessarily the same.”

Both Slack and Khacho are searching for other mitochondrial metabolites that might be involved in stem cell fate. Khacho, who now leads her own lab at the University of Ottawa, has moved on from neural stem cells to ones for muscle, and she hopes to identify similarities in mitochondrial dynamics and ROS signaling in another cell type. “I wanted to see: Is there another stem cell population that is utilizing mitochondria the same way?” she said. “Then I’m hoping to take it beyond that and try to identify the mechanisms of how that’s happening.”

“I think it’s an area that’s going to get a lot more interest,” Slack said. “The fact that mitochondria can signal to the nucleus and alter the fate of the cell, I think, really is important.” And because so many of the signals themselves seem to be metabolite molecules, scientists should potentially be able to manipulate them easily to alter the fates of cells or to revitalize depleted stem cell populations. “That’s why we’re excited.”

1, 2, 4, 8

Here’s another number, in case you need a little more data before deciding.

1, 2, 4, 8, 16

The next number has to be 32, right? The pattern is clear: To find the next number, double the current one. We have 1 × 2 = 2; 2 × 2 = 4; 4 × 2 = 8; 8 × 2 = 16. The next number should be 16 × 2 = 32. How much more evidence do we need?

While it’s perfectly reasonable to believe the next number is 32, it happens to be wrong. Consider the following sequence.

Here we are counting the regions formed by connecting points on a circle. One point yields one region (the interior of the circle); two points yields two regions; three points yields four regions. Four and five points yield eight and 16 regions, respectively. This gives us the sequence:

1, 2, 4, 8, 16

So, how many regions are created by connecting six points on a circle?

You’d be forgiven for thinking, like everyone else who first meets this problem, that the answer is 32. But it’s not. The answer is an annoying 31 regions! Count them yourself. And count them again to be sure.

Of course, there are patterns that go 1, 2, 4, 8, 16, 32, 64, and so on, doubling each term. But there are also patterns, like the maximum number of regions formed by connecting points on a circle, that go 1, 2, 4, 8, 16, 31, 57, 99, and so on. When we see the sequence 1, 2, 4, 8, 16, we might think all the evidence points to the next term being 32, but it could be something else.
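The circle-region counts, in fact, have a known closed form: for *n* points, the maximum number of regions is C(*n*, 4) + C(*n*, 2) + 1. A few lines of Python (a check added here, not part of the original puzzle) confirm the surprising jump from 16 to 31:

```python
from math import comb

def circle_regions(n: int) -> int:
    """Maximum number of regions formed by connecting n points on a circle."""
    # Closed form for Moser's circle problem: C(n,4) + C(n,2) + 1
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 9)])
# → [1, 2, 4, 8, 16, 31, 57, 99]
```

The early doubling is a coincidence of small numbers; the binomial terms only begin to drift away from the powers of 2 at *n* = 6.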

Mathematics has a long history of defying expectations and forcing us to expand our imaginations. That’s one reason mathematicians strive for proof, not just evidence. It’s proof that establishes mathematical truth. All available evidence might point to 32 as the next number in our sequence, but without a proof, we can’t be certain.

Still, evidence is important and useful in mathematics. Often, before proving something, we play around, explore, consider examples and collect data. We examine and weigh the evidence and decide what comes next. These results shape our opinions, suggesting that we should try to prove some theorems but disprove others.

The twin primes conjecture is one example where evidence, as much as proof, guides our mathematical thinking. Twin primes are pairs of prime numbers that differ by 2 — for example, 3 and 5, 11 and 13, and 101 and 103 are all twin prime pairs. The twin primes conjecture hypothesizes that there is no largest pair of twin primes, that the pairs keep appearing as we make our way toward infinity on the number line.

The twin primes conjecture is not the Twin Primes Theorem, because, despite being one of the most famous problems in number theory, no one has been able to prove it. Yet almost everyone believes it is true, because there is lots of evidence that supports it.

For example, as we search for large primes, we continue to find extremely large twin prime pairs. The largest currently known pair of twin primes have nearly 400,000 digits each. And results similar to the twin primes conjecture have been proved. In 2013, Yitang Zhang shocked the mathematical world by proving that there are infinitely many prime number pairs that differ by 70 million or less. Thanks to a subsequent public “Polymath” project, we now know that there are infinitely many pairs of primes that differ by no more than 246. We still haven’t proved that there are infinitely many pairs of primes that differ by 2 — the twin primes conjecture — but 2 is a lot closer to 246 than it is to infinity.

For these reasons and more, believing that the twin primes conjecture is true, even though it hasn’t been proved, isn’t very controversial. But there are other areas of math where evidence is being used to inform opinion in more controversial ways.

In the study of elliptic curves, the “rank” of a curve, roughly speaking, is a numeric measure of how complex that curve’s solutions can be. For many years the consensus view has been that the ranks of elliptic curves are unbounded, meaning there is no limit to how high a curve’s rank, or how complex its solutions, can be.

But recent work has some mathematicians thinking that ranks may be bounded after all. The work presents evidence suggesting that, just maybe, there are only finitely many elliptic curves whose rank is greater than 21.

Still, there are reasons to be cautious. The compelling evidence they’ve collected doesn’t come from the world of elliptic curves. It comes from the world of matrices, which the researchers used to model elliptic curves. Mathematical models are used everywhere in science and can even be turned inward to study mathematics itself. They are incredibly powerful tools that allow us to trade a problem we don’t fully understand for one we have a better handle on.

But using models is inherently tricky. We can never be certain that our model behaves enough like the thing we are actually trying to understand to draw conclusions about it. Nor can we be sure that our model is similar enough in the ways that really matter. So it can be hard to know that the evidence we collect from the model is truly evidence about the thing we want to know about. Let’s explore some of these issues using a simple model of a simple conjecture.

Imagine we want to investigate the following claim: *Any two lines either intersect or are parallel.*

By “intersect” we mean the lines share a point in common, and by “parallel” we mean they go off in the same direction but do not intersect. (There are different ways to define parallelism, but we’ll go with this for simplicity.)

To investigate this claim we will create a model. We’ll imagine each line to be in “slope-intercept” form, which you may remember from algebra class. That is, we’ll assume that every line can be written as an equation:

*y* = *mx* + *b*

where *m* is the slope of the line (essentially its steepness) and *b* is the y-intercept (where it passes through the vertical axis).

Modeling lines in this way gives us a convenient way to experiment with them. The model lets us create a random line by picking a pair of random numbers, *m* and *b*. Thus, we can pick a pair of random lines and test them: Do they intersect? Do they point in the same direction? Or does something else happen?

Here are some examples of what such experimentation might look like.

In each example above, we see that the randomly selected lines intersect. If we tried this experiment 1,000 times — or 10,000 times or 1 million times — we would find that, in all cases, the lines would either intersect or be parallel. (In fact, all pairs of lines would probably intersect, since it’s unlikely that the exact same slope will be chosen for both lines.)
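Here is a minimal sketch of that experiment in Python — a toy model with an arbitrary choice of sampling range, not anything prescribed by the conjecture:

```python
import random

def classify(m1, b1, m2, b2, tol=1e-12):
    """Classify a pair of lines y = m*x + b in the plane."""
    if abs(m1 - m2) > tol:
        return "intersect"  # unique crossing at x = (b2 - b1) / (m1 - m2)
    return "parallel" if abs(b1 - b2) > tol else "same line"

random.seed(0)
counts = {"intersect": 0, "parallel": 0, "same line": 0}
for _ in range(100_000):
    # Arbitrary (and, as we'll see, problematic) choice of sampling range.
    m1, b1, m2, b2 = (random.uniform(-100, 100) for _ in range(4))
    counts[classify(m1, b1, m2, b2)] += 1
print(counts)  # in practice, essentially every pair intersects
```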

So after looking at 1 million examples, you might come to the conclusion that the conjecture is probably true. All the evidence overwhelmingly supports the claim that any pair of lines either intersects or is parallel.

But evidence is only as good as the model, and modeling can be dangerous business. Let’s see what danger we’ve created for ourselves.

One problem is that certain kinds of lines seem more likely to be chosen than others. Here’s a graph showing 50 lines with *b* = 0 and 0 ≤ *m* ≤ 1.

And here is a graph showing 50 lines with *b* = 0 and *m* ≥ 1.

It appears that a quarter of the plane is covered by lines with slopes between 0 and 1, and another quarter of the plane is covered by lines with slope greater than 1. Choosing a number larger than 1 seems much more likely than choosing a number between 0 and 1, thus a line is much more likely to be selected from the second region than from the first. This means certain kinds of lines — those with slopes between 0 and 1 — could be vastly underrepresented in our model. If strange things are happening with lines in that region of the plane, our model is very unlikely to tell us about it.
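To see how lopsided this can get, imagine drawing slopes uniformly from 0 to 1,000 — a hypothetical range chosen just for illustration:

```python
import random

random.seed(1)
N = 100_000
# Sample slopes uniformly from a hypothetical range [0, 1000].
slopes = [random.uniform(0, 1000) for _ in range(N)]
gentle = sum(1 for m in slopes if m <= 1)  # slopes between 0 and 1
print(gentle / N)  # ≈ 0.001 — gentle slopes are a thousandfold underrepresented
```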

A closer look at the second graph suggests another problem. As *m* gets larger, the lines get steeper. The steepest possible line is vertical. What is the slope of a vertical line? By definition, the slope of a vertical line is undefined: There is no number *m* we could choose to create a vertical line. That means these lines don’t exist in our model, and so we will never be able to experiment with them. Before we even begin collecting evidence, we have excluded these possibilities by design.

And this speaks to the heart of the most serious issue with our model. Anyone comfortable thinking three-dimensionally probably noticed right away that our conjecture is false. Lines do not only have to either intersect or be parallel. Imagine two hallways running in different directions on different floors of a building. These are examples of “skew” lines — lines that do not intersect and are not parallel.

The important fact about skew lines is that they must lie in different planes. But since our model identifies every line with an equation *y* = *mx* + *b*, we automatically imagine every line in the same plane. Our model will only generate evidence supporting our conjecture, because if two lines lie in the same plane, it is true that they must either intersect or be parallel. We’ll never see any evidence suggesting otherwise: Skew lines don’t exist in our model. Just as we saw with vertical lines, our model has excluded what we failed to imagine.

This is a simple example using a silly model with lots of issues, including pesky questions about how we choose random numbers from infinite sets. The professional mathematicians exploring the rank of elliptic curves would never make the kind of simplistic and obvious errors highlighted here.

Those mathematicians know to be cautious when working with their models. Because they know that no matter how useful and interesting their model, no matter how compelling the evidence they collect, there might be something out there about elliptic curves that they didn’t quite imagine. And if you can’t imagine it, your model can’t capture it, and that means the evidence won’t reflect it.

But right or wrong, this new model has mathematicians thinking productively about elliptic curves. If the model really does reflect the truth, insight from the world of matrices might explain why elliptic curves behave the way they do. If it doesn’t, figuring out why elliptic curves can’t all be modeled this way might also lead to a deeper understanding of the problem. The evidence we collect may lead us closer to proof, one way or another.

*Corrected on March 14, 2019: A previous version of this article mischaracterized the ratio of twin prime pairs to the overall number of primes.*

Since then, scientists have been trying to understand what goes into making this blueprint, and how instructive it is. (Driesch himself, frustrated at his inability to come up with a solution, threw up his hands and left the field entirely.) It’s now known that some form of positional information makes genes variously switch on and off throughout the embryo, giving cells distinct identities based on their location. But the signals carrying that information seem to fluctuate wildly and chaotically — the opposite of what you might expect for an important guiding influence.

“The embryo is a noisy environment,” said Robert Brewster, a systems biologist at the University of Massachusetts Medical School. “But somehow it comes together to give you a reproducible, crisp body plan.”

The same precision and reproducibility emerge from a sea of noise again and again in a range of cellular processes. That mounting evidence is leading some biologists to a bold hypothesis: that where information is concerned, cells might often find solutions to life’s challenges that are not just good but optimal — that cells extract as much useful information from their complex surroundings as is theoretically possible. Questions about optimal decoding, according to Aleksandra Walczak, a biophysicist at the École Normale Supérieure in Paris, “are everywhere in biology.”

Biologists haven’t traditionally cast analyses of living systems as optimization problems because the complexity of those systems makes them hard to quantify, and because it can be difficult to discern what would be getting optimized. Moreover, while evolutionary theory suggests that evolving systems can improve over time, nothing guarantees that they should be driven to an optimal level.

Yet when researchers have been able to appropriately determine what cells are doing, many have been surprised to see clear indications of optimization. Hints have turned up in how the brain responds to external stimuli and how microbes respond to chemicals in their environments. Now some of the best evidence has emerged from a new study of fly larva development, reported recently in *Cell*.

For decades, scientists have been studying fruit fly larvae for clues about how development unfolds. Some details became apparent early on: A cascade of genetic signals establishes a pattern along the larva’s head-to-tail axis. Signaling molecules called morphogens then diffuse through the embryonic tissues, eventually defining the formation of body parts.

Particularly important in the fly are four “gap” genes, which are expressed separately in broad, overlapping domains along the axis. The proteins they make in turn help regulate the expression of “pair-rule” genes, which create an extremely precise, periodic striped pattern along the embryo. The stripes establish the groundwork for the later division of the body into segments.

How cells make sense of these diffusion gradients has always been a mystery. The widespread assumption was that after being pointed in roughly the right direction (so to speak) by the protein levels, cells would continuously monitor their changing surroundings and make small corrective adjustments as development proceeded, locking in on their planned identity relatively late. That model harks back to the “developmental landscape” proposed by Conrad Waddington in 1956. He likened the process of a cell homing in on its fate to a ball rolling down a series of ever-steepening valleys and forked paths. Cells had to acquire more and more information to refine their positional knowledge over time — as if zeroing in on where and what they were through “the 20 questions game,” according to Jané Kondev, a physicist at Brandeis University.

Such a system could be accident prone, however: Some cells would inevitably take the wrong paths and be unable to get back on track. In contrast, comparisons of fly embryos revealed that the placement of pair-rule stripes was incredibly precise, to within 1 percent of the embryo’s length — that is, to single-cell accuracy.

That prompted a group at Princeton University, led by the biophysicists Thomas Gregor and William Bialek, to suspect something else: that the cells could instead get all the information they needed to define the positions of pair-rule stripes from the expression levels of the gap genes alone, even though those are not periodic and therefore not an obvious source for such precise instructions.

And that’s just what they found.

Over the course of 12 years, they measured morphogen and gap-gene protein concentrations, cell by cell, from one embryo to the next, to determine how all four gap genes were most likely to be expressed at every position along the head-to-tail axis. From those probability distributions, they built a “dictionary,” or decoder — an explicit map that could spit out a probabilistic estimate of a cell’s position based on its gap-gene protein concentration levels.

Around five years ago, the researchers — including Mariela Petkova, who started the measurement work as an undergraduate at Princeton (and is currently pursuing a doctorate in biophysics at Harvard University), and Gašper Tkačik, now at the Institute of Science and Technology Austria — determined this mapping by assuming it worked like what’s known as an optimal Bayesian decoder (that is, the decoder used Bayes’ rule for inferring the likelihood of an event from prior conditional probabilities). The Bayesian framework allowed them to flip the “unknowns,” the conditions of probability: Their measurements of gap gene expression, given position, could be used to generate a “best guess” of position, given only gap gene expression.
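To make the idea concrete, here is a toy version of such a decoder. Everything in it — the sigmoid expression profiles, their midpoints, the noise level — is invented for illustration; the real decoder was built from the team’s measured probability distributions, not from assumed curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the decoder (illustrative assumptions only): four
# "gap gene" profiles, each a sigmoid with its own midpoint along the
# head-to-tail axis, read out with Gaussian measurement noise.
positions = np.linspace(0.0, 1.0, 201)      # candidate positions (0 = head, 1 = tail)
midpoints = np.array([0.2, 0.4, 0.6, 0.8])  # assumed profile midpoints
sigma = 0.05                                # assumed noise level

def profiles(x):
    """Mean expression of each of the four genes at position(s) x."""
    return 1.0 / (1.0 + np.exp((np.atleast_1d(x)[:, None] - midpoints) / 0.08))

def decode(readings):
    """Bayes' rule with a flat prior: P(x | g) is proportional to P(g | x)."""
    log_like = -np.sum((readings - profiles(positions)) ** 2, axis=1) / (2 * sigma ** 2)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

true_x = 0.55
readings = profiles(true_x)[0] + rng.normal(0.0, sigma, size=4)
posterior = decode(readings)
print("true position:", true_x)
print("decoded (MAP):", positions[posterior.argmax()])
```

Even in this cartoon, four noisy readings pin the position down tightly; dropping genes from the likelihood broadens the posterior, mirroring the degradation the team saw with two- or three-gene decoders.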

The team found that the fluctuations of the four gap genes could indeed be used to predict the locations of cells with single-cell precision. No less than maximal information about all four would do, however: When the activity of only two or three gap genes was provided, the decoder’s location predictions were not nearly so accurate. Versions of the decoder that used less of the information from all four gap genes — that, for instance, responded only to whether each gene was on or off — made worse predictions, too.

According to Walczak, “No one has ever measured or shown how well reading out the concentration of these molecular gradients … actually pinpoints a specific position along the axis.”

Now they had: Even given the limited number of molecules and the underlying noise of the system, the varying concentrations of the gap genes were sufficient to differentiate two neighboring cells along the head-to-tail axis — and the rest of the gene network seemed to be transmitting that information optimally.

“But the question always remained open: Does the biology actually care?” Gregor said. “Or is this just something that we measure?” Could the regulatory regions of DNA that responded to the gap genes really be wired up in such a way that they could decode the positional information those genes contained?

The biophysicists teamed up with the Nobel Prize-winning biologist Eric Wieschaus to test whether the cells were actually making use of the information potentially at their disposal. They created mutant embryos by modifying the gradients of morphogens in the very young fly embryos, which in turn altered the expression patterns of the gap genes and ultimately caused pair-rule stripes to shift, disappear, get duplicated or have fuzzy edges. Even so, the researchers found that their decoder could predict the changes in mutated pair-rule expression with surprising accuracy. “They show that the map is broken in mutants, but in a way that the decoder predicts,” Walczak said.

“You could imagine that if it was getting information from other sources, you couldn’t trick it like that,” Brewster added. “Your decoder would fail.”

These findings represent “a signpost,” according to Kondev, who was not involved with the study. They suggest that there’s “some physical reality” to the inferred decoder, he said. “Through evolution, these cells have figured out how to implement Bayes’ trick using regulatory DNA.”

How the cells do it remains a mystery. Right now, “the whole thing is kind of wonderful and magical,” said John Reinitz, a systems biologist at the University of Chicago.

Even so, the work provides a new way of thinking about early development, gene regulation and, perhaps, evolution in general.

The findings provide a fresh perspective on Waddington’s idea of a developmental landscape. According to Gregor, their work indicates that there’s no need for 20 questions or a gradual refinement of knowledge after all. The landscape “is steep from the beginning,” he said. All the information is already there.

“Natural selection is pushing the system hard enough so that it … reaches a point where the cells are performing at the limit of what physics allows,” said Manuel Razo-Mejia, a graduate student at the California Institute of Technology.

It’s possible that the high performance in this case is a fluke: Since fruit fly embryos develop very quickly, perhaps in their case “evolution has found this optimal solution because of that pressure to do everything very rapidly,” said James Briscoe, a biologist at the Francis Crick Institute in London who did not participate in this study. To really cement whether this is something more general, then, researchers will have to test the decoder in other species, including those that develop more slowly.

Even so, these results set up intriguing new questions to ask about the often-enigmatic regulatory elements. Scientists don’t have a solid grasp of how regulatory DNA codes for the control of other genes’ activities. The team’s findings suggest that this involves an optimal Bayesian decoder, which allows the regulatory elements to respond to very subtle changes in combined gap gene expression. “We can ask the question, what is it about regulatory DNA that encodes the decoder?” Kondev said.

And “what about it makes it do this optimal decoding?” he added. “That’s a question we could not have asked before this study.”

“That’s really what this work sets up as the next challenge in the field,” Briscoe said. Besides, there may be many ways of implementing such a decoder at the molecular level, meaning that this idea could apply to other systems as well. In fact, hints of it have been uncovered in the development of the neural tube in vertebrates, the precursor of their central nervous system — which would call for a very different underlying mechanism.

Moreover, if these regulatory regions need to perform an optimal decoding function, that potentially limits how they can evolve — and in turn, how an entire organism can evolve. “We have this one example … which is the life that evolved on this planet,” Kondev said, and because of that, the important constraints on what life can be are unknown. Finding that cells show Bayesian behavior could be a hint that processing information effectively may be “a general principle that makes a bunch of atoms stuck together loosely behave like the thing that we think is life.”

But right now, it is still only a hint. Although it would be “kind of a physicist’s dream,” Gregor said, “we are far from really having proof for this.”

The concept of information optimization is rooted in electrical engineering: Experts originally wanted to understand how best to encode and then decode sound to allow people to talk on the telephone via transoceanic cables. That goal later turned into a broader consideration of how to transmit information optimally through a channel. It wasn’t much of a leap to apply this framework to the brain’s sensory systems and how they measured, encoded and decoded inputs to produce a response.

Now some experts are trying to think about all kinds of “sensory systems” in this way: Razo-Mejia, for instance, has studied how optimally bacteria sense and process chemicals in their environment, and how that might affect their fitness. Meanwhile, Walczak and her colleagues have been asking what a “good decoding strategy” might look like in the adaptive immune system, which has to recognize and respond to a massive repertoire of intruders.

“I don’t think optimization is an aesthetic or philosophical idea. It’s a very concrete idea,” Bialek said. “Optimization principles have time and again pointed to interesting things to measure.” Whether or not they are correct, he considers them productive to think about.

“Of course, the difficulty is that in many other systems, the property being decoded is more difficult than one-dimensional position,” Walczak said. “The problem is harder to define.”

That’s what made the system Bialek and his colleagues studied so tantalizing. “There aren’t many examples in biology where a high-level idea, like information in this case, leads to a mathematical formula” that is then testable in experiments on living cells, Kondev said.

It’s this marriage of theory and experiment that excites Bialek. He hopes to see the approach continue to guide work in other contexts. “What’s not clear,” he said, “is whether the observation is a curiosity that arises in a few corners, or whether there’s something general about it.”

If the latter does prove to be the case, “then that’s very striking,” Briscoe said. “The ability for evolution to find these really efficient ways of doing things would be an incredible finding.”

Kondev agreed. “As a physicist, you hope that the phenomenon of life is not just about the specific chemistry and DNA and molecules that make living things on planet Earth — that it’s broader,” he said. “What is that broader thing? I don’t know. But maybe this is lifting a little bit of the veil off that mystery.”

*Correction added on March 15: The text was updated to acknowledge the contributions of Mariela Petkova and Gašper Tkačik.*