The idea is polarizing. Some physicists embrace the multiverse to explain why our bubble looks so special (only certain bubbles can host life), while others reject the theory for making no testable predictions (since it predicts all conceivable universes). But some researchers expect that they just haven’t been clever enough to work out the precise consequences of the theory yet.

Now, various teams are developing new ways to infer exactly how the multiverse bubbles and what happens when those bubble universes collide.

“It’s a long shot,” said Jonathan Braden, a cosmologist at the University of Toronto who is involved in the effort, but, he said, it’s a search for evidence “for something you thought you could never test.”

The multiverse hypothesis sprang from efforts to understand our own universe’s birth. In the large-scale structure of the universe, theorists see signs of an explosive growth spurt during the cosmos’s infancy. In the early 1980s, as physicists investigated how space might have started — and stopped — inflating, an unsettling picture emerged. The researchers realized that while space may have stopped inflating here (in our bubble universe) and there (in other bubbles), quantum effects should continue to inflate most of space, an idea known as eternal inflation.

The difference between bubble universes and their surroundings comes down to the energy of space itself. When space is as empty as possible and can’t possibly lose more energy, it exists in what physicists call a “true” vacuum state. Think of a ball lying on the floor — it can’t fall any further. But systems can also have “false” vacuum states. Imagine a ball in a bowl on a table. The ball can roll around a bit while more or less staying put. But a large enough jolt will land it on the floor — in the true vacuum.

In the cosmological context, space can get similarly stuck in a false vacuum state. A speck of false vacuum will occasionally relax into true vacuum (likely through a random quantum event), and this true vacuum will balloon outward as a swelling bubble, feasting on the false vacuum’s excess energy, in a process called false vacuum decay. It’s this process that may have started our cosmos with a bang. “A vacuum bubble could have been the first event in the history of our universe,” said Hiranya Peiris, a cosmologist at University College London.
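The ball-and-bowl picture of false vacuum decay can be made concrete with a toy numerical sketch. The code below is my own illustration, not the cosmological calculation: a particle sits in the shallower "false" minimum of a double-well potential, and random thermal kicks (a Metropolis-style random walk) eventually carry it over the barrier into the deeper "true" minimum, where it tends to stay. All names and parameter values here are invented for the example.

```python
import math
import random

def potential(x):
    # A double well, (x^2 - 1)^2, tilted by 0.3*x so the minimum near
    # x = +1 (the "false vacuum") sits higher than the one near x = -1
    # (the "true vacuum"), with a barrier in between near x = 0.
    return (x**2 - 1)**2 + 0.3 * x

def simulate(steps=200_000, temperature=0.2, seed=1):
    """Metropolis random walk: start in the false vacuum, let random
    kicks occasionally carry the particle over the barrier."""
    rng = random.Random(seed)
    x = 1.0  # start in the false vacuum
    for _ in range(steps):
        trial = x + rng.gauss(0, 0.1)
        delta_e = potential(trial) - potential(x)
        # Always accept downhill moves; accept uphill ones with
        # Boltzmann probability exp(-dE/T) -- the analog of the
        # random fluctuation that triggers the decay.
        if delta_e <= 0 or rng.random() < math.exp(-delta_e / temperature):
            x = trial
    return x

final = simulate()
print(f"final position: {final:+.2f}")  # negative means the true-vacuum basin
```

Because the true minimum is deeper, runs with different random seeds usually end on the negative side of the barrier; the shared false-vacuum starting point decays, just as a lone decay seeds a bubble in the cosmological story.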

But physicists struggle mightily to predict how vacuum bubbles behave. A bubble’s future depends on countless minute details that add up. Bubbles also change rapidly — their walls approach the speed of light as they fly outward — and feature quantum mechanical randomness and waviness. Different assumptions about these processes give conflicting predictions, with no way to tell which ones might resemble reality. It’s as though “you’ve taken a lot of things that are just very hard for physicists to deal with and mushed them all together and said, ‘Go ahead and figure out what’s going on,’” Braden said.

Since they can’t prod actual vacuum bubbles in the multiverse, physicists have sought digital and physical analogs of them.

One group recently coaxed vacuum bubble-like behavior out of a simple simulation. The researchers, including John Preskill, a prominent theoretical physicist at the California Institute of Technology, started with “the baby version of this problem that you can think of,” as co-author Ashley Milsted put it: a line of about 1,000 digital arrows that could point up or down. The place where a string of mainly up arrows met a string of largely down arrows marked a bubble wall, and by flipping arrows, the researchers could make bubble walls move and collide. In certain circumstances, this model perfectly mimics the behavior of more complicated systems in nature. The researchers hoped to use it to simulate false vacuum decay and bubble collisions.
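The arrow-chain setup is easy to picture in code. The sketch below is my own toy version of the idea, not the researchers' simulation: a chain of ±1 "arrows" holds a down-arrow bubble inside an up-arrow background, a wall sits wherever the arrow direction flips, and flipping the arrows just outside each wall pushes the walls outward.

```python
def make_chain(n=1000, bubble=(480, 520)):
    # An up-arrow (+1) background with a small down-arrow (-1) bubble.
    lo, hi = bubble
    return [-1 if lo <= i < hi else +1 for i in range(n)]

def wall_positions(chain):
    # A "bubble wall" is any site where the arrow direction changes.
    return [i for i in range(len(chain) - 1) if chain[i] != chain[i + 1]]

def grow_bubble(chain):
    # Flip the up arrows just outside each wall, moving both walls outward.
    walls = wall_positions(chain)
    new = chain[:]
    if walls:
        left, right = walls[0], walls[-1]
        new[left] = -1       # extend the bubble one site leftward
        new[right + 1] = -1  # and one site rightward
    return new

chain = make_chain()
print(wall_positions(chain))   # [479, 519]
chain = grow_bubble(chain)
print(wall_positions(chain))   # [478, 520]
```

Repeated calls to `grow_bubble` march the two walls apart, the classical skeleton of an expanding bubble; the actual study's difficulty lies in layering quantum superposition and entanglement on top of this picture.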

At first the simple setup didn’t act realistically. When bubble walls crashed together, they rebounded perfectly, with none of the expected intricate reverberations or outflows of particles (in the form of flipped arrows rippling down the line). But after adding some mathematical flourishes, the team saw colliding walls that spewed out energetic particles — with more particles appearing as the collisions grew more violent.

But the results, which appeared in a preprint in December, foreshadow a dead end in this problem for traditional computation. The researchers found that as the resulting particles mingle, they become “entangled,” entering a shared quantum state. Their state grows exponentially more complicated with each additional particle, choking simulations on even the mightiest supercomputers.

For that reason, the researchers say that further discoveries about bubble behavior might have to wait for mature quantum computers — devices whose computational elements (qubits) can handle quantum entanglement because they experience it firsthand.

Meanwhile, other researchers hope to get nature to do the math for them.

Michael Spannowsky and Steven Abel, physicists at Durham University in the United Kingdom, believe they can sidestep the tricky calculations by using an apparatus that plays by the same quantum rules that the vacuum does. “If you can encode your system on a device that’s realized in nature, you don’t have to calculate it,” Spannowsky said. “It becomes more of an experiment than a theoretical prediction.”

That device is known as a quantum annealer. A limited quantum computer, it specializes in solving optimization problems by letting qubits seek out the lowest-energy configuration available — a process not unlike false vacuum decay.

Using a commercial quantum annealer made by D-Wave, Abel and Spannowsky programmed a string of about 200 qubits to emulate a quantum field with a higher- and a lower-energy state, analogous to a false vacuum and a true vacuum. They then let the system loose and watched how the former decayed into the latter — leading to the birth of a vacuum bubble.

The experiment, described in a preprint last June, merely verified known quantum effects and did not reveal anything new about vacuum decay. But the researchers hope to eventually use D-Wave to tiptoe beyond current theoretical predictions.

A third approach aims to leave the computers behind and blow bubbles directly.

Quantum bubbles that inflate at nearly light speed aren’t easy to come by, but in 2014, physicists in Australia and New Zealand proposed a way to make some in the lab using an exotic state of matter known as a Bose-Einstein condensate (BEC). When cooled to nearly absolute zero, a thin cloud of gas can condense into a BEC, whose uncommon quantum mechanical properties include the ability to interfere with another BEC, much as two lasers can interfere. If two condensates interfere in just the right way, the group predicted, experimentalists should be able to capture direct images of bubbles forming in the condensate — ones that act similarly to the putative bubbles of the multiverse.

“Because it’s an experiment, it contains by definition all the physics that nature wants to put in it including quantum effects and classical effects,” Peiris said.

Peiris leads a team of physicists studying how to steady the condensate blend against collapse from unrelated effects. After years of work, she and her colleagues are finally ready to set up a prototype experiment, and they hope to be blowing condensate bubbles in the next few years.

If all goes well, they’ll answer two questions: the rate at which bubbles form, and how the inflation of one bubble changes the odds that another bubble will inflate nearby. These queries can’t even be formulated with current mathematics, said Braden, who contributed to the theoretical groundwork for the experiment.

That information will help cosmologists like Braden and Peiris to calculate exactly how a whack from a neighboring bubble universe in the distant past might have set our cosmos quivering. One likely scar from such an encounter would be a circular cold spot in the sky, which Peiris and others have searched for and not found. But other details — such as whether the collision also produces gravitational waves — depend on unknown bubble specifics.

If the multiverse is just a mirage, physics may still benefit from the bounty of tools being developed to uncover it. To understand the multiverse is to understand the physics of space, which is everywhere.

False vacuum decay “seems like a ubiquitous feature of physics,” Peiris said, and “I personally don’t believe pencil-and-paper theory calculations are going to get us there.”

Neutrinos were first proposed as the driving force behind supernovas in 1966, which made their detection a source of comfort to theorists who had been trying to understand the inner workings of the explosions. Yet over the decades, astrophysicists had constantly bumped into what appeared to be a fatal flaw in their neutrino-powered models.

Neutrinos are famously aloof particles, and questions remained over exactly how neutrinos transfer their energy to the star’s ordinary matter under the extreme conditions of a collapsing star. Whenever theorists tried to model these intricate particle motions and interactions in computer simulations, the supernova’s shock wave would stall and fall back on itself. The failures “entrenched the idea that our leading theory for how supernovas explode maybe doesn’t work,” said Sean Couch, a computational astrophysicist at Michigan State University.

Of course, the specifics of what goes on deep inside a supernova as it explodes have always been mysterious. It’s a cauldron of extremes, a turbulent soup of transmuting matter, where particles and forces often ignored in our everyday world become critical. Compounding the problem, the explosive interior is largely hidden from view, shrouded by clouds of hot gas. Understanding the details of supernovas “has been a central unsolved problem in astrophysics,” said Adam Burrows, an astrophysicist at Princeton University who has studied supernovas for more than 35 years.

In recent years, however, theorists have been able to home in on the surprisingly complex mechanisms that make supernovas tick. Simulations that explode have become the norm, rather than the exception, Burrows wrote in *Nature* this month. Rival research groups’ computer codes are now agreeing on how supernova shock waves evolve, while simulations have advanced so far that even the effects of Einstein’s notoriously intricate general relativity are being included. The role of neutrinos is finally becoming understood.

“It’s a watershed moment,” said Couch. What they’re finding is that without turbulence, collapsing stars may never form supernovas at all.

For much of a star’s life, the inward pull of gravity is delicately balanced by the outward push of radiation from nuclear reactions inside the star’s core. As the star runs out of fuel, gravity takes hold. The core collapses in on itself — plummeting at tens of thousands of kilometers per second — causing temperatures to surge to 100 billion degrees Celsius and fusing the core into a solid ball of neutrons.

The outer layers of the star continue to fall inward, but as they hit this incompressible neutron core, they bounce off it, creating a shock wave. In order for the shock wave to become an explosion, it must be driven outward with enough energy to escape the pull of the star’s gravity. The shock wave must also fight against the inward spiral of the star’s outermost layers, which are still falling onto the core.

Until recently, the forces powering the shock wave were only understood in the blurriest of terms. For decades, computers were only powerful enough to run simplified models of the collapsing core. Stars were treated as perfect spheres, with the shock wave emanating from the center the same way in every direction. But as the shock wave moves outward in these one-dimensional models, it slows and then falters.

Only in the last few years, with the growth of supercomputers, have theorists had enough computing power to model massive stars with the complexity needed to achieve explosions. The best models now integrate details such as the micro-level interactions between neutrinos and matter, the disordered motions of fluids, and recent advances in many different fields of physics — from nuclear physics to stellar evolution. Moreover, theorists can now run many simulations each year, allowing them to freely tweak the models and try out different starting conditions.

One turning point came in 2015, when Couch and his collaborators ran a three-dimensional computer model of the final minutes of a massive star’s collapse. Although the simulation only mapped out 160 seconds of the star’s life, it illuminated the role of an underappreciated player that helps stalled shock waves turn into fully fledged explosions.

Hidden inside the belly of the beast, particles twist and turn chaotically. “It’s like boiling water on your stove. There are massive overturns of fluid inside the star, going at thousands of kilometers per second,” said Couch.

This turbulence creates extra pressure behind the shock wave, pushing it further from the star’s center. Away from the center, the inward pull of gravity is weaker, and there’s less inward-falling matter to temper the shock wave. The turbulent matter bouncing around behind the shock wave also has more time to absorb neutrinos. Energy from the neutrinos then heats the matter and drives the shock wave into an explosion.

For years, researchers had failed to realize the importance of turbulence, because it only reveals its full impact in simulations run in three dimensions. “What nature does effortlessly, it has taken us decades to achieve as we went up from one dimension to two and three dimensions,” said Burrows.

These simulations have also revealed that turbulence results in an asymmetric explosion, where the star looks a bit like an hourglass. As the explosion pushes outward in one direction, matter keeps falling onto the core in another direction, fueling the star’s explosion further.

These new simulations are giving researchers a better understanding of exactly how supernovas have shaped the universe we see today. “We can get the correct explosion energy range, and we can get the neutron star masses that we see left behind,” said Burrows. Supernovas are largely responsible for creating the universe’s budget of hefty elements such as oxygen and iron, and theorists are starting to use simulations to predict exactly how much of these heavy elements should be around. “We’re now starting to tackle problems that were unimaginable in the past,” said Tuguldur Sukhbold, a theoretical and computational astrophysicist at Ohio State University.

Despite the exponential rise in computing power, a supernova simulation is far rarer than an observation in the sky. “Twenty years ago there were around 100 supernovae being discovered every year,” said Edo Berger, an astronomer at Harvard University. “Now we’re discovering 10,000 or 20,000 every year,” a rise driven by new telescopes that quickly and repeatedly scan the night sky. By contrast, in a year theorists carry out around 30 computer simulations. A single simulation, re-creating just a few minutes of core collapse, can take many months. “You check in every day and it’s only gone a millisecond,” said Couch. “It’s like watching molasses in the wintertime.”

The broad accuracy of the new simulations has astrophysicists excited for the next nearby blast. “While we’re waiting for the next supernova, we have a lot of work to do. We need to improve the theoretical modeling to understand what features we could detect,” said Irene Tamborra, a theoretical astrophysicist at the University of Copenhagen. “You cannot miss the opportunity, because it’s such a rare event.”

Most supernovas are too far away from Earth for observatories to detect their neutrinos. Supernovas in the immediate vicinity of the Milky Way — like Supernova 1987A — only occur on average about once every half-century.

But if one does occur, astronomers will be able to “peer directly into the center of the explosion,” said Berger, by observing its gravitational waves. “Different groups have emphasized different processes as being important in the actual explosion of the star. And those different processes have different gravitational wave and neutrino signatures.”

While theorists have now broadly reached a consensus on the most important factors driving supernovas, challenges remain. In particular, the outcome of the explosion is “very strongly dictated” by the structure of a star’s core before it collapses, said Sukhbold. Small differences are magnified into a variety of outcomes by the chaotic collapse, and so the evolution of a star before it collapses must also be accurately modeled.

Other questions include the role of intense magnetic fields in a rotating star’s core. “It’s very possible that you can have a hybrid mechanism of magnetic fields and neutrinos,” said Burrows. The way neutrinos change from one type — or “flavor” — into another and how this affects the explosion is also unclear.

“There are a lot of ingredients that still need to be added to our simulations,” said Tamborra. “If a supernova were to explode tomorrow and it matches our theoretical predictions, then it means that all the ingredients that we are currently missing can safely be neglected. But if this is not the case, then we need to understand why.”

Now, researchers in Ralph Bock’s laboratory at the Max Planck Institute of Molecular Plant Physiology in Potsdam have finally discovered the answer by capturing this transfer on video. Not only are cell walls sometimes more porous than was thought, but plants seem to have developed a mechanism that enables whole organelles to crawl through the cell wall into adjacent cells. The researchers reported their discovery in the January 1 issue of *Science Advances*.

“The real novelty is that they’ve shown the actual physical organelle is moving, not only from one cell to another,” said Charles Melnyk, a plant biologist who studies grafting at the Swedish University of Agricultural Sciences in Uppsala. “It’s two different plants that are exchanging organelles.”

Farmers have used plant grafts since at least the days of ancient Rome to grow fruit trees and grapevines. Grafting a scion — the flowering, fruiting part of a plant — onto established rootstock can help young fruit trees or vines bear fruit earlier and improve their resistance to pests and disease. Grafting occurs in nature, too, when closely related plants that touch each other eventually fuse, or when parasitic plants form connections to their hosts. At the graft site, the plants form a kind of scar, or callus, that reestablishes the flow of water and nutrients through vascular tissues across the wound and sometimes gives rise to new shoots.

About a decade ago, Bock and his team grafted together two species of tobacco plants and sequenced genes from both sides of the callus. They found that the whole genomes of chloroplasts had been exchanged between the rootstock and the scion. (Like mitochondria, chloroplasts and the other plant organelles called plastids are remnants of ancient endosymbiotic bacteria and carry their own genetic material.) In fact, the entire 150-kilobase chloroplast genomes had been transferred intact, not as naked DNA fragments haphazardly recombined among other genes. Accidental hybridizations or viral infections, which cause many horizontal transfers, couldn’t accomplish this.

“This is not what you would expect from a plant cell,” said Pal Maliga, a plant scientist at Rutgers University who has independently found genetic evidence for the transfers of chloroplasts and mitochondria inside grafts. Plant cells are armored with a stiff cell wall, so “my image of a plant cell was the cytoplasm sitting in a cage, and nobody goes anywhere,” Maliga said.

The genetic evidence for transfers posed a real puzzle: The only known openings in cell walls were the tiny plasmodesmata, narrow bridges (only about 0.05 microns wide) that allow adjacent plant cells to exchange proteins and RNA molecules. The chloroplast, typically about 5 microns in diameter, “was way too big to move” through those, Maliga said. “It looked like it miraculously showed up in the other cell.”

The mystery persisted until Bock teamed up with his postdoctoral fellow Alexander Hertle, who had expertise in live-cell imaging and microscopy. Hertle was determined to look at what was going on in the callus. Examining thin sections of the graft with electron microscopy, he saw that the cells had openings larger than any previously seen. But even those, which were up to 1.5 microns across, seemed too narrow for the chloroplasts.

Then, while observing live cells in the callus, Hertle caught images of the chloroplasts in the act of migration. Some of the chloroplasts changed into more primitive, more motile proto-plastids that could get as small as 0.2 microns. As Hertle watched, the proto-plastids crawled along the inside of the cell membrane to positions beneath the newly discovered holes in the cell wall. Budlike protrusions of the cell membranes then bulged into neighboring cells and delivered the organelles. As the tissue organization in the graft reestablished itself, the plastids returned to the normal size for chloroplasts.

“So there’s definitely holes in the cell wall that would allow the plastids to move through,” Hertle said. The dogma that a plant cell wall is a thick, more or less permanent barrier “basically disappears with this study.”

The metamorphosis of the chloroplasts isn’t well understood yet, but it seems to be a response to carbon starvation and less photosynthesis, Hertle explained. When the researchers turned off the lights, they observed that more plastids dedifferentiated, and the frequency of organelle transfer increased fivefold.

How well the transferred plastids function in their new host cells depends on how closely related the two species are, Maliga says. If the genetic mismatch with the nuclear DNA is too extreme, the organelles may fail to work and will eventually be lost. But they could thrive in the cells of close relatives.

Maliga suspects the proto-plastids might contain or produce signaling molecules that help the graft wound heal. The large openings that form in the cell walls also seem to be part of the plant’s emergency healing response to the wound at the graft site, but they may occur during some stage of normal plant development as well, Maliga says.

Whole-organelle migration could help explain the observation that the chloroplasts from clumps of different species of beech tree growing near one another have more genetic similarities than chloroplasts from more widely spaced assemblages of beeches, Hertle says. The chloroplast-capture events also explain why researchers sometimes get inconsistent results when reconstructing the evolutionary histories of plants: Nuclear and chloroplast genomes may have different pedigrees.

It’s not clear yet how frequently this kind of horizontal genome transfer through organelle migration occurs in nature. Perhaps plants move chloroplasts between cells routinely in response to injuries or other events; no one knows. Bock, Maliga and other researchers were able to document genome transfers only because the differences in the grafted tissues gave away what was happening. But if plants have evolved a mechanism for organelle transfers, then relatively rare natural grafting events may be only one occasion for them.

Common or not, the phenomenon might have evolutionary or ecological implications. Hertle points out that once a mosaic cell in a graft callus starts to produce roots, shoots and flowers, it could give rise to a new species or subspecies, especially if cell walls open wide enough to admit nuclear genomes. In 2014, Bock’s team used this method to create a new species in the nightshade family with a combination of nuclear and organelle genomes that could not have arisen from hybridization. If nature offers an easy way to transfer organelles between plants, biotechnology researchers can put it to work in creating desirable new crop species.

Although the potential applications are many, for Hertle nothing beats the joy of basic discovery. “The thing that is very interesting about microscopy science is that you see things that you would have never thought existed,” he said.

That paved the way for Christine Darden, who earned a master’s degree in mathematics at a historically Black university in 1967 and was hired into NASA’s all-female pool of “human computers” at the Langley Research Center. However, she soon discovered that her role as a mathematician was limited to performing time-consuming calculations by hand. To do the creative mathematical work she craved, Darden needed to recast herself as an engineer.

Darden transferred to NASA’s male-dominated engineering division and later earned an engineering doctorate. She went on to lead the Sonic Boom Group of NASA’s High-Speed Research Program, though she never stopped thinking of herself as a mathematician. “Despite my doctorate, I probably have more of a mathematics background,” she said. “I really enjoy the story of what these mathematical equations do in the physical world.”

Her groundbreaking work laid the foundation for a new era of research on experimental planes (known as X-planes) that NASA launched in 2016. The goal has always been to accelerate the adoption of quieter, greener, safer, faster and more efficient planes — even supersonic ones, which travel faster than sound.

The fundamental problem she worked on, the sonic boom, begins when an airplane pushes air molecules out of the way as it flies. This creates an invisible, cone-shaped pressure field whose tip is on the aircraft’s nose and whose sides surround the plane. The cone moves with the plane and emits a series of pressure waves that travel at the speed of sound. As the plane speeds up, these waves get closer together. Should the plane exceed the speed of sound — dubbed Mach 1 — the waves coalesce into a potentially destructive shock wave called a sonic boom.
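The geometry of that coalescence follows a standard aerodynamics relation not spelled out above: once the plane exceeds Mach 1, the merged pressure waves form a cone whose half-angle μ satisfies sin μ = 1/M, where M is the Mach number — so the faster the plane, the narrower the cone trailing behind it. A quick sketch (the function name is my own):

```python
import math

def mach_angle_degrees(mach):
    # Mach cone half-angle: sin(mu) = 1/M. Below Mach 1 the pressure
    # waves outrun the plane and never coalesce, so no cone forms.
    if mach < 1:
        raise ValueError("no shock cone below Mach 1")
    return math.degrees(math.asin(1.0 / mach))

for m in (1.0, 1.4, 2.0):
    print(f"Mach {m}: cone half-angle {mach_angle_degrees(m):.1f} degrees")
```

At exactly Mach 1 the “cone” is a flat wall of pressure (a 90-degree half-angle); at Mach 2 it has narrowed to 30 degrees. Wherever the edge of that cone sweeps across the ground, listeners hear the boom.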

“It sounds like a sharp thunderclap,” said Darden, who published more than 50 papers on high-lift wing design in supersonic flow, flap design and sonic boom prediction and minimization.

Darden retired from NASA in 2007 after a 40-year career. She was featured in Margot Lee Shetterly’s 2016 book *Hidden Figures*, alongside Katherine Johnson, Dorothy Vaughan and Mary Jackson — three Black women mathematicians at NASA who made significant contributions at pivotal moments in the space race. All four women were awarded Congressional Gold Medals in 2019 for their scientific contributions.

*Quanta Magazine* spoke with Darden recently about her experience working for NASA, how to make fast planes quieter, and her surreptitious visits to speak with schoolchildren and Girl Scouts. The interview has been condensed and edited for clarity.

My mother tells the story of giving me a talking doll when I was 5. She was disappointed because, instead of playing with the doll, I cut it open to see why it talked. I also helped my dad work on his car and change the oil. When girls were inside playing, I was in the street, bicycle riding, skating and racing with the boys.

We lived in Union County, North Carolina, right outside of Charlotte. My mother taught in a two-room school. When I turned 4, Mother took me to school with her. She said I could play outside, but who was I going to play with? I stayed and did the first-grade work. She promoted me to second grade.

I was in high school on October 4, 1957, when Sputnik launched. I felt the country’s excitement that Russia beat us into space. Also, I attended college in Hampton, Virginia, near NACA. John Glenn’s parade rode by campus. So there was certainly that influence too.

After graduating with my master’s degree in applied mathematics, I was hired as a data analyst in the high-speed aeronautics division. We were female mathematicians who helped the male engineers create documents about wing and airflow shapes for the military and airplane companies. The engineers had slide rules and mechanical calculators but didn’t like doing calculations. So the head computer assigned young ladies to do the work.

It wasn’t creative, though we drew figures. I still have some of the French curves I was given to draw smooth lines through my data points.

Yes, I often did after getting an assignment. Once, an engineer asked me to complete his work by writing a computer program. It was an interesting assignment. When I finished, he said my program gave incorrect answers. I reviewed and ran it again. He laughed and said, “That’s still not right.”

I didn’t like the laugh. My work wasn’t wrong. I looked at the work he had done prior to giving me the assignment and found one sign error. When I corrected his mistake and ran the code again, the numbers looked good.

No. But he didn’t laugh anymore.

Well, I later asked a friend why all of the men were in engineering and all of the women were in computing. I thought it was because we had math degrees and they had engineering degrees. But she told me that some male engineers had math degrees.

I wanted to be in engineering. Men in engineering did research, gave talks, wrote and published papers, and got promoted. The women, on the other hand, followed the engineers’ orders. Sometimes they didn’t even know what they were working on. They didn’t give talks, weren’t recognized on papers even when they helped, and didn’t get promoted.

Dorothy Vaughan lived down the street from me. She started in 1943, so she was 24 years ahead of me. Mary Jackson and Katherine Johnson were 15 years ahead of me. Katherine’s daughter was my classmate. Katherine and I sang together in church for 50 years. I went by her office a couple of times. I met some of the men she worked with. However, I never read anything at NASA about what she or the others did, as their work was really hidden. I learned about their work in *Hidden Figures*.

I asked for a transfer to engineering, which my supervisor said was impossible. So I went to the director and asked why males and females with the same background were assigned different jobs. He said, “You know, nobody ever asked me that question before.” I said, “Well, I’m asking it now.”

Three weeks later, I got promoted to engineering.

I talked to some of the ladies in the office, but they weren’t interested. Maybe they weren’t outgoing. Of course, Mary Jackson was outgoing. One of the engineers had suggested that she go to a segregated high school to get credentials so that NASA would let her work in the wind tunnel. She did that and worked in the wind tunnel, but never got promoted there.

My supervisor asked me to program the equations from a paper on sonic boom minimization. The authors had assumed an isothermal atmosphere, but I put the real atmosphere into the code. Eventually, I published on the topic. I also started working on my mechanical engineering doctorate because I didn’t want anybody saying I couldn’t do that job.

Once I finished the computer program, we input variables such as the airplane’s length, weight, altitude and Mach number. The output gave the equivalent area distribution. With that, we started designing planes.

In the wind tunnel, the difference in pressure inside and outside of the pressure cone had been much smaller for our design than for the baseline plane. However, we needed to fly over people to get feedback about how people would tolerate the minimized boom. Boeing ran a test flight over Chicago and Oklahoma City. Once they started, people called to report damage to sheetrock, windows and the good china in their homes. After that, the U.S. canceled the program and outlawed commercial supersonic flights over land. That law is still there.

But supersonic transport remained very popular. Eventually, in the late 1980s, Congress offered money to address the supersonic boom’s environmental concerns, which include noise and the boom itself, but also possible ozone destruction. They asked that I gather everybody in the U.S. researching the sonic boom for a two-day national meeting at Langley.

I led the design and operational plan of the research program. We did years of testing in our wind tunnels to show what worked. Then DARPA borrowed two F5 supersonic Air Force planes for a test around 2002 over the Mojave Desert. They built and pasted panels onto one of the planes so that it matched the equivalent area distribution from our computer program. The other F5 stayed the same. When the F5 with no changes was flown, you could hear people in the control room shouting because of the loud boom. But the demonstrator plane — the one with the panels on it — had a much softer boom. It worked!

I retired shortly after that, but much later, in 2018, NASA gave a contract to Lockheed Martin’s Skunk Works to build QueSST — a good, low-boom supersonic X plane. They’re working on it now. It has such a long nose that pilots rely on an external vision system to land it. They expect the boom to sound like a thump.

They’ll do flight tests and get feedback on the noise. Then NASA will present the data to the FAA to request a rule change. They’re also talking to other world noise agencies to change laws so that supersonic planes can fly around the globe.

Margot’s father and I worked together, and we’d bring our children to Langley’s big spring picnic. So I first met Margot when she was a girl. Later, Margot was working on Wall Street but wanted to write — her mother taught English and had worked with Margot on her writing.

One day Margot and her husband visited her parents. They were riding to church when her dad said, “Oh, look, Margot, there’s Miss So-and-so. She was a computer at Langley and your Sunday school teacher.” Then her dad talked about the Langley computers. Margot’s husband said, “Well, if the Langley computers did all that, how come I’ve never heard of them?” And Margot thought, “Maybe I should write that book.”

That’s when she called me. Soon, it got so that every time she came to town, we had lunch. Once, I mentioned *The Warmth of Other Suns*, by Isabel Wilkerson, a book about Black migration patterns to the North and West told through three people. It was a great story and great history book. And so Margot put both personal stories and history in her book.

Yes, but when I first met Mary Jackson, she told me, “Do you know that I got a poor performance appraisal because my supervisor said I spent too much time visiting schools?” Remember, she never got promoted there.

When she told me that, I said, “OK, the next time I go, I’ll tell nobody where I’m going. I’ll just say I’m going out for an hour.” Of course, within a few years they were giving awards to people who visited schools.

In recent years, I’ve been talking to students all over the country. Invariably, the young women come up and say, “We didn’t know women did work like that!” Girls need to know that women do this work.

“The hard part about math is that you’re failing 90% of the time, and you have to be the kind of person who can fail 90% of the time,” Farb once said at a dinner party. When another guest, also a mathematician, expressed amazement that he succeeded 10% of the time, he quickly admitted, “No, no, no, I was exaggerating my success rate. Greatly.”

Farb, a topologist at the University of Chicago, couldn’t be happier about his latest failure — though, to be fair, it isn’t his alone. It revolves around a problem that, curiously, is both solved and unsolved, closed and open.

The problem was the 13th of 23 then-unsolved math problems that the German mathematician David Hilbert, at the turn of the 20th century, predicted would shape the future of the field. The problem asks a question about solving seventh-degree polynomial equations. The term “polynomial” means a string of mathematical terms — each composed of numerical coefficients and variables raised to powers — connected by means of addition and subtraction. “Seventh-degree” means that the largest exponent in the string is 7.

Mathematicians already have slick and efficient recipes for solving equations of second, third and, to an extent, fourth degree. These formulas — like the familiar quadratic formula for degree 2 — involve algebraic operations, meaning only arithmetic and radicals (square roots, for example). But the higher the exponent, the thornier the equation becomes, and solving it approaches impossibility. Hilbert’s 13th problem asks whether seventh-degree equations can be solved using a composition of addition, subtraction, multiplication and division plus algebraic functions of two variables, tops.
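To make the statement concrete (a standard formulation, worded here by me rather than quoted from the article): substitutions that were already classical in Hilbert’s day reduce any seventh-degree equation to a form with just three free coefficients,

```latex
x^7 + a\,x^3 + b\,x^2 + c\,x + 1 = 0,
```

and Hilbert’s 13th asks whether a root of this equation can be written as a composition of algebraic functions of at most two of $a$, $b$ and $c$ at a time.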

The answer is probably no. But to Farb, the question is not just about solving a complicated type of algebraic equation. Hilbert’s 13th is one of the most fundamental open problems in math, he said, because it provokes deep questions: How complicated are polynomials, and how do we measure that? “A huge swath of modern mathematics was invented in order to understand the roots of polynomials,” Farb said.

The problem has led him and the mathematician Jesse Wolfson at the University of California, Irvine into a mathematical rabbit hole, whose tunnels they’re still exploring. They’ve also drafted Mark Kisin, a number theorist at Harvard University and an old friend of Farb’s, to help them excavate.

They still haven’t solved Hilbert’s 13th problem and probably aren’t even close, Farb admitted. But they have unearthed mathematical strategies that had practically disappeared, and they have explored connections between the problem and a variety of fields including complex analysis, topology, number theory, representation theory and algebraic geometry. In doing so, they’ve made inroads of their own, especially in connecting polynomials to geometry and narrowing the field of possible answers to Hilbert’s question. Their work also suggests a way to classify polynomials using metrics of complexity — analogous to the complexity classes associated with the unsolved P vs. NP problem.

“They’ve really managed to extract from the question a more interesting version” than ones previously studied, said Daniel Litt, a mathematician at the University of Georgia. “They’re making the mathematics community aware of many natural and interesting questions.”

Many mathematicians already thought the problem was solved. That’s because a Soviet prodigy named Vladimir Arnold and his mentor, Andrei Nikolaevich Kolmogorov, published proofs of it in the late 1950s. For most mathematicians, the Arnold-Kolmogorov work closed the book. Even Wikipedia — not a definitive source, but a reasonable proxy for public knowledge — until recently declared the case closed.

But five years ago, Farb came across a few tantalizing lines in an essay by Arnold, in which the famous mathematician reflected on his work and career. Farb was surprised to see that Arnold described Hilbert’s 13th problem as open and had actually spent four decades trying to solve the problem that he’d supposedly already conquered.

“There are all these papers that would just literally repeat that it was solved. They clearly had no understanding of the actual problem,” Farb said. He was already working with Wolfson, then a postdoctoral researcher, on a topology project, and when he shared what he’d found in Arnold’s paper, Wolfson jumped in. In 2017, during a seminar celebrating Farb’s 50th birthday, Kisin listened to Wolfson’s talk and realized with surprise that their ideas about polynomials were related to questions in his own work in number theory. He joined the collaboration.

The reason for the confusion about the problem soon became clear: Kolmogorov and Arnold had solved only a variant of the problem. Their solution involved what mathematicians call continuous functions, meaning functions with no abrupt breaks or jumps. They include familiar operations like sine, cosine and exponential functions, as well as more exotic ones.

But researchers disagree on whether Hilbert was interested in this approach. “Many mathematicians believe that Hilbert really meant algebraic functions, not continuous functions,” said Zinovy Reichstein, a mathematician at the University of British Columbia. Farb and Wolfson have been working on the problem they believe Hilbert intended ever since their discovery.

Hilbert’s 13th, Farb said, is a kaleidoscope. “You open this thing up, and the more you put into it, the more new directions and ideas you get,” he said. “It cracks open the door to a whole array, this whole beautiful web of math.”

Mathematicians have been probing polynomials for as long as math has been around. Clay tablets inscribed more than 3,000 years ago show that ancient Babylonian mathematicians used a formula to solve polynomials of second degree — a cuneiform forebear of the same quadratic formula that algebra students learn today. That formula, $latex{x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}$, tells you how to find the roots, or the values of *x* that make an expression equal to zero, of the second-degree polynomial $latex{ax^2 + bx + c}$.
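As a quick illustration of my own (not from the article), the quadratic formula translates directly into a few lines of code:

```python
import math

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 via the quadratic formula (real-root case)."""
    disc = b * b - 4 * a * c  # the discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("complex roots; this sketch handles the real case only")
    sqrt_disc = math.sqrt(disc)
    # The "+/-" in the formula yields the two roots.
    return ((-b - sqrt_disc) / (2 * a), (-b + sqrt_disc) / (2 * a))

# x^2 - 5x + 6 = (x - 2)(x - 3), so the roots are 2 and 3.
print(quadratic_roots(1, -5, 6))  # (2.0, 3.0)
```

The `quadratic_roots` name and the real-only restriction are choices of this sketch, not anything from the text; the point is just that degree 2 admits a short, closed-form recipe.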

Over time, mathematicians naturally wondered if such clean formulas existed for higher-degree polynomials. “The multi-millennial history of this problem is to get back to something that powerful and simple and effective,” said Wolfson.

The higher the degree of a polynomial, the more unwieldy it becomes. In his 1545 book *Ars Magna*, the Italian polymath Gerolamo Cardano published formulas for finding the roots of cubic (third-degree) and quartic (fourth-degree) polynomials.

The roots of a cubic polynomial written $latex{ax^3 + bx^2 + cx + d = 0}$ can be found using Cardano’s formula, which is far bulkier than the quadratic one.
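One standard way to write it (my rendering of Cardano’s classical solution, not the article’s own display): substituting $x = t - \frac{b}{3a}$ reduces the cubic to the depressed form $t^3 + pt + q = 0$, and a root is then

```latex
t \;=\; \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}}
\;+\; \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}},
\qquad
p = \frac{3ac - b^{2}}{3a^{2}},
\quad
q = \frac{2b^{3} - 9abc + 27a^{2}d}{27a^{3}}.
```

Each radical involves a choice of branch, which is part of why the cubic formula is so much more unwieldy than the quadratic one.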

The quartic formula is even worse.

“As they go up in degree, they go up in complexity; they form a tower of complexities,” said Curt McMullen of Harvard. “How can we capture that tower of complexities?”

The Italian mathematician Paolo Ruffini argued in 1799 that polynomials of degree 5 or higher couldn’t be solved using arithmetic and radicals; the Norwegian Niels Henrik Abel proved it in 1824. In other words, there can be no similar “quintic formula.” Fortunately, other ideas emerged that suggested ways forward for higher-degree polynomials, which could be simplified through substitution. For example, in 1786, a Swedish lawyer named Erland Bring showed that any quintic polynomial equation of the form $latex{ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0}$ could be retooled as $latex{px^5 + qx + 1 = 0}$ (where *p* and *q* are complex numbers determined by *a*, *b*, *c*, *d*, *e* and *f*). This pointed to new ways of approaching the inherent but hidden rules of polynomials.

In the 19th century, William Rowan Hamilton picked up where Bring and others had left off. He showed, among other things, that to find the roots of any sixth-degree polynomial equation, you only need the usual arithmetic operations, some square and cube roots, and an algebraic formula that depends on only two parameters.

In 1975, the American algebraist Richard Brauer at Harvard introduced the idea of “resolvent degree,” which measures the smallest number of variables that the functions used to express the roots of a polynomial of a given degree must depend on. (Less than a year later, Arnold and the Japanese number theorist Goro Shimura introduced nearly the same definition in another paper.)

In Brauer’s framework, which represented the first attempt to codify the rules of such substitutions, Hilbert’s 13th problem asks whether seventh-degree polynomials can have a resolvent degree of less than 3; Hilbert also made similar conjectures about sixth- and eighth-degree polynomials.

But these questions also invoke a broader one: What’s the smallest number of parameters you need to find the roots of any polynomial? How low can you go?

A natural way to approach this question is to think about what polynomials look like. A polynomial can be written as a function — $latex{f(x) = x^2 - 3x + 1}$, for example — and that function can be graphed. Then finding the roots becomes a matter of spotting where the curve crosses the *x*-axis, since those are exactly the points where the function’s value is 0.
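To make that concrete (a sketch of mine, not anything the researchers describe): a sign change in *f* between two points pins down an *x*-axis crossing, and simple bisection narrows that crossing to a root:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) < 0, "need a sign change, i.e. an x-axis crossing"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # Keep whichever half-interval still brackets the sign change.
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**2 - 3*x + 1  # the example polynomial from the text

root = bisect_root(f, 0.0, 1.0)  # f(0) = 1 > 0 and f(1) = -1 < 0 bracket a root
print(root)  # ≈ 0.3819660..., i.e. (3 - sqrt(5)) / 2
```

Numerical root-finding like this says nothing about resolvent degree, of course — the whole question is about exact formulas, not approximations — but it shows how geometry (a curve crossing an axis) encodes the algebra.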

Higher-degree polynomials give rise to more complicated figures. Third-degree polynomial functions with three variables, for example, produce smooth but twisty surfaces embedded in three dimensions. And again, by knowing where to look on these figures, mathematicians can learn more about their underlying polynomial structure.

As a result, many efforts to understand polynomials borrow from algebraic geometry and topology, mathematical fields that focus on what happens when shapes and figures are projected, deformed, squashed, stretched or otherwise transformed without breaking. “Henri Poincaré basically invented the field of topology, and he explicitly said he was doing it in order to understand algebraic functions,” said Farb. “At the time, people were really wrestling with these fundamental connections.”

Hilbert himself unearthed a particularly remarkable connection by applying geometry to the problem. By the time he enumerated his problems in 1900, mathematicians had a vast array of tricks to reduce polynomials, but they still couldn’t make progress. In 1927, however, Hilbert described a new trick. He began by identifying all the possible ways to simplify ninth-degree polynomials, and he found within them a family of special cubic surfaces.

Hilbert already knew that every smooth cubic surface — a twisty shape defined by third-degree polynomials — contains exactly 27 straight lines, no matter how tangled it appears. (Those lines shift as the coefficients of the polynomials change.) He realized that if he knew one of those lines, he could simplify the ninth-degree polynomial to find its roots. The formula required only four parameters; in modern terms, that means the resolvent degree is at most 4.

“Hilbert’s amazing insight was that this miracle of geometry — from a completely different world — could be leveraged to reduce the resolvent degree to 4,” Farb said.

As Kisin helped Farb and Wolfson connect the dots, they realized that the widespread assumption that Hilbert’s 13th was solved had essentially closed off interest in a geometric approach to resolvent degree. In January 2020, Wolfson published a paper reviving the idea by extending Hilbert’s geometric work on ninth-degree polynomials to a more general theory.

Hilbert had focused on cubic surfaces to solve ninth-degree polynomials in one variable. But what about higher-degree polynomials? To solve those in a similar way, Wolfson thought, you could replace that cubic surface with some higher-dimensional “hypersurface” formed by those higher-degree polynomials in many variables. The geometry of these is less understood, but in the last few decades mathematicians have proved that, at least in certain cases, such hypersurfaces always contain lines.

Hilbert’s idea of using a line on a cubic surface to solve a ninth-degree polynomial can be extended to lines on these higher-dimensional hypersurfaces. Wolfson used this method to find new, simpler formulas for polynomials for certain degrees. That means that even if you can’t visualize it, you can solve a 100-degree polynomial “simply” by finding a plane on a multidimensional cubic hypersurface (47 dimensions, in this case).

With this new method, Wolfson confirmed Hilbert’s value of the resolvent degree for ninth-degree polynomials. And for other degrees of polynomials — especially those above degree 9 — his method narrows down the possible values for the resolvent degree.

Thus, this isn’t a direct attack on Hilbert’s 13th, but rather on polynomials in general. “They kind of found some adjacent questions and made progress on those, some of them long-standing, in the hopes that that will shed light on the original question,” McMullen said. And their work points to new ways of thinking about these mathematical constructions.

This general theory of resolvent degree also shows that Hilbert’s conjectures about sixth-degree, seventh-degree and eighth-degree equations are equivalent to problems in other, seemingly unrelated fields of math. Resolvent degree, Farb said, offers a way to categorize these problems by a kind of algebraic complexity, rather like grouping optimization problems in complexity classes.

Even though the theory began with Hilbert’s 13th, however, mathematicians are skeptical that it can actually settle the open question about seventh-degree polynomials. It speaks to big, unexplored mathematical landscapes in unimaginable dimensions — but it hits a brick wall at the lower numbers, and it can’t determine their resolvent degrees.

For McMullen, the lack of headway — despite these signs of progress — is itself interesting, as it suggests that the problem holds secrets that modern math simply can’t comprehend. “We haven’t been able to address this fundamental problem; that means there’s some dark area we haven’t pushed into,” he said.

“Solving it would require entirely new ideas,” said Reichstein, who has developed his own new ideas about simplifying polynomials using a concept he calls essential dimension. “There is no way of knowing where they will come from.”

But the trio is undeterred. “I’m not going to give up on this,” Farb said. “It’s definitely become kind of the white whale. What keeps me going is this web of connections, the mathematics surrounding it.”
