The problem was first proposed by Henri Lebesgue, a French mathematician, in a 1914 letter to his friend Julius Pál. Lebesgue asked: What is the shape with the smallest area that can completely cover a host of other shapes (which all share a certain trait)?

In the century since, Lebesgue’s “universal covering” problem has turned out to be a mousetrap: Progress, when it’s come at all, has been astonishingly incremental. Gibbs’ improvement is dramatic by comparison, though you still have to squint to see it.

Picture a dozen paper cutouts of different sizes and shapes lying on your floor. Now imagine being asked to design another shape that is just big enough to cover any of those dozen shapes. Through experimentation — by overlaying the shapes and rotating them — you could feel your way to a solution. But once you’d found a “universal” cover, how would you know if you’d found the smallest one? You could imagine returning to your cover throughout the day and finding places to trim a little more here or a little more there.

That is the spirit of Lebesgue’s universal covering problem. Instead of paper cutouts, it considers shapes where no two points are farther than one unit apart. The circle is the most obvious shape with “diameter 1,” but there are infinitely many others: the equilateral triangle, the regular pentagon, the regular hexagon and a three-sided shape with bulging sides known as the Reuleaux triangle, for starters. This diversity of shapes is what makes it hard to find the smallest cover for them all.

Soon after receiving the letter from Lebesgue, Pál realized that the regular hexagon is a universal cover. Then he went one better. He noticed he could cut off two nonconsecutive corners from the hexagon — the resulting shape had less area but was still a universal cover.

“You take the hexagon, layer it on top, rotate the second one 30 degrees, and you can cut off two of the corners. That was where Pál left it,” said Gibbs.

Over the next 80 years, two other mathematicians shaved slivers from Pál’s universal cover. In 1936 Roland Sprague removed a section near one of the corners; in 1992 H. C. Hansen removed two vanishingly small wedges from the lower right and left corners. Illustrations of Hansen’s area savings would convey something about the locations but inevitably mislead about the size: The two wedges had a combined area of just 0.00000000004 units.

“You can’t really draw them in scale because they’d be atom-sized pieces,” said John Baez, a mathematician at the University of California, Riverside.

Baez lifted Lebesgue’s universal covering problem out of obscurity when he wrote about it in 2013 on his popular math blog. He confessed he was attracted to the problem the way you might be attracted to watching an insect drown.

“My whole interest in this problem is rather morbid,” Baez wrote. “I don’t know any reason that it’s important. I don’t see it as connected to lots of other beautiful math. It just seems astoundingly hard compared to what you might initially think. I admire people who work on it in the same way I admire people who decide to ski across the Antarctic.”

Philip Gibbs had never skied across the Antarctic, but he did read Baez’s blog. When he saw the post about Lebesgue’s universal covering problem, he thought, “That’s exactly the kind of thing I’m looking for.”

Early in his life, Gibbs thought he might become a scientist. He received an undergraduate degree in mathematics from the University of Cambridge and a Ph.D. in theoretical physics from the University of Glasgow. But he soon lost his enthusiasm for academic research and instead became a software engineer. He worked on systems for ship design, air traffic control and finance, before retiring in 2006.

Gibbs remained interested in academic questions, but there wasn’t much he could do as a nonprofessional researcher. “As an independent scientist it’s hard to keep up with everything that’s going on,” he said. “But if you find the right kind of niche, you can do some stuff and come up with some useful results.”

Lebesgue’s universal covering problem was just such a niche. The problem had never attracted much attention from mathematicians, so he suspected that he would be able to make progress. Gibbs also realized he could use his programming background to gain an advantage. “I’m always on the lookout for problems where you can maybe use computers to try and do a bit of experimental mathematics,” he said.

In 2014 Gibbs ran computer simulations on 200 randomly generated shapes with diameter 1. Those simulations suggested he might be able to trim some area around the top corner of the previous smallest cover. He turned that lead into a proof that the new cover worked for all possible diameter-1 shapes. Gibbs sent the proof to Baez, who worked with one of his undergraduate students, Karine Bagdasaryan, to help Gibbs revise the proof into a more formal mathematical style.

The three of them posted the paper online in February 2015. It reduced the area of the smallest universal covering from 0.8441377 to 0.8441153 units. The savings — just 0.0000224 units — was more than half a million times larger than the savings that Hansen had found in 1992.

Gibbs was confident he could do better. In a paper posted online in October, he lopped another relatively gargantuan slice from the universal cover, bringing its area down to 0.84409359 units.

His strategy was to shift all diameter-1 shapes into a corner of the universal cover he’d found a few years earlier, then remove any remaining area in the opposite corner. Accurately measuring the area savings, however, proved exacting. The techniques Gibbs used are all from Euclidean geometry, but he had to execute with a precision that would make any high school student cross-eyed.

“As far as the math goes, it’s just high-school geometry. But it’s carried to a fanatical level of intensity,” wrote Baez.

For now, Gibbs continues to hold the crown for finding the smallest universal cover, but his reign isn’t secure. Gibbs believes there’s still room to find a better universal cover. For his part, Baez hopes the renewed attention Gibbs has brought to Lebesgue’s question will stimulate the interest of other mathematicians. At that point, it might be possible to leave the ruler and compass behind and engage the fuller arsenal of modern mathematical techniques.

“It’s possible that the right way to solve this involves very different ideas,” he said, “though I have no idea what those ideas would be.”

One challenge to solving the problem lies in the relative weakness of gravity compared with the strong, weak and electromagnetic forces that govern the subatomic realm. Though gravity exerts an unmistakable influence on macroscopic objects like orbiting planets, leaping sharks and everything else we physically experience, it produces a negligible effect at the particle level, so physicists can’t test or study how it works at that scale.

Confounding matters, the two sets of equations don’t play well together. General relativity paints a continuous picture of space-time while in quantum mechanics everything is quantized in discrete chunks. Their incompatibility leads physicists to suspect that a more fundamental theory is needed to unify all four forces of nature and describe them at all scales.

One relatively recent approach to understanding quantum gravity makes use of a “holographic duality” from string theory called the AdS-CFT correspondence. Our latest *In Theory* video explains how this correspondence connects a lower dimensional particle theory to a higher dimensional space that includes gravity:

This holographic duality has become a powerful theoretical tool in the quest to understand quantum gravity and the inner workings of black holes and the Big Bang, where extreme gravity operates at tiny scales.

We hope you enjoyed this second episode from season two of *Quanta*’s *In Theory* video series. Season two opened in August with an animated explainer about a mysterious mathematical pattern that has been discovered in disparate settings — in the energy spectra of heavy atomic nuclei, a function related to the distribution of prime numbers, an independent bus system in Mexico, spectral measurements of the internet, Arctic ponds, human bones and the color-sensitive cone cells in chicken eyes. To learn more, watch episode one below:

“I felt there was mathematics already within Dante’s writing,” Pettorino said recently.

Dante’s epic poem, in Mark Musa’s translation, begins:

*Midway along the journey of our life*

Pettorino’s translation reads:

*Given a line segment AB of size equal to our life path, consider its midpoint M. If D is a man called Dante, D shall be coincident with M.*

This reimagining, part of a creative writing group project, was published in a collection titled *Faximile* — an homage to admired authors and texts, in which the Pythagorean theorem became a story, *The Iliad* became a football match, and the Italian constitution was rendered in hendecasyllabic verse. “We liked the originals, and we wanted to play with them and understand them better,” Pettorino said.

She has approached cosmology in the same spirit, using storytelling from multiple angles as a guiding principle. After earning her Ph.D. in 2005, she traveled the world, hopping between institutions in Heidelberg, New York, Geneva and Valencia, as well as Naples, Turin and Trieste in her native Italy, alternating between observational, theoretical, methodological and statistical points of view in her study of the cosmos — a dark wood, rather like Dante’s. She considers all of these approaches necessary for unraveling the nature of dark matter and dark energy, little-understood substances that together comprise 95 percent of the universe.

It is perhaps not surprising that in 2016 Pettorino landed at the CosmoStat laboratory at CEA Saclay, a research institute 15 miles south of Paris. At CosmoStat, cosmologists and computer scientists collaborate to develop new statistical and signal-processing methods for interpreting the vast volumes of data acquired by modern telescopes. This summer, Pettorino helped complete the final analysis of data from the European Space Agency’s Planck space telescope, which mapped the early universe with unprecedented precision. Her main focus now is Euclid, the agency’s next major space telescope, set to launch in 2022. Euclid will gather 170 million gigabytes of data about billions of galaxies, slicing the universe at different epochs and tracking its evolution under dark influences.

*Quanta Magazine* spoke with Pettorino over Skype this summer as she helped organize the annual EuroPython conference for users of the Python programming language, among other extracurricular commitments. The interview has been condensed and edited for clarity.

I hadn’t thought about cosmology at all when I started physics, and even then I wasn’t very convinced about physics in itself. But physics offered me a good opportunity to combine several different interests. At the time, I was living in Naples, my city. I really wanted to follow a path that would allow me to get to know people, live in different places, and learn languages. I certainly liked logic and mathematics. And I heard about physics from my uncle, Roberto Pettorino, who was a string theorist; he told me about strings, multiple dimensions, time travel. And I loved science fiction. The authors I read most were Philip José Farmer and Jack Vance — the stories had adventure, and different technologies, and they were very realistic, creating new worlds in great detail with things that don’t exist but could very easily have existed. I liked challenges. At that time, I was taking acting classes and creative writing classes. And then I just said, “Let’s do physics!” I was curious about the whole picture, and physics looked to me like a good combination of logic, of communication, of imagination. My main goal was to learn, to increase my knowledge, to satisfy my curiosity.

I started physics as an undergraduate at the end of 1997, and then in 1998 there was the cosmic acceleration discovery, revealing that a lot of the universe was completely unknown, and this immediately attracted my curiosity. What happened was that independent observations by two different supernova research teams showed very surprising results: Cosmologists were expecting the universe to be expanding after the Big Bang, and since gravity attracts things toward each other, the expectation was that the universe’s expansion was decelerating. Evidence from supernova explosions showed that the expansion is, instead, accelerating — as if there is some extra form of energy that counteracts gravity and increases the velocity of the expansion. This is generically named “dark energy.”

Since 1998, many other experiments have confirmed the same picture: Normal atoms only account for about 5 percent of the total energy budget in the universe. There is an extra 25 percent that is in the form of “dark matter.” Dark matter still feels gravity, but we don’t observe it directly; it acts as a glue that allows structures, like galaxies and clusters of galaxies, to form. And then there is the rest — 70 percent — which is dark energy, and which should be responsible for cosmic acceleration.

That’s still unclear. The simplest way to describe it is as a kind of energy whose density is constant everywhere in time and space, termed the “cosmological constant.” This is one new parameter added to the theory of general relativity, and in practice it fits the data very well — including the final data from the Planck space satellite. Unfortunately, the problem is that the cosmological constant is not well-understood theoretically. First, we cannot predict its value, and we need to have very precise initial conditions to end up with the “right” observed value of this constant. This is the fine-tuning problem. Secondly, the cosmological constant marks our epoch as a very special time within the evolution of the universe. The density of dark energy was completely negligible in the past compared to the density of dark matter (which was higher in the past, when the volume of the visible universe was smaller). In the future, however, dark energy will dominate over all species of matter, because the dark-matter density will continue to decrease as the universe expands. We happen to live in that epoch in which the cosmological constant is of roughly the same order as matter. That’s a big coincidence.

This lack of understanding about the cosmological constant has motivated researchers to look for alternative explanations. Cosmic acceleration could be caused by a new fluid, or a new particle whose density changes in time instead of being constant, or more than one particle, or more than one fluid. Or, cosmic acceleration could be the hint that our laws of gravity (namely, Albert Einstein’s theory of general relativity) need to be modified, particularly at very large scales.

Astrophysicists have already tested general relativity at the scale of the solar system. Models referred to as “modified gravity” try to modify general relativity at very large scales to account for cosmic acceleration. Some of these modified-gravity models have been excluded already, for example after the recent detection of gravitational waves. But there are still many models fitting current data, and no clear solution of the theoretical problems associated with the cosmological-constant scenario.

It was challenging to move continuously. I moved 10 times in 12 years, sometimes changing country, sometimes changing continent, sometimes for a month, six months, a year, two years — for the possibility of having funding. It was great because I wanted to work with different groups and also have the opportunity to understand different points of view on the same story. It was a bit like in 2004 when I was taking a creative writing class and we rewrote stories from the point of view of different characters or objects. Somehow I wanted to have the same feeling in science, in cosmology. I wanted to learn from different groups and different perspectives — the observational point of view and theoretical point of view. I started as a theorist and then got more and more interested in testing theories with data. I never thought I would become the best theorist or the best observer, but at some point I realized that I could talk to both theorists and observers and that was a valuable skill in itself, allowing me to grasp challenges and requirements on both sides. I wanted to know more about the methods used, the assumptions made in the data analysis, and the difficulties in working with different data sets. And I wanted to test the theories myself.

The challenge for theorists is in trying to formulate a coherent proposal that explains cosmic acceleration without requiring fine-tuning or coincidence. It means developing the formalism, checking that it is stable and consistent, and deriving its theoretical predictions. There are different approaches one can take in trying to test the predictions, but you have to first understand them; you have to describe them analytically with equations and with new numerical codes, capturing how every single species — matter, radiation, extra fields or forces — evolved from the beginning of the universe until today.

From the data point of view, it’s important to make sure you remove systematic effects related to the specific features of the detector, or other effects that may mimic the signal you actually want to measure. In addition, it is important to be aware of the assumptions you’re making within the whole analysis and when you’re validating your tools, for example by comparing independent numerical codes to check that they give the same result. Sometimes time constraints may limit the tests and validations we do to the simplest theoretical scenarios. Testing more exotic ideas is an additional challenge that we need to face to avoid confirmation bias — the tendency to interpret or cherry-pick data in a way that confirms pre-existing beliefs. Sometimes it feels a bit uncomfortable that the cosmological-constant model accepted as standard is also the one which is easier to test.

We need to maximize the information we can get from the large amount of data that’s becoming available and use it to improve the interpretation of the dark universe. That’s what I’m doing at CosmoStat — applying advanced statistics to the data, to improve the comparison between data and predictions from theoretical models. And recently, with Austin Peel, a postdoc here, we developed machine-learning algorithms to identify hallmarks of modified gravity.

In a sense, the story is really just starting. Overall, I feel that data, theory, methodology as well as data science are all key ingredients of the same quest, with different challenges but the same aim of understanding the universe — its evolution, its future. Communication among these diverse communities with different skills and expertise can make the difference between having access or not to new exciting discoveries.

This began two years ago, as a pilot program started by Michelle Lochner from Cape Town, with just a few mentors and a group of mentees. It was originally intended for women in developing countries, but then Michelle got requests from mentees all over the world, and also from mentors who wanted to help. In practice, we connect women who are undergraduates in physics from different countries and also provide them with role models who they can talk to for support. Right now, I have two mentees. It’s really about giving them information about our experience and practical information about, say, how to prepare a presentation or write a CV. And it’s about helping them in difficult situations like gender harassment. It can seem simple, but it really has a lot of impact on their motivation, just the fact that they can talk to someone who has already gone through a similar career path. It’s something that personally I would have really liked at the start of my career — especially in situations in which I found myself as the only woman in the whole department.

**Do you find it discouraging that so few women pursue careers in physics even now, and that women in the field still face such bias and sexism?**

What I find most discouraging is that statistics show there are actually women starting careers in physics at the undergraduate or Ph.D. level, but they become increasingly underrepresented as their career progresses, as actually happens in many fields. When this happens because of bias or lack of equal opportunities, it is frustrating.

The point is not, of course, to convince women to do physics. The point is that they shouldn’t be discouraged. They should have the same possibilities and chances, if they wish to pursue a career in physics. Everyone benefits from that. Not just women. This comes back to the idea that there is a higher chance of scientific progress with diverse approaches and different perspectives when telling the story.

“What we are starting to realize is that these cells aren’t just there to make tissue. They actually have other behavioral roles,” said Shruti Naik, an immunologist at New York University who has studied this memory effect in skin and other tissues. Stem cells, she said, “have an exquisite ability to sense their environment and respond.”

But when those responses go wrong, they may cause or contribute to a variety of enduring health problems involving chronic inflammation, such as severe allergies and autoinflammatory disorders.

Most tissues in the body contain small reservoirs of long-lived stem cells that can divide and specialize into myriad cell types as required. A stem cell in the skin, for example, can divide and give rise to lineages of cells that produce pigment or keratin, cells that form the sweat glands, or even the flexible barrier cells that allow the skin to stretch when the body moves. Serving as miniature factories for other cell types seemed to be stem cells’ primary function, and because they need to stay versatile, an underlying assumption has been that they have to be “blank slates,” unchanged by their histories. But now a new picture is starting to emerge.

In August, a *Nature* paper by Boston-area researchers offered fresh evidence for a kind of memory in stem cells, and some of the first for the phenomenon in humans. The team, led by the single-cell sequencing pioneer Alex Shalek and the immunologist José Ordovas-Montañes, both at the Massachusetts Institute of Technology, and the immunologist Nora Barrett at Brigham and Women’s Hospital, had set out to understand why some people suffer from debilitating chronic allergies to airborne dust, pollen and other substances. Most people experience at most a passing bout of coldlike symptoms from these irritants, but about 12 percent of the population has a severe reaction that persists all year and results in uncomfortable polyps or growths.

The work is the first step in the team’s larger quest to understand chronic inflammatory diseases, such as asthma and inflammatory bowel disease, in which the immune system continues to launch unnecessary attacks even after the initial challenge is over. These types of autoinflammatory disorders have long been blamed on the immune system, which is thought to overreact to a perceived threat. But the Boston team suspected there might be a cause in the tissue itself.

They began by taking cells from the inflamed nasal cavities of people with chronic sinusitis and comparing them to cells from healthy control subjects. After collecting about 60,000 cells from 20 different people, they sequenced RNA molecules taken from individual cells to determine which genes were active in them. In the stem cells from the sinusitis patients, they saw that many of the active genes were associated with allergic inflammation — in particular, the genes were targets of two immune mediators called interleukin 4 (IL-4) and interleukin 13 (IL-13). These are small molecules that immune cells like T and B lymphocytes typically use to communicate with one another.

The fact that the targeted genes were active in stem cells meant that the stem cells were apparently in direct communication with the immune system. A hunch that this communication might have an effect on the chronic nature of the disease led the researchers to a further set of experiments.

They removed cells from the airways of allergy patients, grew them in culture for about five weeks, and then profiled their gene activity. They found that the genes involved in allergic inflammation were still active, even though the allergic threat of dust and pollen was long gone. In addition, the researchers described many of the cells as “stuck” in a less-than-fully-mature state.

For Shalek, this result signals “that stem cells may transfer ‘memories’ to future generations of cells and this can cause near-permanent changes in the tissue they replenish.” This process invites comparisons to the immune system: B cells and T cells draw on their experiences with infections they have previously repelled to fight off new ones more effectively. Similarly, stem cells may retain a record of past assaults to sharpen their responses next time. But in the case of the allergy patients, that memory apparently becomes maladaptive. It may keep stem cells perpetually signaling to the immune system that an attacker is there, creating a feedback cycle that promotes inflammation and polyps.

According to Shalek, an understanding of which cells become “bad actors” and how their response propagates throughout a tissue should lead to more effective interventions. In fact, in their paper they were able to test the effects of an antibody that blocks IL-4 and IL-13 on the stem and secretory cells of an individual with nasal polyps. They noted a substantial restoration of gene expression associated with healthy tissue, a promising step toward the development of future therapies.

“This opens a new paradigm where we don’t only focus on the self-renewal potential of these cells but on their potential interaction with their surroundings,” said Semir Beyaz, an immunologist at Cold Spring Harbor Laboratory. Beyaz was not involved in the study by the Boston group but has made similar findings in the gut: In a paper published in *Nature* in 2016 he demonstrated that the intestines of mice on a high-fat diet produced a greater number of stemlike cells than did those of mice eating less fat. When dividing, the intestinal stem cells also seemed to add to their own numbers more frequently rather than producing more differentiated cells, a change that has been linked to diseases like cancer.

“Functionally, we are realizing that cells can be tuned,” Naik said. “Immunologists are starting to understand that immune reactions take place in tissues, and the way tissues respond to this is at the level of the stem cell.”

A few years ago, in collaboration with stem cell biologists, Naik looked at the effects of prior injury and inflammation on wound healing in mice, in the hope of understanding whether experience with inflammation affects stem cells. As described in their 2017 paper in *Nature*, she and her colleagues discovered that if patches of skin on mice were inflamed and allowed to heal, subsequent wounds to that same spot would heal 2.5 times as quickly, an effect that could last as long as six months.

In that experiment, Naik explained, the memory retained in the stem cells was beneficial because it was “tuning cells to be more powerful at healing wounds and regeneration.” But the flip side of this finding, as Shalek, Barrett and Ordovas-Montañes had observed, is that “if you teach bad behaviors … they are going to remember those bad behaviors as well,” she said.

How the stem cells are storing these memories is unknown; in both the allergy and the wound healing studies, the mechanism appears to involve some modification of the DNA that makes certain genes more or less accessible to activation. Naik found that the DNA in the skin stem cells of the twice-wounded mice contained many regions that were less tightly packed, which usually indicates gene activity, and some of those open regions were retained long after the inflammation was over.

As Naik and her colleagues discussed recently in a review paper for *Cell*, stem cells in a wide range of tissues engage in a chemical “dialogue” with the immune system, with both sides — and potentially many other cell types — pooling their information to cope most effectively with changing conditions. Whatever the details of those conversations might be, all the evidence points to stem cells playing a central role in helping to make tissues more adaptable by preserving some record of their history.

“It makes more sense that a tissue would just learn from its experience,” Naik said. “That way it doesn’t have to reinvent the wheel every single time.”

The puzzles produced a heartwarming dialogue among three correspondents, Michel Nizette, Stephen Rigsby and Xin Yuan Li, that became a microcosm of how real scientific collaborations work. The three commenters all contributed original work, built on and honed previous ideas or simulations done by one another, and corrected and improved their previous efforts, based on feedback from the others, to reach their solutions. Thanks to all three of them for producing an interesting and cordial discussion that was a treat to follow.

Let’s consider the second problem first. This question is based on the fact that even though the mean number of offspring of males and females in a given generation is exactly the same, males have a greater variance in their number of offspring. The problem is simpler than the first one and can be solved using an elegant trick.

Imagine the two following simple reproductive scenarios, one for females and the other for males, which may have taken place over a small number of generations long ago in a sexually reproducing species such as ours.

The female lineage starts with two females initially. Each female produces two daughters in a single generation, so that the next generation begins with four females. However, two random individuals out of these four are, for reasons of disease or death, unable to reproduce. The remaining two again produce a total of four offspring, out of which only two, chosen randomly, reproduce. The same process repeats every generation. For simplicity, we assume that the generations do not overlap.

The male lineage starts with two males, producing a total of four sons. However, in this case one of the males, chosen at random, can father zero, one, two, three or four sons, while the other one fathers the remaining number required to reach a total of four. Again, two of the offspring are unable to contribute to the next generation. The other two again share four offspring between them in the same random way, each one potentially fathering zero to four sons. The process is repeated over generations, which, as before, do not overlap.

What is the average number of generations it would take for one individual to become the most recent sole ancestor of all the individuals in the female and male lineages?

This problem can be solved by figuring out the probabilities that the offspring of any one of the two original founders will take over an entire generation exclusively, starting from the first generation and continuing forever. We then take the sum of the expectation for each generation, which is the probability multiplied by the generation number, to figure out the expected number of generations.

Let us consider the female lineage and designate the children of the first ancestor as *A* and *B*, and those of the second as *a* and *b*. In generation one, therefore, we cannot have a universal ancestor because there are four individuals: *A*, *B*, *a* and *b*. To produce generation two, we have to pick two individuals who stay fertile. No matter which one of the four is chosen to be the first fertile one, say *A*, there is only one chance in three that the second, also chosen randomly, will be from the same mother, in this case *B*. Therefore, the probability of having a universal ancestor for generation two is 1/3, and the expectation is 2 × 1/3 = 2/3. To compute the expectation for generation three, we note that if the fertile ones are unrelated (probability 2/3), the situation is the same as that encountered in generation two, with a 1/3 probability of having a universal ancestor in generation three, for a probability of 2/3 × 1/3 = 2/9 and an expectation of 3 × 2/9 = 2/3. The same pattern of an additional factor of 2/3 repeats itself for all succeeding generations.

As we saw before in our “Puzzles Inspired by Ramanujan,” when such self-similarity happens in an infinite series or continued fraction, we can apply a nifty trick to calculate the sum, as Xin Yuan Li pointed out. We set the sum to be *X*, and use the self-similarity to set up the equation *X* = 2/3 + (2/3)(*X* + 1). This simple algebraic equation reduces to *X*/3 = 4/3, giving *X* = 4. Therefore, the expected number of generations for a universal female ancestor is 4. The case for males is a little more complicated, but you can use the principles described by Michel Nizette to generate the equation. The expectation for males turns out to be (1 + 4/15)/(2/5 + 4/15) = 19/10, or 1.9.
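Both routes to the answer can be checked numerically. The short Python sketch below (my own, not part of any contributor's solution) sums the series term by term, where generation *n* has probability (1/3)(2/3)^(*n*−2) of producing the first universal female ancestor, and compares it with the value obtained from the self-similarity equation:

```python
# Expected generation of the first universal female ancestor, two ways:
# 1) Direct series: sum over n >= 2 of n * (1/3) * (2/3)**(n - 2).
series = sum(n * (1/3) * (2/3) ** (n - 2) for n in range(2, 200))

# 2) Self-similarity: X = 2/3 + (2/3)(X + 1)  =>  X/3 = 4/3  =>  X = 4.
algebraic = 4.0

print(round(series, 6), algebraic)  # both give 4
```

Truncating at 200 terms is more than enough: each additional term shrinks by a factor of 2/3, so the tail is negligible.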

If you find the math daunting, you can get the same result by listing the expectations in a spreadsheet: The sum converges quickly, in about 20 generations. Or, if you are proficient in programming, you can find it by running a simulation, as Stephen Rigsby did.
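For the simulation route, here is a minimal sketch of the female-lineage process under the stated rules (this is my own illustration, not Rigsby's actual program):

```python
import random

def generations_to_sole_ancestor(rng):
    """Each generation has four females: two daughters from each of two
    mothers. Two are picked at random to stay fertile; the first
    generation with a universal ancestor arises when the chosen pair are
    sisters, which happens with probability 1/3 in every generation."""
    generation = 2  # generation two is the earliest possible
    while True:
        first, second = rng.sample([0, 0, 1, 1], 2)  # 0/1 labels the mother
        if first == second:
            return generation
        generation += 1

rng = random.Random(2024)
trials = 200_000
average = sum(generations_to_sole_ancestor(rng) for _ in range(trials)) / trials
print(round(average, 2))  # hovers near the analytical answer of 4
```

The key structural fact the simulation relies on is that the population always consists of two pairs of sisters, so the 1/3 coalescence probability is the same in every generation.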

So, as we theorized, the female universal ancestor goes back about 2.1 times as far in time as the male universal ancestor under these conditions. However, this is not necessarily true of the actual human Y-chromosomal Adam and Mitochondrial Eve. In conditions of near-exponential population growth (as has been the case in recent human history), no matter how much variance there is, neither lineage can generate a universal ancestor because most descendants survive. We would need to go back in time before the population growth surged to find universal ancestors. And if there was a severe population bottleneck back then (as probably happened to our ancestors in Africa), then our male and female universal ancestors may have existed pretty close to each other in time. While this remains speculative, the theory of evolutionary descent nonetheless leaves no doubt that universal human ancestors existed at some time in our past.

Imagine a small bee colony of 10 females. Initially, each bee’s ovaries are programmed to produce eggs at some constant rate over time. However, once the eggs are laid, their presence signals the bee’s ovaries to slow down the rate of egg laying and focus instead on caretaking. Now imagine that among the 10 individuals is a mutant that produces eggs at a faster rate and also does not respond to the “slow down” signal, but instead keeps producing eggs in a process that becomes easier with time until all the required eggs, say 20, are laid. Assume that each normal bee’s rate of egg production slows down by 5 percent of the original rate for each egg laid, whereas the mutant bee’s rate of egg laying increases by 5 percent of its original rate for every egg laid. How much faster should the mutant bee’s initial egg-laying rate be, compared with a normal bee’s, in order for the mutant to take over 50 percent of the hive’s egg-laying duties? How much does this shorten the hive’s egg-laying phase?

This problem, treated purely mathematically, was well solved by Rigsby using his simulation and by Nizette with an analytical solution, yielding the following answers: The mutant needs to lay eggs 2.47 times as fast, and the hive’s egg-laying phase is shortened by a factor of 2.58. Those who want details will do well to read their respective comments. However, this solution treats egg laying as a continuous process and assumes fractional eggs. From a biological perspective, egg laying is a discrete, all-or-none process. Moreover, it is the sensed presence of a fully laid egg that provides the feedback to the bee. What happens if fractional eggs are outlawed, and we stick to biological plausibility? First, there is no way for the mutant to take over exactly 50 percent of the hive’s egg laying: At some point all nine identical normal bees will lay one egg each simultaneously, and the mutant must end up laying the other 11 of the 20 eggs before the other bees lay their second egg.

This discrete process can be modeled using a spreadsheet, and exact answers can be found using Excel’s handy Goal Seek function. Using one row for each of the 12 egg-laying events, create the following columns: 1) time, 2) the total number of eggs laid so far, 3) egg-laying rate for normal bees, 4) egg-laying rate for the mutant bee, 5) time required by a normal bee to lay the next egg, 6) time required by the mutant to lay the next egg, 7) who will lay the next egg, 8) the fraction of the next egg already developed by the normal bees when they were not the ones to lay, and 9) the fraction of the next egg still developing inside the mutant, if any. You then have to fill the columns with formulas for modifying the normal bees’ and mutant bee’s egg-laying rates as soon as an egg is laid. Start with an arbitrary rate, and use Goal Seek to ensure that the mutant has laid 11 eggs just before the others are ready to lay their second egg. The answer is that the mutant only needs to be about 1.5 times faster initially.
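If you prefer code to spreadsheets, the same discrete race can be sketched in Python. The function names here (`simulate`, `find_min_rate`) are my own, and the bisection search stands in for Excel's Goal Seek; the rules are exactly those of the puzzle, with nine synchronized normal bees against one mutant:

```python
def simulate(initial_rate, total_eggs=20, normals=9):
    """Discrete egg-laying race. Rates change by 5 percent of the
    original rate for every egg laid hive-wide. Returns the number of
    eggs the mutant lays and the time when the last egg is laid."""
    laid = 0                  # eggs laid so far, hive-wide
    frac_n = frac_m = 0.0     # progress on each bee's current egg
    mutant_eggs = 0
    t = 0.0
    while laid < total_eggs:
        rate_n = max(1.0 - 0.05 * laid, 0.0)
        rate_m = initial_rate * (1.0 + 0.05 * laid)
        dt_n = (1.0 - frac_n) / rate_n if rate_n > 0 else float("inf")
        dt_m = (1.0 - frac_m) / rate_m
        if dt_m <= dt_n:                  # mutant finishes her egg first
            t += dt_m
            frac_n += dt_m * rate_n
            frac_m = 0.0
            laid += 1
            mutant_eggs += 1
        else:                             # all nine normals lay at once
            t += dt_n
            frac_m += dt_n * rate_m
            frac_n = 0.0
            laid += min(normals, total_eggs - laid)
    return mutant_eggs, t

def find_min_rate(lo=1.0, hi=2.0, iters=50):
    """Bisection stand-in for Goal Seek: the smallest initial rate at
    which the mutant lays 11 of the 20 eggs."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if simulate(mid)[0] >= 11:
            hi = mid
        else:
            lo = mid
    return hi

print(round(find_min_rate(), 2))  # about 1.5, matching the Goal Seek answer
```

Running `simulate(1.5)` reproduces the table below row for row, and the bisection confirms that an initial rate just under 1.5 is the tipping point.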

You can see how this works in the table below. We start with a normal bee (N) with a rate of 1 unit and a mutant (M) with a rate of 1.5. At time = 0.67, the mutant lays her first egg, with the other bees’ eggs being 67 percent completed. At this point the mutant’s rate increases by 5 percent and that of the other bees falls by 5 percent. But the other bees, having eggs that are already two-thirds complete, win the race to the next egg and lay the next nine eggs all together before the mutant can lay her second egg. At this point, the mutant’s rate increases by 50 percent (to 2.25) and that of the other bees falls by 50 percent. The mutant is now four and a half times faster at laying eggs, and can lay 10 more eggs just before the other bees can lay their second one, since each new egg makes the mutant faster, while the other bees’ egg development slows to a crawl.

| Egg-laying event | Total eggs laid | Normal bee’s egg-laying rate | Mutant bee’s egg-laying rate | Normal bee’s time to next egg | Mutant bee’s time to next egg | Who will lay the next egg? | Normal bee’s fraction of unlaid egg | Mutant bee’s fraction of unlaid egg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 1.00 | 1.50 | 1.00 | 0.67 | M | 0.67 | 0.00 |
| 1 | 1 | 0.95 | 1.58 | 0.35 | 0.63 | N | 0.00 | 0.55 |
| 2 | 10 | 0.50 | 2.25 | 2.00 | 0.20 | M | 0.10 | 0.00 |
| 3 | 11 | 0.45 | 2.33 | 2.00 | 0.43 | M | 0.29 | 0.00 |
| 4 | 12 | 0.40 | 2.40 | 1.77 | 0.42 | M | 0.46 | 0.00 |
| 5 | 13 | 0.35 | 2.48 | 1.54 | 0.40 | M | 0.60 | 0.00 |
| 6 | 14 | 0.30 | 2.55 | 1.33 | 0.39 | M | 0.72 | 0.00 |
| 7 | 15 | 0.25 | 2.63 | 1.13 | 0.38 | M | 0.81 | 0.00 |
| 8 | 16 | 0.20 | 2.70 | 0.93 | 0.37 | M | 0.89 | 0.00 |
| 9 | 17 | 0.15 | 2.78 | 0.75 | 0.36 | M | 0.94 | 0.00 |
| 10 | 18 | 0.10 | 2.85 | 0.58 | 0.35 | M | 0.98 | 0.00 |
| 11 | 19 | 0.05 | 2.93 | 0.46 | 0.34 | M | 0.99 | 0.00 |
| 12 | 20 | | | | | | | |

Incidentally, egg-laying time for the hive actually increases by about 50 percent in the above case. The mutant needs to be about 2.24 times faster to restore egg-laying efficiency to the same level as it was before. If the mutant is any faster than this, the hive’s egg-laying efficiency increases. In any case, the above scenario shows that once feedback controls are reversed, the hive is well on the way to role specialization.
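The roughly 50 percent slowdown can be checked with the same discrete model. In an all-normal hive, all ten bees lay their first eggs together at t = 1, the hive-wide rate then halves (10 eggs laid means a 50 percent slowdown), and the second round takes 2 more time units, finishing at t = 3. The sketch below (my own; `hive_finish_time` is a hypothetical name) compares that baseline with the mutant scenario at an initial rate of 1.5:

```python
def hive_finish_time(initial_rate, total_eggs=20, normals=9):
    # Same discrete race as the spreadsheet model: nine synchronized
    # normal bees against one mutant. Returns the time the 20th egg is laid.
    laid, frac_n, frac_m, t = 0, 0.0, 0.0, 0.0
    while laid < total_eggs:
        rate_n = max(1.0 - 0.05 * laid, 0.0)
        rate_m = initial_rate * (1.0 + 0.05 * laid)
        dt_n = (1.0 - frac_n) / rate_n if rate_n > 0 else float("inf")
        dt_m = (1.0 - frac_m) / rate_m
        if dt_m <= dt_n:                  # mutant lays the next egg
            t, frac_n, frac_m = t + dt_m, frac_n + dt_m * rate_n, 0.0
            laid += 1
        else:                             # all nine normals lay at once
            t, frac_m, frac_n = t + dt_n, frac_m + dt_n * rate_m, 0.0
            laid += min(normals, total_eggs - laid)
    return t

baseline = 3.0                 # all-normal hive finishes at t = 3
with_mutant = hive_finish_time(1.5)
print(round(with_mutant / baseline, 2))  # about 1.55: roughly 50 percent longer
```

So the hive takes about half again as long with the mutant at a 1.5× initial rate, which is why a considerably faster mutant is needed just to break even.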

Thank you to our three contributors for their interesting work. All of them deserve a piece of the *Quanta* T-shirt. However, this is, alas, a winner-take-all situation. The T-shirt goes to Michel Nizette for his clear analytical contributions. Congratulations!