Testing mosquito pee could help track the spread of diseases

There are no teensy cups. But a urine test for wild mosquitoes has for the first time proved it can give an early warning that local pests are spreading diseases.

Mosquito traps remodeled with a pee-collecting card picked up telltale genetic traces of West Nile and two other worrisome viruses circulating in the wild, researchers in Australia report April 4 in the Journal of Medical Entomology.

The tests were based on an innovative saliva monitoring system unveiled in 2010: traps that lure mosquitoes into tasting honey-coated cards. Among its advantages, this card-based medical testing doesn’t need the constant refrigeration that checking whole mosquitoes does. And it’s not as labor intensive as monitoring sentinel chickens or pigs for signs of infection.
But testing traces of mosquito saliva left on these cards comes close to the limits of current molecular methods for detecting viruses. In part, it’s an issue of volume. A mosquito drools fewer than five nanoliters of saliva when it tastes a card. In comparison, mosquitoes excrete about 1.5 microliters of liquid per pee, offering a veritable flood of material. So Dagmar Meyer of James Cook University in Cairns, Australia, and her colleagues created urine collectors using standard overnight light traps and longer-standing traps that exhale delicious carbon dioxide, a mosquito come-hither.
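The volume gap the researchers exploited is easy to quantify. A quick back-of-the-envelope calculation, using only the figures reported above, shows why excreta offer roughly 300 times more material than saliva for virus detection:

```python
# Rough comparison of liquid volume per mosquito visit,
# using the values reported in the story.
saliva_nl = 5        # upper bound: a mosquito drools < 5 nanoliters per card tasting
excreta_ul = 1.5     # ~1.5 microliters of liquid excreted per pee

excreta_nl = excreta_ul * 1000   # convert microliters to nanoliters

ratio = excreta_nl / saliva_nl
print(f"Excreta offer at least {ratio:.0f}x the volume of a saliva deposit")
```

More liquid means more viral RNA per sample, which is what eases the strain on the detection limits of the molecular assays.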

The team set out 29 urine traps in two insect-rich spots in Queensland along with traps equipped to catch mosquito saliva. When mosquitoes fell for the trick and entered a urine trap, their excretions dripped through a mesh floor onto a collecting card. Adding a moist wick of water kept trapped mosquitoes alive and peeing longer, thus improving the sample. Pee traps picked up three viruses — West Nile, Ross River and Murray Valley encephalitis — while the saliva ones detected two, the researchers report.

Hayabusa2 has blasted the surface of asteroid Ryugu to make a crater

Hayabusa2 has blasted the asteroid Ryugu with a projectile, probably adding a crater to the small world’s surface and stirring up dust that scientists hope to snag.

The projectile, a two-kilogram copper cylinder, separated from the Hayabusa2 spacecraft at 9:56 p.m. EDT on April 4, JAXA, Japan’s space agency, reports.

Hayabusa2 flew to the other side of the asteroid to hide from debris that would have been ejected when the projectile hit (SN: 1/19/19, p. 20). Scientists won’t know for sure whether the object successfully made a crater, and, if so, how big it is, until the craft circles back. But by 10:36 p.m. EDT, Hayabusa2’s cameras had captured a blurry shot of a dust plume spurting up from Ryugu, so the team thinks the attempt worked.
“This is the world’s first collision experiment with an asteroid!” JAXA tweeted.

Hayabusa2 plans to briefly touch down inside the crater to pick up a pinch of asteroid dust. The spacecraft has already grabbed one sample of Ryugu’s surface (SN Online: 2/22/19). But dust exposed by the impact will give researchers a look at the asteroid’s subsurface, which has not been exposed to sunlight or other types of space radiation for perhaps billions of years.

If all goes as planned, Hayabusa2 will return to Earth with both samples in late 2020. A third planned sample pickup has been scrapped because Ryugu’s boulder-strewn surface is so hazardous for the spacecraft.
Comparing the two samples will reveal details of how being exposed to space changes the appearance and composition of rocky asteroids, and will help scientists figure out how Ryugu formed (SN Online: 3/20/19). Scientists hope that the asteroid contains water and organic material that might help explain how life got started in the solar system.

A Greek skull may belong to the oldest human found outside of Africa

A skull found in a cliffside cave on Greece’s southern coast in 1978 represents the oldest Homo sapiens fossil outside Africa, scientists say.

That skull, from an individual who lived at least 210,000 years ago, was encased in rock that also held a Neandertal skull dating to at least 170,000 years ago, contends a team led by paleoanthropologist Katerina Harvati of the University of Tübingen in Germany.

If these findings, reported online July 10 in Nature, hold up, the ancient Greek H. sapiens skull is more than 160,000 years older than the next oldest European H. sapiens fossils (SN Online: 11/2/11). It’s also older than a proposed H. sapiens jaw found at Israel’s Misliya Cave that dates to between around 177,000 and 194,000 years ago (SN: 2/17/18, p. 6).

“Multiple Homo sapiens populations dispersed out of Africa starting much earlier, and reaching much farther into Europe, than previously thought,” Harvati said at a July 8 news conference. African H. sapiens originated roughly 300,000 years ago (SN: 7/8/17, p. 6).
A small group of humans may have reached what’s now Greece more than 200,000 years ago, she suggested. Neandertals who settled in southeastern Europe not long after that may have replaced those first H. sapiens. Then humans arriving in Mediterranean Europe tens of thousands of years later would eventually have replaced resident Neandertals, who died out around 40,000 years ago (SN Online: 6/26/19).

But Harvati’s group can’t exclude the possibility that H. sapiens and Neandertals simultaneously inhabited southeastern Europe more than 200,000 years ago and sometimes interbred. A 2017 analysis of ancient and modern DNA concluded that humans likely mated with European Neandertals at that time.

The two skulls were held in a small section of wall that had washed into Greece’s Apidima Cave from higher cliff sediment and then solidified roughly 150,000 years ago. Since one skull is older than the other, the two must originally have been deposited in different sediment layers before ending up about 30 centimeters apart in the cave wall, the researchers say.
Earlier studies indicated that one Apidima skull, which retains the face and much of the braincase, was a Neandertal that lived at least 160,000 years ago. But fossilization and sediment pressures had distorted the skull’s shape. Based on four 3-D digital reconstructions of the specimen, Harvati’s team concluded that its heavy brow ridges, sloping face and other features resembled Neandertal skulls more than ancient and modern human skulls. An analysis of the decay rate of radioactive forms of uranium in skull bone fragments produced an age estimate of at least 170,000 years.

A second Apidima fossil, also dated using uranium analyses, consists of the back of a slightly distorted braincase. Its rounded shape in a digital reconstruction characterizes H. sapiens, not Neandertals, the researchers say. A bunlike bulge often protrudes from the back of Neandertals’ skulls.
But without any facial remains to confirm the species identity of the partial braincase, “it is still possible that both Apidima skulls are Neandertals,” says paleoanthropologist Israel Hershkovitz of Tel Aviv University. Hershkovitz led the team that discovered the Misliya jaw and assigned it to H. sapiens.

Harvati and her colleagues will try to extract DNA and species-distinguishing proteins (SN: 6/8/19, p. 6) from the Greek skulls to determine their evolutionary identities and to look for signs of interbreeding between humans and Neandertals.

The find does little to resolve competing explanations of how ancient humans made their way out of Africa. Harvati’s suggestion that humans trekked from Africa to Eurasia several times starting more than 200,000 years ago is plausible, says paleoanthropologist Eric Delson of City University of New York’s Lehman College in an accompanying commentary. And the idea that some H. sapiens newcomers gave way to Neandertals probably also applied to humans who reached Misliya Cave and nearby Middle Eastern sites as late as around 90,000 years ago, before Neandertals occupied the area by 60,000 years ago, Delson says.

Hershkovitz disagrees. Ancient humans and Neandertals lived side by side in the Middle East for 100,000 years or more and occasionally interbred, he contends. Misliya Cave sediment bearing stone tools dates to as early as 274,000 years ago, Hershkovitz says. Since only H. sapiens remains have been found in the Israeli cave, ancient humans probably made those stone artifacts and could have been forerunners of Greek H. sapiens.

Both fish and humans have REM-like sleep

No one should have to sleep with the fishes, but new research on zebrafish suggests that we sleep like them.

Sleeping zebrafish have brain activity similar to both deep slow-wave sleep and rapid eye movement, or REM, sleep that’s found in mammals, researchers report July 10 in Nature. And the team may have tracked down the cells that kick off REM sleep.

The findings suggest that the basics of sleep evolved at least 450 million years ago in zebrafish ancestors, before the evolution of animals that give birth to live young instead of laying eggs. That’s 150 million years earlier than scientists thought when they discovered that lizards sleep like mammals and birds (SN: 5/28/16, p. 9).

What’s more, sleep may have evolved underwater, says Louis C. Leung, a neuroscientist at Stanford University School of Medicine. “These signatures [of sleep] really have important functions — even though we may not know what they are — that have survived hundreds of millions of years of evolution.”
In mammals, birds and lizards, sleep has several stages characterized by specific electrical signals. During slow-wave sleep, the brain is mostly quiet except for synchronized waves of electrical activity. The heart rate decreases and muscles relax. During REM or paradoxical sleep, the brain lights up with activity almost like it’s awake. But the muscles are paralyzed (except for rapid twitching of the eyes) and the heart beats erratically.

For many years, scientists have known that fruit flies, nematodes, fish, octopuses and other creatures have rest periods reminiscent of sleep. But until now, no one could measure the electrical activity of those animals’ brains to see if that rest is the same as mammals’ snoozing.

Leung and colleagues developed a system to do just that in zebrafish by genetically engineering them to make a fluorescent molecule that lights up when it encounters calcium, which is released when nerve cells and muscles are active. By following the flashes of light using a light sheet microscope, the researchers tracked brain and muscle activity in the naturally transparent fish larvae.

The next task was to lull fish asleep under the microscope. In some experiments, the team added drugs that trigger either slow-wave or REM sleep in mammals to the fish’s water. In others, researchers deprived fish of sleep for a night or tuckered the fish out with lots of activity during the day. Results from all the snooze-inducing methods were the same.

Sleeping fish have two distinct types of brain activity, the team found. One, similar to slow-wave sleep, was characterized by short bursts of activity in some nerve cells in the brain. The researchers call that state slow-bursting sleep. REM-like sleep, which the researchers dubbed “propagating-wave sleep,” was characterized by frenzied brain activity that spreads like a wave through the brain. The researchers aren’t calling the sleep phases REM or slow-wave sleep because there are some minor differences between the way fish and mammals sleep.
A group of cells that line hollow spaces called ventricles deep in the brain seems to trigger that wave of REM-like brain activity. These ependymal cells dip fingerlike cilia into the cerebral spinal fluid that bathes the ventricles and the central nervous system. The cells appear to beat their cilia faster as amounts of a well-known, sleep-promoting hormone called melanin-concentrating hormone in the fluid increases, the researchers discovered.
It’s unclear how the ependymal cells communicate with the rest of the brain to set off REM-like activity. Such cells are also present in mammals, but no one has yet been able to see that deeply into the brains of sleeping mammals to determine whether the cells play a role in sleep. But knowing about these cells may help researchers develop better sleep aids, Leung says.

Just as in mammals, sleep affects zebrafish’s whole bodies. Their muscles relax, and their hearts slow from about 200 beats per minute when awake to about 110 to 120 beats per minute during the slow-wave–like sleep. During the REM-like sleep, the heart slows even more, to about 90 beats per minute, and loses its regular rhythm. The fish’s muscles also go completely slack. The one characteristic that the fish lack is rapid eye movement. Instead, the eyes roll back into their sockets, says study coauthor Philippe Mourrain, a biologist at Stanford University School of Medicine.

Lack of eye movement could indicate that emotion-processing parts of the brain, such as the amygdala, aren’t as active in zebrafish as they are in mammals, says sleep researcher Allan Pack of the University of Pennsylvania Perelman School of Medicine. With their brain-activity monitoring, the researchers have taken sleep research “to the next level,” says Pack, and “they present pretty compelling evidence” of slow-wave and REM-like sleep in the fish.

The whole-body involvement that the researchers documented solidifies the argument that fish sleep is similar to mammals, says neuroscientist Paul Shaw of Washington University School of Medicine in St. Louis. In all organisms known to snooze, “sleep is manifest everywhere” in the body, he says.

Future experiments may show why poor sleep or a lack of Zs contributes to health problems in people, such as obesity, heart disease and diabetes.

Ancient DNA unveils disparate fates of Ice Age hunter-gatherers in Europe

Ice sheets expanded across much of northern Europe from around 25,000 to 19,000 years ago, making a huge expanse of land unlivable. That harsh event set in motion a previously unrecognized tale of two human populations that played out at opposite ends of the continent.

Western European hunter-gatherers outlasted the icy blast. Easterners were replaced by waves of newcomers.

That’s the implication of the largest study to date of ancient Europeans’ DNA, covering a period before, during and after what’s known as the Last Glacial Maximum, paleogeneticist Cosimo Posth and colleagues report March 1 in Nature.
As researchers have long thought, southwestern Europe provided refuge from the last Ice Age’s big chill for hunter-gatherers based in and near that region, the scientists say. But it turns out that southern Europe, where Italy is now located, did not offer lasting respite from the cold for nearby groups, as previously assumed.

Instead, those people were replaced by genetically distinct hunter-gatherers who presumably had lived just to the east along the Balkan Peninsula. Those people, who carried ancestry from parts of southwestern Asia, began trekking into what’s now northern Italy by about 17,000 years ago, as the Ice Age began to wane.

“If local [Ice Age] populations in Italy did not survive and were replaced by groups from the Balkans, this completely changes our interpretation of the archaeological record,” says Posth, of the University of Tübingen in Germany.

Posth and colleagues’ conclusions rest on analyses of DNA from 356 ancient hunter-gatherers, including new molecular evidence for 116 individuals from 14 countries in Europe and Asia. Excavated human remains that yielded DNA dated to between about 45,000 and 5,000 years ago (SN: 4/7/21).

Comparisons of sets of gene variants inherited by these hunter-gatherers from common ancestors enabled the researchers to reconstruct population movements and replacements that shaped ancient Europeans’ genetic makeup. For the first time, ancient DNA evidence included individuals from what’s known as the Gravettian culture, which dates from about 33,000 to 26,000 years ago in central and southern Europe, and from southwestern Europe’s Solutrean culture, which dates to between about 24,000 and 19,000 years ago.
Contrary to expectations, makers of Gravettian tools came from two genetically distinct groups that populated western and eastern Europe for roughly 10,000 years before the Ice Age reached its peak, Posth says. Researchers have traditionally regarded Gravettian implements as products of a biologically uniform population that occupied much of Europe.

“What we previously thought was one genetic ancestry in Europe turned out to be two,” says paleogeneticist Mateja Hajdinjak of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who did not participate in the new study. And “it seems that western and southwestern Europe served as a [refuge from glaciation] more than southeastern Europe and Italy.”

Descendants of the western Gravettian population, who are associated with Solutrean artifacts and remnants of another ancient culture in western Europe that ran from about 19,000 to 14,000 years ago, outlasted the Ice Age before spreading northeastward across Europe, the researchers say.

Further support for southwestern Europe as an Ice Age refuge comes from DNA extracted from a pair of fossil teeth that belonged to an individual linked to the Solutrean culture in southern Spain. That roughly 23,000-year-old adult was genetically similar to western European hunter-gatherers who lived before and after the Last Glacial Maximum, Max Planck paleogeneticist Vanessa Villalba-Mouco and colleagues, including Posth, report March 1 in Nature Ecology & Evolution.

Meanwhile, the genetic evidence suggests that hunter-gatherers in what’s now Italy were replaced by people from farther east, probably based in the Balkan region. Those newcomers must have brought with them a distinctive brand of stone artifacts, previously excavated at Italian sites and elsewhere in eastern Europe, known as Epigravettian tools, Posth says. Many archaeologists have suspected that Epigravettian items were products of hunter-gatherers who clustered in Italy during the Ice Age’s peak freeze.

But, Hajdinjak says, analyses of DNA from fossils of Ice Age Balkan people are needed to clarify what groups moved through Italy, and when those migrations occurred.

Ultimately, descendants of Ice Age migrants into Italy reached southern Italy and then western Europe by around 14,000 years ago, Posth and colleagues say. Ancient DNA evidence indicates that, during those travels, they left a major genetic mark on hunter-gatherers across Europe.

How meningitis-causing bacteria invade the brain

Bacteria can slip into the brain by commandeering cells in the brain’s protective layers, a new study finds. The results hint at how a deadly infection called bacterial meningitis takes hold.

In mice infected with meningitis-causing bacteria, the microbes exploit previously unknown communication between pain-sensing nerve cells and immune cells to slip by the brain’s defenses, researchers report March 1 in Nature. The results also hint at a new way to possibly delay the invasion — using migraine medicines to interrupt those cell-to-cell conversations.
Bacterial meningitis is an infection of the protective layers, or meninges, of the brain that affects 2.5 million people globally per year. It can cause severe headaches and sometimes lasting neurological injury or death.

“Unexpectedly, pain fibers are actually hijacked by the bacteria as they’re trying to invade the brain,” says Isaac Chiu, an immunologist at Harvard Medical School in Boston. Normally, one might expect pain to be a warning system for us to shut down the bacteria in some way, he says. “We found the opposite…. This [pain] signal is being used by the bacteria for an advantage.”

It’s known that pain-sensing neurons and immune cells coexist in the meninges, particularly in the outermost layer called the dura mater (SN: 11/11/20). So to see what roles the pain and immune cells play in bacterial meningitis, Chiu’s team infected mice with two of the bacteria known to cause the infection in humans: Streptococcus pneumoniae and S. agalactiae. The researchers then observed where those bacteria ended up in mice genetically tweaked to lack pain-sensing nerve cells and compared those resting spots with the ones in mice that had the nerve cells intact.

Mice without pain-sensing neurons had fewer bacteria in the meninges and brain than those with the nerve cells, the team found. This contradicts the idea that pain in meningitis serves as a warning signal to the body’s immune system, mobilizing it for action.

Further tests showed that the bacteria triggered a chain of immune-suppressing events, starting with the microbes secreting toxins in the dura mater.

The toxins hitched onto the pain neurons, which in turn released a molecule called CGRP. This molecule is already known to bind to a receptor on immune cells, where it helps control the dura mater’s immune responses. Injecting infected mice with more CGRP lowered the number of dural immune cells and helped the infection along, the researchers found.

The team also looked more closely at the receptor that CGRP binds to. In infected mice bred without the receptor, fewer bacteria made it into the brain. But in ones with the receptor, immune cells that would otherwise engulf bacteria and recruit reinforcements were disabled.
The findings suggest that either preventing the release of CGRP or preventing it from binding to immune cells might help delay infection.

In humans, neuroscientists know that CGRP is a driver of headaches — it’s already a target of migraine medications (SN: 6/5/18). So the researchers gave five mice the migraine medication olcegepant, which blocks CGRP’s effects, and infected them with S. pneumoniae. After infection, the medicated mice had fewer bacteria in the meninges and brain, took longer to show symptoms, didn’t lose as much weight and survived longer than mice that were not given the medication.

The finding suggests olcegepant slowed the infection. Even though the drug bought the mice only a few extra hours, that time is crucial in meningitis, which can progress just as quickly. Were olcegepant to work the same way in humans, it might give doctors more time to treat meningitis. But the effect is probably not as dramatic in people, cautions Michael Wilson, a neurologist at the University of California, San Francisco who wasn’t involved with the work.

Scientists still need to determine whether pain-sensing nerve cells and immune cells have the same rapport in human dura mater, and whether migraine drugs could help treat bacterial meningitis in people.

Neurologist Avindra Nath has doubts. Clinicians think the immune response and inflammation damage the brain during meningitis, says Nath, who heads the team investigating nervous system infections at the National Institute of Neurological Disorders and Stroke in Bethesda, Md. So treatment involves drugs that suppress the immune response, rather than enhance it as migraine medications might.

Chiu acknowledges this but notes there might be room for both approaches. If immune cells in the dura mater could head the infection off at the pass, they might keep some bacteria from penetrating the brain’s defenses, minimizing brain inflammation.

This study might not ultimately change how clinicians treat patients, Wilson says. But it still reveals something new about one of the first lines of defense for the brain.

Fungi don’t turn humans into zombies. But The Last of Us gets some science right

Like so many others, I’ve been watching the HBO series The Last of Us. It’s a classic zombie apocalypse drama following Joel (played by Pedro Pascal) and Ellie (Bella Ramsey) as they make their way across the former United States (now run by a fascist government called Fedra).

I’m a big fan of zombie and other post-apocalyptic fiction. And my husband had told me how good the storyline is in the video game that inspired the series, so I was prepared for interesting storytelling. What I didn’t expect was to be so intrigued by the science behind the sci-fi.
In the opening minutes of the series, two scientists on a fictional 1968 talk show discuss the microbes that give them pandemic nightmares. One says it’s fungi — not viruses or bacteria — that keep him awake. Especially worrisome, he says, are the fungi that control rather than kill their hosts. He gives the example of fungi that turn ants into living zombies, puppeteering the insects by flooding their brains with hallucinogens.

He goes on to warn that even though human body temperature keeps us fungus-free, that might not be true if the world got a little bit warmer. He predicts that as the thermostat climbs, a fungus that hijacks insects could mutate a gene allowing it to burrow into human brains and take control of our minds. Such a fungus could induce its human puppets to spread the fungus “by any means necessary,” he says. What’s worse, there are no preventatives, treatments or cures, nor any way to make them.

It’s a brief segment, but it had me hooked. It all sounded so chilling and … plausible. After all, fungi like ones that cause nail infections, yeast infections and ringworm already infect people.

So I consulted some experts on fungal infections to find out whether this could actually happen.

I’ve got good news and bad news.

First, the bad news.

Bad news: Climate change has already helped one fungus mutate to infect humans
I wanted to know if warming has spurred any fungi to mutate and become infectious. So I called Arturo Casadevall. He has been thinking about fungi and heat for a long time. He’s proposed that the need to avoid fungal infections may have provided the evolutionary pressure that drove mammals and birds to evolve warm-bloodedness (SN: 12/3/10).

Most fungal species simply can’t reproduce at human body temperature (37° Celsius, or 98.6° Fahrenheit). But as the world warms, “these strains either have to die or adapt,” says Casadevall, a microbiologist who specializes in fungal infections at Johns Hopkins Bloomberg School of Public Health. That raises the possibility that fungi that now infect insects or reptiles could evolve to grow at temperatures closer to human body temperature.

At the same time, humans’ average body temperature has been falling since the 19th century, at least in high-income countries, researchers reported in eLife in 2020. One study from the United Kingdom pegs average body temperature at 36.6° C (97.9° F). And some of us are even cooler.

Fungi’s possible adaptation to higher heat and humans’ cooling body temperature are on a collision course, Casadevall says.
He and colleagues presented evidence of one such crash. Climate change may have allowed a deadly fungus called Candida auris to acclimate to human body temperatures (SN: 7/26/19). A version of the fungus that could infect humans independently emerged on three continents from 2012 to 2015. “It’s not like someone took a plane and spread it. These things came out of nowhere simultaneously,” Casadevall says.

Some people argue that the planet hasn’t warmed enough to make fungi a problem, he says. “But you have to think about all the really hot days [that come with climate change]. Every really hot day is a selection event,” in which many fungi will die. But some of those fungi will have mutations that help them handle the heat. Those will survive. Their offspring may be able to survive even hotter future heat waves, until human body temperature is no longer a challenge.

Fungi that infect people are usually not picky about their hosts, Casadevall says. They will grow in soil or — if given an opportunity — in people, pets or in other animals. The reason fungi don’t infect people more often is that “the world is much colder than we are, and they have no need of us,” he says.

When people do get infected, the immune system usually keeps the fungi in check. But fungal infections can cause serious illness or be deadly, particularly to people with weakened immune systems (SN: 11/29/21; SN: 1/10/23).

The second episode of The Last of Us reveals that the zombie-creating fungi initially spread through people eating contaminated flour. Then, the infected people attack and bite others, spreading the fungus.

In real life, most human infections arise from breathing in spores. But Casadevall says it’s “not implausible” that people could get infected by eating spores or by being bitten.

Also bad: Fungal genes can adapt to higher heat
I also wondered exactly how a fungus could evolve in response to heat. Asiya Gusa, a fungal researcher at Duke University School of Medicine, has published one possibility.

In 2020, she and colleagues reported in the Proceedings of the National Academy of Sciences how one fungus mutated at elevated temperatures to become harder to fight.

Cryptococcus deneoformans, which already infects humans (though it’s no zombie-maker), became resistant to some antifungal drugs when grown at human body temperature. The resistance was born when mobile bits of DNA called transposons (often called jumping genes) hopped into a few genes needed for the antifungals to work.

In a follow-up study, Gusa and colleagues grew C. deneoformans at either 30° C or 37° C for 800 generations, long enough to detect multiple changes in their DNA. Fungi had no problem growing at the balmy 30° C (86° F), the temperature at which researchers typically grow fungi in the lab. But their growth slowed at the higher temperature, a sign that the fungi were under stress from the heat.

In C. deneoformans, that heat stress really got things jumping. One type of transposon accumulated a median of 12 extra copies of itself in fungi grown at body temperature. By contrast, fungi grown at 30° C tended to pick up a median of only one extra copy of the transposon. The team reported those results January 20 in PNAS. The researchers don’t yet know the effect the transposon hops might have on the fungi’s ability to infect people, cause disease or resist fungus-fighting drugs.

So yeah, the bad news is not great. Fungi are mutating in the heat and at least one species has gained the ability to infect people thanks to climate change. Other fungi that infect people are more widespread than they were in the 1950s and 1960s, also thanks to a warming world (SN: 1/4/23).

But I promised good news. And here it is.

Good news: Human brains may resist zombification
It may not be our body temperature, but our brain chemistry, that protects us from being hijacked by zombifying fungi.

I consulted Charissa de Bekker and Jui-Yu Chou, two researchers who study the Ophiocordyceps fungi that are the model for the TV show’s fungal menace. These fungi infect ants, flooding the insects with a cocktail of chemicals that steer the ants to climb plants. Once in position, the ants chomp down and the chemicals keep the jaw muscles locked in place (SN: 7/17/19).

Unlike most fictional zombies, the ants are alive during this process. “A lot of people get the misconception that we work on undead ants,” says de Bekker, a microbiologist at Utrecht University in the Netherlands. She’s glad to see the show “stick to the story of the host being very much alive while its behaviors change.” The fungi even help preserve the ant, keeping it alive even while feeding on it. But eventually the ant dies. Then a mushroom rises from the corpse, showering spores onto the ground where other ants may become infected.

Related species of Ophiocordyceps infect various species of ants and other insects. But each fungal species is very specific to the host it infects. That’s because the fungi had to individualize the chemicals they use to control the particular species they infect. The ability to manipulate behavior comes at the cost of not being able to infect multiple species.
A fungus that specializes in infecting ants probably can’t get past humans’ immune systems, says Chou, a fungal researcher at the National Changhua University of Education in Taiwan. “Think of a key that fits into a specific lock. It is only this unique combination that will trigger the lock to open,” he says.

Even if the fungi evolved to withstand human body temperature and immune system attacks, they probably couldn’t take control of our minds, de Bekker says. “Manipulation is like a whole different ballgame. You need a ton of additional tools to get there.” It took millions of years of coevolution for the fungi to master piloting ants, after all.

While fungi do make mind-altering chemicals that can affect human behavior (LSD and psilocybin, for instance), Casadevall agrees that fungi that mind control insects probably won’t turn humans into zombies. “It’s not one of my worries,” he says.

Infected ants don’t turn into vicious, biting zombies either, de Bekker says. “If anything, we actually see the healthy ants being aggressive toward infected individuals, once they figure out that they’re infected, to basically get rid of them.” That “social immunity” helps protect the rest of the nest from infection.

Also good: Humans are innovative enough to develop treatments
The fictional scientist’s assertion that we couldn’t prevent, treat or cure these fungal infections is also a stretch.

Antifungal drugs exist and they cure many fungal infections, though some infections may persist, and those that spread to the brain can be particularly difficult to clear. Some fungi are also evolving resistance to the drugs. And a few fungal vaccines are in the works, although they may not be ready for years.

The experts I talked to say they hope the show will bring attention to real fungal diseases.

Gusa was especially glad to see fungi in the limelight. And she shares my fondness for that retro series opening in which the scientist predicts climate change could spawn mind-controlling fungi bent on infecting every person on the planet.

“I was pretty much yelling at the TV when I watched the [show’s] intro,” in an excited kind of way, she says. “This is the foundation of a lot of my grant funding … the threat of thermal adaptation of fungi.… To see it played out on the screen was something kind of fun.”

The Milky Way may be spawning many more stars than astronomers had thought

The Milky Way is churning out far more stars than previously thought, according to a new estimate of its star formation rate.

Gamma rays from aluminum-26, a radioactive isotope that arises primarily from massive stars, reveal that the Milky Way converts four to eight solar masses of interstellar gas and dust into new stars each year, researchers report in work submitted to arXiv.org on January 24. That range is two to four times the conventional estimate and corresponds to an annual birthrate in our galaxy of about 10 to 20 stars, because most stars are less massive than the sun.

At this rate, every million years — a blink of the eye in astronomical terms — our galaxy spawns 10 million to 20 million new stars. That’s enough to fill roughly 10,000 star clusters like the beautiful Pleiades cluster in the constellation Taurus. In contrast, many galaxies, including most of the ones that orbit the Milky Way, make no new stars at all.
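
The arithmetic behind these figures can be sketched in a few lines. The average stellar mass (~0.4 solar masses) and the Pleiades star count (~1,000) used below are round illustrative numbers, not values from the study:

```python
# Back-of-the-envelope check of the star-count figures above.
# Assumed round numbers (not from the paper): an average star of
# ~0.4 solar masses, and ~1,000 stars in a Pleiades-like cluster.
AVG_STAR_MASS = 0.4       # solar masses, typical of the stellar mass function
PLEIADES_STARS = 1_000    # rough star count of the Pleiades cluster

for rate in (4, 8):       # solar masses of gas turned into stars per year
    stars_per_year = rate / AVG_STAR_MASS
    stars_per_myr = stars_per_year * 1e6      # stars formed per million years
    clusters = stars_per_myr / PLEIADES_STARS # Pleiades-like clusters per Myr
    print(f"{rate} Msun/yr -> ~{stars_per_year:.0f} stars/yr, "
          f"~{stars_per_myr:.0e} stars/Myr, ~{clusters:,.0f} clusters")
```

With these assumptions, 4 to 8 solar masses per year works out to roughly 10 to 20 stars per year and 10 to 20 million stars per million years, matching the ranges quoted above.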

“The star formation rate is very important to understand for galaxy evolution,” says Thomas Siegert, an astrophysicist at the University of Würzburg in Germany. The more stars a galaxy makes, the faster it enriches itself with oxygen, iron and the other elements that stars create. Those elements then alter star-making gas clouds and can change the relative number of large and small stars that the gas clouds form.

Siegert and his colleagues studied the observed intensity and spatial distribution of emission from aluminum-26 in our galaxy. A massive star creates this isotope during both life and death. During its life, the star blows the aluminum into space via a strong wind. If the star explodes when it dies, the resulting supernova forges more. The isotope, with a half-life of 700,000 years, decays and gives off gamma rays.
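
That 700,000-year half-life is why the gamma-ray signal traces only recent star formation: the isotope decays away on timescales that are short by galactic standards. A minimal sketch of the standard exponential-decay relation, fraction remaining = 0.5^(t / half-life):

```python
# Exponential decay of aluminum-26 (half-life ~700,000 years, as above).
HALF_LIFE = 700_000  # years

def fraction_remaining(years: float) -> float:
    """Fraction of an initial aluminum-26 sample left after `years`."""
    return 0.5 ** (years / HALF_LIFE)

for t in (700_000, 1_400_000, 7_000_000):
    print(f"after {t:>9,} years: {fraction_remaining(t):.4f} remains")
```

After ten half-lives (7 million years, still brief for a galaxy), less than a tenth of a percent of the original isotope survives, so the observed emission reflects stars formed in roughly the last few million years.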

Like X-rays, and unlike visible light, gamma rays penetrate the dust that cloaks the youngest stars. “We’re looking through the entire galaxy,” Siegert says. “We’re not X-raying it; here we’re gamma-raying it.”

The more stars our galaxy spawns, the more gamma rays emerge. The best match with the observations, the researchers find, is a star formation rate of four to eight solar masses a year. That is much higher than the standard estimate for the Milky Way of about two solar masses a year.

The revised rate is very realistic, says Pavel Kroupa, an astronomer at the University of Bonn in Germany who was not involved in the work. “I’m very impressed by the detailed modeling of how they account for the star formation process,” he says. “It’s a very beautiful work. I can see some ways of improving it, but this is really a major step in the absolutely correct direction.”

Siegert cautions that it is difficult to tell how far the gamma rays have traveled before reaching us. In particular, if some of the observed emission arises nearby — within just a few hundred light-years of us — then the galaxy has less aluminum-26 than the researchers have calculated, which means the star formation rate is on the lower side of the new estimate. Still, he says it’s unlikely to be as low as the standard two solar masses per year.

In any event, the Milky Way is the most vigorous star creator in a collection of more than 100 nearby galaxies called the Local Group. The largest Local Group galaxy, Andromeda, converts only a fraction of a solar mass of gas and dust into new stars a year. Among Local Group galaxies, the Milky Way ranks second in size, but its high star formation rate means that we definitely try a lot harder.

Psychedelics may improve mental health by getting inside nerve cells

Psychedelics go beneath the cell surface to unleash their potentially therapeutic effects.

These drugs are showing promise in clinical trials as treatments for mental health disorders (SN: 12/3/21). Now, scientists might know why. These substances can get inside nerve cells in the cortex — the brain region important for consciousness — and tell the neurons to grow, researchers report in the Feb. 17 Science.

Several mental health conditions, including depression and post-traumatic stress disorder, are tied to chronic stress, which degrades neurons in the cortex over time. Scientists have long thought that repairing the cells could provide therapeutic benefits, like lowered anxiety and improved mood.

Psychedelics — including psilocin, which comes from magic mushrooms, and LSD — do that repairing by promoting the growth of nerve cell branches that receive information, called dendrites (SN: 11/17/20). The behavior might explain the drugs’ positive outcomes in research. But how they trigger cell growth was a mystery.

It was already known that, in cortical neurons, psychedelics activate a certain protein that receives signals and gives instructions to cells. This protein, called the 5-HT2A receptor, is also stimulated by serotonin, a chemical made by the body and implicated in mood. But a study in 2018 determined that serotonin doesn’t make these neurons grow. That finding “was really leaving us scratching our heads,” says chemical neuroscientist David Olson, director of the Institute for Psychedelics and Neurotherapeutics at the University of California, Davis.

To figure out why these two types of chemicals affect neurons differently, Olson and colleagues tweaked some substances to change how well they activated the receptor. But those better equipped to turn it on didn’t make neurons grow. Instead, the team noticed that “greasy” substances, like LSD, that easily pass through cells’ fatty outer layers resulted in neurons branching out.

Polar chemicals such as serotonin, which have unevenly distributed electrical charges and therefore can’t get into cells, didn’t induce growth. Further experiments showed that most cortical neurons’ 5-HT2A receptors are located inside the cell, not at the surface where scientists have mainly studied them.

But once serotonin gained access to the cortical neurons’ interior — via artificially added gateways in the cell surface — it too led to growth. It also induced antidepressant-like effects in mice. A day after receiving a surge in serotonin, animals whose brain cells contained unnatural entry points didn’t give up as quickly as normal mice when forced to swim. In this test, the longer the mice tread water, the more effective an antidepressant is predicted to be, showing that inside access to 5-HT2A receptors is key for possible therapeutic effects.

“It seems to overturn a lot about what we think should be true about how these drugs work,” says neuroscientist Alex Kwan of Cornell University, who was not involved in the study. “Everybody, including myself, thought that [psychedelics] act on receptors that are on the cell surface.”

That’s where most receptors that function like 5-HT2A are found, says biochemist Javier González-Maeso of the Virginia Commonwealth University in Richmond, who was also not involved in the work.

Because serotonin can’t reach 5-HT2A receptors inside typical cortical neurons, Olson proposes that the receptors might respond to a different chemical made by the body. “If it’s there, it must have some kind of role,” he says. DMT, for example, is a naturally occurring psychedelic made by plants and animals, including humans, and can reach a cell’s interior.

Kwan disagrees. “It’s interesting that psychedelics can act on them, but I don’t know if the brain necessarily needs to use them when performing its normal function.” Instead, he suggests that the internal receptors might be a reserve pool, ready to replace those that get degraded on the cell surface.

Either way, understanding the cellular mechanisms behind psychedelics’ potential therapeutic effects could help scientists develop safer and more effective treatments for mental health disorders.

“Ultimately, I hope this leads to better medicines,” Olson says.

Glassy eyes may help young crustaceans hide from predators in plain sight

Fledgling crustaceans have eyes like the sea, a peculiarity that could help them hide from predators.

Young shrimp, crab or lobster larvae already rock nearly translucent bodies to stay out of view. But dark eye pigments essential for vision pose the risk of exposing the animals anyway.

Some see-through ocean animals rely on mirrored irises or minuscule eyes to avoid detection. Young shrimp and prawns, on the other hand, camouflage their dark pigments behind light-reflecting glass made of tiny, crystalline spheres, researchers report in the Feb. 17 Science.
Variations in the size and placement of the orbs allow the crustaceans’ eyes to shine light that precisely matches the color of the surrounding water, possibly rendering them invisible to predators on the hunt for a meal.

Technologies that mimic the nanospheres’ structure could one day inspire more efficient solar energy or bio-friendly paints, the scientists say.

“I’ve often wondered what’s going on with [these animals’] eyeshine,” says evolutionary biologist Heather Bracken-Grissom of Florida International University in Miami, who was not involved in the study. She and colleagues often collect crustaceans from the deep sea, giving them nicknames like “blue-eyed arthropod” or “green-eyed, weird-looking shrimp” because the creatures don’t resemble their adult forms. Now, she says, that eye color makes sense.

In the study, chemist Keshet Shavit and colleagues used an electron microscope to peer into the eyes of lab-raised and wild crustaceans. Inside shrimp and prawn eyes, the team found crystalline nanospheres made of isoxanthopterin, a molecule that reflects light.

The spheres are a bit like disco balls, with highly reflective surfaces pointing outward, says study coauthor Benjamin Palmer, a chemist at Ben-Gurion University of the Negev in Beer-Sheva, Israel. Each sphere is made of thin isoxanthopterin plates that stick together to form balls that range in size from around 250 to 400 nanometers in diameter.

These balls are arranged in clusters at the base of protein-dense cones that focus light on the animal’s light-sensing nerves, and form a protective cover over the pigmented cells. But crustacean larvae can still see because there are small holes in the glass, Palmer says. “It’s basically allowing light to go down to the retina on some specific angles, but on other angles, it’s reflecting light back.”

The size and order of the spheres seem to influence the color of the reflected light, the team’s observations and computer simulations show.

“The correlation between the particle size and the eyeshine color is beyond amazing,” says Shavit, also at Ben-Gurion University. Nanosphere size appears to help the animals’ eyes match the color of their native habitat, helping the critters blend into the background.

Blue-eyed shrimp that inhabit the Gulf of Aqaba’s clear blue waters off the coast of Israel, for instance, have spheres that are approximately 250 to 325 nanometers in diameter. The 400-nanometer-wide spheres of a freshwater prawn (Macrobrachium rosenbergii) glitter yellow-green, mimicking muddy waters found in the salty estuaries where they live.

The prawn’s eyes also seem to be able to reflect different colors in different environments. Individuals exposed to sunlight for four hours in the lab had silvery yellow eyes, possibly a result of nanospheres arranged in a disorganized jumble. But individuals left in the dark overnight had green eyes. Their nanospheres are arranged in layers — though the orbs within each layer are still disorganized, Palmer says.

Such adaptable eyes could help larvae move undetected through different parts of the ocean as changing light levels alter the color of the water, Bracken-Grissom says. At night, young crustaceans migrate to shallow waters to feed and dive back down when the sun rises. “If they are in fact using it as a form of camouflage, it would be an ingenious way to camouflage themselves as they move through these different light environments.”