The undoing of toxic “forever chemicals” may be found in products in your pantry.
Perfluoroalkyl and polyfluoroalkyl substances, also known as PFAS, can persist in the environment for centuries. While the health impacts of only a fraction of the thousands of different types of PFAS have been studied, research has linked exposure to high levels of some of these widespread, human-made chemicals to health issues such as cancer and reproductive problems.
Now, a study shows that the combination of ultraviolet light and a couple of common chemicals can break down nearly all the PFAS in a concentrated solution in just hours. The process involves blasting UV radiation at a solution containing PFAS and iodide, which is often added to table salt, and sulfite, a common food preservative, researchers report in the March 15 Environmental Science & Technology. “They show that when [iodide and sulfite] are combined, the system becomes a lot more efficient,” says Garrett McKay, an environmental chemist at Texas A&M University in College Station who was not involved in the study. “It’s a big step forward.”
A PFAS molecule contains a chain of carbon atoms that are bonded to fluorine atoms. The carbon-fluorine bond is one of the strongest known chemical bonds. This sturdy bond makes PFAS useful for many applications, such as water- and oil-repellent coatings, firefighting foams and cosmetics (SN: 6/4/19; SN: 6/15/21). Owing to their widespread use and longevity, PFAS have been detected in soils, food and even drinking water. The U.S. Environmental Protection Agency sets health advisory levels for PFOA and PFOS — two common types of PFAS — at 70 parts per trillion.
Treatment facilities can filter PFAS out of water using technologies such as activated carbon filters or ion exchange resins. But these removal processes concentrate PFAS into a waste that requires a lot of energy and resources to destroy, says study coauthor Jinyong Liu, an environmental chemist at the University of California, Riverside. “If we don’t [destroy this waste], there will be secondary contamination concerns.”
One of the most well-studied ways to degrade PFAS involves mixing them into a solution with sulfite and then blasting the mixture with UV rays. The radiation rips electrons from the sulfite, which then move around, snipping the stubborn carbon-fluorine bonds and thereby breaking down the molecules.
But some PFAS, such as a type known as PFBS, have proven difficult to degrade this way. Liu and his colleagues irradiated a solution containing PFBS and sulfite for an entire day, only to find that less than half of the pollutant in the solution had broken down. Achieving higher levels of degradation required more time and repeated doses of sulfite added at spaced intervals.
The researchers knew that iodide exposed to UV radiation produces more bond-cutting electrons than sulfite. And previous research has demonstrated that UV irradiation paired with iodide alone could be used to degrade PFAS chemicals.
So Liu and his colleagues blasted UV rays at a solution containing PFBS, iodide and sulfite. To the researchers’ surprise, after 24 hours of irradiation, less than 1 percent of the stubborn PFBS remained.
What’s more, the researchers showed that the process destroyed other types of PFAS with similar efficiency and remained effective at PFAS concentrations 10 times as high as those that UV light and sulfite alone could degrade. And by adding iodide, the researchers found that they could speed up the reaction, Liu says, making the process that much more energy efficient.
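Those figures imply a dramatic jump in reaction rate: with sulfite alone, less than half of the PFBS broke down in a day, while adding iodide left less than 1 percent after the same time. Here is a back-of-the-envelope comparison in Python, assuming simple pseudo-first-order decay; the rate constants are bounds implied by those two figures, not values reported in the study.

```python
import math

# Compare degradation rates assuming pseudo-first-order decay,
# C(t) = C0 * exp(-k * t). Both rate constants below are bounds
# implied by the article's figures, not numbers from the study.

t = 24.0  # hours of UV irradiation in both experiments

# Sulfite alone: less than half the PFBS degraded in a day,
# so the surviving fraction is at least 0.5.
k_sulfite = -math.log(0.5) / t    # upper bound: ~0.029 per hour

# Sulfite plus iodide: less than 1 percent of the PFBS remained,
# so the surviving fraction is at most 0.01.
k_combined = -math.log(0.01) / t  # lower bound: ~0.19 per hour

print(f"sulfite alone:    k < {k_sulfite:.3f} per hour")
print(f"sulfite + iodide: k > {k_combined:.3f} per hour")
print(f"speedup: at least {k_combined / k_sulfite:.1f}x")
```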
In the solution, iodide and sulfite worked together to sustain the destruction of PFAS molecules, Liu explains. When UV rays release an electron from iodide, that iodide is converted into a reactive molecule that may then recapture freed electrons. But here sulfite can step in and bond with these reactive molecules and with electron-scavenging oxygen in the solution. This sulfite “trap” helps keep the released electrons free to cut apart PFAS molecules for eight times longer than if sulfite weren’t there, the researchers report.
It’s surprising that no one had demonstrated the effectiveness of using sulfite with iodide to degrade PFAS before, McKay says.
Liu and his colleagues are now collaborating with an engineering company, using their new process to treat PFAS in a concentrated waste stream. The pilot test will conclude in about two years.
U.S. and Soviet leaders … signed agreements on space, science and technology, health and the environment…. The space agreement … outlines plans for cooperation in fields such as meteorology, study of the natural environment, planetary exploration and space biology.
Update The 1972 space agreement led to the first international human spaceflight, the Apollo-Soyuz mission, during which Soviet and U.S. crews socialized in space (SN: 7/26/75, p. 52). Apollo-Soyuz encouraged decades of collaboration that continues today on the International Space Station. Now, Russia’s war in Ukraine has prompted many countries to pull back on scientific endeavors with Russia, in space and on Earth (SN: 3/26/22, p. 6). While NASA remains committed to the space station, the head of Russia’s space agency has threatened to end the cooperation in retaliation for sanctions imposed in response to the war. Russia has yet to make moves to abandon the station, though the country has ceased supplying rocket engines to the United States.
Astronomers have added a new species to the neutron star zoo, showcasing the wide diversity among the compact magnetic remains of dead, once-massive stars.
The newfound highly magnetic pulsar has a surprisingly long rotation period, which is challenging the theoretical understanding of these objects, researchers report May 30 in Nature Astronomy. Dubbed PSR J0901-4046, this pulsar sweeps its lighthouse-like radio beam past Earth about every 76 seconds — three times slower than the previous record holder. While it’s an oddball, some of this newfound pulsar’s characteristics are common among its relatives. That means this object may help astronomers better connect the evolutionary phases among mysterious species in the neutron star menagerie.
Astronomers know of many types of neutron stars. Each one is the compact object left over after a massive star’s explosive death, but their characteristics can vary. A pulsar is a neutron star that astronomers detect at a regular interval thanks to its cosmic alignment: The star’s strong magnetic field produces beams of radio waves emanating from near the star’s poles, and every time one of those beams sweeps across Earth, astronomers can see a radio pulse.
The newfound, slowpoke pulsar sits in our galaxy, roughly 1,300 light-years away. Astrophysicist Manisha Caleb of the University of Sydney in Australia and her colleagues found it in data from the MeerKAT radio telescope outside Carnarvon, South Africa.
Further observations with MeerKAT revealed not only the pulsar’s slow, steady radio beat — a measure of how fast it spins — but also another important detail: the rate at which the spin slows as the pulsar ages. And those two bits of info revealed something odd about this pulsar. According to theory, it should not be emitting radio waves. And yet, it is.
As neutron stars age, they lose energy and spin more slowly. According to calculations, “at some point, they’ve exhausted all their energy, and they cease to emit any sort of emission,” Caleb says. They’ve become dead to detectors.
A pulsar’s rotation period and the rate at which its spin slows relate to the strength of its magnetic field, which accelerates subatomic particles streaming from the star and, in turn, generates radio waves. Any neutron stars spinning as slowly as PSR J0901-4046 are in this stellar “graveyard” and shouldn’t produce radio signals.
But “we just keep finding weirder and weirder pulsars that chip away at that understanding,” says astrophysicist Maura McLaughlin of West Virginia University in Morgantown, who wasn’t involved with this work.
The newfound pulsar could be its own unique species of neutron star. But in some ways, it also looks a bit familiar, Caleb says. She and her colleagues calculated the pulsar’s magnetic field from the rate its spin is slowing, and it’s incredibly strong, similar to the fields of magnetars (SN: 9/17/02). This hints that PSR J0901-4046 could be what’s known as a “quiescent magnetar,” a pulsar with a very strong magnetic field that occasionally emits brilliantly energetic bursts of X-rays or other radiation. “We’re going to need either X-ray emission or [ultraviolet] observations to confirm whether it is indeed a magnetar or a pulsar,” she says.
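The article doesn’t spell out that calculation, but the textbook estimate astronomers use treats the pulsar as a spinning magnetic dipole, giving a surface field of roughly B ≈ 3.2×10^19 √(P Ṗ) gauss, where P is the period in seconds and Ṗ is the dimensionless spin-down rate. A minimal sketch in Python, using the 76-second period from the article and a purely illustrative spin-down rate, not a figure from the paper:

```python
import math

# Vacuum-dipole estimate of a pulsar's surface magnetic field:
#   B ~ 3.2e19 * sqrt(P * Pdot) gauss,
# where P is the rotation period in seconds and Pdot is the
# dimensionless spin-down rate (seconds of period gained per second).

def dipole_field_gauss(period_s: float, pdot: float) -> float:
    return 3.2e19 * math.sqrt(period_s * pdot)

P = 76.0      # rotation period of PSR J0901-4046, from the article
Pdot = 2e-13  # illustrative spin-down rate; NOT a value from the paper

print(f"B ~ {dipole_field_gauss(P, Pdot):.1e} gauss")
# ~1.2e14 gauss, in magnetar territory (roughly 1e13 to 1e15 gauss)
```

The takeaway is the scaling: for a given spin-down rate, a longer period implies a stronger inferred field, which is why such a slow pulsar points toward magnetar-strength magnetism.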
The discovery team still has additional observations to analyze. “We do have a truckload more data on it,” says astrophysicist Ian Heywood of the University of Oxford. The researchers are looking at how the object’s brightness is changing over time and whether its spin abruptly changes, or “glitches.”
The astronomers also are altering their automated computer programs, which scan the radio data and flag intriguing signals, to look for these longer-duration spin periods — or even weirder and more mysterious neutron star phenomena. “The sweet thing about astronomy, for me, is what’s out there waiting for us to find,” Heywood says.
There are things I will always remember from my time in New Mexico. The way the bark of towering ponderosa pines smells of vanilla when you lean in close. Sweeping vistas, from forested mountaintops to the Rio Grande Valley, that embellish even the most mundane shopping trip. The trepidation that comes with the tendrils of smoke rising over nearby canyons and ridges during the dry, wildfire-prone summer months.
There were no major wildfires near Los Alamos National Laboratory during the year and a half that I worked in public communications there and lived just across Los Alamos Canyon from the lab. I’m in Maryland now, and social media this year has brought me images and video clips of the wildfires that have been devastating parts of New Mexico, including the Cerro Pelado fire in the Jemez Mountains just west of the lab. Wherever they pop up, wildfires can ravage the land, destroy property and displace residents by the tens of thousands. The Cerro Pelado fire is small compared with others raging east of Santa Fe — it grew only to the size of Washington, D.C. The fire, which started mysteriously on April 22, is now mostly contained. But at one point it came within 5.6 kilometers of the lab, seriously threatening the place that’s responsible for creating and maintaining key portions of fusion bombs in our nation’s nuclear arsenal.
That close call may be just a hint of growing fire risks to come for the weapons lab as the Southwest suffers in the grip of an epic drought made worse by human-caused climate change (SN: 4/16/20). May and June typically mark the start of the state’s wildfire season. This year, fires erupted in April and were amplified by a string of warm, dry and windy days. The Hermits Peak and Calf Canyon fires east of Santa Fe have merged to become the largest wildfire in New Mexico’s recorded history.
Los Alamos National Lab is in northern New Mexico, about 56 kilometers northwest of Santa Fe. The lab’s primary efforts revolve around nuclear weapons, accounting for 71 percent of its $3.9 billion budget, according to the lab’s fiscal year 2021 numbers. The budget covers a ramp-up in production of hollow plutonium spheres, known as “pits” because they are the cores of nuclear bombs, to 30 per year beginning in 2026. That’s triple the lab’s current capability of 10 pits per year. The site is also home to radioactive waste and debris that has been a consequence of weapons production since the first atomic bomb was built in Los Alamos in the early 1940s (SN: 8/6/20).
What danger would an encroaching fire pose to the lab’s nuclear material and waste? According to literature that Peter Hyde, a spokesperson for the lab, sent to me to ease my concern, not much.
Over the last 3½ years, the lab has removed 3,500 tons of trees and other potential wildfire fuel from the sprawling, 93-square-kilometer complex. Lab facilities, a lab pamphlet says, “are designed and operated to protect the materials that are inside, and radiological and other potentially hazardous materials are stored in containers that are engineered and tested to withstand extreme environments, including heat from fire.”
What’s more, most of the roughly 20,000 drums full of nuclear waste that were stored under tents on the lab’s grounds have been removed. They were a cause for anxiety during the last major fire to threaten the lab, in 2011. According to the most recent numbers on the project’s website, all but 3,812 of those drums have been shipped off to be stored 655 meters underground at the Waste Isolation Pilot Plant near Carlsbad, N.M.
But there’s still 3,500 cubic meters of nuclear waste in the storage area, according to a March 2022 DOE strategic planning document for Los Alamos. That’s enough to fill 17,000 55-gallon drums. So potentially disastrous quantities of relatively exposed nuclear waste remain at the lab — a single drum from the lab site that exploded after transport to Carlsbad in 2014 resulted in a two-year shutdown of the storage facility. With a total budgeted cleanup cost of $2 billion, the incident is one of the most expensive nuclear accidents in the nation’s history.
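That volume-to-drum conversion is easy to verify. A quick sanity check in Python, assuming standard 55-gallon (roughly 208-liter) drums:

```python
# Sanity check: how many 55-gallon drums would 3,500 cubic meters fill?
LITERS_PER_GALLON = 3.785
drum_volume_m3 = 55 * LITERS_PER_GALLON / 1000  # ~0.208 cubic meters

waste_m3 = 3500
print(f"{waste_m3 / drum_volume_m3:,.0f} drums")
# ~16,813, consistent with the roughly 17,000 cited above
```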
Since the 2011 fire, a wider buffer space around the tents has been cleared of vegetation. That buffer, in conjunction with fire suppression systems, makes it unlikely that wildfire will be a danger to the waste-filled drums, according to a 2016 risk analysis of extreme wildfire scenarios conducted by the lab.
But a February 2021 audit by the U.S. Department of Energy’s Office of Inspector General is less rosy. It found that, despite the removal of most of the waste drums and the multiyear wildfire mitigation efforts that the lab describes, the lab’s wildfire protection is still lacking.
According to the 20-page federal audit, the lab at that time had not developed a “comprehensive, risk-based approach to wildland fire management” in accordance with federal policies related to wildland fire management. The report also noted compounding issues, including the absence of federal oversight of the lab’s wildfire management activities. Among the ongoing risks, not all fire roads were maintained well enough to provide a safe route for firefighters and others, “which could create dangerous conditions for emergency responders and delay response times,” the auditors wrote.
And a canyon that runs between the lab and the adjacent town of Los Alamos was identified in the report as being packed with 10 times the number of trees that would be ideal, from a wildfire safety perspective. To make matters worse, there’s a hazardous waste site at the bottom of the canyon that could, the auditors wrote, “produce a health risk to the environment and to human health during a fire.”
“The report was pretty stark,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists. “And certainly, after all the warnings, if they’re still not doing all they need to do to fully mitigate the risk, then that’s just foolishness.”
A 2007 federal audit of Los Alamos, as well as nuclear weapons facilities in Washington state and Idaho, showed similar problems. In short, it seems little has changed at Los Alamos in the 14-year span between 2007 and 2021. Lab spokespeople did not respond to my questions about the lab’s efforts to address the specific problems identified in the 2021 report, despite repeated requests.
The Los Alamos area has experienced three major wildfires since the lab was founded — the Cerro Grande fire in 2000, Las Conchas in 2011 and Cerro Pelado this year. But we probably can’t count on 11-year gaps between future wildfires near Los Alamos, according to Alice Hill, the senior fellow for energy and the environment with the Council on Foreign Relations, who’s based in Washington, D.C.
The changing climate is expected to dramatically affect wildfire risks in years to come, turning Los Alamos and surrounding areas into a tinderbox. A 2018 study in Climatic Change found that the region extending from the higher elevations of New Mexico, where Los Alamos is located, into Colorado and Arizona will experience the greatest increase in wildfire probabilities in the Southwest. A new risk projection tool recommended by Hill, called Risk Factor, also shows increasing fire risk in the Los Alamos area over the next 30 years.
“We are at the point where we are imagining, as we have to, things that we’ve never experienced,” Hill says. “That is fundamentally different than how we have approached these problems throughout human history, which is to look to the past to figure out how to be safer in the future…. The nature of wildfire has changed as more heat is added [to the planet], as temperatures rise.”
Increased plutonium pit production will add to the waste that needs to be shipped to Carlsbad. “Certainly, the radiological assessments in sort of the worst case of wildfire could lead to a pretty significant release of radioactivity, not only affecting the workers onsite but also the offsite public. It’s troubling,” says Lyman, who suggests that nuclear labs like Los Alamos should not be located in such fire-prone areas. For now, some risks from the Cerro Pelado wildfire will persist, according to Jeff Surber, operations section chief for the U.S. Department of Agriculture Forest Service’s efforts to fight the fire. Large wildfires like Cerro Pelado “hold heat for so long and they continue to smolder in the interior where it burns intermittently,” he said in a May 9 briefing to Los Alamos County residents, and to concerned people like me watching online.
It will be vital to monitor the footprint of the fire until rain or snow finally snuffs it out late in the year. Even then, some danger will linger in the form of “zombie fires” that can flame up long after wildfires appear to have been extinguished (SN: 5/19/21). “We’ve had fires come back in the springtime because there was a root underground that somehow stayed lit all winter long,” said Surber.
So the Cerro Pelado fire, and its occasional smoky tendrils, will probably be a part of life in northern New Mexico for months still. And the future seems just as fiery, if not worse. That’s something all residents, including the lab, need to be preparing for.
Meantime, if you make it out to the mountains of New Mexico soon enough, be sure to sniff a vanilla-flavored ponderosa while you still can. I know I will.
It turns out that chicken and rice may have always gone together, from the birds’ initial domestication to tonight’s dinner.
In two new studies, scientists lay out a potential story of chicken’s origins. This poultry tale begins surprisingly recently in rice fields planted by Southeast Asian farmers around 3,500 years ago, zooarchaeologist Joris Peters and colleagues report. From there, the birds were transported westward not as food but as exotic or culturally revered creatures, the team suggests June 6 in the Proceedings of the National Academy of Sciences. “Cereal cultivation may have acted as a catalyst for chicken domestication,” says Peters, of Ludwig Maximilian University of Munich.
The domesticated fowl then arrived in Mediterranean Europe no earlier than around 2,800 years ago, archaeologist Julia Best of Cardiff University in Wales and colleagues report June 6 in Antiquity. The birds appeared in northwest Africa between 1,100 and 800 years ago, the team says.
Researchers have debated where and when chickens (Gallus gallus domesticus) originated for more than 50 years. India’s Indus Valley, northern China and Southeast Asia have all been touted as domestication centers. Proposed dates for chickens’ first appearance have mostly ranged from around 4,000 to 10,500 years ago. A 2020 genetic study of modern chickens suggested that domestication occurred among Southeast Asian red jungle fowl. But DNA analyses, increasingly used to study animal domestication, couldn’t specify when domesticated chickens first appeared (SN: 7/6/17).
Using chicken remains previously excavated at more than 600 sites in 89 countries, Peters’ group determined whether the chicken bones had been found in the sediment where they were originally deposited or had instead settled downward into older sediment over time, and thus were younger than previously assumed.
After establishing the timing of chickens’ appearances at various sites, the researchers used historical references to chickens and data on subsistence strategies in each society to develop a scenario of the animals’ domestication and spread.
The new story begins in Southeast Asian rice fields. The earliest known chicken remains come from Ban Non Wat, a dry rice–farming site in central Thailand that roughly dates to between 1650 B.C. and 1250 B.C. Dry rice farmers plant the crop on upland soil soaked by seasonal rains rather than in flooded fields or paddies. That would have made rice grains at Ban Non Wat fair game for avian ancestors of chickens.
These fields attracted hungry wild birds called red jungle fowl, which increasingly fed on the rice grains, and probably on grains of another cereal crop called millet, grown by regional farmers, Peters’ group speculates. A cultivated familiarity with people launched chicken domestication by around 3,500 years ago, the researchers say.
Chickens did not arrive in central China, South Asia or Mesopotamian society in what’s now Iran and Iraq until nearly 3,000 years ago, the team estimates.
Peters and colleagues have for the first time assembled available evidence “into a fully coherent and plausible explanation of not only where and when, but also how and why chicken domestication happened,” says archaeologist Keith Dobney of the University of Sydney, who did not participate in the new research.
But the new insights into chickens don’t end there. Using radiocarbon dating, Best’s group determined that 23 chicken bones from 16 sites in Eurasia and Africa were generally younger, in some cases by several thousand years, than previously thought. These bones had apparently settled into lower sediment layers over time, where they were found with items made by earlier human cultures.

Archaeological evidence indicates that chickens and rice cultivation spread across Asia and Africa in tandem, Peters’ group says. But rather than eating early chickens, people may have viewed them as special or sacred creatures. At Ban Non Wat and other early Southeast Asian sites, partial or whole skeletons of adult chickens were placed in human graves. That behavior suggests chickens enjoyed some sort of social or cultural significance, Peters says.
In Europe, several of the earliest chickens were buried alone or in human graves and show no signs of having been butchered.
The expansion of the Roman Empire around 2,000 years ago prompted more widespread consumption of chicken and eggs, Best and colleagues say. In England, chickens were not eaten regularly until around 1,700 years ago, primarily at Roman-influenced urban and military sites. Overall, about 700 to 800 years elapsed between the introduction of chickens in England and their acceptance as food, the researchers conclude. Similar lag times may have occurred at other sites where the birds were introduced.
For weeks, I have been watching coronavirus cases drop across the United States. At the same time, cases were heading skyward in many places in Europe, Asia and Oceania. Those surges may have peaked in some places and seem to be on a downward trajectory again, according to Our World in Data.
Much of the rise in cases has been attributed to the omicron variant’s more transmissible sibling BA.2 clawing its way to prominence. But many public health officials have pointed out that the surges coincide with relaxing of COVID-19 mitigation measures.
People around the world are shedding their masks and gathering in public. Immunity from vaccines and prior infections has helped limit deaths in wealthier countries, but the omicron siblings are very good at evading immune defenses, leading to breakthrough infections and reinfections. Even so, at the end of February, the U.S. Centers for Disease Control and Prevention posted new guidelines for masking, more than doubling the number of cases needed per 100,000 people before officials recommended a return to face coverings (SN: 3/3/22).
Not everyone has ditched their masks. I have observed some regional trends. The majority of people I see at my grocery store and other places in my community in Maryland are still wearing masks. But on road trips to the Midwest and back, even during the height of the omicron surge, most of the faces I saw in public were bare. Meanwhile, I was wearing my N95 mask even when I was the only person doing so. I reasoned that I was protecting myself from infection as best I could. I was also protecting my loved ones and other people around me in case I had unwittingly contracted the virus.
But I will tell you a secret. I don’t really like wearing masks. They can be hot and uncomfortable. They leave lines on my face. And sometimes masks make it hard to breathe. At the same time, I know that wearing a good quality, well-fitting mask greatly reduces the chance of testing positive for the coronavirus (SN: 2/12/21). In one study, N95 or KN95 masks reduced the chance of testing positive by 83 percent, researchers reported in the February 11 Morbidity and Mortality Weekly Report. And school districts with mask mandates had about a quarter of the number of in-school infections as districts where masks weren’t required (SN: 3/15/22).
With those data in mind, I am not ready to go barefaced. And I’m not alone. Nearly 36 percent of the 1,916 respondents to a Science News Twitter poll said that they still wear masks everywhere in public. Another 28 percent said they mask in indoor crowds, and 23 percent said they mask only where it’s mandatory. Only about 12 percent have ditched masks entirely.
Some poll respondents left comments clarifying their answers, but most people’s reasons for masking aren’t clear. Maybe they live in the parts of the country or world where transmission levels are high and hospitals are at risk of being overrun. Maybe they are parents of children too young for vaccination. Perhaps they or other loved ones are unvaccinated or have weakened immune systems that put them at risk for severe disease. Maybe, like me, they just don’t want to get sick — with anything.
Before the pandemic, I caught several colds a year and had to deal with seasonal allergies. Since I started wearing a mask, I haven’t had a single respiratory illness, though allergies still irritate my eyes and make my nose run. I’ve also got some health conditions that raise my risk of severe illness. I’m fully vaccinated and boosted, so I probably won’t die if I catch the virus that causes COVID-19, but I don’t want to test it (SN: 11/8/21). Right now, I just feel safer wearing a mask when I’m indoors in public places.
I’ve been thinking a lot about what would convince me that it was safe to go maskless. What is the number or metric that will mark the boundary of my comfort zone?
The CDC now recommends using its COVID-19 Community Levels map for determining when mask use is needed. That metric is mostly concerned with keeping hospitals and other health care systems from becoming overwhelmed. By that measure, most of the country has the green light to go maskless. I’m probably more cautious than the average person, but the levels of transmission in that metric that would trigger mask wearing — 200 or more cases per 100,000 population — seem high to me, particularly since CDC’s prior recommendations urged masking at a quarter of that level.
The metric is designed for communities, not individuals. So what numbers should I, as an individual, go by? There’s always the CDC’s COVID-19 Integrated County View that tracks case rates and test positivity rates — the percentage of tests that have a positive result. Cases in my county have been ticking up in the last few days, with 391 people having gotten COVID-19 in the last week — that’s about 37 out of every 100,000 people. That seems like relatively low odds of coming into contact with a contagious person. But those are only the cases we know about officially. There may be many more cases that were never reported as people take rapid antigen tests at home or decide not to test. There’s no way to know exactly how much COVID-19 is out there.
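For what it’s worth, those two county figures pin down the arithmetic behind that odds estimate. A quick check in Python; the population is implied by the article’s numbers, not looked up from any official source:

```python
# The two figures quoted above imply the county's population and the
# rough per-person odds. Neither output is from an official source;
# both simply follow from the article's numbers.
cases_last_week = 391
rate_per_100k = 37

population = cases_last_week / rate_per_100k * 100_000
print(f"implied county population: ~{population:,.0f}")  # ~1,056,757

print(f"about 1 in {population / cases_last_week:,.0f} residents "
      "was a reported case last week")  # about 1 in 2,703
```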
And the proportion of cases caused by BA.2 is on the rise, with the more infectious omicron variant accounting for about 35 percent of cases nationwide in the week ending March 19. In the mid-Atlantic states where I live, about 30 percent of cases are now caused by BA.2. But in some parts of the Northeast, that variant now causes more than half of cases. The increase is unsettling but doesn’t necessarily mean the United States will experience another wave of infections as Europe has. Or maybe we will. That uncertainty makes me uncomfortable removing my mask indoors in public right now.
Maybe in a few weeks, if there’s no new surge in infections, I’ll feel comfortable walking around in public with my nose and mouth exposed. Or maybe I’ll wait until the number of cases in my county is in single digits. I’m pretty sure there will come a day when I won’t feel the need to filter every breath, but for me, it’s not that time yet. And I truthfully can’t tell you what my magic number will be.
Here’s what I do know: Even if I do decide to have an unmasked summer, I will be strapping my mask back on if COVID-19 cases begin to rise again.
The sperm whale is an endangered species. A major reason is that the whale oil is heat-resistant and chemically and physically stable. This makes it useful for lubricating delicate machinery. The only substitute is expensive carnauba wax from the leaves of palm trees that grow only in Brazil … [but] wax from the seeds of the jojoba, an evergreen desert shrub, is nearly as good.
Update After sperm whale oil was banned in the early 1970s, the United States sought to replenish its reserves with eco-friendly oil from jojoba seeds (SN: 5/17/75, p. 335). Jojoba oil’s chemical structure is nearly identical to that of sperm whale oil, and the shrub is native to some North American desert ecosystems, making the plant an appealing replacement. Today, jojoba shrubs are cultivated around the world on almost every continent. Jojoba oil is used in hundreds of products, including cosmetics, pharmaceuticals, adhesives and lubricants. Meanwhile, sperm whale populations have started to recover under international anti-whaling agreements (SN: 2/27/21, p. 4).
Surviving on blood alone is no picnic. But a handful of genetic tweaks may have helped vampire bats evolve to become the only mammal known to feed exclusively on the stuff.
These bats have developed a range of physiological and behavioral strategies to exist on a blood-only diet. The genetic picture behind this sanguivorous behavior, however, is still blurry. But 13 genes that the bats appear to have lost over time could underpin some of the behavior, researchers report March 25 in Science Advances.
“Sometimes losing genes in evolutionary time frames can actually be adaptive or beneficial,” says Michael Hiller, a genomicist now at the Senckenberg Society for Nature Research in Frankfurt. Hiller and his colleagues pieced together the genetic instruction book of the common vampire bat (Desmodus rotundus) and compared it with the genomes of 26 other bat species, including six from the same family as vampire bats. The team then searched for genes in D. rotundus that had either been lost entirely or inactivated through mutations.
Of the 13 missing genes, three had been previously reported in vampire bats. These genes are associated with sweet and bitter taste receptors in other animals, meaning vampire bats probably have a diminished sense of taste — all the better for drinking blood. The other 10 lost genes are newly identified in the bats, and the researchers propose several ideas about how the absence of these genes could support a blood-rich diet.
Some of the genes help to raise levels of insulin in the body and convert ingested sugar into a form that can be stored. Given the low sugar content of blood, this processing and storage system may be less active in vampire bats and the genes probably aren’t that useful anymore. Another gene is linked in other mammals to gastric acid production, which helps break down solid food. That gene may have been lost as the vampire bat stomach evolved to mostly store and absorb fluid.
One of the other lost genes inhibits the uptake of iron in gastrointestinal cells. Blood is low in calories yet rich in iron, and vampire bats must drink up to 1.4 times their own weight during each feed, ingesting a potentially harmful amount of iron in the process. Gastrointestinal cells are regularly shed in the vampire bat gut. So by losing that gene, the bats may be absorbing huge amounts of iron into those short-lived cells and quickly excreting it, avoiding an overload, an idea supported by previous research.
One lost gene could even be linked to vampire bats’ remarkable cognitive abilities, the researchers suggest. Because the bats are susceptible to starvation, they share regurgitated blood and are more likely to do so with bats that previously donated to them (SN: 11/19/15). Vampire bats also form long-term bonds and even feed with their friends in the wild (SN: 10/31/19; SN: 9/23/21). In other animals, this gene is involved in breaking down a compound produced by nerve cells that is linked to learning and memory — traits thought to be necessary for the vampire bats’ social abilities.
“I think there are some compelling hypotheses there,” says David Liberles, an evolutionary genomicist at Temple University in Philadelphia who wasn’t involved in the study. It would be interesting to see if these genes were also lost in the other two species of vampire bats, he says, as they feed more on the blood of birds, while D. rotundus prefers to imbibe from mammals.
Whether the diet caused these changes, or vice versa, isn’t known. Either way, it was probably a gradual process over millions of years, Hiller says. “Maybe they started drinking more and more blood, and then you have time to better adapt to this very challenging diet.”
Higher and higher still, the cotton bollworm moth caterpillar climbs, its tiny body ceaselessly scaling leaf after leaf. Reaching the top of a plant, it will die, facilitating the spread of the virus that steered the insect there.
One virus behind this deadly ascent manipulates genes associated with caterpillars’ vision. As a result, the insects are more attracted to sunlight than usual, researchers report online March 8 in Molecular Ecology.
The virus involved in this caterpillar takeover is a type of baculovirus. These viruses may have been evolving with their insect hosts for 200 million to 300 million years, says Xiaoxia Liu, an entomologist at China Agricultural University in Beijing. Baculoviruses can infect more than 800 insect species, mostly the caterpillars of moths and butterflies. Once infected, the hosts exhibit “tree-top disease,” compelled to climb before dying and leaving their elevated, infected cadavers for scavengers to feast upon. The clever trick of these viruses has been known for more than a century, Liu says. But how they turn caterpillars into zombies doomed to ascend to their own deaths wasn’t understood.
Previous research suggested that infected caterpillars exhibit greater “phototaxis,” meaning they are more attracted to light than uninfected insects. Liu and her team confirmed this effect in the laboratory using cotton bollworm moth caterpillars (Helicoverpa armigera) infected with a baculovirus called HearNPV.
The researchers compared infected and uninfected caterpillars’ positions in glass tubes surrounding a climbing mesh under an LED light. Uninfected caterpillars would wander up and down the mesh, but would return to the bottom before pupating. That behavior makes sense because in the wild, this species develops into adults underground. But infected hosts would end up dead at the top of the mesh. The higher the source of light, the higher infected hosts climbed.
The team moved to the horizontal plane to confirm that the hosts were responding to light rather than gravity, placing caterpillars in a hexagonal box with one of the side panels illuminated. By the second day after infection, host caterpillars crawled to the light about four times as often as the uninfected.
When the researchers surgically removed infected caterpillars’ eyes and put the insects in the box, the blinded insects were attracted to the light a quarter as often as unaltered infected hosts. That suggested that the virus was using a caterpillar’s vision against itself.
The team then compared how active certain genes were in various body parts of infected and uninfected caterpillars. Two genes for opsins, the light-sensitive proteins that are fundamental for vision, were detected mostly in the eyes and were more active after an infection with the virus, as was another vision-related gene called TRPL. It encodes a channel in cell membranes involved in the conversion of light into electrical signals.
When the team used the gene-editing tool CRISPR/Cas9 to shut off the opsin genes and TRPL in infected caterpillars, the number of hosts attracted to the light in the box was cut roughly in half. Their height at death on the mesh was also reduced.
Baculoviruses appear capable of commandeering the genetic architecture of caterpillar vision, exploiting an ancient importance of light for insects, Liu says.
Light can cue crucial biological processes in insects, from directing their developmental timing to setting their migration routes.
These viruses were already known to be master manipulators in other ways, tweaking their hosts’ sense of smell, molting patterns and the programmed death of cells, says Lorena Passarelli, a virologist at Kansas State University in Manhattan, who was not involved with the study. The new research shows that the viruses manipulate “yet another physiological host process: visual perception.”
There’s still a lot to learn about this visual hijacking, Passarelli says. It’s unknown, for instance, which of the virus’s genes are responsible for turning caterpillars into sunlight-chasing zombies in the first place.
Human language, in its many current forms, may owe an evolutionary debt to our distant ape ancestors who sounded off in groups of scattered individuals.
Wild orangutans’ social worlds mold how they communicate vocally, much as local communities shape the way people speak, researchers report March 21 in Nature Ecology & Evolution. This finding suggests that social forces began engineering an expanding inventory of communication sounds among ancient ancestors of apes and humans, laying a foundation for the evolution of language, say evolutionary psychologist Adriano Lameira, of the University of Warwick in England, and his colleagues.
Lameira’s group recorded predator-warning calls known as “kiss-squeaks” — which typically involve drawing in breath through pursed lips — of 76 orangutans from six populations living on the islands of Borneo and Sumatra, where they face survival threats (SN: 2/15/18). The team tracked the animals and estimated their population densities from 2005 through 2010, with at least five consecutive months of observations and recordings in each population. Analyses of recordings then revealed how much individuals’ kiss-squeaks changed or remained the same over time. Orangutans in high-density populations, which up the odds of frequent social encounters, concoct many variations of kiss-squeaks, the researchers report. Novel reworkings of kiss-squeaks usually get modified further by other orangutans or drop out of use in crowded settings, they say.
In spread-out populations that reduce social mingling, these apes produce relatively few kiss-squeak variants, Lameira’s group finds. But occasional kiss-squeak tweaks tend to catch on in their original form in dispersed groups, leading to larger call repertoires than in high-density populations.
Low-density orangutan groups — featuring small clusters of animals that occasionally cross paths — might mirror the social settings of human ancestors. Ancient apes and hominids also lived in dispersed groups that could have bred a growing number of ways to communicate vocally, the researchers suspect.