CRISPR inspires new tricks to edit genes

Scientists usually shy away from using the word miracle — unless they’re talking about the gene-editing tool called CRISPR/Cas9. “You can do anything with CRISPR,” some say. Others just call it amazing.

CRISPR can quickly and efficiently manipulate virtually any gene in any plant or animal. In the four years since CRISPR burst onto the scene, researchers have used it to fix genetic diseases in animals, combat viruses, sterilize mosquitoes and prepare pig organs for human transplants. Most experts think that’s just the beginning. CRISPR’s powerful possibilities — even the controversial notions of creating “designer babies” and eradicating entire species — are stunning and sometimes frightening.

So far CRISPR’s biggest impact has been felt in basic biology labs around the world. The inexpensive, easy-to-use gene editor has made it possible for researchers to delve into fundamental mysteries of life in ways that had been difficult or impossible. Developmental biologist Robert Reed likens CRISPR to a computer mouse. “You can just point it at a place in the genome and you can do anything you want at that spot.”

Anything, that is, as long as it involves cutting DNA. CRISPR/Cas9 in its original incarnation is a homing device (the CRISPR part) that guides molecular scissors (the Cas9 enzyme) to a target section of DNA. Together, they work as a genetic-engineering cruise missile that disables or repairs a gene, or inserts something new where it cuts.

Even with all the genetic feats the CRISPR/Cas9 system can do, “there were shortcomings. There were things we wanted to do better,” says MIT molecular biologist Feng Zhang, one of the first scientists to wield the molecular scissors. From his earliest report in 2013 of using CRISPR/Cas9 to cut genes in human and mouse cells, Zhang has described ways to make the system work more precisely and efficiently.

He isn’t alone. A flurry of papers in the last three years have detailed improvements to the editor. Going even further, a bevy of scientists, including Zhang, have dreamed up ways to make CRISPR do a toolbox’s worth of jobs.

Turning CRISPR into a multitasker often starts with dulling the cutting-edge technology’s cutting edge. In many of its new adaptations, the “dead” Cas9 scissors can’t snip DNA. Broken scissors may sound useless, but scientists have upcycled them into chromosome painters, typo-correctors, gene activity stimulators and inhibitors and general genome tinkerers.

“The original Cas9 is like a Swiss army knife with only one application: It’s a knife,” says Gene Yeo, an RNA biologist at the University of California, San Diego. But Yeo and other researchers have bolted other proteins and chemicals to the dulled blades and transformed the knife into a multifunctional tool.

Zhang and colleagues are also exploring trading the Cas9 part of the system for other enzymes that might expand the types of manipulations scientists can perform on DNA and other molecules. With the expanded toolbox, researchers may have the power to pry open secrets of cancer and other diseases and answer new questions about biology.

Many enzymes can cut DNA; the first were discovered in the 1970s and helped to launch the whole field of genetic engineering. What makes CRISPR/Cas9 special is its precision. Scientists can make surgical slices in one selected spot, as opposed to the more scattershot approach of early tools. A few recent gene-editing technologies, such as zinc finger nucleases and TALENs, could also lock on to a single target. But those gene editors are hard to redirect. A scientist who wants to snip a new spot in the genome has to build a new editor. That’s like having to assemble a unique guided missile for every possible target on a map. With CRISPR/Cas9, that’s not necessary.

The secret to CRISPR’s flexibility is its guidance system. A short piece of RNA shepherds the Cas9 cutting enzyme to its DNA target. The “guide RNA” can home in on any place a researcher selects by chemically pairing with DNA’s information-containing building blocks, or bases (denoted by the letters A, T, C and G). Making a new guide RNA is easy; researchers often simply order one online by typing in the desired sequence of bases.
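The targeting logic behind that guidance system is simple enough to sketch in code. The toy Python example below is illustrative only: the function names and sequences are invented, and it deliberately ignores real-world details of Cas9 binding. It shows how a guide sequence can specify a site purely by complementary base pairing with one DNA strand:

```python
# Watson-Crick pairing: A pairs with T, C pairs with G. The guide RNA
# base-pairs with one strand of the DNA double helix (with U standing
# in for T in the RNA alphabet).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def guide_to_dna(guide_rna):
    """Spell a guide RNA in DNA letters (U pairs the same way T does)."""
    return guide_rna.replace("U", "T")

def pairs_with(guide_dna, strand):
    """True if every guide base is complementary to the facing DNA base."""
    return all(PAIRS[g] == s for g, s in zip(guide_dna, strand))

def find_target(genome_strand, guide_rna):
    """Return the first index where the guide would bind this strand, or -1."""
    g = guide_to_dna(guide_rna)
    for i in range(len(genome_strand) - len(g) + 1):
        if pairs_with(g, genome_strand[i:i + len(g)]):
            return i
    return -1

print(find_target("AACGTAGG", "GCAU"))  # 2: the guide pairs with "CGTA"
```

Real guides are about 20 bases long, which is enough to make most targets effectively unique in a genome of billions of bases.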

That guidance system is taking genetic engineers to places they’ve never been. “With CRISPR, literally overnight what had been the biggest frustration of my career turned into an undergraduate side project,” says Reed, of Cornell University. “It was incredible.”

Reed studies how patterns are painted on butterfly and moth wings. Color patterning is one of the fundamental questions evolutionary and developmental biologists have been trying to answer for decades. In 1994, Sean B. Carroll and colleagues discovered that a gene called Distal-less is turned on in butterfly wings in places where eyespots later form. The gene appeared to be needed for eyespot formation, but the evidence was only circumstantial. That’s where researchers have been stuck for 20 years, Reed says. They had no way to manipulate genes in butterfly wings to get more direct proof of the role of different genes in painting wing patterns.

With CRISPR/Cas9, Reed and Cornell colleague Linlin Zhang cut and disabled the Distal-less gene at an early stage of wing development and got an unexpected result: Rather than cause eyespots, Distal-less limits them. When CRISPR/Cas9 knocks out Distal-less, more and bigger eyespots appear, the researchers reported in June in Nature Communications. Reed and colleagues have snipped genes in not just one, but six different butterfly species using CRISPR, he says.

CRISPR cuts genes very well, maybe too well, says neuroscientist Marc Tessier-Lavigne of Rockefeller University in New York City. “The Cas9 enzyme is just so prolific. It cuts and recuts and recuts,” he says. That constant snipping can result in unwanted mutations in genes that researchers are editing or in genes that they never intended to touch. Tessier-Lavigne and colleagues figured out how to tame the overeager enzyme and keep it from julienning the genes of human stem cells grown in lab dishes. With better control, the researchers could make one or two mutations in two genes involved in early-onset Alzheimer’s disease, they reported in the May 5 Nature. Growing the mutated stem cells into brain cells showed that increasing the number of mutated copies of the genes also boosts production of the amyloid-beta peptide that forms plaques in Alzheimer’s-afflicted brains. The technology could make stem cells better mimics of human diseases.

While Tessier-Lavigne and others are working to improve the CRISPR/Cas9 system, building better guide RNAs and increasing the specificity of its cuts, some researchers are turning away from snippy Cas9 altogether.

Nuanced edits
Cas9 isn’t entirely to blame for the mess created when it causes a double-stranded break by slicing through both rails of the DNA ladder. “The cell’s response to double-stranded breaks is the source of a lot of problems,” says David Liu, a chemical biologist at Harvard University. A cell’s go-to method for fixing a DNA breach is to glue the cut ends back together. But often a few bases are missing or bits get stuck where they don’t belong. The result is more genome “vandalism” than editing, Liu says, quoting Harvard colleague George Church.

Liu wanted a gene editor that wouldn’t cause any destructive breaches: One that could A) go to a specific site in a gene and B) change a particular DNA base there, all without cutting DNA. The tool didn’t exist, but in Cas9, Liu and colleagues saw the makings of one, if they could tweak it just a bit.

They started by dulling Cas9’s cutting edge, effectively killing the enzyme. The “dead” Cas9 could still grip the guide RNA and ride it to its destination, but it couldn’t slice through DNA’s double strands. Liu and colleagues then attached a hitchhiking enzyme, whose job is to initiate a series of steps to change the DNA base C into a T, or a G to an A. The researchers had to tinker with the system in other ways to get the change to stick. Once they worked out the kinks, they could make permanent single base-pair changes in 15 to 75 percent of the DNA they targeted without introducing insertions and deletions the way traditional CRISPR editing often does. Liu and collaborators reported the accomplishment in Nature in May. A similar base editor, reported in Science in August by researchers in Japan, may be useful for editing DNA in bacteria and other organisms that can’t tolerate having their DNA cut.

There are 12 possible combinations of DNA base swaps. The hitchhiking enzyme that Liu used, cytidine deaminase, can make two of the swaps. Liu and others are working to fuse enzymes to Cas9 that can do the 10 others. Other enzyme hitchhikers may make it possible to edit single DNA bases at will, Liu says. Such a base editor could be used to fix single mutations that cause genetic diseases such as cystic fibrosis or muscular dystrophy. It might even correct the mutations that lead to inherited breast cancer.
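The count of 12 comes from simple combinatorics: each of the four DNA bases can be changed into any of the other three. A quick sketch:

```python
from itertools import permutations

BASES = "ACGT"

# Every ordered pair of distinct bases is one possible swap: 4 * 3 = 12.
swaps = list(permutations(BASES, 2))

print(len(swaps))  # 12

# The cytidine deaminase editor described above covers two of them:
# C-to-T and, read on the opposite strand, G-to-A.
assert ("C", "T") in swaps and ("G", "A") in swaps
```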

Rewriting the score
Dead Cas9 is already helping researchers tinker with DNA in ways they couldn’t before. Variations on the dull blade may help scientists solve one of the great mysteries of biology: How does the same set of 20,000 genes give rise to so many different types of cells in the body?

The genome is like a piano, says Jonathan Weissman, a biochemist at the University of California, San Francisco. “You can play a huge variety of different music with only 88 keys by how hard you hit the keys, what keys you mix up and the timing.” By dialing down or turning up the activity of combinations of genes at precise times during development, cells are coaxed into becoming hundreds of different types of body cells.

For the last 20 years, researchers have been learning more about that process by watching when certain genes turn on and off in different cells. Gene activity is controlled by a dizzying variety of proteins known as transcription factors. When and where a transcription factor acts is at least partly determined by chemical tags on DNA and the histone proteins that package it. Those tags are known collectively as epigenetic marks. They work something like the musical score for an orchestra, telling the transcription factor “musicians” which notes to hit and how loudly or softly to play. So far, scientists have only been able to listen to the music. With dead Cas9, researchers can create molecules that will change epigenetic notes at any place in the score, Weissman says, allowing researchers to arrange their own music.

Epigenetic marks have been implicated in addiction, cancer, mental illness, obesity, diabetes and heart disease. Scientists haven’t been able to prove that epigenetic marks are really behind these and other ailments, because they could never go into a cell and change just one mark on one gene to see if it really produced a sour note.

One such epigenetic mark, the attachment of a chemical called an acetyl group to a particular amino acid in a histone protein, is often associated with active genes. But no one could say for sure that the mark was responsible for making those genes active. Charles Gersbach of Duke University and colleagues reported last year in Nature Biotechnology that they had fused dead Cas9 to an enzyme that could make that epigenetic mark. When the researchers placed the epigenetic mark on certain genes, activity of those genes shot up, evidence that the mark really does boost gene activity. With such CRISPR epigenetic editors in hand, researchers may eventually be able to correct errant marks to restore harmony and health.

Weissman’s lab group was one of the first to turn dead Cas9 into a conductor of gene activity. Parking dead Cas9 on a gene is enough to nudge down the volume of some genes’ activity by blocking the proteins that copy DNA into RNA, the researchers found. Fusing a protein that silences genes to dead Cas9 led to even better noise-dampening of targeted genes. The researchers reported in Cell in 2014 that they could reduce gene activity by 90 to 99 percent for some genes using the silencer (which Weissman and colleagues call CRISPRi, for interference). A similar tool, created by fusing proteins that turn on, or activate, genes to dead Cas9 (called CRISPRa, for activator) lets researchers crank up the volume of activity from certain genes. In a separate study, published in July in the Proceedings of the National Academy of Sciences, Weissman and colleagues used their activation scheme to find new genes that make cancer cells resistant to chemotherapy drugs.

RNA revolution
New, refitted Cas9s won’t just make manipulating DNA easier. They also could revolutionize RNA biology. There are already multiple molecular tools for grabbing and cutting RNA, Yeo says. So for his purposes, scissors weren’t necessary or even desirable. The homing ability of CRISPR/Cas9 is what Yeo found appealing.

He started simple, by using a tweaked CRISPR/Cas9 to tag RNAs to see where they go in the cell. Luckily, in 2014, Jennifer Doudna at the University of California, Berkeley — one of the researchers who in 2012 introduced CRISPR/Cas9 — and colleagues reported that Cas9 could latch on to messenger RNA molecules, or mRNAs (copies of the protein-building instructions contained in DNA). In a study published in April in Cell, Doudna, Yeo and colleagues strapped fluorescent proteins to the back of a dead Cas9 and pointed it toward mRNAs from various genes.

With the glowing Cas9, the researchers tracked mRNAs produced from several different genes in living cells. (Previous methods for pinpointing RNA’s location in a cell killed the cell.) In May, Zhang of MIT and colleagues described a two-color RNA-tracking system in Scientific Reports. Yet another group of researchers described a CRISPR rainbow for giving DNA a multicolored glow, also in living cells. That glow allowed the team to pinpoint the locations of up to six genes and see how the three-dimensional structure of chromosomes in the nucleus changes over time, the researchers reported in the May Nature Biotechnology. A team from UC San Francisco reported in January in Nucleic Acids Research that it had tracked multiple genes using combinations of two color tags.

But Yeo wants to do more than watch RNA move around. He envisions bolting a variety of different proteins to Cas9 to manipulate and study the many steps an mRNA goes through between being copied from DNA and having its instructions read to make a protein. Learning more about that multistep process and what other RNAs do in a cell could help researchers understand what goes wrong in some diseases, and maybe learn how to fix the problems.

Zhang wants to improve Cas9, but he would also like other versatile tools. He and colleagues are looking for such tools in bacteria.

CRISPR/Cas9 was first discovered in bacteria as a rudimentary immune system for fighting off viruses (SN: 12/12/15, p. 16). It zeroes in on and then shreds the viral DNA. Researchers most often use the Cas9 cutting enzyme from Streptococcus pyogenes bacteria.

But almost half of all bacteria have CRISPR immune systems, scientists now know, and many use enzymes other than Cas9. In the bacterium Francisella novicida U112, Zhang and colleagues found a gene-editing enzyme, Cpf1, which does things a little differently than Cas9 does. It has a different “cut here” signal that could make it more suitable than Cas9 for cutting DNA in some cases, the team reported last October in Cell. Cpf1 can also chop one long guide RNA into multiple guides, so researchers may be able to edit several genes at once. And Cpf1 cuts DNA so that one strand of the DNA is slightly longer than the other. That could make it easier to insert new genes into DNA.

Zhang more recently found an enzyme in the bacterium Leptotrichia shahii that could tinker with RNA. The RNA-cutting enzyme is called C2c2, he and colleagues reported August 5 in Science. Like Cas9, C2c2 uses a guide RNA to lead the way, but instead of slicing DNA, it chops RNA.

Zhang’s team is exploring other CRISPR/Cas9-style enzymes that could help them “edit or modulate or interact with a genome more efficiently or more effectively,” he says. “Our search is not done yet.”

The explosion of new ways to use CRISPR hasn’t ended. “The field is advancing so rapidly,” says Zhang. “Just looking at how far we have come in the last three and a half years, I think what we’ll see coming in the next few years will just be amazing.”

Pterosaurs weren’t all super-sized in the Late Cretaceous

Pterosaurs didn’t have to be gargantuan to survive in the Late Cretaceous.

Fragmentary fossils of a roughly 77-million-year-old pterosaur found in British Columbia suggest it had a wingspan of just 1.5 meters, close to that of a bald eagle. The ancient flier is the smallest pterosaur discovered during this time period — by a lot, report paleontologist Elizabeth Martin-Silverstone of the University of Southampton in England and colleagues August 30 in Royal Society Open Science.

Dozens of larger pterosaurs, some with wings spanning more than 10 meters (nearly the length of a school bus), have been unearthed. But until now, scientists had found only two small-scale versions, with wingspans 2.5 to 3 meters long, from the period stretching from 66 million to 100 million years ago.

Some scientists blamed competition with birds for the scarcity of little flying reptiles. Researchers have proposed that “the only way pterosaurs could survive was by evolving completely crazy massive sizes,” Martin-Silverstone says.

The new find, she says, may mean that “pterosaurs were doing better than we thought.”

Mars lander silent as mission scientists work out what went wrong

The Schiaparelli Mars lander has remained silent since its attempted landing October 19 on the Red Planet. All data transmitted by the lander during its descent have been relayed to Earth, and mission scientists are now in the thick of trying to figure out what went wrong.

“I am extremely confident that we’ll be able to fully understand what happened,” ESA spacecraft operations manager Andrea Accomazzo said at an October 20 news briefing. Schiaparelli is most likely on the surface, but its condition remains unknown.

Early data indicate that Schiaparelli survived most of its parachute entry, but in the last few seconds before jettisoning the chute, something unexpected happened. Mission scientists cannot say yet what that “something” was. The retrorocket designed to slow it down further did appear to fire, but for a shorter time than expected. Mission scientists also don’t yet know if all the rockets fired as planned. Further details will come with the analysis of data received from the lander.

Other spacecraft orbiting Mars will continue to listen for a signal from Schiaparelli, which has enough battery power to last a few Martian days, maybe more. The lander was designed as an experiment to test technologies and protocols for safely dropping a payload on the surface of the Red Planet, such as a rover planned to arrive in 2021.

The Trace Gas Orbiter, which also arrived as part of the ExoMars mission, appears healthy and in orbit around the Red Planet, ready to undertake an investigation of trace gases in the Martian atmosphere.

50 years later, vaccines have eliminated some diseases

More vaccines promised — “The decline of poliomyelitis among more than 350 million people of the world … (offers) a promise of vaccines that will soon be used against other diseases considered hopeless or untreatable until recently. Vaccines against some of the many viruses causing the common cold, as well as those causing rubella, mumps and other diseases are on the way.” — Science News, November 19, 1966

UPDATE
In 1971, vaccines against mumps and rubella were combined with the measles vaccine into one MMR shot. All three diseases are now very rare in the United States. But persistent pockets of lower vaccination rates (spurred in part by the repeatedly debunked belief that vaccines cause autism) have allowed sporadic outbreaks of all three illnesses. A vaccine against the common cold has not yet materialized. Creating one vaccine that protects against the hundred or so strains of rhinoviruses that can cause colds is not easy. But some scientists are giving it a shot, along with vaccines against HIV, Ebola and Zika.

Human genes often best Neandertal ones in brain, testes

Humans and Neandertals are still in an evolutionary contest, a new study suggests.

Geneticist Joshua Akey of the University of Washington in Seattle and colleagues examined gene activity of more than 700 genes in which at least one person carried a human and a Neandertal version of the gene. Human versions of some genes are more active than Neandertal versions, especially in the brain and testes, the researchers report February 23 in Cell. In other tissues, some Neandertal versions of genes were more active than their human counterparts.

In the brain, human versions were favored over Neandertal variants in the cerebellum and basal ganglia. That finding may help explain why Neandertals had proportionally smaller cerebellums than humans do. Neandertal versions of genes in the testes, including some needed for sperm function, were also less active than human varieties. That finding is consistent with earlier studies that suggested male human-Neandertal hybrids may have been infertile, Akey says.

But Neandertal genes don’t always lose. In particular, the Neandertal version of an immunity gene called TLR1 is more active than the human version, the researchers discovered.

Lopsided gene activity may help explain why carrying Neandertal versions of some genes has been linked to human diseases, such as lupus and depression (SN: 3/5/16, p. 18). Usually, both copies contribute equally to a gene’s total activity. Less robust activity of a version inherited from Neandertals might cause total activity to dip to unhealthy levels, for instance.

Lakes worldwide feel the heat from climate change

About 40 kilometers off Michigan’s Keweenaw Peninsula, in the waters of Lake Superior, rises the stone lighthouse of Stannard Rock. Since 1882, it has warned sailors in Great Lakes shipping lanes away from a dangerous shoal. But today, Stannard Rock also helps scientists monitor another danger: climate change.

Since 2008, a meteorological station at the lighthouse has been measuring evaporation rates at Lake Superior. And while weather patterns can change from year to year, Lake Superior appears to be behaving in ways that, to scientists, indicate long-term climate change: Water temperatures are rising and evaporation is up, which leads to lower water levels in some seasons. That’s bad news for hydropower plants, navigators, property owners, commercial and recreational fishers and anyone who just enjoys the lake.

When most people think of the physical effects of climate change, they picture melting glaciers, shrinking sea ice or flooded coastal towns (SN: 4/16/16, p. 22). But observations like those at Stannard Rock are vaulting lakes into the vanguard of climate science. Year after year, lakes reflect the long-term changes of their environment in their physics, chemistry and biology. “They’re sentinels,” says John Lenters, a limnologist at the University of Wisconsin–Madison.

Globally, observations show that many lakes are heating up — but not all in the same way or with the same ecological consequences. In eastern Africa, Lake Tanganyika is warming relatively slowly, but its fish populations are plummeting, leaving people with less to eat. In the U.S. Upper Midwest, quicker-warming lakes are experiencing shifts in the relative abundance of fish species that support a billion-dollar-plus recreational industry. And at high global latitudes, cold lakes normally covered by ice in the winter are seeing less ice year after year — a change that could affect all parts of the food web, from algae to freshwater seals.

Understanding such changes is crucial for humans to adapt to what is likely to come, limnologists say. Indeed, some northern lakes will probably release more methane into the air as temperatures rise — exacerbating the climate shift that is already under way.

Lake layers
Lakes and ponds cover about 4 percent of the land surface not already covered by glaciers. That may sound like a small fraction, but lakes play a key role in several planetary processes. Lakes cycle carbon between the water’s surface and the atmosphere. They give off heat-trapping gases such as carbon dioxide and methane, while simultaneously tucking away carbon in decaying layers of organic muck at lake bottoms. They bury nearly half as much carbon as the oceans do.

Yet the world’s more than 100 million lakes are often overlooked in climate simulations. That’s surprising, because lakes are far easier to measure than oceans. Because lakes are relatively small, scientists can go out in boats or set out buoys to survey temperature, salinity and other factors at different depths and in different seasons.

A landmark study published in 2015 aimed to synthesize these in-water measurements with satellite observations for 235 lakes worldwide. In theory, lake warming is a simple process: The hotter the air above a lake, the hotter the waters get. But the picture is far more complicated than that, the international team of researchers found.

On average, the 235 lakes in the study warmed at a rate of 0.34 degrees Celsius per decade between 1985 and 2009. Some warmed much faster, like Finland’s Lake Lappajärvi, which soared nearly 0.9 degrees each decade. A few even cooled, such as Blue Cypress Lake in Florida. Puzzlingly, there was no clear trend in which lakes warmed and which cooled. The most rapidly warming lakes were scattered across different latitudes and elevations.
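For scale, that average rate compounds over the study's 24-year window. A back-of-the-envelope calculation, purely illustrative:

```python
# Average warming rate from the 2015 synthesis, applied over its window.
rate_per_decade = 0.34          # degrees Celsius per decade
decades = (2009 - 1985) / 10    # 2.4 decades

total_warming = rate_per_decade * decades
print(round(total_warming, 2))  # 0.82 degrees C for the average lake
```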

Even some that were nearly side by side warmed at different rates from one another — Lake Superior, by far the largest of the Great Lakes, is warming much more rapidly, at a full degree per decade, than others in the chain, although Huron and Michigan are also warming fast.

“Even though lakes are experiencing the same weather, they are responding in different ways,” says Stephanie Hampton, an aquatic biologist at Washington State University in Pullman.

Such variability makes it hard to pin down what to expect in the future. But researchers are starting to explore factors such as lake depth and lake size (intuitively, it’s less teeth-chattering to swim in a small pond in early summer than a big lake).

Depth and size play into stratification, the process through which some lakes separate into layers of different temperatures. Freshwater is densest at 4° C, just above freezing. In spring, using the Great Lakes as an example, the cold surface waters begin to warm; when they reach 4°, they become dense enough to sink. The lake’s waters mix freely and become much the same temperature at all depths.

But then, throughout the summer, the upper waters heat up relatively quickly. The lake stops mixing and instead separates into layers, with warm water on top and cold, dense water at the bottom. It stays that way until autumn, when chilly air temperatures cool the surface waters to 4°. The newly dense waters sink again, mixing the lake for the second time of the year.
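That seasonal cycle comes down to freshwater's density peaking at 4° C. A toy sketch in Python makes the mechanism concrete; the parabolic density formula and its coefficient here are illustrative stand-ins, not a limnological standard:

```python
def water_density(temp_c):
    """Toy freshwater density (kg/m^3), maximal at 4 degrees C.
    The parabolic form and coefficient are illustrative only."""
    return 1000.0 - 0.007 * (temp_c - 4.0) ** 2

def is_stratified(surface_temp, bottom_temp):
    """A lake stays layered while its surface water is lighter (less
    dense) than its deep water; it mixes when densities converge."""
    return water_density(surface_temp) < water_density(bottom_temp)

# Midsummer: warm surface over 4-degree depths -> stable layers.
print(is_stratified(surface_temp=20.0, bottom_temp=4.0))  # True

# Autumn: the surface cools to 4 degrees, densities equalize, the
# surface water sinks, and the lake turns over.
print(is_stratified(surface_temp=4.0, bottom_temp=4.0))   # False
```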

Lake Superior is warming so quickly because it is stratifying earlier and earlier each year. It used to separate into its summer layers during mid- to late July, on average. But rising air temperatures mean that it is now stratifying about a month earlier — giving the shallow surface layers much more time to get toasty each summer. “If you hit that starting point in June, now you’ve got all summer to warm up that top layer,” Lenters says.

Deep lakes warm very slowly in the spring, and small changes in water temperature at the end of winter can lead to large changes in the timing of summer stratification for these lakes. Superior is about 406 meters deep at its greatest point, so it is particularly vulnerable to such shifts.

In contrast, shallow lakes warm much more quickly in the spring, so the timing of their summer stratification is much less variable than for deep lakes. Lake Erie is only 64 meters deep at its maximum, which is why Erie is not experiencing big changes in its stratification start date. Erie is warming one-tenth as fast as Superior, just 0.1 degrees per decade.

Superior is also warming because of a decline in cloud cover over the Great Lakes in recent years; more heat from solar radiation hits the lakes, Lenters said at a limnology meeting in Honolulu in March. Why the cloud cover is changing isn’t known — it could be natural variability. But the increased sunlight means another source of warming for Superior and the other Great Lakes.

On top of that, evaporation, measured from spots like Stannard Rock, also plays into the complexity. High evaporation rates in a warm autumn can actually lead to more ice cover the following winter and slower ice breakup in the spring, because the water is colder after evaporation. “When lakes sweat, they cool off,” Lenters says. All these factors conspire to complicate the picture of why Superior is warming so quickly, and what people in the Great Lakes region can do about it.

A new reality
Warming water — even small changes — can have a big impact on a lake’s ecology. One of the most famous examples is Lake Tanganyika in eastern Africa. It has been warming relatively slowly, about 0.2 degrees per decade. But that is enough to make it more stratified year-round and less likely to mix. With layers stagnating, nutrients that used to rise from the bottom of the lake become trapped down low, Hampton says.

With fewer nutrients reaching the upper waters, lake productivity has plummeted. Since the late 1970s, catches of sardines and sprats have declined by as much as 50 percent, and the hundreds of thousands of people who depend on the lake for food have had to find additional sources of protein. Factors such as overfishing may also play a role, but a study published last August in Proceedings of the National Academy of Sciences found that lake temperatures in the last century were the highest of at least the previous 500 years.

Elsewhere, lake warming seems to be shifting the relative abundances of fish within certain lakes. This is apparent in a study of walleye (Sander vitreus), a popular recreational fishing target in the lakes of the U.S. Upper Midwest.

In Wisconsin, recreational freshwater fishing brings in more than $1.5 billion annually. So officials were worried when, around 2000, anglers and biologists began reporting that walleye numbers seemed to be dropping.

“We’ve seen declines in some of our most valuable fish,” says Jordan Read, a limnologist at the U.S. Geological Survey in Middleton, Wis. Hoping to figure out why, Read and colleagues analyzed water temperatures in 2,148 Wisconsin lakes from 1989 to 2014. Some of these lakes had seen populations of walleye drop as populations of largemouth bass (Micropterus salmoides) increased. Largemouth bass are also popular catches, although not as popular as walleye.

The scientists simulated how lake temperatures would probably rise through the year 2089 and how that might affect walleye survival in the state’s lakes. The team used a measure that describes whether walleye can spawn and their young can survive in a particular environment, compared with the relative abundance of largemouth bass. Up to 75 percent of the lakes studied would no longer be able to support young walleye by 2089, while the number of lakes that could support lots of bass could increase by 60 percent, the researchers estimate in the April Global Change Biology.

“Bass and walleye seem to be responding to the same temperature threshold but in opposite directions,” says Gretchen Hansen, a fisheries scientist at the Minnesota Department of Natural Resources in St. Paul who led the work.

The reason isn’t yet clear. Physiologically, walleye should still be able to survive in the higher temperatures. But something is already causing them to wane — perhaps they have fewer food sources, or they spawn less successfully. Field studies are under way to try to answer that question, Hansen says.

Variability in lake warming offers hope for the walleye. The study identified lakes where the walleye might be able to hold on. Some of these places have warmed less than others, making them more amenable to walleye, even as largemouth bass take over other lakes.

If the researchers can identify lakes that are likely to keep walleye healthy in the future, then officials can foster walleye spawning in those places and keep the state’s fishing industry healthy for decades to come. “While the outlook isn’t great, there are … lakes that are a good target of management action,” Read says. The scientists are now expanding their analysis into Minnesota and other neighboring states.

Less ice
Ecological changes put into motion during a particularly cold or hot time can send ripples during the following seasons, researchers are finding. “What happens in previous seasons sometimes matters more than the current season,” Lenters says. This is especially true for lakes at high latitudes that are covered in ice each winter but may see less ice as temperatures rise. Ice acts as an insulator, protecting the waters from big changes in the air temperature above. When the ice finally melts in spring, the water is exposed to warming from the atmosphere and from sunlight. “It’s a way the temperature can really rapidly increase in those lakes,” Hampton says.

Siberia’s Lake Baikal, for example, sees three to four weeks less ice cover than it did a century ago. That shift could affect Baikal seals (Pusa sibirica), the world’s only freshwater seals, which depend on ice cover to birth and shelter their pups each spring. There are no hard data on seal declines, in part because annual surveys of seal populations ceased in the early 1990s when the Soviet Union broke apart. “But if the ice duration is too short, then the pups may be exposed to predators before they’re ready,” Hampton says.

More broadly, and at other lakes, big questions remain about how winter and summer ecosystems connect. Biologists are assessing what wintertime ecosystems look like now, as a framework for understanding future change.

In a survey of 101 ice-covered lakes, Hampton and colleagues found more plankton under the ice than they had expected; chlorophyll levels were 43 percent of what they were in the summer. “It surprised me it was that high,” she says. “Some of these are snow-covered lakes not getting a lot of light.” The team reported its puzzling findings in January in Ecology Letters.

As winter shortens, fish may find more nutrients available to them earlier in the year than usual. Other algae-grazing creatures may become more abundant as the food web adjusts to what’s available with less ice cover.

More methane
Warming lakes themselves might exacerbate climate change. As temperatures rise, methane from microbes called archaea at the lake’s bottom bubbles up through the water column — particularly in northern lakes — and adds to the atmosphere’s greenhouse gas load.

At the Honolulu meeting, biogeochemist Tonya DelSontro of the University of Quebec in Montreal reported on methane release from boreal lakes, those lying between 50° and 70° N in regions such as Canada and Siberia. The boreal region contains up to half of the world’s carbon, with lakes acting as an important source and sink.

DelSontro simulated how the boreal zone’s 9 million lakes would behave in the future. Just a 1 degree rise in surface temperature would boost methane emissions from boreal lakes by about 10 percent, DelSontro found. That’s not taking into account other factors such as a lengthening of the ice-free season, which would also put more methane into the air.

And at the University of Exeter in England, lake expert Gabriel Yvon-Durocher has been working to measure, on a small scale, how exactly ponds and lakes will respond to rising temperatures. His team built a series of experimental ponds, each warmed by a set amount above the ambient temperature for years at a time.

After heating the ponds by 4 to 5 degrees over seven years, the scientists found the ponds’ methane emissions more than doubled. In the same period, the ponds’ ability to suck down carbon dioxide was cut almost in half. Such shifts could make climate change even worse, the team wrote in February in Nature Climate Change.

With so much variability among lakes, and so much uncertainty remaining about where they may head in the future, Lenters argues that limnologists need to keep gathering as much information as possible. Just as the Stannard Rock lighthouse is providing key data on Lake Superior, other locations need to be pressed into service to keep an eye on what lakes are doing. “There are aspects of the Pacific Ocean we know better than Lake Superior,” he says. “Lakes are woefully understudied.”

Learning takes brain acrobatics

Peer inside the brain of someone learning. You might be lucky enough to spy a synapse pop into existence. That physical bridge between two nerve cells seals new knowledge into the brain. As new information arrives, synapses form and strengthen, while others weaken, making way for new connections.

You might see more subtle changes, too, like fluctuations in the levels of signaling molecules, or even slight boosts in nerve cell activity. Over the last few decades, scientists have zoomed in on these microscopic changes that happen as the brain learns. And while that detailed scrutiny has revealed a lot about the synapses that wire our brains, it isn’t enough. Neuroscientists still lack a complete picture of how the brain learns.

They may have been looking too closely. When it comes to the neuroscience of learning, zeroing in on synapse action misses the forest for the trees.

A new, zoomed-out approach attempts to make sense of the large-scale changes that enable learning. By studying the shifting interactions between many different brain regions over time, scientists are beginning to grasp how the brain takes in new information and holds onto it.

These kinds of studies rely on powerful math. Brain scientists are co-opting approaches developed in other network-based sciences, borrowing tools that reveal in precise, numerical terms the shape and function of the neural pathways that shift as human brains learn.

“When you’re learning, it doesn’t just require a change in activity in a single region,” says Danielle Bassett, a network neuroscientist at the University of Pennsylvania. “It really requires many different regions to be involved.” Her holistic approach asks, “what’s actually happening in your brain while you’re learning?” Bassett is charging ahead to both define this new field of “network neuroscience” and push its boundaries.

“This line of work is very promising,” says neuroscientist Olaf Sporns of Indiana University Bloomington. Bassett’s research, he says, has great potential to bridge gaps between brain-imaging studies and scientists’ understanding of how learning happens. “I think she’s very much on the right track.”

Already, Bassett and others have found tantalizing hints that the brains that learn best have networks that are flexible, able to rejigger connections on the fly to allow new knowledge in. Some brain regions always communicate with the same neural partners, rarely switching to others. But brain regions that exhibit the most flexibility quickly swap who they’re talking with, like a parent who sends a birthday party invite to the preschool e-mail list, then, moments later, shoots off a work memo to colleagues.

In a few studies, researchers have witnessed this flexibility in action, watching networks reconfigure as people learn something while inside a brain scanner. Network flexibility may help several types of learning, though too much flexibility may be linked to disorders such as schizophrenia, studies suggest.
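The “flexibility” these studies track comes down to simple counting. As a rough, hypothetical sketch (the function and labels below are illustrative, not the published analysis code): after assigning each brain region to a community, or module, of strongly connected partners in each time window of a scan, a region’s flexibility is the fraction of consecutive windows in which it switches modules.

```python
# Sketch of a network-flexibility statistic: how often does each brain
# region change which community (module) it belongs to over time?

def flexibility(assignments):
    """assignments[t][i] is the community label of region i in time
    window t. Returns, for each region, the fraction of consecutive
    windows in which that region switched communities."""
    n_windows = len(assignments)
    n_regions = len(assignments[0])
    flex = []
    for i in range(n_regions):
        switches = sum(
            assignments[t][i] != assignments[t + 1][i]
            for t in range(n_windows - 1)
        )
        flex.append(switches / (n_windows - 1))
    return flex

# Toy example: region 0 keeps the same partners the whole time;
# region 1 swaps communities at every step.
labels = [
    [0, 0],
    [0, 1],
    [0, 0],
    [0, 2],
]
print(flexibility(labels))  # → [0.0, 1.0]
```

In real studies the community labels come first from a separate step, such as multilayer community detection on functional MRI connectivity, and flexibility is then averaged across regions or compared between fast and slow learners.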

Not surprisingly, some researchers are rushing to apply this new information, testing ways to boost brain flexibility for those of us who may be too rigid in our neural connections.

“These are pretty new ideas,” says cognitive neuroscientist Raphael Gerraty of Columbia University. The mathematical and computational tools required for this type of research didn’t exist until recently, he says. So people just weren’t thinking about learning from a large-scale network perspective. “In some ways, it was a pretty boring mathematical, computational roadblock,” Gerraty says. But now the road is clear, opening “this conceptual avenue … that people can now explore.”

It takes a neural village
That conceptual avenue is more of a map, made of countless neural roads. Even when a person learns something very simple, large swaths of the brain jump in to help. Learning an easy sequence of movements, like tapping out a brief tune on a keyboard, prompts activity in the part of the brain that directs finger movements. The action also calls in brain areas involved in vision, decision making, memory and planning. And finger taps are a pretty basic type of learning. In many situations, learning calls up even more brain areas, integrating information from multiple sources, Gerraty says.

He and colleagues caught glimpses of some of these interactions by scanning the brains of people who had learned associations between two faces. Only one of the faces was then paired with a reward. In later experiments, the researchers tested whether people could figure out that the halo of good fortune associated with the one face also extended to the face it had been partnered with earlier. This process, called “transfer of learning,” is something that people do all the time in daily life, such as when you’re wary of the salad at a restaurant that recently served tainted cheese.

Study participants who were good at applying knowledge about one thing — in this case, a face — to a separate thing showed particular brain signatures, Gerraty and colleagues reported in 2014 in the Journal of Neuroscience. Connections between the hippocampus, a brain structure important for memory, and the ventromedial prefrontal cortex, involved in self-control and decision making, were weaker in good learners than in people who struggled to learn. The scans, performed several days after the learning task, revealed inherent differences between brains, the researchers say. The experiment also turned up other neural network differences among these regions and larger-scale networks that span the brain.

Children who have difficulty learning math, when scanned, also show unexpected brain connectivity, according to research by neuroscientist Vinod Menon of Stanford University and colleagues. Compared with kids without disabilities, children with developmental dyscalculia who were scanned while doing math problems had more connections, particularly among regions involved in solving math problems. That overconnectivity, described in 2015 in Developmental Science, was a surprise, Menon says, since earlier work had suggested that these math-related networks were too weak. But it may be that too many links create a system that can’t accommodate new information. “The idea is that if you have a hyperconnected system, it’s not going to be as responsive,” he says.

There’s a balance to be struck, Menon says. Neural pathways that are too weak can’t carry necessary information, and pathways that are too connected won’t allow new information to move in. But the problem isn’t as simple as that. “It’s not that everything is changing everywhere,” he says. “There is a specificity to it.” Some connections are more important than others, depending on the task.

Neural networks need to shuttle information around quickly and fluidly. To really get a sense of this movement as opposed to snapshots frozen in time, scientists need to watch the brain as it learns. “The next stage is to figure out how the networks actually shift,” Menon says. “That’s where the studies from Dani Bassett and others will be very useful.”

Flexing in real time
Bassett and colleagues have captured these changing networks as people learn. Volunteers were given simple sequences to tap out on a keyboard while undergoing a functional MRI scan. During six weeks of scanning as people learned the task, neural networks in their brains shifted around. Some connections grew stronger and some grew weaker, Bassett and her team reported in Nature Neuroscience in 2015.

People who quickly learned to tap the correct sequence of keys showed an interesting neural trait: As they learned, they shed certain connections between their frontal cortex, the outermost layer of the brain toward the front of the head, and the cingulate, which sits toward the middle of the brain. This connection has been implicated in directing attention, setting goals and making plans, skills that may be important for the early stages of learning but not for later stages, Bassett and colleagues suspect. Compared with slow learners, fast learners were more likely to have shed these connections, a process that may have made their brains more efficient.

Flexibility seems to be important for other kinds of learning too. Reinforcement learning, in which right answers get a thumbs up and wrong answers are called out, also taps into brain flexibility, Gerraty, Bassett and others reported online May 30 at bioRxiv.org. This network comprises many points on the cortex, the brain’s outer layer, and a deeper structure known as the striatum. Other work on language comprehension, published by Bassett and colleagues last year in Cerebral Cortex, found some brain regions that were able to quickly form and break connections.

These studies captured brains in the process of learning, revealing “a much more interesting network structure than what we previously thought when we were only looking at static snapshots,” Gerraty says. The learning brain is incredibly dynamic, he says, with modules breaking off from partners and finding new ones.

While the details of those dynamics differ from study to study, there is an underlying commonality: “It seems that part of learning about the world is having parts of your brain become more flexible, and more able to communicate with different areas,” Gerraty says. In other words, the act of learning takes flexibility.

But too much of a good thing may be bad. While performing a recall task in a scanner, people with schizophrenia had higher flexibility among neural networks across the brain than did healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. “That suggests to me that while flexibility is good for healthy people, there is perhaps such a thing as too much flexibility,” Bassett says.

Just how this flexibility arises, and what controls it, is unknown. Andrea Stocco, a cognitive neuroscientist at the University of Washington in Seattle, suspects that a group of brain structures called the basal ganglia, deep within the brain, has an important role in controlling flexibility. He compares this region, which includes the striatum, to an air traffic controller who shunts information to where it’s most needed. One of the basal ganglia’s jobs seems to be shutting things down. “Most of the time, the basal ganglia is blocking something,” he says. Other researchers have found evidence that crucial “hubs” in the cortex help control flexibility.

Push for more
Researchers don’t yet know how measures of flexibility in brain regions relate to the microscopic changes that accompany learning. For now, the macro and the micro views of learning are separate worlds. Despite that missing middle ground, researchers are charging ahead, looking for signs that neural flexibility might offer a way to boost learning aptitude.

It’s possible that external brain stimulation may enhance flexibility. After receiving brain stimulation carefully aimed at a known memory circuit, people were better able to recall lists of words, scientists reported May 8 in Current Biology. If stimulation can boost memory, some argue, the technique could enhance flexibility and perhaps learning too.

Certain drugs show promise. DXM, or dextromethorphan, found in some cough medicines, blocks proteins that help regulate nerve cell chatter. Compared with a placebo, the compound made some brain regions in healthy people more flexible and able to rapidly switch partners, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. She is also studying whether neurofeedback — a process in which people use real-time monitoring of their brain activity to try to make their neural patterns more flexible — can help.

Something even simpler might work for boosting flexibility. On March 31 in Scientific Reports, Bassett and colleagues described their network analyses of an unusual subject. For a project called MyConnectome, neuroscientist Russ Poldrack, then at the University of Texas at Austin, had three brain scans a week for a year while assiduously tracking measures that included mood. Bassett and her team applied their mathematical tools to Poldrack’s data to get measurements of his neural flexibility on any given scan day. The team then looked for associations with mood. The standout result: When Poldrack was happiest, his brain was most flexible, for reasons that aren’t yet clear. (Flexibility was lowest when he was surprised.)

Those results are from a single person, so it’s unknown how well they would generalize to others. What’s more, the study identifies only a link, not that happiness causes more flexibility or vice versa. But the idea is intriguing, if not obvious, Bassett says. “Of course, no teacher is really going to say we’re doing rocket science if we tell them we should make the kids happier and then they’ll learn better.” But finding out exactly how happiness relates to learning is important, she says.

The research is just getting started. But already, insights on learning are coming quickly from the small group of researchers viewing the brain as a matrix of nodes and links that deftly shift, swap and rearrange themselves. Zoomed out, network science brings to the brain “a whole new set of hypotheses and new ways of testing them,” Bassett says.

Microbes hobble a widely used chemo drug

Some bacteria may shield tumor cells against a common chemotherapy drug.

Certain types of bacteria make an enzyme that inactivates the drug gemcitabine, researchers report in the Sept. 15 Science. Gemcitabine is used to treat patients with pancreatic, lung, breast and bladder cancers.

Bacteria that produce the enzyme cytidine deaminase converted the drug to an inactive form. That allowed tumor cells to survive gemcitabine treatment in lab dishes and mouse studies, Leore Geller of the Weizmann Institute of Science in Rehovot, Israel, and colleagues discovered. More than 98 percent of the enzyme-producing microbes belong to the Gammaproteobacteria class, which includes E. coli and about 250 bacterial genera.

Pancreatic tumors taken from human patients also carried the enzyme-producing bacteria. Of 113 pancreatic ductal adenocarcinoma samples studied, 86 contained gemcitabine-inactivating bacteria.

Antibiotics may correct the problem. In the study, Geller and colleagues infected mice that had colon cancer with the enzyme-producing bacteria. Tumors grew rapidly in infected mice treated with gemcitabine alone. Giving the mice antibiotics helped gemcitabine kill tumor cells, increasing the number of tumor cells going through a type of cell death called apoptosis from about 15 percent to 60 percent or more. That result may indicate that combinations of gemcitabine and antibiotics could make chemotherapy more effective for some cancer patients.

Alligators eat sharks — and a whole lot more

Alligators don’t just stick to freshwater and the prey they find there. These crafty reptiles can live quite easily, at least for a bit, in salty waters and find plenty to eat — including crabs, sea turtles and even sharks.

“They should change the textbooks,” says James Nifong, an ecologist with the Kansas Cooperative Fish and Wildlife Research Unit at Kansas State University in Manhattan, who has spent years documenting the estuarine gator diet.

Nifong’s most recent discovery, splashed all over the news last month, is that the American alligator (Alligator mississippiensis) eats at least three species of shark and two species of rays, he and wildlife biologist Russell Lowers report in the September Southeastern Naturalist.

Lowers captured a female gator with a young Atlantic stingray in her jaws near where he works at Kennedy Space Center in Cape Canaveral, Florida. And he and Nifong gathered several other eyewitness accounts: A U.S. Fish and Wildlife employee spotted a gator consuming a nurse shark in a Florida mangrove swamp in 2003. A birder photographed an alligator eating a bonnethead shark in a Florida salt marsh in 2006. One of Nifong’s collaborators, a marine turtle researcher, saw gators consuming both bonnethead and lemon sharks in the late 1990s. And Nifong found yet another report of a gator eating a bonnethead shark in Hilton Head, S.C., after their paper was published. All of these snacks required gators to venture into salty waters.

But shark may not be the most surprising item on the alligator estuarine menu. Nifong spent years catching hundreds of wild gators and pumping their stomachs to figure out what they eat, work that relies “on electrical tape, duct tape and zip ties,” Nifong says. And he found that the menu is pretty long.

To snag an alligator, he uses a big blunted hook or, with smaller animals, just grabs the animal and hauls it into the boat. He gets a noose around its neck. Then the researchers tape the mouth shut, take body measurements (everything from weight to toe length) and get blood or urine samples.

Once that’s out of the way, the team will strap the gator to a board with Velcro ties or rope. Then, it’s time to untape the mouth, quickly insert a piece of pipe to hold it open, and tape the alligator’s mouth around the pipe. The pipe, Nifong says, is there “so they can’t bite down.” And that’s important, because next someone has to stick a tube down the gator’s throat and hold it there to keep the animal’s throat open.

Finally, “we fill [the stomach] up with water very slowly so we don’t injure the animal,” Nifong says. “Then we do basically the Heimlich maneuver.” Pressing down on the abdomen forces the gator to give up its stomach contents. Usually.

“Sometimes it goes better than other times,” he says. “They can just decide to not let it out.” Then the researchers carefully undo all their work to let the gator loose.

Back in the lab, Nifong and his colleagues teased out what they could find in those stomach contents, and looked for more clues about the animals’ diet in the blood samples. The team found that the gators were eating a rich marine diet, including small fish, mammals, birds, insects and crustaceans. They’ll even eat fruit and seeds. The sharks and rays didn’t show up in these studies (nor did sea turtles, which gators have also been spotted munching on). But Nifong and Lowers speculate that’s because the tissue of those animals gets digested very quickly. So if a gator had eaten a shark more than a few days before being caught, there was no way to know.

Because alligators don’t have any salt glands, “they’re subject to the same pressures as me or you when being out in saltwater,” Nifong says. “You’re losing water, and you’re increasing salt in your blood system.” That can lead to stress and even death, he notes. So the gators tend to just go back and forth between saltwater and freshwater. They can also close off their throat with a cartilaginous shield and shut their nostrils to keep salty water out. And when they eat, they’ll tip their head up to let the saltwater drain out before gulping down their catch.

What alligators eat isn’t as important a finding as the discovery that they regularly travel between saltwater and freshwater environments, Nifong says. And, he notes, “it occurs across a wide variety of habitats across the U.S. southeast.” That’s important because the gators are moving nutrients from rich marine waters into poorer, fresh waters. And they may be having a larger effect on estuarine food webs than anyone had imagined.

For instance, one of the prey items on the alligator menu is blue crab. Gators “scare the bejesus out of them,” Nifong says. And when gators are around, blue crabs decrease their predation of snails, which might then eat more of the cordgrass that forms the base of the local ecosystem. “Understanding that an alligator has a role in that kind of interaction,” Nifong points out, is important when planning conservation efforts.

Artificial insulin-releasing cells may make it easier to manage diabetes

Artificial cells made from scratch in the lab could one day offer a more effective, patient-friendly diabetes treatment.

Diabetes, which affects more than 400 million people around the world, is characterized by the loss or dysfunction of insulin-making beta cells in the pancreas. For the first time researchers have created synthetic cells that mimic how natural beta cells sense blood sugar concentration and secrete just the right amount of insulin. Experiments with mice show that these cells can regulate blood sugar for up to five days, researchers report online October 30 in Nature Chemical Biology.
If the mouse results translate to humans, diabetics could inject these artificial beta cells to automatically regulate their blood sugar levels for days at a time.

That would be “a huge leap forward” for diabetic patients who currently have to check their blood sugar and inject insulin several times a day, says Omid Veiseh, a bioengineer at Rice University in Houston who wasn’t involved in the research. “Even if it were just a one-day thing, it would still be impressive,” he says.

Fashioned from human-made materials and biological ingredients like proteins, these faux cells contain insulin-filled pouches much like the insulin-carrying compartments inside real beta cells. And, similar to a natural beta cell, when one of these artificial beta cells is surrounded by excess blood sugar, its insulin sacs fuse with its outer membrane and eject insulin into the bloodstream. As blood sugar levels drop, insulin packets stop fusing with the membrane, which stems the cell’s insulin secretion.

Fabricating artificial insulin delivery systems that actually imitate the inner workings of real beta cells for ultrafine blood sugar regulation is “an engineering feat,” says Patrik Rorsman, a diabetes researcher at the University of Oxford who wasn’t involved in the work. The cellular imitations are “not as perfect as the beta cells we’re equipped with when we’re healthy,” he adds. For one thing, the faux cells eventually run out of insulin to release. But Rorsman believes that such artificial cells present a viable diabetes treatment.

Unlike transplanted beta cells — or other types of real cells genetically engineered to release insulin for diabetes treatment (SN: 1/15/11, p. 9) — these artificial cells could be mass-produced and have a much longer shelf life than live cells, says study coauthor Zhen Gu, a biomedical engineer at the University of North Carolina at Chapel Hill.

When Gu and colleagues injected their synthetic cells into diabetic mice, the animals’ blood sugar levels normalized within an hour and stayed that way up to five days, when the cells ran out of insulin. The researchers plan to perform further tests on lab animals to assess the fake cells’ long-term health effects before running clinical trials.

Even for patients who manage their insulin with automated mechanical pumps (SN Online: 5/8/10), synthetic cells offer the advantage of more precise, real-time blood sugar regulation, says Michael Strano, a bioengineer at MIT. The creation of the new faux cells not only offers a potential diabetes treatment, “but it’s also a bellwether. It’s slightly ahead of its time,” says Strano. “I think therapeutics of the future are going to look like this.”