
Scientists discover two new species of ancient, burrowing mammal ancestors

image: Holotype specimens of Fossiomanus sinensis and Jueconodon cheni

Image: 
MAO Fangyuan

A joint research team led by Dr. MAO Fangyuan and Dr. ZHANG Chi from the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) of the Chinese Academy of Sciences and Prof. MENG Jin from the American Museum of Natural History has discovered two new species of mammal-like, burrowing animals that lived about 120 million years ago in what is now northeastern China.

The new species, described in Nature on April 7, are distantly related. However, they independently evolved traits to support their digging lifestyle. They represent the first "scratch diggers" discovered in this ecosystem.

"There are many hypotheses about why animals dig into the soil and live underground," said Prof. MENG, lead author of the study. "For protection against predators, to maintain a temperature that's relatively constant, or to find food sources like insects and plant roots. These two fossils are a very unusual, deep-time example of animals that are not closely related and yet both evolved the highly specialized characteristics of a digger."

The fossil mammaliamorph species--predecessors to mammals--were discovered in the Jehol Biota, which represents the Early Cretaceous epoch, about 145 to 100 million years ago.

One is a mammal-like reptile called a tritylodontid and represents the first of its kind to be identified in this biota. About a foot in length, it was named Fossiomanus sinensis. The other, Jueconodon cheni, is a eutriconodontan, a distant cousin of modern placental mammals and marsupials; eutriconodontans were common in the biota. It is about seven inches long.

Mammals that are adapted to burrowing have specialized traits for digging. The researchers found some of these hallmark features, including shorter limbs, strong forelimbs with robust hands, and a short tail, in both Fossiomanus and Jueconodon. In particular, these characteristics point to a type of digging behavior known as "scratch digging," accomplished mainly by the claws of the forelimbs.

"This is the first convincing evidence for fossorial life in those two groups," said Dr. MAO. "It also is the first case of scratch diggers we know about in the Jehol Biota, which was home to a great diversity of animals, from dinosaurs and insects to plants."

The animals also share another unusual feature: an elongated vertebral column. Typically, from the neck to the hip, mammals have 26 vertebrae. However, Fossiomanus had 38 vertebrae--a staggering 12 more than the usual number--while Jueconodon had 28.

To try to determine how these animals got their elongated axial skeleton, the paleontologists turned to recent studies in developmental biology, finding that the variation could be attributed to gene mutations that determine the number and shape of the vertebrae during early embryonic development of these animals. Variation in vertebrae number can be found in modern mammals as well, for example, in elephants, manatees, and hyraxes.

Credit: 
Chinese Academy of Sciences Headquarters

Inheriting acquired traits requires trailblazer modifications to unfertilized eggs

image: Representative images of H2AK119ub1 and H3K27me3 immunostaining analysis in control and matKO embryos. M, maternal pronucleus; P, paternal pronucleus.

Image: 
RIKEN

An epigenetic study at the RIKEN Center for Integrative Medical Sciences shows that in mouse egg cells, modifications to histone H2A at lysine 119 lay the groundwork for inherited DNA functional modifications from the mother.

In books and movies, a group of people on a special mission always sends out a scout to do reconnaissance before they proceed. Sometimes, the scouts leave signs or markers that let the group know where they should go. Researchers led by Azusa Inoue at the RIKEN Center for Integrative Medical Sciences in Japan have discovered a mark left behind in unfertilized egg cells that determines which DNA modifications will be inherited if the egg is fertilized. Specifically, they found that without initial modifications to histone H2A at lysine 119--technically called H2AK119ub1--later inheritable modifications would not occur. When such eggs were fertilized and allowed to develop, one consequence of this deficit was an enlarged placenta after embryo implantation. This study was published in Nature Genetics on April 5.

For many years we were taught in school that acquired traits were not inherited. In some sense this was correct; stretching your neck a lot to get food will not result in children with longer necks. However, your DNA function can be modified throughout your life. For example, DNA structure in chromosomes is supported by proteins called histones. When histones are modified, they can change how genes are expressed in the body. This is epigenetics, and a previous study by Inoue and colleagues showed that acquired tri-methylation of histone H3 at lysine 27 (thankfully abbreviated to H3K27me3) in mammalian egg cells can be inherited. In the new study, the team used a technology called low-input CUT&RUN to begin answering the question of how this happens.

First, the researchers examined the timing of the two different histone modifications. They found that every gene exhibiting H3K27me3 in mouse egg cells also showed H2AK119ub1. Suspecting its importance, the researchers knocked out two proteins that deposit H2AK119ub1 in egg cells. Low-input CUT&RUN showed that the knock-out egg cells had much less H3K27me3 than controls at a subset of genes that normally bring H3K27me3 into the next generation. Thus, H2AK119ub1 acts like a kind of marker left by a scout, identifying where subsequent H3K27me3 should follow. "We discovered that H2AK119ub1 is necessary for maternal inheritance of H3K27me3, making the H2AK119ub1-H3K27me3 pathway a major player in transgenerational epigenetic inheritance in mammals," says Inoue.

The researchers then found something they didn't expect. Testing showed that the loss of H3K27me3 was itself inherited by fertilized embryos and could not be reversed. Furthermore, this deficiency led to increased lethality--miscarriages--and enlarged placentas. "It was surprising to find that defects in an egg's histone modification are irreversibly inherited by embryos and cause long-term consequences in development," says Inoue.

The results thus showed that despite normal DNA in the mouse egg cell, if the proper instructions--first H2AK119ub1 and then H3K27me3 modifications--were missing, miscarriages and enlarged placentas could occur. These findings have clinical implications, especially for reproductive medicine and placental defects. "The next step," says Inoue, "is to see whether any diseases or surrounding environments can affect the heritable histone modification."

Credit: 
RIKEN

UMD tracks the adoption of green infrastructure, from water conservation to policy

In a new paper published in the Journal of Environmental Policy & Planning, the University of Maryland teamed up with local researchers to examine green infrastructure adoption and leadership in Tucson, Arizona, an interesting case study where grassroots efforts have helped to drive policy change in a growing urban area surrounded by water-constrained desert. Green infrastructure (any installation that manages water or environmental factors, such as rain gardens, stormwater basins, or urban tree cover) is slowly transitioning from a fringe activity to an important part of the way governments and municipalities are dealing with water and the local effects of a changing climate. By examining the trajectory of sustainability and the role of policy entrepreneurship in broader adoption, Tucson can provide a peek into the future of green infrastructure in the Southwest and across the country.

"This work came out of a long term collaboration in Arizona trying to understand a lot of aspects of how green infrastructure (GI) is used there," says Mitchell Pavao-Zuckerman, assistant professor in Environmental Science and Technology at UMD. "We are looking at the functionality of GI, its practical benefits, but also how governance and learning around GI changes, inhibits, or helps adoption. Looking at evolution and adoption, we can see different types of players that are key, like policy entrepreneurs who are early adopters or innovators in either practice or policy and how they help diffuse knowledge around the city. Learning these lessons, we gain a lot of insight into how policy is changing, and how other areas could adapt going forward."

Funded by the National Science Foundation's (NSF) Coupled Human and Natural Systems program, Pavao-Zuckerman collaborated with the University of Arizona, the Udall Center for Studies in Public Policy in Tucson, and the University of Virginia to examine these GI trends. The researchers took a mixed-methods approach to the work, examining policy, documentation, and newspaper reports to create a timeline of GI developments in the history of the city. The timeline was then used as a starting point when interviewing stakeholders and GI players in Tucson, providing a richer context and backdrop to the interview data.

"The timeline and our approach to gathering this information is innovative; it puts a method behind anecdotal stories," explains Pavao-Zuckerman. "Studying this kind of process in an urban setting around GI is a new thing, so that is one of the unique pieces of this paper. In lots of places, you have this knowledge and history of how things have come about, but using the timeline and interviews to document how things have changed, and putting it within theories of adaptation and governance - these are new frontiers for working with GI and urban environments."

As Pavao-Zuckerman describes it, Tucson provides a compelling look at how GI emerges in places that don't necessarily have water quality mandates, which are prominent in Maryland and the area surrounding the Chesapeake Bay. In Tucson and much of the Southwest, water sustainability and conservation is often the driver.

"In Maryland with the Bay, a lot of GI is implemented as a way to meet water quality standards and meet pollution reduction targets," says Pavao-Zuckerman. "But in this case, there aren't water quality mandates, and the focus is really on harvested water. A lot of water consumption in the Southwest goes to domestic irrigation for lawns and gardens, which can sometimes be up to 50% of potable water usage, so the demand is huge. You also see integration with urban tree canopy and stormwater basins that can help mitigate heat islands and buffer for future climate change when things get even hotter out there. So you see the same types of things there as in the Bay area, like curb cuts to redirect stormwater and urban tree cover, but it is coming from a different place. So it is interesting to see how you get to the same place from a different start point."

One thing that Pavao-Zuckerman and the team found in Tucson that the rest of the country can learn from is an overall culture of what is known as water ethics. Similar to the concept of One Health (the intersection and interconnectedness of animal, human, and environmental health), Tucson water municipalities call it One Water.

"Part of what we see going forward is a more holistic way of thinking about water," says Pavao-Zuckerman. "Stormwater is usually thought of as a waste stream that we want to get rid of as quickly as possible, but people are starting to see it as a resource and not a waste. The water municipality there calls it One Water, thinking about the integration of the whole water system. Instead of thinking of stormwater and drinking water as two separate things, we think about water collectively, and that gives you a different perspective for management. That mindset will hopefully also start to happen in other places."

Other key findings from the paper include the need to think about GI across all scales, from individual and neighborhood adoption to the city level. Additionally, there is a need for more equitable dispersion of GI to ensure environmental and social justice.

"A lot of this practice is done effectively voluntarily," explains Pavao-Zuckerman. "Neighborhoods and the city will promote it, but the city isn't necessarily going out and implementing most of these structures - that is up to the home or property owner. Because implementation has started from policy entrepreneurs and individuals in Tucson, it didn't happen randomly and also didn't happen necessarily in communities where it is most needed. Most cities are like this, with more affluent communities having more implementation, and places that have less money or more people of color tend to have less implementation, so they are bearing the brunt of the environmental harms that they are trying to solve. So that needs to be part of the trajectory going forward, thinking about how to shift practice to reflect that and encourage cooperation at all levels and scales to be more equitable."

Overall, this paper provides a landscape of GI implementation and gives researchers, policy makers, and advocates alike a chance to understand where things are coming from so they can think more strategically about where things are headed.

"It lets us do backcasting and forwardcasting," emphasizes Pavao-Zuckerman. "We can see where things came from and new threads that are starting to emerge. GI is important because it adds different aspects of resilience to an environment. It helps to buffer environmental extremes, and it adds more flexibility throughout the landscape to withstand and respond to extreme events. We think of climate change as this thing that is going to be hotter, wetter, or drier, but it is the extreme ends of weather events that really hit cities and people hard, and GI is something that we think is going to really help. We are paying particular attention to the role of people and organizations in driving GI change in this work to understand it as a way for how people can shape urban transformations to make a more sustainable and resilient community."

Credit: 
University of Maryland

Genome sequencing reveals a new species of bumblebee

image: The newly discovered Bombus incognitus looks identical to another species, Bombus sylvicola. It is impossible to say which one of the two is seen in this photo.

Image: 
Jennifer Geib

While studying genetic diversity in bumblebees in the Rocky Mountains, USA, researchers from Uppsala University discovered a new species. They named it Bombus incognitus and present their findings in the journal Molecular Biology and Evolution.

Bumblebees are vital for agriculture and the natural world due to their role in plant pollination. There are more than 250 species of bumblebee, and they are found mainly in northern temperate regions of the planet. Alarmingly, many species are declining due to the effects of climate change, and those with alpine and arctic habitats are particularly threatened. However, the full diversity of bumblebee species in these environments is still unknown.

Matthew Webster's research group at Uppsala University, together with colleagues in the USA, studied genetic diversity in bumblebees in the Rocky Mountains, Colorado, by collecting hundreds of samples and sequencing their genomes. Surprisingly, the data revealed the presence of a new species, indistinguishable in appearance from Bombus sylvicola but clearly distinct at the genetic level. The authors named this species Bombus incognitus.

By comparing the genomes of Bombus sylvicola and Bombus incognitus, the team was able to learn how this new species formed. They found signals consistent with gene flow between the species during their evolution. They also identified parts of chromosomes that are incompatible between the species, which act as genetic barriers to gene flow and were likely important in causing the species to separate.

These results indicate that the number of bumblebee species in arctic and alpine environments may be larger than previously thought. It is possible that mountainous terrain is conducive to speciation. Cold-adapted populations could become isolated at high altitudes during periods of warming in their evolutionary history, leading to the formation of new species. It is also possible that additional genome sequencing of bumblebees will reveal even more cryptic species that have so far gone undetected.

Credit: 
Uppsala University

Ant responses to social isolation resemble those of humans

image: An ant of the species Temnothorax nylanderi

Image: 
photo/©: Susanne Foitzik, JGU

Ants react to social isolation in much the same way as humans and other social mammals do. A study by an Israeli-German research team has revealed alterations to the social and hygienic behavior of ants that had been isolated from their group. The research team was particularly surprised by the fact that immune and stress genes were downregulated in the brains of the isolated ants. "This makes the immune system less efficient, a phenomenon that is also apparent in socially isolated humans - notably at present during the COVID-19 crisis," said Professor Susanne Foitzik, who headed up the study at Johannes Gutenberg University Mainz (JGU). The study on a species of ant native to Germany has recently been published in Molecular Ecology.

Effects of isolation in social insects little studied so far

Humans and other social mammals experience isolation from their group as stressful, with negative impacts on their general well-being and physical health. "Isolated people become lonely, depressed, and anxious, develop addictions more easily, and suffer from a weakened immune system and impaired overall health," added Professor Inon Scharf, lead author of the article and cooperation partner of the Mainz research group at Tel Aviv University in Israel. While the effects of isolation have been extensively studied in social mammals such as humans and mice, less is known about how social insects respond in comparable situations - even though they live in highly evolved social systems. Ants, for instance, live their entire lives as members of the same colony and are dependent on their colony mates. The worker ants relinquish their own reproductive potential and devote themselves to feeding the larvae, cleaning and defending the nest, and searching for food, while the queen does little more than lay eggs.

The research team looked at the consequences of social isolation in the case of ants of the species Temnothorax nylanderi. These ants inhabit cavities in acorns and sticks on the ground in European forests, forming colonies of a few dozen workers. Young workers engaged in brood care were taken singly from 14 colonies and kept in isolation for varying lengths of time, from one hour to a maximum of 28 days. The study was conducted between January and March 2019 and highlighted three particular aspects in which changes were observed. After the end of their isolation, the workers were less interested in their adult colony mates, but the length of time they spent in brood contact increased; they also spent less time grooming themselves. "This reduction in hygienic behavior may make the ants more susceptible to parasites, but it is also a feature typical of social deprivation in other social organisms," explained Professor Susanne Foitzik.

Stress due to isolation adversely affects the immune system

While the study revealed significant changes in the behaviors of the isolated insects, its findings with regard to gene activity were even more striking: Many genes related to immune system function and stress response were downregulated. In other words, these genes were less active. "This finding is consistent with studies on other social animals that demonstrated a weakening of the immune system after isolation," said Professor Inon Scharf.

The discovery by the team of biologists led by Professor Susanne Foitzik is the first of its kind, combining behavioral and genetic analyses on the effects of isolation in social insects. "Our study shows that ants are as affected by isolation as social mammals are and suggests a general link between social well-being, stress tolerance, and immunocompetence in social animals," concluded Foitzik, summarizing the results of the Israeli-German study. Foitzik is also collaborating with her Israeli partner Professor Inon Scharf and with co-author and group leader Dr. Romain Libbrecht of JGU on a new joint project on the fitness benefits and the molecular basis of spatial learning in ants, funded by the German Research Foundation (DFG).

Credit: 
Johannes Gutenberg Universitaet Mainz

Study shows why crossing obstacles is difficult for patients with Parkinson's disease

image: The scientists detected incapacities related to gait timing and foot placement. Their discoveries serve as a basis for the development of an exercise protocol that mitigates the difficulty.

Image: 
Movi-Lab

By Karina Ninni | Agência FAPESP – A multidisciplinary research group affiliated with the Department of Physical Education’s Human Movement Laboratory (Movi-Lab) at São Paulo State University (UNESP) in Bauru, Brazil, measured step length synergy while crossing obstacles in patients with Parkinson’s disease and concluded that it was 53% lower than in healthy subjects of the same age and weight. Step length is one of the main variables affected by the disease.

Synergy, defined as combined operation, refers in this case to the capacity of the locomotor (or musculoskeletal) system to adapt movement while crossing an obstacle, combining factors such as speed and foot position, for example. Improving synergy in Parkinson’s patients while they are walking can make a significant difference to their quality of life, as they tend to fall three times more often on average than healthy people of the same age.

“There are patients in our exercise group who fall three or four times a week. It’s important to understand how these patients’ gait and locomotion adapt while crossing obstacles so that we can improve step-length synergy. This approach enables us to refine the exercise protocol, improve locomotion, and try to reduce fall frequency,” said Fabio Augusto Barbieri, a professor in UNESP’s Department of Physical Education and its Movement Science Graduate Program.

An article on the study is published in the journal Gait & Posture. Barbieri is the last author. The first author is mechanical engineer Satyajit Ambike, a professor in the Department of Health and Kinesiology at Purdue University in the United States. The study is the first to report on impaired locomotor synergies in Parkinson’s patients.

“The innovation in our study is its focus on gait timing, or rhythmicity, the constancy with which patients position their feet to maintain locomotion,” Barbieri said. “This can be evaluated by measuring step-length synergy. Synergy presupposes a goal and has to do with the way the locomotor system adjusts to achieve it. In our case, we investigated how the system adapts to achieve the objective of crossing an obstacle during locomotion.”

The researchers found that Parkinson’s patients are less able to adapt the position of their feet than healthy people as they approach and cross an obstacle. “The locomotor system always tries to adapt in order to maintain constancy during locomotion. Absent this constancy, we may make mistakes that can lead to a fall,” Barbieri said. “Parkinson’s patients are less constant in positioning their feet while walking, and gait timing tends to be unstable as a result. Their speed rises and falls as they walk, and step length varies along with foot placement.”

Obstacles

Thirteen Parkinson’s patients and 11 neurologically healthy controls participated in the study. All participants were over 50. To be eligible, they had to be able to walk without assistance, have normal sight and hearing (with or without lenses and hearing aids), have no orthopedic or neurological diseases apart from Parkinson’s, and be able to understand and follow instructions. The patients had been taking medication for Parkinson’s (levodopa) for at least three months before data collection.

The participants had to walk along a gangway (length 8.5 m, width 3.5 m), and cross a foam rubber obstacle (height 15 cm, width 60 cm, depth 5 cm) placed 4 m from the starting point. Gait speed was not imposed but chosen by each participant. No instructions were given regarding which leg should cross the obstacle first, but its position was adjusted for each participant so that the right leg had to lead.

“We tried to standardize the task so that all the subjects crossed the obstacle with their right leg leading,” Barbieri said. “The idea was to ensure there was no interference from other factors in the locomotion pattern. The height of the obstacle was 15 cm because that’s the standard curb height in Brazil. We thought it would be best to stick to the standard.”

A number of systems need to work together for synergy to happen in terms of achieving an objective, he explained. “When the distance between the toes and the obstacle [before it is crossed] and between the heel and the obstacle [after it is crossed] varies a lot, the risk of contacting the obstacle increases. Being too close to the obstacle before crossing entails having to raise the leading leg so high that it may prove impossible. If the trailing foot comes down too close to the obstacle after crossing, the heel is likely to touch it,” he said, adding that gait timing should ideally be constant and the foot should not come too close to the obstacle on either side.

Biomechanics

Step-length synergy was measured using a methodology derived from mechanical engineering and adapted to the study of human movement. The methodology is not specific to gait analysis or Parkinson’s; it was adapted from a set of techniques developed by Ambike and Mark Latash of Pennsylvania State University to measure upper-limb strength.

Eight motion capture cameras used in the study were purchased with funding from FAPESP (grant no. 2017/19516-8). The study was also supported by a visiting researcher grant.

Twenty reflective markers were placed according to a specific gait analysis model on the body of each participant in the experiment. “While the subject is walking along the gangway toward the obstacle and crossing it, the cameras emit infrared light, which is reflected by the markers. The cameras capture the position of the markers, enabling us to determine step length and duration. Gait analysis software does the other calculations,” Barbieri explained.
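As a rough illustration of the kind of computation the gait analysis software performs, consider the minimal sketch below. The function names, frame rate, and coordinate convention are assumptions made for the example; this is not the Movi-Lab pipeline.

```python
# Toy sketch (not the authors' software): once the motion-capture system
# has identified the frames and heel-marker positions of successive heel
# strikes, step length is the distance between the two heels along the
# walking direction, and step duration is the time between strikes.

def step_length(lead_heel_x, trail_heel_x):
    """Anterior-posterior distance (m); x is the walking direction."""
    return abs(lead_heel_x - trail_heel_x)

def step_duration(frame_a, frame_b, fps=100):
    """Time (s) between consecutive heel strikes at a given frame rate."""
    return (frame_b - frame_a) / fps

# Hypothetical heel-strike data: x-positions (m) and frame indices
print(f"step length: {step_length(2.35, 1.78):.2f} m")    # 0.57 m
print(f"step duration: {step_duration(412, 467):.2f} s")  # 0.55 s
```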

The study was the first time this methodology was applied to gait analysis, according to Barbieri. “Another innovation was that we used a single variable to detect possible gait timing-related incapacities relatively simply, facilitating more consistent future intervention to improve gait timing via training,” he said. “This is the point of gait analysis in general. You want to determine possible variables of changes in gait and modify intervention on that basis.”

The same researchers have since begun a study to find out whether the height of the obstacle affects step-length synergy. “We want to know if this synergy changes because the obstacle is higher or lower. This concerns the environment in which the patient moves. If there are obstacles of a certain height in the area, they may cause problems and lead to falls, so we can modify the environment to facilitate locomotion,” Barbieri said.

The article “Step length synergy while crossing obstacles is weaker in patients with Parkinson’s disease” by Satyajit Ambike, Tiago Penedo, Ashwini Kulkarni, Felipe Balistieri Santinelli and Fabio A. Barbieri can be retrieved from: www.sciencedirect.com/science/article/abs/pii/S0966636221000011.

Journal

Gait & Posture

DOI

10.1016/j.gaitpost.2021.01.002

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Crunching on coral

image: A variety of reef fishes swim among the staghorn coral in Mo'orea, French Polynesia as a researcher swims overhead.

Image: 
UC Santa Barbara

You might not think an animal made out of stone would have much to worry about in the way of predators, and that's largely what scientists had thought about coral. Although corallivores like parrotfish and pufferfish are well known to biologists, their impact on coral growth and survival was believed to be small compared to factors like heatwaves, ocean acidification and competition from algae.

But researchers at UC Santa Barbara have found that young corals are quite vulnerable to these predators, regardless of whether a colony finds itself alone on the reef or surrounded by others of its kind. The research, led by doctoral student Kai Kopecky, appears in the journal Coral Reefs.

Kopecky and his co-authors were curious how corals can reemerge following large disturbances like cyclones and marine heatwaves, which periodically devastate the reefs of Mo'orea, French Polynesia, where the research was conducted.

"Mo'orea is prone to big heat shocks, storm waves, cyclones and predatory sea star outbreaks," said co-author Adrian Stier, an associate professor in the Department of Ecology Evolution & Marine Biology and one of Kopecky's advisors. "It just wipes the slate clean in terms of coral death. And sometimes, just a few years later, you can swim around and see thriving life. We're still really curious about what allows these ecosystems to bounce back."

Scientists had implicated predators in shaping coral population dynamics, but there hadn't really been many direct studies. "People who study coral reefs have thought a lot about the supply of new babies coming from elsewhere, or limitation by the amount of nutrients, or competition with algae as important drivers of coral recovery," Stier continued. "But there hasn't been as much done on the importance of predators as a limiting factor."

After reviewing the literature on coral growth, mortality and predation, Kopecky decided to focus on the effects of predation and density on young coral colonies.

He planted small nubbins of Pacific staghorn coral at various locations on the reef either alone, in a group of four, or in a group of eight. Some of these groups were protected by metal cages, while others were left exposed. For the unprotected coral, Kopecky sought to determine whether high density increased or decreased predation on the staghorn. For the protected groups, he was curious how density influenced coral growth.

The researchers found that protection was key to these small corals' futures. After 30 days out on the reef, nearly all of the unprotected nubbins had been completely consumed. In fact, density had virtually no effect on this outcome.

The researchers let the experiment run for an entire year. When they returned, virtually none of the unprotected specimens remained. On the other hand, the caged corals had outgrown their accommodations by year's end.

"The corals that were protected grew all the way into the upper corners of the cages and were poking little branches out," Kopecky recalled. "They formed a cube of coral inside the cages, whereas the ones that were exposed to predators were just barely hanging on."

Corals are not typically fast-growing organisms, but staghorn coral grows quickly, giving it a competitive edge following disturbances that remove large amounts of coral. But according to Kai, staghorn corals are like the popcorn chicken of the reef: irresistible to a hungry corallivore.

The protected corals grew so quickly that Kai had to adopt a different way to measure them, because at a certain point the nubbins fused, and he could no longer unscrew their base plates to measure them in the lab.

So, if there's no protection in numbers for these tasty staghorns, how do they ever survive their infancy?

The corals benefit from the protection of fish like the dusky farmerfish, which farms algae on the reef. These farmer fish doggedly defend their territories, offering protection to any small coral that happens to settle in their range, Kopecky explained. And while algae and coral are often considered archenemies -- with the former able to outcompete the latter -- by munching on their crops, the farmer fish keep the algae in check. This enables the corals to get through the stage where they're vulnerable to predation.

In fact, researchers rarely see staghorn corals in large colonies absent the protection of these fishy farmers, Kopecky added.

The authors thought density would have some effect on predation. "I think it just turns out that this coral is so tasty that predators simply mowed everything down," Stier remarked.

The team is considering running a similar experiment with cauliflower coral, which is more robust and slower-growing than the staghorn. Hopefully it's also slightly less scrumptious.

"When these pufferfish eat enough, you can see their bellies weighted down by the coral rocks that are in their stomachs," Stier said. "I mean, they're oddly shaped fish to begin with; they're already having a hard time swimming without that ballast, but this makes it extra tricky."

"It really is a cartoonish dynamic," Kopecky added.

Staghorn coral is widely used in reef restoration efforts, especially in the Caribbean where this and a related species (elkhorn coral) are endangered. In fact, before joining UC Santa Barbara, Kopecky spent several months working as a coral restoration technician on the U.S. Virgin Islands.

"I had an opportunity to see coral restoration in action, but also see some of the limitations associated with it," Kopecky said. "And then I was able to go and conduct research, like this experiment, that can feed back into and inform how restoration might be improved."

Getting out-planted coral nubbins through this vulnerable life stage presents a major bottleneck to restoration efforts, Kopecky explained. He has already received feedback on the study from people engaged in coral reef restoration expressing how relevant his findings are to their work.

"When you protect these young, vulnerable corals from predators, the amount of growth is substantially higher than when they're not protected," Kopecky said. "It's clear that coral predators can really shape whether young corals actually reach the size where they're no longer vulnerable to predation."

Credit: 
University of California - Santa Barbara

Junctions between three cells serve as gateways for the transport of substances

image: Epithelial cells enclosing a developing egg cell in the ovary of a fruit fly. Scientists examined the ovary in tissue culture using confocal fluorescence microscopy. They found that, at the points where the membranes (pink) of three cells meet each other, the cells loosen their connections in an orderly way, thus allowing yolk-forming proteins to be transported through the intercellular spaces (green) into the egg.

Image: 
Isasti-Sanchez et al./Dev Cell 2021

Within multicellular organisms, cells build connections with each other forming cell layers that cover the surfaces of tissues and organs and separate structures in the body. For example, the skin forms a mantle around the entire organism, and the layer of cells lining the blood vessels creates a boundary between the bloodstream and tissues. Special connections between neighbouring cells ensure that these cellular barriers are, on the one hand, stable and tight - thus protecting the body and organs against pathogens - while, on the other hand, they remain permeable to specific substances or migrating cells. This is how the cells allow dissolved nutrients to be transported into organs, and how cells of the immune system are able to migrate from the blood across the blood vessel wall into inflamed tissue.

Scientists at the University of Münster have now investigated a comparable process in the ovary of the fruit fly (Drosophila melanogaster). Here, the maturing egg cells are enclosed by a layer of epithelial cells through which the egg absorbs yolk-forming proteins. The researchers found that, at the points where three cells meet, the epithelial cells loosen their connections in a controlled manner and the yolk proteins are transported to the egg cell through the resulting gaps. At places where only two cells connect, the connections are maintained, thus keeping the tissue integrity intact. "Our findings contribute to a better understanding of how cellular barriers function and are restructured during development, which provides a basis for deciphering the mechanisms of certain pathological processes," explains Prof Dr Stefan Luschnig, a developmental biologist and leader of the research project. The study has now been published in the scientific journal "Developmental Cell".

Investigations during egg development in the fruit fly

Whenever substances are transported through a cell layer, the cells either take them up through the membrane on one side of the cell layer and then release them on the other side - which requires a lot of energy - or the substances diffuse through gaps between the cells. From studies of other insects, it was already known that yolk proteins are usually transported into egg cells via spaces between the epithelial cells that enclose the eggs. The new investigation confirmed that this is also the case for the fruit fly. However, it was not previously clear how the cells manage to transiently open up intercellular spaces while at the same time keeping the tissue intact.

To visualize these cellular processes live and analyze the molecular mechanisms behind them, the scientists genetically labelled certain proteins of the fruit fly with fluorescent molecules, kept the ovaries in tissue cultures, and examined the living tissue using confocal laser scanning microscopy. They focused their attention on the proteins on the surface of the epithelial cells. These so-called adhesion proteins mechanically hold the cellular network together and seal the spaces between cells.

Cell barrier opens at junctions of three cells

The scientists found that the epithelial cells sequentially removed four different adhesion proteins from their membranes. "This process takes several hours, and only when the last protein is gone do the cell junctions open," explains biologist Jone Isasti-Sanchez, who is the first author of the study and a doctoral candidate in the Integrated Research Training Group within the Collaborative Research Center 1348 "Dynamic Cellular Interfaces" at the University of Münster. After the junctions have opened, the uptake of yolk proteins into the egg proceeds over about 16 hours and, subsequently, the process reverses - the intercellular spaces close again. The researchers demonstrated that the cells open their contact sites by taking up adhesion proteins from the surface into the cell interior, using a basic cellular process called endocytosis. An important new finding is that endocytosis seems to take place to a greater extent at those points where three cells meet, and, as a result, the cell junctions only open up at these points. Where only two cells meet, the contact is maintained. "The fact that this process occurs selectively at the three-cell contact-points and, moreover, in such an orderly fashion, is probably important for preventing the tissue from falling apart," says Stefan Luschnig. In addition, he adds that the process presumably has to take place in a very controlled manner because the opening of gates in a tissue comes with the risk that pathogens will enter.

In their experiments, the scientists also genetically increased or reduced the amount of the adhesion protein E-cadherin and were able to show that the amount of this protein determines how wide the intercellular spaces open. In addition, they found that the mechanical tension in the cytoskeleton plays a key role in the process. This tension is generated by a structure consisting of the proteins actin and myosin, which encircles the cell periphery and is able to contract, similar to a rubber band. When the researchers increased the activity of myosin in the cell, the cytoskeleton contracted more, which prevented the cell junctions from opening.

Credit: 
University of Münster

Scientists discover two new species of ancient, burrowing mammal ancestors

image: This portrait shows the tritylodontid Fossiomanus sinensis (upper right) and the eutriconodontan Jueconodon cheni in burrows; both lived in the Early Cretaceous Jehol Biota (about 120 million years ago) of northeastern China and showed convergent skeletal features adapted to a fossorial lifestyle.

Image: 
© Chuang Zhao

Paleontologists have discovered two new species of mammal-like, burrowing animals that lived about 120 million years ago in what is now northeastern China. The new species, described today in the journal Nature, are distantly related but independently evolved traits to support their digging lifestyle. They represent the first "scratch-diggers" discovered in this ecosystem.

"There are many hypotheses about why animals dig into the soil and live underground," said lead author Jin Meng, a curator in the American Museum of Natural History's Division of Paleontology. "For protection against predators, to maintain a temperature that's relatively constant--not too hot in the summer and not too cold in the winter--or to find food sources like insects and plant roots. These two fossils are a very unusual, deep-time example of animals that are not closely related and yet both evolved the highly specialized characteristics of a digger."

The fossil mammaliamorph species--predecessors to mammals--were discovered in the Jehol Biota, which represents the Early Cretaceous epoch, about 145 to 100 million years ago. One is a mammal-like reptile called a tritylodontid and is the first of its kind to be identified in this biota. About a foot in length, it was given the name Fossiomanus sinensis (Fossio, "digging" and manus "hand;" sinensis, "from China"). The other is named Jueconodon cheni (Jue, "digging"--Chinese pinyin--and conodon "cuspate tooth"; cheni for Y. Chen, who collected the fossil). It is a eutriconodontan, a distant cousin of modern placental mammals and marsupials, which were common in the habitat. It is about 7 inches long.

Mammals that are adapted to burrowing have specialized traits for digging. The researchers found some of these hallmark features--including shorter limbs, strong forelimbs with robust hands, and a short tail--in both Fossiomanus and Jueconodon. In particular, these characteristics point to a type of digging behavior known as "scratch digging," accomplished mainly by the claws of the forelimbs.

"This is the first convincing evidence for fossorial life in those two groups," Meng said. "It also is the first case of scratch diggers we know about in the Jehol Biota, which was home to a great diversity of life, from dinosaurs to insects to plants."

The animals also share another unusual feature: an elongated vertebral column. Typically, mammals have 26 vertebrae from the neck to the hip. However, Fossiomanus had 38 vertebrae--a staggering 12 more than usual--while Jueconodon had 28. To try to determine how these animals got their elongated trunks, the paleontologists turned to recent studies in developmental biology, finding that the variation could be attributed to gene mutations that determine the number and shape of the vertebrae early in the animals' embryonic development. Variation in vertebral number can be found in modern mammals as well, including in elephants, manatees, and hyraxes.

Credit: 
American Museum of Natural History

Framework could support more reliable electric power distribution systems

Imagine that distributing electricity from the power grid to homes is like travelers boarding a train.

There are multiple steps to take before they can reach their final destination. First, they have to buy a ticket at the ticketing booth - this is where the power is generated. Then, they board a train that departs from the station - the power is transmitted over distances using transmission lines. Finally, the train takes the travelers (electricity) to their final destination. This final step of sending power to homes and businesses is called the distribution system - and it is critical that it remain reliable.

Chanan Singh and doctoral student Arun Karngala from the Department of Electrical and Computer Engineering at Texas A&M University are working to develop a reliability framework for the distribution system so that utility companies can be better prepared for uncertainties that may arise. Singh is a Regents Professor, the Irma Runyon Chair Professor and a University Distinguished Professor.

By developing these models and methods for analyzing the distribution level of the power grid, the researchers hope to help prevent the adverse effects of localized weather events or equipment failures.

The researchers' framework can also be used to test the systemwide impact of customers in the distribution system installing rooftop solar and energy storage.

"We found that with 40% of customers installing solar capacity, that amounts to 1.5 times the peak demand of the respective households," Karngala said. "With sufficient energy storage systems, the reliability indices measured significant improvements. For example, the system average interruption frequency index was improved by 50%, the system average interruption duration index was improved by 70% and the customer average interruption duration index was improved by 45%."

Karngala said that this framework can also be used to decide the capacity of solar rooftop installation: "If the installed solar capacity is increased from one time the peak demand to two times the peak demand, the reliability indices show steady improvement. The improvement in indices tapers off after the installed solar capacity is increased more than 2.5 times the peak demand."
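The indices Karngala cites are the standard IEEE 1366 distribution reliability metrics. The sketch below shows roughly what they measure; the outage records and customer counts are hypothetical, and the paper's actual evaluation framework is considerably more involved.

```python
# Minimal sketch of the IEEE 1366 reliability indices cited above.
# `outages` is a hypothetical list of (customers_affected, duration_hours)
# records for one year; `n_customers` is the total number served.

def reliability_indices(outages, n_customers):
    total_interruptions = sum(c for c, _ in outages)
    total_customer_hours = sum(c * d for c, d in outages)
    saifi = total_interruptions / n_customers    # avg interruptions per customer
    saidi = total_customer_hours / n_customers   # avg outage hours per customer
    caidi = saidi / saifi if saifi else 0.0      # avg hours per interruption
    return saifi, saidi, caidi

# Example: three outages on a 10,000-customer distribution feeder
saifi, saidi, caidi = reliability_indices(
    [(1200, 2.0), (300, 0.5), (4500, 1.5)], 10_000)
print(f"SAIFI={saifi:.2f}, SAIDI={saidi:.2f} h, CAIDI={caidi:.2f} h")
```

Improvements like those reported in the study correspond to reductions in these indices, since local generation and storage keep customers supplied during outages elsewhere in the grid.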

Performing reliability studies can help create business cases for purchasing such storage, and ongoing research on storage technologies is helping to provide more affordable and reliable alternatives.

The research team is focused on reliability analysis at the distribution level because it is the most vulnerable of all stages of power delivery and therefore can cause the most trouble for customers. Further, unlike higher-level sectors of the power grid - such as power generation and transmission - which have established methods of analysis and procedures to ensure that reliability is maintained at specified levels in the presence of uncertainties, the distribution level generally has no such standards.

Most independent system operators (ISOs) ensure they have enough power generation in reserve so that if an unexpected issue arises - such as a transmission line failure, a generator failure, or the load being higher than forecast - and the total load cannot be supplied, the load can be curtailed in a controlled way rather than being lost completely for all customers. Many ISOs use criteria that ensure that, on average, this load curtailment would not occur more than one day in 10 years. Such standards are not typically used at the distribution level.

This work was published in IEEE Transactions on Sustainable Energy in January.

"The winter storm event that happened recently in Texas was of a different nature that spanned the entire state," Singh said. "But extreme weather can be in a variety of forms. For example, you can have tornadoes or hurricanes where the effect is not statewide but instead more limited areas are affected. We believe that in those situations these models and the tools that they will provide to us to manage the system will enhance the reliability of the distribution system because you don't have to rely only on the power that is coming from the grid, but also from other local sources such as solar and perhaps wind."

One challenge the team faces is accounting for the many different kinds of generating systems being integrated into distribution systems. Karngala said distribution systems were previously considered only consumers of energy, but today newer technologies and many more distributed energy resources - such as solar panels, wind generation and storage - are coming into the distribution system.

"The exciting part about working on distribution systems is that these are in a phase of change now," Karngala said. "These are changing from traditional systems to much more advanced systems, and we are in that transition phase where we need to develop models and methodologies."

Ultimately, the team is looking to build a comprehensive framework of reliability analysis where approaches such as demand response, price strategies and operational strategies can be included and be expanded upon as the power grid evolves.

"There is no shortage of projects that can be developed around this framework as many models, methods and operational strategies can be included in the reliability evaluation," Karngala said.

Credit: 
Texas A&M University

Story tips from Johns Hopkins experts on COVID-19

STUDY SHOWS PANDEMIC LIMITATIONS ON SURGERIES CAN IMPACT PATIENT CARE

https://www.hopkinsmedicine.org/news/newsroom/news-releases/covid-19-story-tip-study-shows-pandemic-limitations-on-surgeries-can-impact-patient-care

Media Contact: Waun'Shae Blount, wblount1@jhmi.edu

One of the most serious consequences of the current COVID-19 pandemic has been the postponement of non-essential surgeries -- defined by the federal government's Centers for Medicare and Medicaid Services (CMS) as "medical procedures that are not necessary to address an emergency or to preserve the health and safety of a patient."

When the CMS issued guidelines in April 2020 for medical centers nationwide to limit non-essential operations until facilities could be declared free of COVID-19, the American Academy of Otolaryngology -- Head and Neck Surgery followed suit, telling its physician members to delay performing non-essential ear, nose and throat surgeries.

What was the impact? That's the question Johns Hopkins Medicine researchers attempted to answer in a recently published study comparing inpatient and outpatient surgical volumes from March 2020 through September 2020 with data from the same timeframe in 2019. A surgical volume is defined as the number of times a hospital has done a specific surgical procedure during a defined time.

A report on the findings appeared in February 2021 in JAMA Otolaryngology-Head & Neck Surgery.

To conduct their study, the researchers used Vizient, a healthcare performance improvement tool, drawing on data from 609 hospitals across the United States. Data were collected for the period March 1, 2019, through Sept. 30, 2020. Hospitals were included if they had a minimum of 20 otolaryngology cases per month before the pandemic and had reported volumes for all months in the study period.

Data came from 174 inpatient and 295 outpatient community facilities. The researchers found that in April 2020, outpatient surgical case volumes dropped to 18% of April 2019 levels. Inpatient volumes dropped as well, especially in the Middle Atlantic and parts of the southern US, where volumes fell to 40% of the previous year's level. The areas most affected corresponded with regions where the pandemic initially hit hardest. These data, the researchers say, might be used to determine how quickly institutions can resume surgeries following a future crisis.

By September 2020, the data showed that outpatient volumes climbed back to 97% and inpatient volumes improved to 99% of pre-pandemic levels. These recoveries over such a short period of time, the researchers say, indicate that providers were diligent about meeting patient care needs even when the country was still at the height of the pandemic.

"As the pandemic continues, we've noted that otolaryngology surgeries are still backlogged and this impacts the health and wellbeing of patients," says C. Matthew Stewart, M.D., Ph.D., associate professor of otolaryngology-head and neck surgery at the Johns Hopkins University School of Medicine and senior author of the study. "To address this, we plan to keep monitoring trends in surgical volumes to develop helpful strategies for reducing or eliminating such backlogs during future pandemics or other crises."

Stewart is available for interviews.

PROFILES OF IMMUNE CELLS SHOW DEFENSE AGAINST COVID-19 MUTANTS SIMILAR TO ORIGINAL VIRUS

https://www.hopkinsmedicine.org/news/newsroom/news-releases/covid-19-story-tip-profiles-of-immune-cells-show-defense-against-covid-19-mutants-similar-to-original-virus

Media Contact: Michael E. Newman, mnewma25@jhmi.edu

In January, an international research team -- led by Johns Hopkins Medicine and in collaboration with the National Institute of Allergy and Infectious Diseases (NIAID) and ImmunoScape, a U.S.-Singapore biotechnology company -- published one of the most comprehensive profiles to date of T lymphocytes (more commonly known as T cells), the immune system cells that help protect us against SARS-CoV-2, the virus that causes COVID-19. The researchers felt that better defining which T cells interact with which specific portions of the virus could help accelerate the development of next-generation, more effective vaccines.

New variants of SARS-CoV-2 continue to spring up around the world, raising concerns that current vaccines -- designed to induce an immune response by recognizing spike proteins of the pandemic's original virus -- might not provide sufficient defense against a mutated strain. This could potentially make COVID-19 re-infection more likely or vaccination less effective.

To address these concerns, the same researchers who profiled the T cells responding to the original SARS-CoV-2 have done a second study, this time characterizing whether the immune cells also respond to three variant virus strains.

The team's findings, reported March 30 in the journal Open Forum Infectious Diseases, show that the T cells can get the job done.

The latest research used data generated from samples collected for the first study -- blood cells taken from 30 convalescent patients who had recovered from mild to moderate cases of COVID-19. The researchers used the data to assess how likely it was that a specific type of T cell -- known as a CD8+ T cell (commonly called a "killer T cell" for its ability to eliminate cells that are infected with viruses) -- would recognize the three main SARS-CoV-2 variants that emerged in the past year in the United Kingdom (B.1.1.7), South Africa (B.1.351) and Brazil (B.1.1.248).

CD8+ T cells are covered in protein complexes called T cell receptors (TCRs) that bind to a specific protein fragment, known as an antigen, derived from a foreign body such as a virus. When this binding occurs, the T cell becomes activated and triggers an immune response against the invader. The ability of a specific TCR to recognize its target antigen defines that response.

In the earlier study assessing T cell response to the original SARS-CoV-2 in convalescent patients, the researchers tagged and identified the various types of CD8+ T cells specific for different parts of SARS-CoV-2. This enabled them to determine which of the viral antigens were targeted by the T cells.

Identifying those targeted antigens told the researchers which regions of the three SARS-CoV-2 variants to examine in the latest study. This time, they wanted to assess whether the genetic mutations associated with the variant strains might affect T cell recognition of the targets.

What they discovered was that the specific CD8+ T cell targets from the original SARS-CoV-2 remained virtually unchanged in all three mutant strains.

This finding is good news, the researchers say, because it suggests that T cell response to these viral targets in the convalescent patients studied -- and most likely, in people who have been fully vaccinated -- will not be greatly affected by the mutations found in the variants.

"Therefore, the vaccines currently being distributed worldwide should offer a reasonable measure of protection from either infection or serious disease caused by the three variant viruses and hopefully, any others that may emerge," says study lead author Andrew Redd, Ph.D., assistant professor of medicine at the Johns Hopkins University School of Medicine and staff scientist at NIAID.

Redd is available for interviews.

Credit: 
Johns Hopkins Medicine

Mounting hope for new physics

Today, the Muon g-2 Collaboration finally published the highly anticipated first result of its measurement of the anomalous magnetic moment of the muon, a precision quantity that offers physicists one of the most promising means to test the predictions of the current Standard Model of particle physics. The measured value, more precise than any previous measurement, strengthens the evidence for new physics beyond the Standard Model, and thus for the existence of previously unknown particles or forces. The result was presented at an online seminar at Fermilab (FNAL) and published in four scientific articles.

"In 2014, I started working on the Muon g-2 experiment as a postdoctoral researcher at the University of Washington, Seattle," says Prof. Martin Fertl, who has been performing research in the field of low-energy particle physics at the PRISMA+ Cluster of Excellence since 2019. "That's why today's a truly special day. We can now announce a first result, while also stating that this result has opened the door even wider to a previously unknown physics."

The new experimental value of the anomalous magnetic moment of the muon published today is a(FNAL) = 116 592 040(54) x 10^(-11), with a relative uncertainty of 460 parts per billion. Combined with the result of the experiment at Brookhaven National Laboratory completed more than 20 years ago, the new experimental mean value is a(Exp.,avg) = 116 592 061(41) x 10^(-11). This contrasts with the theoretically predicted value obtained from the Standard Model, a(Theor.) = 116 591 810(43) x 10^(-11). Physicists quantify the difference between these two values as 4.2 standard deviations. In other words, the probability that this discrepancy between experiment and theory is due to chance is 0.0025 percent (1 in 40,000). Physicists consider a discovery - in this case, a refutation of the Standard Model - to have been made only when that probability falls below 0.00005 percent, corresponding to 5 standard deviations.
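
The 4.2 standard deviations can be checked directly from the quoted numbers. Assuming the experimental and theoretical uncertainties are independent and Gaussian (a simplification compared with the published analyses), a minimal Python sketch reproduces the figure:

    import math

    # Values quoted above, in units of 10^-11; an illustrative sanity check,
    # not the collaboration's analysis code.
    a_exp, sigma_exp = 116_592_061, 41       # combined experimental mean
    a_th, sigma_th = 116_591_810, 43         # Standard Model prediction

    delta = a_exp - a_th                     # 251
    sigma = math.hypot(sigma_exp, sigma_th)  # ~59.4, combined uncertainty
    print(f"tension: {delta / sigma:.1f} standard deviations")  # prints 4.2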

Numerous contributions from Mainz - both experimental and theoretical

Martin Fertl's PRISMA+ work group is the only one in Germany that is involved in the Muon g-2 Collaboration in an experimental capacity. The collaboration's "counterpart" is the "Muon g-2 Theory Initiative", a worldwide association of more than 130 physicists working on the theoretical prediction within the framework of the Standard Model. The initiative was established in 2017 as a means of joining forces to significantly reduce the uncertainty of the predicted value of the anomalous magnetic moment of the muon. "Just last year, we established a common standard for the first time and agreed on a new theoretical value worldwide," says Prof. Hartmut Wittig, theoretical physicist and spokesperson for the PRISMA+ Cluster of Excellence. "Our goal is, in parallel with the experiment, to keep refining the theoretical prediction as well." Physicists at PRISMA+ are making crucial contributions here, from the measurement of experimental input quantities to the high-precision calculation of the contributions of the strong interaction using lattice quantum chromodynamics methods on the MOGON-II supercomputer in Mainz.

Is the experiment seeing something not predicted by the theory?

The first time a discrepancy - of 3.7 standard deviations - emerged was when the theoretical prediction was compared with the findings of the experiment at the Brookhaven National Laboratory, mentioned above. In the 20 years since then, the aim of research worldwide has been to establish whether this deviation is "real" or "merely" the result of systematic uncertainties in theory and experiment. The current Muon g-2 experiment was developed to measure the magnetic properties of the muon more accurately than ever before. The Muon g-2 Collaboration involves more than 200 scientists from 35 institutions in seven countries.

The muon is the heavy sibling of the electron and survives for only a few millionths of a second. It possesses a magnetic moment, a kind of miniature internal bar magnet. It also possesses a quantum mechanical angular momentum, termed spin, similar to a spinning top. The g-factor is the ratio of the observed strength of the magnet to a simple estimate based on the electric charge, mass, and spin of the muon. The name of the Muon g-2 experiment reflects the fact that the "g" of the muon always deviates slightly - by about 0.1 percent - from the simple prediction that g = 2. This anomaly is commonly referred to as the anomalous magnetic moment of the muon (a = (g-2)/2). The Muon g-2 experiment measures the rate of gyration of the "internal compass needle" of muons in a magnetic field, as well as the magnetic field itself, and from this it can determine the anomalous magnetic moment. The muon beam is generated at FNAL's Muon Campus specifically for the experiment, with a purity never achieved before.
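
The relation behind the measurement is compact: in a magnetic field B, the muon's spin precesses relative to its direction of flight at the angular frequency omega_a = a * (e/m) * B. The Python sketch below is purely illustrative - the 1.45-tesla field is taken as the nominal storage-ring value, and the real analysis applies corrections omitted here - but it shows the scale of the frequency being measured:

    import math

    # Illustrative only: omega_a = a * (e/m) * B, with rounded constants.
    e = 1.602176634e-19     # elementary charge, C
    m_mu = 1.883531627e-28  # muon mass, kg
    B = 1.45                # assumed nominal storage-ring field, T
    a_mu = 0.00116592       # anomalous magnetic moment, (g-2)/2

    omega_a = a_mu * e * B / m_mu                       # rad/s
    print(f"f_a = {omega_a / (2 * math.pi) / 1e3:.0f} kHz")  # roughly 229 kHz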

More than eight billion muons already measured

"Our first analysis, which we are presenting today, already achieves an accuracy that is somewhat better than that of the previous experiment - and we've managed this by evaluating only less than 6 percent of the planned data set," explains Martin Fertl. "As a result, we think our goal of using the Muon g-2 experiment to ultimately improve the accuracy of the value by a factor of four to reach 140 parts per billion seems very realistic."

The data currently being analyzed are from the first round of measurements in 2018 - in which the Fermilab experiment already collected more data than all prior muon g-factor experiments combined. The second and third rounds are also already "in the can." The third round had to be halted abruptly due to the global COVID-19 pandemic; the fourth round is currently being conducted under tight safety restrictions and, to a large extent, remotely. A fifth round is scheduled to start in autumn 2021.

To ensure the objectivity of the analyses, several analysis teams work in parallel and independently of one another. The experiment also uses blinding techniques similar to those employed in clinical trials. The analysis teams relate the frequencies they measure to a clock whose pace has been slightly altered, so that it runs either too fast or too slow. Such a wall clock would tick 60 times, for instance, but the actual time elapsed would be slightly more or less than one minute. Only two people outside the experiment know the factor by which the clock has been adjusted - in the experiment, this corresponds to a particular signal on the frequency measuring devices. Only when the relative results of the individual teams are consistent with one another (known as "relative unblinding") is this factor announced, so that it can be factored into the calculation. For the evaluation now being presented, this "absolute unblinding" took place at the end of February 2021.
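
In software terms, the idea can be caricatured in a few lines. The factor, names, and numbers in the following Python sketch are hypothetical and only illustrate the blind-then-unblind workflow described above:

    import random

    SECRET = random.uniform(0.99, 1.01)  # the hidden "clock" adjustment

    def blind(frequency_hz):
        # What the analysis teams actually see and fit.
        return frequency_hz * SECRET

    def unblind(blinded_hz, revealed_factor):
        # Applied only after the teams' relative results agree.
        return blinded_hz / revealed_factor

    true_f = 229_000.0       # Hz, an illustrative precession frequency
    seen_f = blind(true_f)   # teams compare results on this basis
    assert abs(unblind(seen_f, SECRET) - true_f) < 1e-6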

The specialty of Martin Fertl and his working group is the extremely precise measurement of the magnetic field in the muon storage ring over the entire measurement period of several years. In his former laboratory, he had already led the development of an array of highly sensitive magnetometers based on the principle of pulsed nuclear magnetic resonance. Several hundred of these measuring heads are installed in the walls of the vacuum chambers surrounding the muons. Another 17 measuring heads are sent around the storage ring, which has a diameter of 14 meters, to measure the applied magnetic field even more comprehensively. "With the help of further calibration systems, we aim to determine the magnetic field in the muon storage ring with unprecedented accuracy. Only once we understand the magnetic field extremely precisely, and can also measure it, will we be able to determine the anomalous magnetic moment of the muon to the highest degree of precision," says Martin Fertl. "To determine the value to an accuracy of 140 parts per billion - which would be four times more accurate than the previous experiment - we need to be able to measure the magnetic field in which the muons are moving to an accuracy of 70 parts per billion."

As they progressed towards this goal, the researchers encountered some highly interesting and hitherto unknown effects. "We recorded, for instance, small but significant temporal changes in the magnetic field for the first time - and developed special measuring heads to accurately measure this effect. These findings can help us to improve our understanding of the magnetic field and thus to continuously refine our Muon g-2 experiment. This 'work in progress' approach will bring us ever closer over the next few years to our ultimate goal of definitively answering the question of whether the anomalous magnetic moment of the muon is the key to a new physics."

Credit: 
Johannes Gutenberg Universitaet Mainz

Crohn's disease may be caused by immune signaling failure

image: Scripps Research immunologist Mark Sundrud, PhD, finds that a new direction for Crohn's disease research lies in the interplay between bile acids and T cells in the small intestine.

Image: 
Scott Wiseman for Scripps Research

JUPITER, FL - People with Crohn's disease are typically treated with powerful anti-inflammatory medications that act throughout their body, not just in their digestive tract, creating the potential for unintended, and often serious, side effects. New research from the lab of Mark Sundrud, PhD, at Scripps Research, Florida, suggests a more targeted treatment approach is possible.

Crohn's disease develops from chronic inflammation in the digestive tract, often the small intestine. More than half a million people in the United States live with the disease, which can be debilitating and can require repeated surgeries to remove irreversibly damaged intestinal tissue.

Writing in the journal Nature on April 7, Sundrud's team finds that certain immune cells in the small intestine have evolved a molecular sensing mechanism to protect themselves from the toxic effects of high bile acid concentrations there. This sensing mechanism can be manipulated with small drug-like molecules, the researchers report, and in mice the treatment reduced small bowel inflammation.

"It seems that these immune cells, called T effector cells, have learned how to protect themselves from bile acids," Sundrud says. "These T cells utilize an entire network of genes to interact safely with bile acids in the small intestine. This pathway may malfunction in at least some individuals with Crohn's disease."

Bile acids are made in the liver and released during a meal to help with digestion and absorption of fats and fat-soluble vitamins. They are actively recaptured at the end of the small intestine, in an area called the ileum, where they pass through layers of tissue that contain the body's dense network of intestinal immune cells, and ultimately re-enter the blood stream for return to the liver.

Because they are detergents, bile acids can cause toxicity and inflammation if the system becomes unbalanced. The whole process is kept humming along thanks to an intricate signaling system. Receptors in the nucleus of both liver cells and intestinal barrier cells sense the presence of bile acid and tell the liver to back off on bile acid production if there's too much, or to produce more if there aren't enough to digest a big steak dinner, for example.

Given how damaging bile acids can potentially be to cells, scientists have wondered how immune cells that live in or visit the small intestine tolerate their presence at all. Sundrud's team previously reported that a gene called MDR1, also known as ABCB1, becomes activated when an important subset of immune cells that circulate in blood, called CD4+ T cells, make their way into the small intestine. There, MDR1 acts in transitory T cells to suppress bile acid toxicity and small bowel inflammation.

In the new study, Sundrud's team uses an advanced genetic screening approach to uncover how T cells sense and respond to bile acids in the small intestine to increase MDR1 activity.

"The basic discovery that T cells dedicate so much of their time and energy to preventing bile acid-driven stress and inflammation highlights completely new concepts in how we think about and treat Crohn's disease," Sundrud says. "It's like we've been digging in the wrong spot for treasure, and this work gives us a new map showing where X marks the spot."

The T cells contain a receptor molecule in their nucleus known as CAR, short for constitutive androstane receptor. Acting in the small intestine, CAR promotes expression of MDR1, and also plays a role in activating an essential anti-inflammatory gene, IL-10, the team found.

"When we treated mice with drug-like small molecules that activate CAR, the result was localized detoxification of bile acids and reduction of inflammation," Sundrud says.

Sundrud says exploring the therapeutic potential of CAR activation will require caution and creativity, because CAR is also critical for breaking down and eliminating other substances in the liver, including many medicines.

"Ultimately, the Crohn's disease therapy that emerges from this work could be something that activates CAR locally in small intestinal T cells, or something that targets another gene that is similarly responsible for promoting the safe communication between small intestinal T cells and bile acids," Sundrud says.

Interestingly, the team also found that the bile acid-inflammation feedback system worked somewhat differently in the colon, in concert with gut microbiome factors. While gut flora had more influence on T cell development and function in the colon, it was the nuclear receptor CAR that had more influence on inflammation in the small intestine.

Inflammation plays both positive and negative roles in the body. It can damage tissue, but it also suppresses cancer growth and fights infections. Current anti-inflammatory treatments shut it down systemically, throughout the entire body. That can have potentially serious consequences, such as lowering resistance to infections or easing off the brake on cancer. Directing treatment for inflammatory diseases only to the affected tissue would be preferable whenever feasible, Sundrud says.

"The roughly 50 million people living in the US with some sort of autoimmune or chronic inflammatory disease are all treated the same, medically," Sundrud says. "The holy grail would be to come up with druggable approaches to treat inflammation in only specific tissues and leave the rest of the immune cells in your body untouched, and able to fend of cancer and microbial infections."

Credit: 
Scripps Research Institute

WHOI and NOAA release report on U.S. socio-economic effects of harmful algal blooms

image: Workshop on the Socio-economic Effects of Marine and Fresh Water Harmful Algal Blooms in the United States

Image: 
U.S. National Office for Harmful Algal Blooms

Harmful algal blooms (HABs) occur in all 50 U.S. states and many produce toxins that cause illness or death in humans and commercially important species. However, attempts to place a more exact dollar value on the full range of these impacts often vary widely in their methods and level of detail, which hinders understanding of the scale of their socio-economic effects.

In order to improve and harmonize estimates of HABs impacts nationwide, the National Oceanic and Atmospheric Administration (NOAA) National Center for Coastal Ocean Science (NCCOS) and the U.S. National Office for Harmful Algal Blooms at the Woods Hole Oceanographic Institution (WHOI) convened a workshop led by WHOI Oceanographer Emeritus Porter Hoagland and NCCOS Monitoring and Event Response (MERHAB) Program Manager Marc Suddleson. Participants focused on approaches to better assess the socio-economic effects of harmful algal blooms in the marine and freshwater (primarily Great Lakes) ecosystems of the United States. The workshop proceedings report describes the group's objectives, and presents recommendations developed by 40 participants, mostly economists and social scientists from a range of universities, agencies, and U.S. regions. Their recommendations fall under two broad categories: those intended to help establish a socio-economic assessment framework, and those to help create a national agenda for HABs research.

"This has been a goal of the research and response communities for a long time, but coming up with a robust national estimate has been difficult, for a number of reasons, mainly related to the diversity of algal species and the wide variety of ways they can affect how humans use the oceans and freshwater bodies," said Hoagland. "This gives us a strong base on which to build the insight that will vastly improve our estimates."

Framework recommendations call for enhancing interagency coordination; improving research communications and coordination among research networks; integrating socioeconomic assessments into HAB forecasts and observing networks; using open-access databases to establish baselines and identify baseline departures; facilitating rapid response socio-economic studies; improving public health outcome reporting and visibility of HAB-related illnesses; fostering the use of local and traditional ecological knowledge to improve HAB responses; engaging affected communities in citizen science; and engaging graduate students in HAB socio-economic research.

Research agenda recommendations include elements necessary for addressing gaps in our understanding of the social and economic effects of HABs. They include a suggested approach for obtaining an improved national estimate of the economic effects of HABs; supporting rapid ethnographic assessments and in-depth assessments of social impacts from HABs; defining socioeconomic impact thresholds for triggering more detailed studies of impacts (such as in the case of designated HAB events of significance); sponsoring research on the value of scientific research leading to improved understanding of bloom ecology; assessing the value of HAB mitigation efforts, such as forecasts and control approaches, and their respective implementation costs; and supporting research to improve HAB risk communication and tracking and to better understand the incidence, severity, and costs of HAB-related human illnesses.

"These recommendations give us a strong series of next steps to increase focus on HAB-related socio-economic research," said Don Anderson, director of the U.S. National Office for Harmful Algal Blooms. "The report is certain to spur increased collaborations that will provide a better understanding of the many complex socio-economic effects of HABs and provide the tools to increase the effectiveness of efforts to minimize impacts on society and the environment."

The detailed final proceedings report and more information about the workshop are available on the U.S. National HAB Office website.

Credit: 
Woods Hole Oceanographic Institution

Field guides: Argonne scientists bolster evidence of new physics in Muon g-2 experiment

image: Argonne's Ran Hong (left) and Simon Corrodi (right) installing the calibration probe at the 4 Tesla Solenoid Facility.

Image: 
(Image by Mark Lopez/Argonne National Laboratory.)

Scientists are testing our fundamental understanding of the universe, and there’s much more to discover.

What do touch screens, radiation therapy and shrink wrap have in common? They were all made possible by particle physics research. Discoveries of how the universe works at the smallest scale often lead to huge advances in technology we use every day.

Scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory and Fermi National Accelerator Laboratory, along with collaborators from 46 other institutions and seven countries, are conducting an experiment to put our current understanding of the universe to the test. The first result points to the existence of undiscovered particles or forces. This new physics could help explain long-standing scientific mysteries, and the new insight adds to a storehouse of information that scientists can tap into when modeling our universe and developing new technologies.

“These findings could have major implications for future particle physics experiments and could lead to a stronger grasp on how the universe works.” — Ran Hong, Argonne postdoctoral appointee

The experiment, Muon g-2 (pronounced Muon g minus 2), follows one that began in the ‘90s at DOE’s Brookhaven National Laboratory, in which scientists measured a magnetic property of a fundamental particle called the muon.

The Brookhaven experiment yielded a result that differed from the value predicted by the Standard Model, scientists’ best description yet of the makeup and behavior of the universe. The new experiment is a recreation of Brookhaven’s, built to challenge or affirm the discrepancy with higher precision.

The Standard Model very precisely predicts the muon’s g-factor — a value that tells scientists how this particle behaves in a magnetic field. This g-factor is known to be close to the value two, and the experiments measure its deviation from two, hence the name Muon g-2.

The experiment at Brookhaven indicated that g-2 differed from the theoretical prediction by a few parts per million. This minuscule difference hinted at the existence of unknown interactions between the muon and the magnetic field — interactions that could involve new particles or forces.

The first result from the new experiment strongly agrees with Brookhaven’s, strengthening the evidence that there is new physics to discover. The combined results from Fermilab and Brookhaven show a difference from the Standard Model at a significance of 4.2 sigma (or standard deviations), slightly less than the 5 sigma that scientists require to claim a discovery, but still compelling evidence of new physics. The chance that the results are a statistical fluctuation is about 1 in 40,000.

Particles beyond the Standard Model could help to explain puzzling phenomena in physics, such as the nature of dark matter, a mysterious and pervasive substance that physicists know exists but have yet to detect.

“This is an incredibly exciting result,” said Argonne’s Ran Hong, a postdoctoral appointee who worked on the Muon g-2 experiment for over four years. “These findings could have major implications for future particle physics experiments and could lead to a stronger grasp on how the universe works.”

The Argonne team of scientists contributed significantly to the success of the experiment. The original team, assembled and led by physicist Peter Winter, included Argonne’s Hong and Simon Corrodi, as well as Suvarna Ramachandran and Joe Grange, who have since left Argonne.

“This team has an impressive and unique skill set with high expertise regarding hardware, operational planning and data analysis,” said Winter, who leads the Muon g-2 contributions from Argonne. “They made vital contributions to the experiment, and we could not have obtained these results without their work.”

To derive the muon’s true g-2, the scientists at Fermilab produce beams of muons that travel in a circle through a large, hollow ring in the presence of a strong magnetic field. This field keeps the muons in the ring and causes the direction of each muon’s spin to rotate. The rotation, which scientists call precession, is similar to the wobble of Earth’s axis, only much, much faster.

To calculate g-2 to the desired precision, the scientists need to measure two values with very high certainty. One is the rate of the muon’s spin precession as it traverses the ring. The other is the strength of the magnetic field surrounding the muon, which influences its precession. That’s where Argonne comes in.

Field trip

Although the muons travel through an impressively constant magnetic field, ambient temperature changes and effects from the experiment’s hardware cause slight variations throughout the ring. Even these small shifts in field strength, if not accounted for, can significantly impact the accuracy of the g-2 calculation.

In order to correct for the field variations, the scientists constantly measure the drifting field using hundreds of probes mounted to the walls of the ring. In addition, they send a trolley around the ring every three days to measure the field strength where the muon beam actually passes through. Mounted on the trolley are probes that map out the magnetic field with incredibly high precision throughout the ring’s 45-meter circumference.

To reach the ultimate uncertainty goal of less than 70 parts per billion (around 2.5 times better than the field measurement in the previous experiment), Argonne scientists refurbished the trolley system used in the Brookhaven experiment with advanced communication abilities and new, ultraprecise magnetic field probes developed by the University of Washington.

The trolley goes around the ring in both directions, taking around 9,000 measurements per probe and direction. The scientists use the measurements to reconstruct slices of the magnetic field and then derive a full, 3D map of the field in the ring. Field values at points on the map go into the g-2 calculation for muons passing through those locations. The better the field measurements, the more meaningful the final result.
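
As an illustration of that map-and-average step, the toy Python sketch below invents a ripple on a 1.45-tesla baseline field, samples it at 9,000 azimuthal trolley stops, and averages the interpolated field over a made-up beam distribution. Only the workflow mirrors the description above; every number is fabricated for demonstration:

    import numpy as np

    # Trolley samples at 9,000 azimuthal stops around the ring.
    azimuth = np.linspace(0.0, 2 * np.pi, 9000, endpoint=False)
    # Invented ppm-level azimuthal variations on a 1.45 T baseline.
    ripple_ppm = 2.0 * np.sin(3 * azimuth) + 0.5 * np.cos(7 * azimuth)
    field = 1.45 * (1.0 + ripple_ppm * 1e-6)  # tesla

    # Interpolate the map at the azimuths of (invented) muon decays,
    # then average the field over the beam.
    rng = np.random.default_rng(0)
    muon_azimuth = rng.uniform(0.0, 2 * np.pi, 100_000)
    field_at_muons = np.interp(muon_azimuth, azimuth, field, period=2 * np.pi)
    print(f"beam-averaged field: {field_at_muons.mean():.9f} T")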

The scientists also converted some of the analog signals used in the old experiment into digital signals to increase the amount of data they could obtain from the probes. This required complex engineering of the trolley’s communications system to minimize disturbances to the sensitive probing mechanisms.

“It was quite challenging to make the trolley operate smoothly and safely. It required the control system to handle routine operations but also identify emergencies and react appropriately,” said Hong, whose background in both scientific research and engineering was crucial for designing the trolley to operate with limited disruption to the experiment.

The team plans to upgrade the trolley system for the next data taking period to further improve the measurements by reducing the uncertainty bit by bit.

Fine tuning

In precision experiments like Muon g-2, the main objective is to reduce any systematic uncertainty or error that could affect the measurements.

“Measuring the raw numbers is relatively easy — figuring out how well we know the numbers is the real challenge,” said Corrodi, a postdoctoral appointee in Argonne’s High Energy Physics division (HEP).

To ensure the accuracy of the magnetic field measurements, the scientists calibrated the probes using Argonne’s 4-Tesla Solenoid Facility, which houses a magnet from a former magnetic resonance imaging (MRI) scanner. The magnet produces a uniform and stable magnetic field with over 400 times the strength of a refrigerator magnet.

Argonne scientists calibrated the probes in the trolley against readings from a probe that was designed and tested inside the solenoid magnet. This process ensures the probes each read the same measurement when in the same magnetic field and enables the scientists to make accurate corrections. The test facility allowed the scientists to achieve field measurements down to several parts per billion — like measuring the volume of water in a swimming pool down to the drop.
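
A minimal sketch of that cross-calibration logic, with invented readings (the real calibration chain accounts for many more effects), might look like this in Python:

    import numpy as np

    # Each trolley probe and the reference probe read the same solenoid field;
    # the per-probe offset becomes a correction applied to later measurements.
    reference = 4.000000000                                       # T
    trolley_in_solenoid = np.array([4.000000140, 3.999999910, 4.000000032])
    offsets = trolley_in_solenoid - reference                     # per probe

    def calibrate(raw_readings):
        # Subtract each probe's known offset from its raw reading.
        return raw_readings - offsets

    ring_raw = np.array([1.450000140, 1.449999915, 1.450000030])  # T
    print(calibrate(ring_raw))  # corrected field values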

“In addition to calibrating the probes, we improved the field measurements by adjusting operation settings on the fly,” said Corrodi, “During data analysis, we found some effects we did not expect.”

When Corrodi and team saw glitches in the data, they investigated the system to pinpoint the cause. For example, certain devices in the ring focus the muon beam to keep it centered. These devices, however, slightly disrupt the magnetic field in the ring. The scientists designed a way to measure this effect in order to remove it from the analysis.


Putting it all together

The journey of the magnetic field data from probe to computer is complex. Corrodi, Hong and others configured the hardware and software to read the data from the field probes with the correct time and location stamps. They also had to make sense of the data, which start out in binary code, in order to integrate them with the common analysis framework for the experiment.

“We had to convert the raw data into something we could work with,” said Hong, “and we were in charge of the data quality control, determining what flawed data to discard in the ultimate g-2 analysis.”

Corrodi will lead the analysis team for the magnetic field, resolving conflicts with equipment and making sure the various teams in the experiment converge on the next result, said Winter. “You really need to understand the entire field analysis in order to reach our scientific goals.”

The future of muon experiments

The first thing the scientists plan to do is to double-check the results.

“So far, the precision of the ultimate g-2 measurement is comparable to that of the Brookhaven experiment, but that is dominated by the fact that the data are limited so far,” said Corrodi. “We have only analyzed 6% of the data we plan to take over the entire experiment. Those added data will reduce the uncertainty significantly.”

The first result is also encouraging to scientists conducting other present and planned muon experiments, including a future g-2 experiment that will be conducted in Japan, and the next muon experiment at Fermilab — the Mu2e experiment. These projects are already using Argonne’s Solenoid Facility to cross-calibrate their magnetic field probes with the ones used at Fermilab.

“There could be a renewed effort to look for muons at the Large Hadron Collider, searching for possible hints of the new physics behind the g-2 value,” said Carlos Wagner, a theoretical physicist in Argonne’s HEP, who works to try to explain these phenomena. “There could also be renewed interest in the construction of a muon collider, which could provide a direct way of checking this new physics.”

Once scientists get a handle on this new physics, it may be able to inform cosmological and quantum mechanical models, or even help scientists to invent new technologies down the road — the next shrink wrap, perhaps.

Credit: 
DOE/Argonne National Laboratory