Culture

Major revamp of SNAP could eliminate food insecurity in the US

image: Millions of Americans lack adequate access to healthy food. Expanding SNAP benefits and eligibility could eliminate food insecurity in the United States, suggests University of Illinois professor Craig Gundersen.

Image: 
College of ACES, University of Illinois.

URBANA, Ill. - Food insecurity is a major problem in the U.S., and it worsened during the COVID-19 pandemic. The Supplemental Nutrition Assistance Program (SNAP) provides some relief, but millions of Americans still lack adequate access to healthy food. A new study from the University of Illinois proposes a potential solution.

"Restructuring SNAP as a Universal Basic Income (UBI) program or modified UBI is a straightforward way to eliminate food insecurity in United States. It's expensive but it is not difficult," says Craig Gundersen, distinguished professor in the Department of Agricultural and Consumer Economics at U of I. Gundersen authored the study, published in Food Policy.

While the existing SNAP program effectively reduces food insecurity, it has some limitations. For some SNAP recipients, the amount of aid they receive is insufficient. Some people who are food insecure and eligible for SNAP do not participate. And finally, more than half of those who are food insecure are not eligible for SNAP, Gundersen explains.

Using data from the 2019 Current Population Survey, Gundersen estimates the effects of expanding SNAP to become a UBI program under three different scenarios.

"The first scenario is a standard UBI program, where everybody in the United States gets SNAP at the maximum level," Gundersen explains. "Under the current program, your SNAP benefits go down if your income increases. Under this proposal, the amount would remain the same. If people wanted to work more and earn more money, they wouldn't lose their SNAP benefits."

Under this scenario, food insecurity could decline by 88.8%, assuming the $730 billion cost would be funded through higher taxes on top-earning households.

"With the current distribution of taxes in the United States, the top 10% of incomes pay 70% of the taxes, and the top 50% pay 97%. Even if you were to raise taxes on the higher-income brackets to implement this program, it is unlikely to influence the probability of these households becoming food insecure," he states.

The second scenario in Gundersen's study would be a modified UBI program in which households with incomes up to four times the poverty line (approximately $100,000 for a family of four) would receive SNAP benefits.

Compared to the first scenario, the decline in food insecurity would be almost the same (88.5%) but at a much lower cost of $408.5 billion.

Gundersen's third scenario addresses the issue that current SNAP benefits are not enough for some recipients to become food secure.

"Under this scenario, I consider what would happen if we increase the maximum SNAP benefit by 25% and give it to all households with incomes up to about $100,000 a year. In that case, there would be a 98.2% decline in food insecurity, and the cost would be $564.5 billion," he notes.

"I believe the third scenario is the best one," Gundersen says. "While the second one is also good, it would not be adequate for some of the most vulnerable groups; that is, SNAP recipients who need more assistance than they currently receive. The third scenario would ensure that they get what they need to become food secure."

Gundersen acknowledges that his proposals are costly, but so are other government programs.

"Essentially, I propose a way of eliminating food insecurity in United States, and the cost of this would be about a half trillion dollars per year. That seems like a lot, but to put it in context, the cost of the COVID-19 pandemic stimulus packages from the Trump administration and the Biden administration were roughly $6 trillion. It's a lot of money. But if we want to be serious about alleviating food insecurity, this is a simple, straightforward way to do it," he concludes.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Neonatal meningitis: the immaturity of microbiota and epithelial barriers implicated

image: Choroid plexus of mouse neonates. Blue: cell nuclei, green: phalloidin-actin, red: fluorochrome.

Image: 
Biology of Infection Unit, Institut Pasteur

Meningitis is associated with high mortality and frequently causes severe sequelae. Newborn infants are particularly susceptible to this type of infection; they develop meningitis 30 times more often than the general population. Group B streptococcus (GBS) bacteria are the most common cause of neonatal meningitis, but they are rarely responsible for disease in adults. Scientists from the Institut Pasteur, in collaboration with Inserm, Université de Paris and Necker-Enfants Malades Hospital (AP-HP), set out to explain neonatal susceptibility to GBS meningitis. In a mouse model, they demonstrated that the immaturity of both the gut microbiota and epithelial barriers such as the gut and choroid plexus play a role in the susceptibility of newborn infants to bacterial meningitis caused by GBS. The findings were published in the journal Cell Reports on June 29, 2021.

Newborn infants are more likely to develop bacterial meningitis than children and adults. Group B streptococcus (GBS) is the pathogen responsible for a significant proportion of cases of neonatal meningitis. In most instances, infection is preceded by bacterial colonization of the gut. The commensal bacterial gut flora (known as the microbiota) plays a key physiological role, as it is involved in digestion, offers protection from gut pathogens and contributes to tissue differentiation and immune development. Newborns have no gut microbiota; it gradually develops in the first few weeks after birth.

In a new study, scientists from the Institut Pasteur, in collaboration with Inserm, Université de Paris and Necker-Enfants Malades Hospital (AP-HP), demonstrated in a mouse model that the immaturity of the gut microbiota in neonates is involved in neonatal susceptibility to meningitis caused by GBS. In the absence of a mature microbiota, the bacteria can extensively colonize the gut; the barrier function of the gut blood vessels that the bacteria must cross to reach the brain through the bloodstream is also less effective, and the immune system is unable to control the infection.

Unexpectedly, the scientists also demonstrated that, independently of the microbiota, the epithelial barriers formed by the gut and the choroid plexus (the interface between the blood and the cerebrospinal fluid that irrigates the brain) are not entirely mature in newborns, which facilitates bacterial access to the brain. The signaling pathway known as the Wnt pathway, which is involved in tissue growth and differentiation, is more active in newborns, resulting in a less effective barrier function at the gut and choroid plexus levels in neonates.

"In this study, we show how two factors associated with infancy - the immaturity of the gut microbiota and the growth of gut and choroidal epithelial tissues - play a role in the susceptibility of newborn infants to meningitis caused by GBS, at all stages of infection from gut colonization to dissemination in the brain," explains Marc Lecuit (university professor/hospital practitioner, Université de Paris and Necker-Enfants Malades Hospital), head of the Biology of Infection Unit at the Institut Pasteur and Inserm and last author of the study.

The results of this research illustrate the importance of the microbiota and its critical role in protecting against infection.

Credit: 
Institut Pasteur

Genetics: Biosynthesis pathway of a new DNA nucleobase elucidated

DNA is composed of nucleobases represented by the letters A, T, G and C. They form the basis of the genetic code and are present in all living beings. But in a bacteriophage, another base, represented by the letter Z, exists. This exception, the only one observed to date, has long remained a mystery. Scientists from the Institut Pasteur and the CNRS, in collaboration with the CEA, have now elucidated the biosynthesis pathway of this base. This work has been published in the April 30th, 2021 issue of Science.

DNA, or deoxyribonucleic acid, is a molecule that serves as the medium for storing genetic information in all living organisms. It is a double helix characterized by alternating purine nucleobases (adenine and guanine) and pyrimidine nucleobases (cytosine and thymine). The bases of each DNA strand are located at the center of the helix and are bonded together, thereby linking the two DNA strands: adenine forms two hydrogen bonds with thymine (A-T), and guanine forms three hydrogen bonds with cytosine (G-C). This applies to all living beings, with one exception.

Cyanophage S-2L, an exception to conventional genetics

Cyanophage S-2L is a bacteriophage, in other words a virus that infects bacteria. In this phage, adenine is completely replaced by another base, 2-aminoadenine (represented by the letter Z). The latter forms three hydrogen bonds with thymine (Z-T), instead of the usual two bonds between adenine and thymine. This higher number of bonds increases the stability of the DNA at high temperatures and changes its conformation, meaning that the DNA is less well recognized by proteins and small molecules.

2-aminoadenine biosynthesis pathway elucidated

Since it was discovered in 1977, cyanophage S-2L has been the only known exception, and the biosynthesis pathway of 2-aminoadenine has remained unknown. Scientists from the Institut Pasteur and the CNRS, in collaboration with the CEA, recently elucidated this biosynthesis pathway and demonstrated its enzymatic origins. They achieved this by identifying a homolog of the known enzyme succinoadenylate synthase (PurA) in the genome of cyanophage S-2L. A phylogenetic analysis of this enzyme family revealed a link between the homolog, known as PurZ, and the PurA enzyme in archaea. This indicates that the homolog is an ancient enzyme that probably conferred an evolutionary advantage. The research was carried out using the Institut Pasteur's Crystallography Platform.

The new Z-T base pair and the discovery of the biosynthesis pathway show that new bases can be enzymatically incorporated into genetic material. This increases the number of coding bases in DNA, paving the way for the development of synthetic genetic biopolymers.

Credit: 
Institut Pasteur

The role of race and scientific trust on support for COVID-19 social distancing

image: Measuring latent scientific trust in the mass public.
A: Latent scientific trust as measured by factor analysis. B: Distribution of latent scientific trust by race.

Image: 
Kazemian et al, 2021, PLOS ONE (CC-BY 4.0)

Trust in science - but not trust in politicians or the media - significantly raises support across US racial groups for COVID-19 social distancing.

Credit: 
PLOS

New 3D printable phase-changing composites can regulate temperatures inside buildings

image: New phase-change material composites can regulate ambient temperatures inside buildings

Image: 
Texas A&M University College of Engineering

Changing climate patterns have left millions of people vulnerable to weather extremes. As temperature fluctuations become more commonplace around the world, conventional power-guzzling cooling and heating systems need a more innovative, energy-efficient alternative that can, in turn, lessen the burden on already struggling power grids.

In a new study, researchers at Texas A&M University have created novel 3D printable phase-change material (PCM) composites that can regulate ambient temperatures inside buildings using a simpler, more cost-effective manufacturing process. Furthermore, these composites can be added to building materials, like paint, or 3D printed as decorative home accents to seamlessly integrate into different indoor environments.

"The ability to integrate phase-change materials into building materials using a scalable method opens opportunities to produce more passive temperature regulation in both new builds and already existing structures," said Dr. Emily Pentzer, associate professor in the Department of Materials Science and Engineering and the Department of Chemistry.

This study was published in the June issue of the journal Matter.

Heating, ventilation and air conditioning (HVAC) systems are the most commonly used methods to regulate temperatures in residential and commercial establishments. However, these systems guzzle a lot of energy. Furthermore, they use refrigerants, which are potent greenhouse gases, for generating cool, dry air. These ongoing issues with HVAC systems have triggered research into alternative materials and technologies that require less energy to function and can regulate temperature as effectively as HVAC systems.

One class of materials that has gained a lot of interest for temperature regulation is phase-change materials. As the name suggests, these compounds change their physical state depending on the temperature of their environment: they convert from solid to liquid as they absorb heat, and back from liquid to solid as they release it. Thus, unlike HVAC systems that rely solely on external power to heat and cool, these materials are passive components, requiring no external electricity to regulate temperature.
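
The heat storage described here is latent-heat storage, and the underlying arithmetic is simple (heat absorbed = mass × latent heat of fusion). The sketch below uses an assumed, textbook-range latent heat for paraffin (~200 J/g) and a hypothetical 1 kg printed panel; only the 70% PCM loading, reported later in the article, comes from the study.

```python
# Back-of-the-envelope latent-heat storage for a paraffin-based PCM panel.
# Assumptions (not from the article): latent heat of fusion of paraffin
# ~200 J/g and a 1 kg printed panel. The 70% PCM loading is the figure
# reported for the printed composites.
LATENT_HEAT_J_PER_G = 200.0   # assumed typical paraffin value
panel_mass_g = 1000.0         # hypothetical 1 kg panel
pcm_fraction = 0.70           # PCM share of the printed structure

pcm_mass_g = panel_mass_g * pcm_fraction
heat_absorbed_kJ = pcm_mass_g * LATENT_HEAT_J_PER_G / 1000.0
print(f"{pcm_mass_g:.0f} g of PCM absorbs ~{heat_absorbed_kJ:.0f} kJ while melting")
# The same ~140 kJ is released as the wax re-solidifies, with no
# electricity input in either direction.
```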

The traditional approach to manufacturing PCM building materials requires forming a separate shell around each PCM particle, like a cup to hold water, then adding these newly encased PCMs to building materials. However, finding building materials compatible with both the PCM and its shell has been a challenge. In addition, this conventional method also decreases the number of PCM particles that can be incorporated into building materials.

"Imagine filling a pot with eggs and water," said Ciera Cipriani, NASA Space Technology Graduate Research Fellow in the Department of Materials Science and Engineering. "If each egg has to be placed in an individual container to be hard-boiled, fewer eggs will fit in the pot. By removing the plastic containers, the veritable shell in our research, more eggs, or PCMs, can occupy a greater volume by packing closer together within the water/resin."

To overcome these challenges, past studies have shown that when phase-changing paraffin wax is mixed with liquid resin, the resin can act as both the shell and the building material. This method locks the PCM particles inside their individual pockets, allowing them to safely undergo a phase change and manage thermal energy without leakage.

Similarly, Pentzer and her team first combined light-sensitive liquid resins with a phase-changing paraffin wax powder to create a new 3D printable ink composite, enhancing the production process for building materials containing PCMs and eliminating several steps, including encapsulation.

The resin/PCM mixture is soft, paste-like and malleable, making it ideal for 3D printing but not for building structures. Because the resin is light-sensitive, the team cured the 3D printed paste with ultraviolet light to solidify it, making it suitable for real-world applications.

Additionally, they found that the phase-changing wax embedded within the resin was not affected by the ultraviolet light and made up 70% of the printed structure, a higher loading than most PCM materials currently used in industry.

Next, they tested the thermoregulation of their phase-changing composites by 3D printing a small-scale house-shaped model and measuring the temperature inside it when it was placed in an oven. Their analysis showed that the temperature inside the model differed from the outside temperature by 40% during both heating and cooling thermal cycles, compared to models made from traditional materials.

In the future, the researchers will experiment with different phase-change materials apart from paraffin wax so that these composites can operate at broader temperature ranges and manage more thermal energy during a given cycle.

"We're excited about the potential of our material to keep buildings comfortable while reducing energy consumption," said Dr. Peiran Wei, research scientist in the Department of Materials Science and Engineering and the Soft Matter Facility. "We can combine multiple PCMs with different melting temperatures and precisely distribute them into various areas of a single printed object to function throughout all four seasons and across the globe."

Credit: 
Texas A&M University

Study model explores impact of police action on population health

image: Conceptual model depicting the relationship between policing and population health.

Image: 
University of Washington

A specific police action, an arrest or a shooting, has an immediate and direct effect on the individuals involved, but how far and wide do the reverberations of that action spread through the community? What are the health consequences for a specific, though not necessarily geographically defined, population?

The authors of a new UW-led study looking into these questions write that because law enforcement directly interacts with a large number of people, "policing may be a conspicuous yet not-well understood driver of population health."

Understanding how law enforcement impacts the mental, physical, social and structural health and wellbeing of a community is a complex challenge, involving many academic and research disciplines such as criminology, sociology, psychology, public health and research into social justice, the environment, economics and history.

"We needed a map for how to think about the complex issues at the intersection of policing and health," said lead author Maayan Simckes, a recent doctoral graduate from UW's Department of Epidemiology who worked on this study as part of her dissertation.

So, Simckes said, she set out to create a conceptual model depicting the complex relationship between policing and population health and assembled an interdisciplinary team of researchers to collaborate.

"This model shows how different types of encounters with policing can affect population health at multiple levels, through different pathways, and that factors like community characteristics and state and local policy can play a role," said Simckes, who currently works for the Washington State Department of Health.

The study, published in early June in the journal Social Science & Medicine, walks through the various factors that may help explain the health impacts of policing by synthesizing the published research across several disciplines.

"This study provides a useful tool to researchers studying policing and population health across many different disciplines. It has the potential to help guide research on the critical topic of policing and health for many years to come," said senior author Anjum Hajat, an associate professor in the UW Department of Epidemiology

For example, the study points out when considering individual-level effects that "after physical injury and death, mental health may be the issue most frequently discussed in the context of police-community interaction ... One U.S. study found that among men, anxiety symptoms were significantly associated with frequency of police stops and perception of the intrusiveness of the encounter."

Among the many other research examples explored in the new model, the researchers also examine the cyclic nature of policing and population health. They point out that police stops tend to cluster in disadvantaged communities and "saturating these communities with invasive tactics may lead to more concentrated crime." Consequently, it may be "impossible" to determine whether police practices caused a neighborhood to experience more crime or if those practices were in response to crime. However, the model's aim is to capture these complex "bidirectional" relationships.

"Our model underscores the importance of reforming policing practices and policies to ensure they effectively promote population wellbeing at all levels," said Simckes. "I hope this study ignites more dialogue and action around the roles and responsibilities of those in higher education and in clinical and public-health professions for advancing and promoting social justice and equity in our communities."

Credit: 
University of Washington

How otters' muscles enable their cold, aquatic life

image: Otter floating on water's surface

Image: 
Tray Wright/Texas A&M University (Image obtained under USFWS Marine Mammal Permit No. MA-043219 to R. Davis)

Sea otters are the smallest marine mammals. For these cold-water dwellers, staying warm is a top priority, but their dense fur only goes so far. We have long known that their high metabolism generates the heat they need to survive, but we didn't know how they were producing that heat -- until now.

Researchers recently discovered that sea otters' muscles use enough energy through leak respiration -- energy released as heat rather than used to perform work -- to account for their high metabolic rate. The finding explains how sea otters survive in cold water.

Physiologist Tray Wright, research assistant professor in Texas A&M University's College of Education & Human Development, conducted the study along with colleagues Melinda Sheffield-Moore, an expert on human skeletal muscle metabolism, Randall Davis and Heidi Pearson, marine mammal ecology experts, and Michael Murray, veterinarian at the Monterey Bay Aquarium. Their findings were published in the journal Science.

The team collected skeletal muscle samples from both northern and southern sea otters of varying ages and body masses. They measured respiratory capacity, the rate at which the muscle can use oxygen, finding that the energy produced by muscle is good for more than just movement.

"You mostly think of muscle as doing work to move the body," Wright said. "When muscles are active, the energy they use for movement also generates heat."

Wright said that because muscle makes up a large portion of body mass, often 40-50% in mammals, it can warm the body up quickly when it is active.

"Muscles can also generate heat without doing work to move by using a metabolic short circuit known as leak respiration," Wright said.

A form of muscle-generated heat we are more familiar with is shivering. Wright said this involuntary movement allows the body to activate muscle by contracting to generate heat, while leak respiration can do the same without the tremors.

Wright said one of the most surprising findings was that the muscle of even newborn sea otters had a metabolic rate just as high as that of adults.

"This really highlights how heat production seems to be the driving factor in determining the metabolic ability of muscle in these animals," Wright said.

Sea otters require a lot of energy to live in cold water. They eat up to 25% of their body mass per day to keep up with their daily activities and fuel their high metabolism.

"They eat a lot of seafood, including crabs and clams that are popular with humans, which can cause conflict with fisheries in some areas," Wright said.

Wright said we know how critical muscle is to animals for activities like hunting, avoiding predators and finding mates, but this research highlights how other functions of muscle are also critical to animal survival and ecology.

"Regulating tissue metabolism is also an active area of research in the battle to prevent obesity," Wright said. "These animals may give us clues into how metabolism can be manipulated in healthy humans and those with diseases where muscle metabolism is affected."

As for future research, Wright said there is still a lot we don't know about otters, including how they regulate their muscle metabolism to turn up the heat on demand.

"This is really just the first look into the muscle of these animals, and we don't know if all the various muscle types are the same, or if other organs might also have an elevated ability to generate heat," Wright said.

Credit: 
Texas A&M University

To splice or not to splice...

image: Karan Bedi explains the complex data analysis pipeline. Professor Ljungman is on the left

Image: 
Elisabeth Paymal

In an article published in the journal RNA, Karan Bedi, a bioinformatician in Mats Ljungman's lab in the Department of Radiation Oncology at the University of Michigan Medical School, investigated the efficiency of splicing across different human cell types. The results were surprising in that the splicing process appears to be quite inefficient, leaving most intronic sequences untouched as the transcripts are being synthesized. The study also reports variable patterns between the different introns within a gene and across cell lines, further highlighting the complexity of how newly synthesized transcripts are processed into mature mRNAs.

Several processes take place to produce mature mRNAs that can then be exported to the cytoplasm and used as templates for protein synthesis. After transcription is initiated and elongation proceeds to produce the pre-mRNA, introns need to be spliced out and the protein-coding exons connected. At first, the pre-mRNA is made as a complementary copy of the DNA, with slightly different chemistry, and includes all the introns. Then the spliceosome machinery, made up of about 300 proteins, assembles "co-transcriptionally" at each intron junction as the RNA emerges from its synthesis. "Splicing is an incredibly complex process because of the great number of proteins involved that repeatedly need to assemble and disassemble at each junction. Also, the speed at which transcription generates RNA is quite fast, so the splicing process has to be well organized. Many steps can go wrong and lead to various pathologies, which is why it is so important to have a better understanding of how splicing happens and how it is regulated," said Bedi.

The team started their study by analyzing the large set of Bru-seq data that the Ljungman lab has accumulated over the last 10 years and settled on six cell lines that had deep enough data for a comprehensive analysis of splicing efficiencies genome-wide. The Bru-seq technology was developed in the Ljungman lab and is based on the selective capturing of newly synthesized RNA tagged with bromouridine. Once collected, the nascent Bru-labeled RNAs were sequenced at the University of Michigan Advanced Genomics Core, and Bedi used a custom-designed computational analysis pipeline to analyze the splicing efficiencies across these data sets.

"You have to be able to use samples with a sufficient read-depth to analyze the splicing efficiencies at the junctions between introns and exons," explained Bedi. To do so, he combined sequencing data from many experiments.

In addition to the roughly 300 proteins that remove the introns, other regulating factors participate in the splicing process. The authors identified a number of RNA-binding proteins that have been shown to bind to variable degrees to introns, exons, or the junctions between them.

Bedi concluded his interview with a puzzling question open for investigation: to make a protein, the cell needs mRNAs that are properly spliced. Why, then, would the cell waste so much energy making RNAs that are imprecisely spliced? Ljungman points out that the inclusion of introns in our genes has served an important purpose during the evolution of higher eukaryotes in that it allowed for increased protein diversity through the "re-shuffling" of the coding exon sequences. "If splicing was fully accurate and efficient every time, the diversity of protein-coding sequences would be much lower and thus, we believe, evolution must have shaped a certain degree of 'sloppiness' into the splicing process. Our study is the most comprehensive study of co-transcriptional splicing genome-wide to date and it clearly documents the variability of the splicing process across genes and cell types," added Ljungman.

Splicing is an area of research where much is still to be explored and discovered. Fundamental biology questions and technology development go hand in hand to find answers that contribute to the understanding of the splicing process and for the development of cures for various diseases caused by aberrant splicing.

Mats Ljungman is Professor of Radiation Oncology and of Environmental Health Sciences, and director of the Bru-Seq lab. He is also co-director of the University of Michigan Center for RNA Biomedicine, of which the Bru-Seq lab is one of the two core facilities. The lab's areas of expertise are RNA isolation, cDNA library preparation, and sequencing data analysis. The Bru-Seq lab serves researchers from the University of Michigan and other institutions in the United States and around the world.

Credit: 
University of Michigan

New Alzheimer's treatment targets identified

image: Carlos Cruchaga, Ph.D., Washington University School of Medicine

Image: 
Washington University School of Medicine

A research team at Washington University School of Medicine in St. Louis has identified potential new treatment targets for Alzheimer's disease, as well as existing drugs that have therapeutic potential against these targets.

The potential targets are defective proteins that lead to the buildup of amyloid in the brain, contributing to the onset of problems with memory and thinking that are the hallmark of Alzheimer's. The 15 existing drugs identified by the researchers have been approved by the Food and Drug Administration (FDA) for other purposes, providing the possibility of clinical trials that could begin sooner than is typical, according to the researchers.

In addition, the experiments yielded seven drugs that may be useful for treating faulty proteins linked to Parkinson's disease, six for stroke and one for amyotrophic lateral sclerosis (ALS). The new study, funded by the National Institute on Aging of the National Institutes of Health (NIH), is published July 8 in the journal Nature Neuroscience.

Scientists have worked for decades to develop treatments for Alzheimer's by targeting genes rooted in the disease process but have had little success. That approach has led to several dead ends because many of those genes don't fundamentally alter proteins at work in the brain. The new study takes a different approach, by focusing on proteins in the brain, and other tissues, whose function has been altered.

"In this study, we used human samples and the latest technologies to better understand the biology of Alzheimer's disease," said principal investigator Carlos Cruchaga, PhD, the Reuben Morriss III Professor of Neurology and a professor of psychiatry. "Using Alzheimer's disease samples, we've been able to identify new genes, druggable targets and FDA-approved compounds that interact with those targets to potentially slow or reverse the progress of Alzheimer's."

The scientists focused on protein levels in the brain, cerebrospinal fluid and blood plasma of people with and without Alzheimer's disease. Some of the proteins were made by genes previously linked to Alzheimer's risk, while others were made by genes not previously connected to the disease. After identifying the proteins, the researchers compared their results to several databases of existing drugs that affect those proteins.

"They are FDA-approved, and all of the safety data on the drugs is available," said Cruchaga, . "With what is already known about the safety of these drugs, we may be able to jump directly to clinical trials."

Cruchaga said the team's focus on protein levels in key tissues has certain advantages over prior efforts to identify genes linked to Alzheimer's.

"The classic genetic studies of Alzheimer's have attempted to correlate genetic mutations with disease, but we know that genes carry the instructions to build proteins and that diseases such as Alzheimer's occur when those protein levels get too high or too low," Cruchaga explained. "To understand the biology of Alzheimer's disease, we should look at proteins rather than only at genes."

As an example, Cruchaga pointed to the APOE gene, which was first linked to Alzheimer's risk decades ago. But even after all this time, it is still not clear how that gene contributes to the disease.

"In this study, we were able to see that APOE alters levels of several proteins in brain tissue and CSF," Cruchaga said. "We also saw changes in proteins linked to another gene called TREM2 that was associated with Alzheimer's risk more recently. Understanding how the protein levels are affected by these risk genes is helping us to identify pathways that lead to disease."

Past research has helped identify about 50 genetic signals linked to Alzheimer's, but only a handful of the genes responsible for those signals have been found. Cruchaga said that focusing on protein levels in tissue may help reveal what's happening with the other 40-plus genetic signals that appear to be connected to Alzheimer's risk.

The researchers analyzed proteins and genes from brain tissue, cerebrospinal fluid and blood plasma from samples gathered from 1,537 people in the U.S. The samples are stored at the Knight Alzheimer's Disease Research Center at Washington University. Half of the samples came from people with a clinical diagnosis of Alzheimer's disease; the other half came from study participants who are considered cognitively normal.

The researchers measured protein levels in the samples from the brain, CSF and plasma. Using statistical techniques, they then connected the protein levels to disease. There were 274 proteins linked to disease in study subjects' CSF, 127 in blood plasma and 32 in brain tissue.

They applied those findings and machine learning techniques to distinguish among the protein differences and zero in on some of the proteins that contribute to damage that leads to Alzheimer's.
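
The statistical details are not spelled out in this article, but the general shape of such an analysis - test each protein for a case-control difference, then correct for testing hundreds of proteins at once - can be sketched as follows. All data here are simulated, and the sketch illustrates the approach, not the study's actual code.

```python
# Illustrative sketch of per-protein case-control testing with a
# Benjamini-Hochberg false-discovery-rate correction. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_controls, n_proteins = 100, 100, 500
controls = rng.normal(0.0, 1.0, size=(n_controls, n_proteins))
cases = rng.normal(0.0, 1.0, size=(n_cases, n_proteins))
cases[:, :10] += 0.8   # simulate 10 truly disease-linked proteins

# Welch's t-test for each protein (column), then BH correction at FDR 0.05.
pvals = stats.ttest_ind(cases, controls, equal_var=False).pvalue
m = n_proteins
ranks = np.arange(1, m + 1)
passes = np.sort(pvals) <= 0.05 * ranks / m
n_hits = int(ranks[passes].max()) if passes.any() else 0
print(f"{n_hits} proteins pass FDR < 0.05")  # ~10 with this simulation
```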

"We have targets -- although I'm not saying all of these targets are going to work or that all of the compounds we identified are going to stop the progress of the disease -- but we have a real hypothesis," Cruchaga said. "And we expect it may be possible to move from these genetic studies into real clinical trials quickly. That's a big jump."

Credit: 
Washington University School of Medicine

Study shows that antibodies generated by CoronaVac COVID-19 vaccine are less effective against the P.1 Brazil variant

*Note: this paper is being presented at the European Congress of Clinical Microbiology & Infectious Diseases (ECCMID) and is being published in The Lancet Microbe. Please credit both the congress and the journal in your stories*

A new study presented at this year's European Congress of Clinical Microbiology & Infectious Diseases (ECCMID) and published in The Lancet Microbe shows that antibodies generated by CoronaVac, an inactivated COVID-19 vaccine, work less well against the P.1 Brazil (Gamma) variant.

It also suggests that the P.1 variant may be able to reinfect individuals who previously had COVID-19.

The SARS-CoV-2 virus spike protein is the primary target of neutralising antibodies - antibodies that block the entry of the virus into cells - including those induced by vaccines. The emergence of variants with multiple mutations in this protein has raised concerns that neutralising antibody responses, and so the effectiveness of vaccination programmes, could be compromised.

The P.1 variant was discovered in Manaus, Brazil, in early January 2021 and has 15 unique mutations. Previous studies have found evidence that it can escape neutralisation by antibodies. However, most of these studies used modified viruses and there is little information on the ability of the wild-type P.1 variant, with a complete set of mutations, to evade neutralising antibodies in individuals previously infected with COVID-19 or vaccinated with an inactivated vaccine.

Dr Jose Luiz Proença-Módena, Laboratory of Emerging Viruses, University of Campinas, Brazil, and colleagues, studied the sensitivity of the P.1 variant to neutralising antibodies present in the serum of 53 individuals who had been vaccinated and 21 who had been previously infected with SARS-CoV-2.

The results were compared with those for a B lineage virus, one of the dominant strains in Brazil before the emergence of the P.1 variant.

The vaccinated group was made up of 18 individuals who had received a single dose and 20 who had received two doses of the CoronaVac inactivated vaccine during the Brazilian vaccination programme, plus 15 participants who had received two doses during a trial.

CoronaVac is one of the main vaccines in Brazil's national immunisation programme and has been approved for emergency use in Brazil, China, Colombia, Indonesia, Mexico and Turkey. Previous research from Brazil has shown it to be 50.7% effective at preventing symptomatic infections.

All the blood samples from individuals with past infections had high concentrations of SARS-CoV-2-specific antibodies. Tests showed the P.1 variant to be less sensitive to these antibodies than the B lineage virus. Antibody concentrations needed to be around nine times higher to neutralise the P.1 variant than the B lineage virus.
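
To make the "nine times higher" figure concrete: neutralisation is typically reported as an NT50 titre, the serum dilution that blocks half the virus, and the fold-drop is the ratio of titres against each virus. The titre values in this sketch are invented for illustration; only the roughly nine-fold figure comes from the article.

```python
# Illustrative fold-drop calculation from hypothetical NT50 titres.
import math

nt50_b_lineage = [640, 1280, 320, 960]  # invented titres vs B lineage virus
nt50_p1 = [80, 160, 40, 80]             # invented titres vs P.1 variant

fold_drops = [b / p for b, p in zip(nt50_b_lineage, nt50_p1)]
# Titres are ratio-scaled, so the geometric mean is the usual summary.
geo_mean = math.exp(sum(map(math.log, fold_drops)) / len(fold_drops))
print(f"geometric mean fold reduction: {geo_mean:.1f}x")  # ~9x here
```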

Antibodies from vaccinated individuals were also less efficient at neutralising the P.1 variant than the B lineage virus. The antibodies from almost all of the people who had one dose of the vaccine (20-23 days earlier) as part of the immunisation programme and all of those who had two doses in the trial (134-260 days earlier) had no detectable effect on the P.1 variant.

In contrast, the P.1 variant was still sensitive to the antibodies in the plasma of those who had two doses in the immunisation programme (second dose 17-38 days earlier) but to a lesser extent than the B lineage virus.

The authors say: "This study shows that antibodies present in the plasma of people with a history of COVID-19 or individuals vaccinated with CoronaVac were less efficient at neutralising P.1 variant isolates than a B lineage isolate.

"Neutralising antibodies are an important component of the immune response against SARS-CoV-2. Therefore, the capacity of the P.1 variant to evade antibodies present in the plasma of CoronaVac-immunised individuals suggests that the virus can potentially circulate in vaccinated individuals - even in areas with high vaccination rates."

They add: "Our results also suggest that the P.1 variant can escape from neutralising antibody responses generated by previous SARS-CoV-2 infection and thus reinfection might be possible.

"Consequently, continued and enhanced genetic surveillance of SARS-CoV-2 variants worldwide, paired with plasma neutralising antibody assays is required to guide updates of immunisation programmes.

"However, a phase 3 clinical trial showed that CoronaVac can protect against severe COVID-19 and death. Therefore, neutralising antibodies might not be the only contributing factor - the T-cell response may also play an important role in reducing disease severity."

Credit: 
European Society of Clinical Microbiology and Infectious Diseases

When bosses are abusive, how employees interpret their motives makes a difference: study

A new UBC Sauder School of Business study shows that depending on how employees understand their boss's motivation, they can feel anger or guilt and, consequently, react differently to abusive supervision.

Former Apple CEO Steve Jobs was a famously harsh corporate leader, one who pushed his employees to extremes to achieve the company's lofty aims.

But while many aspiring leaders still believe that the "tough love" approach is effective, a new study from UBC Sauder shows that, even when abusive leadership is meant to push employees to new heights, it can land them in deep lows in the long term.

Abusive supervision -- which includes behaviours like yelling at employees, giving them the silent treatment, or putting them down in front of their coworkers -- has long been linked with psychological distress, increased turnover and decreased performance.

But a key question hadn't been properly examined: do employees respond differently when their supervisor's abuse is motivated by different reasons?

For the study, titled The Whiplash Effect: The (Moderating) Role of Attributed Motives in Emotional and Behavioral Reactions to Abusive Supervision, researchers conducted three studies on three continents.

For the first, which involved 1,000 soldiers and officers in the Chinese military, subordinates filled out surveys about the supervision they experienced, the emotions they felt, and how they responded.

The second was a laboratory experiment that involved 156 students and employees at a large American university. There, participants were given different roles as subordinates in a consulting firm, and were subjected to different forms of supervision -- some abusive and some non-abusive -- and were given hints about their supervisors' motivations.

They were also given the opportunity to participate in deviant behaviours against the supervisor, or engage in more positive "organizational citizenship behaviours," or OCBs (helpful actions that go beyond an employee's contract, such as assisting a co-worker with a project, or participating in workplace charity drives).

A third study had 325 employees and supervisors at a Swedish luxury car company fill out daily surveys for three weeks -- for the subordinates, about the abusive supervision they experienced and the emotions they felt, and for the supervisors, about the OCBs and deviant behaviours they observed.

Across all three studies, the researchers found that when employees think their supervisors' abusive actions are motivated by a desire to inflict harm, they are more likely to feel angry.

When subordinates believe their leaders are prodding employees to improve performance, however, they are more likely to feel guilt.

"When you feel like your supervisor is pushing you really hard, it's abusive, and you feel angry. But when they want to motivate you and improve your performance, employees have a strong feeling of guilt," explains UBC Sauder Assistant Professor Lingtao Yu (he, him, his), who named the study after the Oscar-winning film Whiplash, which follows an abusive band teacher and a student he's pushing to extremes.

"They think, 'Maybe there is a gap between what I do and what they expect. Maybe there's room for me to improve.'"

Those different emotions, in turn, lead to different behaviours. Employees who feel their bosses are "out to get them" are more likely to engage in devious or destructive behaviours and less likely to engage in more positive organizational citizenship behaviours, or OCBs.

Those who feel their leaders are pushing them to do better are less likely to act deviously and more drawn to positive corporate behaviours.

"People feel there's something they've done, or that they haven't done enough, so it's not entirely attributed to the other person. They may take some responsibility," explains Professor Yu, who coauthored the study with University of Minnesota Professor Michelle Duffy.

"So, guilt will actually trigger more prosocial behaviors, because the employee wants to do something to rebuild the relationship with the supervisor."

The findings are especially important given that, according to previous research, a third of U.S. employees are estimated to experience abusive supervision, and 45 percent of Europeans can recall an instance when they were either the target of supervisory abuse or observed it.

The study also found people's feelings of guilt don't last, so Professor Yu emphasizes that while the results-driven form of abusive supervision can sometimes have short-term benefits, in the long run it simply doesn't pay -- especially since abusive leadership can cost companies millions in lawsuits, health expenses, and productivity loss.

"Even if you have good intentions, you still want to be more mindful about your leadership behaviour -- and there are many other tools you can use to stimulate your employees' performance," he says. "Abusive leadership should not be the one you choose."

Credit: 
University of British Columbia

Depression, suicidal thoughts plague ailing coal miners, study finds

image: "This study highlights the unrecognized crisis of mental illness in miners that warrants urgent attention, resources and expanded care," said researcher Drew Harris, MD, a pulmonary medicine expert at UVA Health. Harris serves as the medical director of the Black Lung Clinic at Southwest Virginia's Stone Mountain Health Services, the only federally funded black lung clinic in Virginia.

Image: 
UVA Health

More than a third of coal miners and former coal miners suffering from black lung disease struggle with depression, and more than one in 10 has recently considered suicide, a new study finds.

The study is believed to be the first to examine mental-health issues in a large population of coal miners in the United States. Based on the troubling results, the researchers are calling for more mental health resources and treatment for current and former miners. They also are urging further study of potential contributors to the problem, including social determinants of health, substance use and workplace safety.

"Although coal mining is on the decline, the rates of black lung in Southwest Virginia continue to increase. Coal miners in Central Appalachia face disparities in health related to a range of complex social, economic, occupational and behavioral factors," said researcher Drew Harris, MD, a pulmonary medicine expert at UVA Health. "This study highlights the unrecognized crisis of mental illness in miners that warrants urgent attention, resources and expanded care."

Miners' Mental Health

Harris serves as the medical director of the Black Lung Clinic at Southwest Virginia's Stone Mountain Health Services, the only federally funded black lung clinic in Virginia. Black lung is a progressive lung illness caused by inhaling toxic coal and rock dust within coal mines. The dust literally blackens the insides of the lungs and leaves patients struggling to breathe. This devastating disease has few treatment options and is increasingly being diagnosed in Central Appalachian coal miners: Out of more than 1,400 coal miners X-rayed in the last year at Stone Mountain, more than 15% had progressive massive fibrosis, the most severe form of black lung.

To gauge the mental wellbeing of black-lung patients, Harris and his colleagues reviewed data collected at the clinic since 2018 assessing patients for anxiety, depression and post-traumatic stress disorder (PTSD).

More than 2,800 miners voluntarily completed a mental-health survey. The average age was 66; 99.6% were white; and 99.7% were male.

Of the participants:

883 patients, or 37.4%, reported symptoms consistent with major depressive disorder.

1,005 patients, or 38.9%, had clinically significant anxiety.

639 patients, or 26.2%, had symptoms of PTSD.

295 patients, or 11.4%, had considered suicide in the last year. (For comparison, this figure is only 2.9% among Virginia men overall.)

"These rates of mental illness far exceeded those documented in coal mining populations internationally," the researchers write in a new scientific paper outlining their findings.

Rates skewed highest among sicker patients who required supplemental oxygen to help them breathe. Among that group, 47.7% reported anxiety; 48.5% reported depression; and 15.9% reported considering suicide.

The researchers note that depression and other mental-health issues not only affect patients' quality of life but can reduce the likelihood they will stick with their medications.

"The rates of mental illness identified in this large population of U.S. coal miners is shocking," Harris said. "Improved screening and treatment of mental illness in this population is an urgent, unmet need that warrants urgent action."

Credit: 
University of Virginia Health System

Ancient ostrich eggshell reveals new evidence of extreme climate change thousands of years ago

image: mandible of small antelope in calcrete

Image: 
Philip Kiberd

Evidence from an ancient eggshell has revealed important new information about the extreme climate change faced by human early ancestors.

The research shows that parts of the interior of South Africa that today are dry and sparsely populated were once wetland and grassland, 250,000 to 350,000 years ago, at a key time in human evolution.

Philip Kiberd and Dr Alex Pryor, from the University of Exeter, studied isotopes and amino acids from ostrich eggshell fragments excavated at the early Middle Stone Age site of Bundu Farm, in the upper Karoo region of the Northern Cape. It is one of very few archaeological sites in southern Africa dated to 250,000 to 350,000 years ago, a time period associated with the earliest appearance of communities with the genetic signatures of Homo sapiens.

This new research supports other evidence, from fossil animal bones, that past communities in the region lived among grazing herds of wildebeest, zebra, small antelope, hippos and baboons, as well as the extinct species Megalotragus priscus and Equus capensis, and hunted these animals alongside carnivores such as hyenas and lions.

After this period of equable climate and environment, the eggshell evidence - and previous finds from the site - suggests that after 200,000 years ago cooler and wetter climates gave way to increasing aridity - a process of alternating wet and dry climates that is recognised as driving the turnover and evolution of species, including Homo sapiens.

The study, published in the South African Archaeological Bulletin, shows that extracting isotopic data from ostrich eggshells, which are commonly found at archaeological sites in southern Africa, is a viable option for open-air sites more than 200,000 years old. The technique, which involves grinding a small part of the eggshell to a powder, allows experts to analyse and date the shell, which in turn gives a fix on the climate and environment in the past.

Using eggshell to investigate past climates is possible as ostriches eat the freshest leaves of shrubs and grasses available in their environment, meaning eggshell composition reflects their diet. As eggs are laid in the breeding season across a short window, the information found in ostrich eggshell provides a picture of the prevailing environment and climate for a precise period in time.

Bundu Farm, where the eggshell was recovered, is a remote farm 50km from the nearest small town, sitting within a dry semi-desert environment that supports a small flock of sheep. The site was first excavated in the late 1990s, with material stored at the McGregor Museum, Kimberley (MMK). The study helps fill a gap in our knowledge for this part of South Africa and firmly puts the Bundu Farm site on the map.

Philip Kiberd, who led the study, said: "This part of South Africa is now extremely arid, but thousands of years ago it would have been an Eden-like landscape with lakes and rivers and abundant species of flora and fauna. Our analysis of the ostrich eggshell helps us to better understand the environments in which our ancestors were evolving and provides an important context in which to interpret the behaviours and adaptations of people in the past and how this ultimately led to the evolution of our species."

Credit: 
University of Exeter

Experts recommend a varied and moderate consumption of sushi limiting quantities of tuna

Eight pieces of salmon-based maki, nigiri or sashimi, or of maki containing unagi (eel), is the safest combination of sushi for adult and adolescent populations. That is one of the findings of TecnATox (Centre for Environmental, Food and Toxicological Technology), a joint research group from the URV and the Pere Virgili Health Research Institute (IISPV), which has analysed the presence of arsenic and various heavy metals in sushi. The consumption of sushi has increased significantly since the start of the 21st century, as has the number of restaurants offering it throughout the region. Although eating fish is recommended because of its high nutritional value, it can also lead to exposure to contaminants, such as heavy metals. Likewise, rice is a food that provides many nutrients and fibre and is low in fat, but it too can be a source of pollutants such as arsenic.

The research group analysed the concentrations of various toxic elements (cadmium, nickel, lead, mercury, inorganic arsenic and methylmercury) and iodine in a hundred pieces of sushi, specifically those known as sashimi (raw fish), maki (a seaweed roll stuffed with rice, raw fish or other ingredients) and nigiri (balls of rice with fish or seafood on top). The researchers also calculated dietary exposure to all of these contaminants in various population groups (infants, adolescents and adults) and evaluated the risks to health.

The main results show a significantly higher concentration of inorganic arsenic in maki and nigiri, compared to sashimi, a finding associated with the presence of rice. They also show higher levels of mercury and methylmercury in sushi that contains tuna due to the bioaccumulation and biomagnification of this metal.

The research group also wanted to determine how consumption of this foodstuff varied in different groups of the population. They examined an average intake of 8 pieces of sushi in adults and adolescents and an average intake of 3 pieces in infants and found an increase in exposure to nickel and lead, although this remained within safe established levels. "The most worrying finding concerns methylmercury, a highly neurotoxic compound, for which there was an estimated exposure of 0.242 μg per kg of bodyweight in adolescents, a value higher than the safe daily limit established by the European Food Safety Authority (EFSA)", explained Montse Marquès, one of the researchers who worked on the study. By the same token, although not as high as in adolescents, the exposure levels calculated for adults and infants also suggest a relatively high intake of methylmercury.
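
The arithmetic behind such an estimate is simple: pieces eaten × methylmercury per piece ÷ body weight, compared against the EFSA tolerable intake (1.3 μg per kg of body weight per week for methylmercury). In the sketch below the per-piece content and body weight are assumptions chosen for illustration; they are not values reported by the study.

```python
# Sketch of the dietary-exposure arithmetic for methylmercury (MeHg).
PIECES = 8                  # average adolescent intake examined in the study
MEHG_UG_PER_PIECE = 1.5     # assumption: MeHg in one tuna-based piece, in ug
BODY_WEIGHT_KG = 50.0       # assumption: adolescent body weight

# EFSA tolerable weekly intake: 1.3 ug/kg bw, i.e. ~0.19 ug/kg bw per day.
TOLERABLE_DAILY_UG_PER_KG = 1.3 / 7

exposure = PIECES * MEHG_UG_PER_PIECE / BODY_WEIGHT_KG
print(f"exposure: {exposure:.3f} ug/kg bw per day "
      f"(limit ~{TOLERABLE_DAILY_UG_PER_KG:.3f})")
# 0.240 ug/kg bw here, close to the 0.242 the study estimated for adolescents.
```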

Finally, the results were analysed as a whole to determine which combinations of sushi do not represent a risk. "We recommend that people combine 8 pieces of salmon-based maki, nigiri or sashimi or maki containing unagi (eel) and limit their consumption of any type of sushi containing tuna", warned Marquès.

The researchers stressed that the amounts of sushi analysed constitute only one of the five recommended meals a day. This means that the consumption of other foods throughout the day may also lead to exposure to certain toxic elements, such as arsenic (present in rice and rice-based foods), mercury (present in tuna and swordfish) or nickel (present in vegetables, pulses and cereals).

Due to its nutritional benefits, the researchers still recommend the consumption of sushi, but they also stress the need to do so in moderation in order to minimise the intake of certain food toxins.

Credit: 
Universitat Rovira i Virgili

Study of indigenous language education in Russia leads to intercontinental collaboration

The rationale for the research lies in the fact that, despite the high number of recognized Indigenous groups struggling to maintain their languages, cultures, and identities in Russia, little research has been done on cultural and linguistic revitalization. This study sought to address this gap by exploring the views of two Indigenous groups, Karelian and Mari, on the development of their Indigenous languages and educational strategies to protect and revive their languages. The study relied on in-depth one-on-one interviews with 20 participants, ten from each Indigenous group.

The findings show that despite older generations' relative proficiency and interest in their respective Indigenous languages, motivation to master them is fading among younger Indigenous populations. There is also a lack of opportunities to learn the languages, including in informal settings, despite protections within the federal legal system. The participants identified three reasons for the rapid decrease in the number of language speakers: assimilation of the Indigenous groups, differences in rural and urban development, and globalization.

We singled out three areas where the research results can be implemented. First, those responsible for education policy for the country's Indigenous population can draw on the findings of the study to develop educational initiatives. Second, the study design can be used to conduct similar studies with different samples. Third, the results of this qualitative research will be used to design a tool for quantitative research on the questions addressed by this project.

During the project, the team members held a number of meetings with overseas colleagues and agreed to start developing the design of a collaborative study planned for 2022-2023. The purpose of the future research is to conduct a comparative analysis of the ways of adapting the UN Declaration on the Rights of Indigenous Peoples and the national educational laws adopted on its basis (including laws for cultural and linguistic revival) in Russia, Mexico, Taiwan, Bolivia, Australia, and Canada.

Credit: 
Kazan Federal University