What does the 1.5 °C warming limit mean for China?

As part of the Paris Agreement, nearly all countries agreed to take steps to limit the average increase in global surface temperature to less than 2 °C, and preferably 1.5 °C, compared with preindustrial levels. Since the Agreement was adopted, however, mounting concern about global warming has strengthened the case for pursuing the "preferable" warming limit of 1.5 °C.

What are the implications for China of trying to achieve this lower limit?

Prof. DUAN Hongbo from the University of Chinese Academy of Sciences and Prof. WANG Shouyang from the Academy of Mathematics and Systems Science of the Chinese Academy of Sciences, together with their collaborators, have attempted to answer this question.

Their results appear in an article entitled "Assessing China's efforts to pursue the 1.5°C warming limit," published in Science on April 22.

The authors used nine different integrated assessment models (IAMs) to evaluate China's effort to achieve the warming limit of 1.5 °C.

The various models show different trajectories for carbon and noncarbon emissions. In the majority of the IAMs, China reaches near-zero or negative carbon emissions by around 2050, ranging from -0.13 billion tonnes of CO2 (GtCO2) to 2.34 GtCO2 across models. One finding, however, is highly consistent across all models: the 1.5°C warming limit requires that carbon emissions decrease sharply after 2020.

The researchers found that a steep and early drop in carbon emissions reduces dependency on negative emission technologies (NETs), i.e., technologies that capture and sequester carbon. One implication of this finding is that there is a trade-off between substantial early mitigation of carbon emissions and reliance on NETs, whose performance remains uncertain. At the same time, the model with the lowest carbon emissions by 2050 relies most heavily on carbon capture and storage (CCS) technology, suggesting that NETs nonetheless have an important role to play in reducing carbon emissions.

Although carbon emissions were an important focus of the study, the researchers also noted that reducing noncarbon emissions is necessary to stay under the warming limit. Specifically, carbon emissions must be reduced by 90%, CH4 emissions by about 71% and N2O emissions by about 52% to achieve the 1.5 °C goal.

The study showed that mitigation challenges differ across sectors, e.g., industry, residential and commercial, transportation, electricity and "other." Among these sectors, industry plays a major role in end-use energy consumption. Substantial changes in industrial energy use are therefore needed to achieve deep decarbonization of the entire economy and meet the stated climate goals. Indeed, another highly consistent finding across all models is that the largest share of emission reduction will come from a substantial decline in energy consumption.

The study also highlights the importance of replacing fossil fuels with renewables, a strategy that plays the next most important role in emission reduction after reducing energy consumption. The study suggests that China needs to cut its fossil energy consumption (measured in gigatonnes of standard coal equivalent, Gtce) by about 74% by 2050 in comparison with the no-policy scenario.

The researchers estimate that achieving the 1.5 °C goal will involve a loss of GDP in 2050 in the range of 2.3% to 10.9%, due to decreased energy consumption and other factors.

The study also noted that China's recently announced plan to become carbon neutral by 2060 largely accords with the 1.5 °C warming limit; however, achieving the latter goal is more challenging.

Credit: 
Chinese Academy of Sciences Headquarters

Researchers trace spinal neuron family tree

image: Researchers discovered a genetic marker that differed between spinal cord neurons that only had short connections (green) and those that had more long-range connections (purple).

Image: 
Salk Institute

LA JOLLA--(April 22, 2021) Spinal cord nerve cells branching through the body resemble trees with limbs fanning out in every direction. But this image can also be used to tell the story of how these neurons, their jobs becoming more specialized over time, arose through developmental and evolutionary history. Salk researchers have, for the first time, traced the development of spinal cord neurons using genetic signatures and revealed how different subtypes of the cells may have evolved and ultimately function to regulate our body movements.

The findings, published in the journal Science on April 23, 2021, offer researchers new ways of classifying and tagging subsets of spinal cord cells for further study, using genetic markers that differentiate branches of the cells' family tree.

"A study like this provides the first molecular handles for scientists to go in and study the function of spinal cord neurons in a much more precise way than they ever have before," says senior author of the study Samuel Pfaff, Salk Professor and the Benjamin H. Lewis Chair. "This also has implications for treating spinal cord injuries."

Spinal neurons are responsible for transmitting messages between the spinal cord and the rest of the body. Researchers studying spinal neurons have typically classified the cells into "cardinal classes," which describe where in the spinal cord each type of neuron first appears during fetal development. But, in an adult, neurons within any one cardinal class have varied functions and molecular characteristics. Studying small subsets of these cells to tease apart their diversity has been difficult. However, understanding these subset distinctions is crucial to helping researchers understand how the spinal cord neurons control movements and what goes awry in neurodegenerative diseases or spinal cord injury.

"It's been known for a long time that the cardinal classes, as useful as they are, are incomplete in describing the diversity of neurons in the spinal cord," says Peter Osseward, a graduate student in the Pfaff lab and co-first author of the new paper, along with former graduate student Marito Hayashi, now a postdoctoral fellow at Harvard University.

Pfaff, Osseward and Hayashi turned to single-cell RNA sequencing technologies to analyze differences in what genes were being activated in almost 7,000 different spinal neurons from mice. They used this data to group cells into closely related clusters in the same way that scientists might group related organisms into a family tree.
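To give a rough sense of how such a grouping can work, the toy sketch below applies standard hierarchical clustering to a made-up cell-by-gene count matrix. It is only an illustration of the family-tree idea under invented data and parameters, not the Salk team's actual single-cell analysis pipeline.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: 300 "cells" x 50 "genes" of simulated counts. Real single-cell
# analyses involve far more cells, genes and preprocessing steps.
rng = np.random.default_rng(0)
expression = rng.poisson(2.0, size=(300, 50)).astype(float)

# Log-transform, build a hierarchical "family tree" of cells, then cut the
# tree into a handful of clusters standing in for neuron subtypes.
log_expr = np.log1p(expression)
tree = linkage(log_expr, method="ward")
clusters = fcluster(tree, t=6, criterion="maxclust")
print(np.bincount(clusters)[1:])  # number of cells in each cluster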

The first major gene expression pattern they saw divided spinal neurons into two branches: sensory-related neurons (which carry information about the environment through the spinal cord) and motor-related neurons (which carry motor commands through the spinal cord). This suggests that, in an ancient organism, one of the first steps in spinal cord evolution may have been a division of labor of spinal neurons into motor versus sensory roles, Pfaff says.

When the team analyzed the next branches in the family tree, they found that the sensory-related neurons then split into excitatory and inhibitory neurons--a division that describes how the neuron sends information. But when the researchers looked at motor-related neurons, they found a more surprising division: the cells clumped into two distinct groups based on a new genetic marker. When the team stained cells belonging to each group in the spinal cord, it became clear that the markers differentiated neurons based on whether they had long-range or short-range connections in the body. Further experiments revealed that the genetic patterns specific to long-range and short-range properties were common across all the cardinal classes tested.

"The assumption in the field was that the genetic rules of specifying long-range versus short-range neurons would be specific to each cardinal class," say Osseward and Hayashi. "So it was really interesting to see that it actually transcended cardinal class."

The observation was more than just interesting--it turned out to be useful as well. Previously, it might have taken many different genetic tags to home in on one particular neuron type that a researcher wanted to study. Using this many markers is technically challenging and largely prevented researchers from studying just one subtype of spinal cord neuron at a time.

With the new rules, just two tags--a previously known marker for cardinal class and the newly discovered genetic marker for long-range or short-range properties--can be used to flag very specific populations of neurons. This is useful, for instance, in studying which groups of neurons are affected by a spinal cord injury or neurodegenerative disease and, eventually, how to regrow those particular cells.

The evolutionary origin of the spinal neuron family tree studied in the new paper is likely very ancient because the genetic markers they discovered are conserved across many species, the researchers say. So, although they didn't study spinal neurons from animals other than mice, they predict that the same genetic patterns would be seen in most living animals with spinal cords.

"This is primordial stuff, relevant for everything from amphibians to humans," says Pfaff. "And in the context of evolution, these genetic patterns tell us what kind of neurons might have been found in some of the very earliest organisms."

Credit: 
Salk Institute

Salad or cheeseburger? Your co-workers shape your food choices

BOSTON -- The foods people buy at a workplace cafeteria may not always be chosen to satisfy an individual craving or taste for a particular food. When co-workers are eating together, individuals are more likely to select foods that are as healthy--or unhealthy--as the food selections on their fellow employees' trays. "We found that individuals tend to mirror the food choices of others in their social circles, which may explain one way obesity spreads through social networks," says Douglas Levy, PhD, an investigator at the Mongan Institute Health Policy Research Center at Massachusetts General Hospital (MGH) and first author of new research published in Nature Human Behaviour. Levy and his co-investigators discovered that individuals' eating patterns can be shaped even by casual acquaintances, evidence that corroborates several multi-decade observational studies showing the influence of people's social ties on weight gain, alcohol consumption and eating behavior.

Previous research on social influence upon food choice had been primarily limited to highly controlled settings like studies of college students eating a single meal together, making it difficult to generalize findings to other age groups and to real-world environments. The study by Levy and his co-authors examined the cumulative social influence of food choices among approximately 6,000 MGH employees of diverse ages and socioeconomic status as they ate at the hospital system's seven cafeterias over two years. The healthfulness of employees' food purchases was determined using the hospital cafeterias' "traffic light" labeling system designating all food and beverages as green (healthy), yellow (less healthy) or red (unhealthy).

MGH employees may use their ID cards to pay at the hospitals' cafeterias, which allowed the researchers to collect data on individuals' specific food purchases, and when and where they purchased the food. The researchers inferred the participants' social networks by examining how many minutes apart two people made food purchases, how often those two people ate at the same time over many weeks, and whether two people visited a different cafeteria at the same time. "Two people who make purchases within two minutes of each other, for example, are more likely to know each other than those who make purchases 30 minutes apart," says Levy. And to validate the social network model, the researchers surveyed more than 1,000 employees, asking them to confirm the names of the people the investigators had identified as their dining partners.
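A minimal sketch of that timing-based linkage idea is shown below. The record format, field names and the two-minute window are assumptions made for illustration only, not the investigators' actual code.

from collections import Counter
from datetime import datetime, timedelta

def likely_copurchases(purchases, window=timedelta(minutes=2)):
    """Count, for each pair of employees, how often they paid at the same
    cafeteria within `window` of each other. Each purchase is a tuple of
    (employee_id, cafeteria, timestamp)."""
    counts = Counter()
    purchases = sorted(purchases, key=lambda p: p[2])  # order by timestamp
    for i, (emp_a, caf_a, t_a) in enumerate(purchases):
        j = i + 1
        while j < len(purchases) and purchases[j][2] - t_a <= window:
            emp_b, caf_b, _ = purchases[j]
            if caf_a == caf_b and emp_a != emp_b:
                counts[frozenset((emp_a, emp_b))] += 1
            j += 1
    return counts

# Two purchases 90 seconds apart at the same cafeteria count as one
# co-purchase; pairs that co-purchase repeatedly over many weeks are the
# ones treated as likely dining partners.
purchases = [
    ("A", "main", datetime(2020, 1, 6, 12, 0, 0)),
    ("B", "main", datetime(2020, 1, 6, 12, 1, 30)),
    ("C", "east", datetime(2020, 1, 6, 12, 5, 0)),
]
print(likely_copurchases(purchases))  # frozenset({'A', 'B'}) mapped to 1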

"A novel aspect of our study was to combine complementary types of data and to borrow tools from social network analysis to examine how the eating behaviors of a large group of employees were socially connected over a long period of time," says co-author Mark Pachucki, PhD, associate professor of Sociology at the University of Massachusetts, Amherst.

Based on cross-sectional and longitudinal assessments of three million encounters between pairs of employees making cafeteria purchases together, the researchers found that food purchases by people who were connected to each other were consistently more alike than they were different. "The effect size was a bit stronger for healthy foods than for unhealthy foods," says Levy.

A key component of the research was to determine whether social networks truly influence eating behavior, or whether people with similar lifestyles and food preferences are more likely to become friends and eat together, a phenomenon known as homophily. "We controlled for characteristics that people had in common and analyzed the data from numerous perspectives, consistently finding results that supported social influence rather than homophily explanations," says Levy.

Why do people who are socially connected choose similar foods? Peer pressure is one explanation. "People may change their behavior to cement the relationship with someone in their social circle," says Levy. Co-workers may also implicitly or explicitly give each other license to choose unhealthy foods or exert pressure to make a healthier choice.

The study's findings have several broader implications for public health interventions to prevent obesity. One option may be to target pairs of people making food choices and offer two-for-one sales on salads and other healthful foods but no discounts on cheeseburgers. Another approach might be to have an influential person in a particular social circle model more healthful food choices, which will affect others in the network. The research also demonstrates to policymakers that an intervention that improves healthy eating in a particular group will also be of value to individuals socially connected to that group.

"As we emerge from the pandemic and transition back to in-person work, we have an opportunity to eat together in a more healthful way than we did before," says Pachucki. "If your eating habits shape how your co-workers eat--even just a little--then changing your food choices for the better might benefit your co-workers as well."

Credit: 
Massachusetts General Hospital

Less is more for the next generation of CAR T cells

PHILADELPHIA--When researchers from Penn Medicine found that many patients with B-cell acute lymphoblastic leukemia (ALL) treated with the investigational chimeric antigen receptor (CAR) T cell therapy targeting the CD22 antigen didn't respond, they went back to the drawing board to determine why. They discovered that less is more when it comes to the length of what is known as the single-chain variable fragment -- the linker that bridges the two halves of the receptor that allows CAR T cells to latch onto tumor cells and attack them.

The findings are reported online today in Nature Medicine.

"A little difference of a few amino acids can make a huge difference for patients," said co-senior author Marco Ruella, MD, an assistant professor of Medicine in the division of Hematology-Oncology in the Perelman School of Medicine at the University of Pennsylvania and scientific director of the Lymphoma Program. "When we snipped 15 amino acids off the linker, the two halves of the CAR plastered together and pre-activated the CAR T cells."

Developed at Penn Medicine in collaboration with Novartis, the anti-CD19 CAR T cell therapy (Kymriah™) is currently approved by the U.S. Food and Drug Administration for the treatment of pediatric and young adult patients with relapsed/refractory ALL; however, long-term follow-up shows that a significant fraction of patients who achieve remission ultimately relapse, often because the leukemia cells lose expression of the CD19 antigen and elude the engineered immune attack. Researchers at Penn and elsewhere have developed a second approach for this group of patients: CAR T cells that target the CD22 antigen, also found on B cells.

Two clinical pilot studies conducted at Penn and Children's Hospital of Philadelphia (CHOP) with CAR-22 T cell therapy in six children and three adult patients with refractory/relapsed ALL showed overall poor clinical responses. Strikingly, those findings didn't align with the positive results from a similar trial at the National Cancer Institute (NCI). So the team investigated further and found that the only difference between the two treatments was the length of the single-chain variable fragment linker: five amino acids in the NCI CAR versus 20 amino acids in the Penn/CHOP CAR.

The Penn team, in collaboration with Novartis, then built a new 41BB-based CAR-22 T cell with a shorter linker and compared it with the previous CAR in both mouse and human cell studies. 41BB is one of two co-stimulatory signals, the other being CD28, needed for CAR T cell activation and survival. They showed that the shorter linker on the CAR construct was, in fact, more successful at pre-activating CAR T cells, so that their response to B cells and subsequent anti-tumor activity improved. Researchers have generally assumed that T cells pre-activated before seeing their target become easily exhausted.

"This finding really turns the current dogma on its head," said Saar I. Gill, MD, PhD, an assistant professor of Hematology-Oncology at Penn, who serves as co-senior author. "These days, most CARs come with one of two built-in turbo-charge systems: CD28 or 41BB. It turns out that the assumption was true only for CD28-stimulated CARs but not for the more commonly used 41BB-stimulated CARs. In fact, here we showed that the opposite is true."

Nathan Singh, MD, who conducted the study while at Penn and is now an assistant professor of Medicine in the Division of Oncology at Washington University School of Medicine in St. Louis, is the first author.

"These findings were rather unexpected, given what's been shown previously in the field," Singh said. "They highlight that even small molecular changes can have dramatic impact on the function of these synthetic molecules. Most exciting is that these data reveal a previously hidden path to improve CAR T cell function in a methodical manner."

The results from these studies have led to a new, ongoing clinical trial at Penn and CHOP in collaboration with Novartis for adult and pediatric patients with refractory/relapsed ALL investigating the shorter linker CAR. Additional studies are underway to determine if the length of linkers is important in other CAR T cells.

"Every CAR construct is different," Gill said. "But these findings suggest we need to modify the length of the linker when we test a new CAR to make sure we are generating the best version of it."

Credit: 
University of Pennsylvania School of Medicine

Scientists provide new insights into the citric acid cycle

image: Lydia Steffens and Eugenio Pettinato (University of Münster, left) and Thomas M. Steiner (TUM, right) in the laboratory; the three doctoral students share first authorship of the Nature publication. In the middle, a fermenter system for growing bacteria can be seen.

Image: 
AG Berg/AG Eisenreich

The citric acid cycle is an important metabolic pathway that enables living organisms to generate energy by degrading organic compounds into carbon dioxide (CO2). The first step in the cycle is usually performed by the enzyme citrate synthase, which builds citrate. But, in the absence of oxygen (under anaerobic conditions), some bacteria can perform the reverse cycle: They can build up biomass from CO2. In this so-called reversed citric acid cycle, citrate synthase is replaced by ATP-citrate lyase, which consumes the cells' universal energy carrier ATP (adenosine triphosphate) to cleave citrate instead of forming it. However, a few years ago, a research team led by Ivan Berg (University of Münster) and Wolfgang Eisenreich (Technical University of Munich) discovered that instead of requiring ATP-citrate lyase for the reversed cycle, some anaerobic bacteria can use citrate synthase itself to catalyze citrate cleavage without consuming ATP. Now, the same team has found that bacteria using this metabolic pathway (the reversed citric acid cycle through citrate synthase) depend on very high concentrations of both the enzyme and carbon dioxide.

For comparison, the CO2 concentration in air is around 0.04%, but bacteria using this pathway require at least 100 times more than that for their growth. The researchers assume that such CO2 concentration-dependent pathways could have been widespread on the primordial Earth, when the CO2 concentration was high. This metabolic pathway may therefore be a relic of early life. The results of the study have been published in the journal Nature (online in advance).

The team studied the anaerobic bacteria Hippea maritima and Desulfurella acetivorans. These organisms live without oxygen in hot springs, where the CO2 concentration can be 90% or higher. "It is conceivable that many other organisms use this cycle to bind CO2," says Ivan Berg. "Our findings are in line with the results of several recent studies highlighting a potential widespread occurrence of this reversed oxidative citric acid cycle." Nevertheless, many bacteria use the energetically less efficient ATP-dependent reaction for citrate cleavage. "It was mysterious why this 'expensive' version of the pathway exists if an energetically much cheaper alternative through the backwards reaction of citrate synthase is feasible. Now we know that this is due to the low CO2 concentrations in many environments. The cheap alternative doesn't work there," emphasizes Wolfgang Eisenreich.

These findings could also be of interest for biotechnology. Knowing that autotrophic organisms using this "backward cycle" depend on the CO2 concentration, scientists can apply this knowledge to convert substrates into value-added products more efficiently.

The results in detail

The scientists wanted to understand what factor determines whether the citric acid cycle runs "forwards" or "backwards" in the bacteria. Cultivating the bacteria under different conditions, they noticed that the growth of these organisms was highly dependent on the CO2 concentration in the gas phase. Specifically, a high CO2 concentration was needed to allow the function of another important enzyme, pyruvate synthase. This enzyme is responsible for assembling acetyl coenzyme A (acetyl-CoA), the product of the "reversed cycle". The high CO2 concentration drives the pyruvate synthase reaction in the direction of carboxylation and the entire cycle backwards, enabling CO2 to be converted into biomass. The studied Hippea maritima and Desulfurella acetivorans were able to grow very well at 20% and 40% CO2 in the gas phase, but only moderately at 5% CO2, and no growth was possible at 2% or 1% CO2. As a control, the scientists studied another autotrophic bacterium, Desulfobacter hydrogenophilus, which uses the energetically more expensive ATP-citrate lyase version of the reversed citric acid cycle. In this bacterium, growth was not affected by the CO2 concentration.

The "backward cycle" that uses citrate synthase for citrate cleavage cannot be bioinformatically predicted, as it does not have the key enzymes whose presence can be used as a marker for the functioning of the pathway. Therefore, as an identifying feature for bioinformatic analyzes, the scientists used the detected high levels of citrate synthase in these bacteria's protein cocktail. Using a special analysis tool, the researchers were able to predict the production levels of individual proteins. With this trick, it was possible to predict the functioning of the "backward cycle" for inorganic carbon fixation in many anaerobic bacteria.

The scientists also showed that no gene regulation was necessary for switching from the oxidative ("forward") to the reductive ("reverse") direction. "This means that the cells can react very quickly to the availability of the carbon source in the environment," says Ivan Berg. "They use either the reductive direction to fix CO2, if the concentration of CO2 is high, or the oxidative direction, if another carbon source is available."

Methodological notes

The methods used in the study included mass spectrometry and 13C-isotope analyses, enzyme measurements, protein quantification, and media and amino acid analyses using chromatographic and spectrometric methods (LC/MS or GC/MS). With bioinformatic methods, the researchers examined the occurrence of certain nucleotide base combinations (codons) in order to make predictions about the production levels of individual proteins.
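As a deliberately simplified illustration of how codon usage can serve as such a proxy (the scoring below is an assumption for demonstration only and is not the analysis tool used in the study): highly produced proteins tend to favor certain synonymous codons, so a gene's overlap with a reference codon-usage table gives a rough indication of its production level.

from collections import Counter

def codon_frequencies(coding_sequence):
    """Relative frequency of each codon in an in-frame coding sequence."""
    codons = [coding_sequence[i:i + 3]
              for i in range(0, len(coding_sequence) - 2, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

def usage_similarity(gene_seq, reference_freqs):
    """Crude proxy score: how strongly the gene's codon choices overlap with
    a reference codon-usage table built from highly produced proteins."""
    return sum(reference_freqs.get(codon, 0.0) * freq
               for codon, freq in codon_frequencies(gene_seq).items())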

Credit: 
University of Münster

Frequent internet use by older people during lockdown linked to mental health benefits

A new study from the University of Surrey has found that, among people aged 55 to 75, more frequent use of the internet was beneficial for mental health and quality of life under lockdown. Those who used the internet more, particularly for staying in touch with friends and family, were at lower risk of depression and reported a higher quality of life.

Loneliness and social isolation have been major problems for many under lockdown, and for older people in particular. Loneliness raises the risk of depression and other negative health outcomes. In a paper published in the journal Healthcare, researchers from Surrey investigated whether more frequent internet use in older people helped reduce this risk.

Researchers studied 3,491 participants drawn from the English Longitudinal Study of Ageing in Summer 2020, whilst social distancing measures were in place across the country. Participants were surveyed on the frequency and type of their internet usage - such as searching for information or communicating with others.

Those who reported using the internet frequently (once a day or more) had much lower levels of depression symptoms and reported higher quality of life compared to those who used the internet only once a week or less. Using the internet for communication was particularly linked to these beneficial effects, suggesting that going online to stay connected with friends and family helped combat the negative psychological effects of social distancing and lockdown in adults aged 55-75.

Conversely, the study found that people who mostly used the internet to search for health-related information reported higher levels of depression symptoms. This might be due to greater worry triggered by reading internet sources about Covid-19 and other health concerns.

Dr Simon Evans, Lecturer in Neuroscience at the University of Surrey, said: "As social restrictions continue during the Covid-19 pandemic, older people are at greater risk of loneliness and mental health issues. We found that older adults who used the internet more frequently under lockdown, particularly to communicate with others, had lower depression scores and an enhanced quality of life. As the Covid-19 situation evolves, more frequent internet use could benefit the mental health of older people by reducing loneliness and risk of depression, particularly if further lockdowns are imposed in the future."

Credit: 
University of Surrey

Miniaturized models of neuron-muscle interactions give insight in ALS

image: Scanning electron microscopy image of a human neuromuscular junction generated in microfluidic devices. Human induced pluripotent stem cell-derived motor neuron axons fan out and embed themselves into the human muscle tissue, which creates a large connection surface.

Image: 
Dr. Pieter Baatsen, LiMoNe, Research Group Molecular Neurobiology and Katarina Stoklund Dittlau, Laboratory of Neurobiology at VIB-KU Leuven Center for Brain and Disease Research

Skeletal muscles enable voluntary movements and are controlled by a special type of neurons called motor neurons, which make direct contact with skeletal muscles through so-called neuromuscular junctions (NMJs). It is through NMJs that skeletal muscles receive the signals that make them contract or relax. In certain neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS), NMJs are destroyed, leading to progressive muscle weakness and ultimately death. Treatments for ALS mainly focus on alleviating symptoms but cannot stop or reverse disease progression. To find more effective treatments, researchers require accurate and easily accessible lab-based models of ALS to understand its causes and to develop and test new therapies.

One step in this direction was made by Ludo Van Den Bosch (ludo.vandenbosch@kuleuven.be) and colleagues in Belgium, who generated NMJs outside the human body in a so-called microfluidic device. In this sophisticated model, human motor neurons derived from ALS patients (via induced pluripotent stem cells engineered from their own skin cells) and human skeletal muscle cells from healthy donors grew in separate mini-chambers on opposite sides of the device, with tiny channels connecting the two chambers. Excitingly, with time, the neurons started to send connections, called axons, through the channels to form NMJs, which were able to transmit signals from the neurons to the muscle cells, similar to NMJs in the human body.

However, when motor neurons from ALS patients were compared with healthy motor neurons in this setup, it was clear that ALS motor neurons sent fewer axons across the channels and formed fewer NMJs with the muscle cells. In addition, ALS motor neurons regenerated damaged axons less efficiently. Encouragingly, ALS motor neurons could be pushed to grow more axons, reaching levels similar to healthy motor neurons, by adding the chemical Tubastatin A to the cultures. Further studies will show how Tubastatin A, which inhibits a certain class of proteins in the cell, promotes axon growth in ALS motor neurons, and whether similar effects can be achieved in animal models and ultimately in ALS patients.

This new miniaturized model of NMJ formation, recently published in Stem Cell Reports, will find broad application in studying motor neuron pathology and in the discovery of potential therapeutics for ALS.

Credit: 
International Society for Stem Cell Research

Adversity in early life linked to higher risk of mental health problems

Thursday, 22 April 2021 - New research has found that childhood adversity experienced before the age of nine, such as parental conflict, the death of a close family member or serious injury, was associated with mental health problems in late adolescence.

However, the research also shows that improving the relationship between parents and children could prevent subsequent mental health problems, even in children who have experienced severe adversities. The research also indicated that improving a child's self-esteem and increasing their levels of physical activity can help to reduce the risk of developing mental health problems.

The study, led by researchers from RCSI University of Medicine and Health Sciences, was recently published in Psychological Medicine.

The research team analysed data from over 6,000 children in Ireland who took part in the Growing Up in Ireland study. The results showed that just over a quarter of children had experienced childhood adversity before the age of nine.

At age 17 and 18, almost one in five of the young people were experiencing significant mental health difficulties. 15.2% had developed internalising problems, such as anxiety or depression, and 7.5% had developed externalising problems, such as conduct problems or hyperactivity.

Those who experienced childhood adversity were significantly more likely to report mental health problems in late adolescence.

Parent-child conflict explained 35% of the relationship between childhood adversity and late adolescent externalising problems. The conflict also accounted for 42% of the relationship between childhood adversity and internalising problems.

The child's self-esteem (also called self-concept) explained 27% of the relationship between child adversity and later internalising problems. The child's level of physical activity explained 9% of the relationship between childhood adversity and later internalising problems.
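Percentages like these typically come from a standard product-of-coefficients mediation decomposition (shown here only as a general illustration, not necessarily the exact model fitted in the study): if a is the effect of adversity on the mediator (for example, parent-child conflict), b the effect of the mediator on the outcome, and c' the remaining direct effect of adversity on the outcome, then

\[
\text{proportion mediated} = \frac{a \times b}{a \times b + c'}
\]

so figures such as 35% and 42% describe how much of the adversity-outcome association is carried through that particular pathway.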

"Children who experience multiple or severe life events are at an increased risk of mental health problems, but not all of those exposed to such events develop such problems. Our research points to some factors that can be useful for off-setting the risk of mental health problems in those who have been exposed to difficult life events," said Dr Colm Healy, the study's lead author and postdoctoral researcher at RCSI University of Medicine and Health Sciences.

The work was funded by the Health Research Board in Ireland and the European Research Council.

"Among children who have experienced adversity, we found that reducing conflict between the parent and child and fostering a warm relationship can protect them from a broad range of later mental health problems," said Professor Mary Cannon, the study's principal investigator and professor of Psychiatric Epidemiology and Youth Mental Health at RCSI.

"We also found that improving a child's self-esteem and encouraging physical activity may also be useful intervention targets for preventing difficulties with mood and anxiety following earlier adversity. On the whole, this is a hopeful story that points towards effective interventions to improve outcomes for children who had experienced difficulties early in life."

Credit: 
RCSI

Nanofiltration membranes to remove heavy metals from industrial wastewater

NUST MISIS scientists, together with Indian colleagues from Jain University and Sri Dharmasthala Manjunatheshwara College, have presented innovative membranes for removing heavy metals from industrial wastewater. The special nanostructure of zinc-modified aluminum oxide made it possible to remove arsenic and lead from water with efficiencies of 87% and 98%, respectively. The results of the work were published in the journal Chemosphere.

Industrialization is the main cause of water pollution owing to the discharge of industrial waste. In particular, heavy metals -- arsenic, lead and cadmium -- can cause metabolic disorders and other serious harm to the body, which makes them extremely toxic to the environment.

One of the most promising methods for purifying water and removing heavy metal ions from it is membrane technology. A membrane acts as an effective barrier ("filter") and is relatively easy to manufacture. At the same time, the technology has some serious limitations, such as high energy consumption, short membrane life, and low productivity and selectivity.

The challenge for scientists is to make membrane technology a more versatile and commercially viable method of wastewater purification. An international team of researchers from Russia and India has proposed a solution to the problem by synthesizing a new type of membrane based on highly porous nanoparticles of zinc-doped aluminium oxide.

"The nanoparticles that we obtained by solution combustion method have a very large surface area (261.44 m2/g) at a size of 50 nanometers. Cross-sectional images of nanoparticles obtained using scanning electron microscopy showed the finger-like morphology and porous nature of the membranes," said Vignesh Nayak, co-author of the work, a postdoc at NUST MISIS.

According to scientists, the synthesized membranes showed increased hydrophilicity (wettability), surface charge and "super porosity", which made it possible to remove arsenic and lead from an aqueous solution with an efficiency of 87% and 98%, respectively.

The second important advantage of the new membranes is their antifouling properties: the material resists fouling by aquatic microorganisms, which can disable devices that remain in aquatic environments for long periods.

The antifouling study conducted by developers at various pressures with a feed solution containing bovine serum albumin showed 98.4% recovery and reusability of membranes for up to three continuous cycles.

In the future, the membranes could be used for the effective treatment of industrial effluents, as well as in large city water treatment plants. The team is currently completing laboratory tests of the synthesized samples.

Credit: 
National University of Science and Technology MISIS

Inspired by nature, researchers develop a new load-bearing material

image: The image shows the interface between the hydrogel (left-hand side) and the PDMS (right-hand side). The image was taken at 100,000 times magnification. Credit: University of Leeds

Image: 
University of Leeds

Engineers have developed a new material that mimics human cartilage - the body's shock-absorbing and lubricating system - and it could herald the development of a new generation of lightweight bearings.

Cartilage is a soft fibrous tissue found around joints which provides protection from the compressive loading generated by walking, running or lifting. It also provides a protective, lubricating layer allowing bones to pass over one another in a frictionless way. For years, scientists have been trying to create a synthetic material with the properties of cartilage.

To date, they have had mixed results.

But in a paper published in the journal Applied Polymer Materials, researchers at the University of Leeds and Imperial College London have announced that they have created a material that functions like cartilage.

The research team believes a cartilage-like material would have a wide-range of uses in engineering.

Cartilage is a bi-phasic porous material, meaning it exists in solid and fluid phases. It switches to its fluid phase by absorbing a viscous substance produced in the joints called synovial fluid. This fluid not only lubricates the joints but when held in the porous matrix of the cartilage, it provides a hydroelastic cushion against compressive forces.

Because the cartilage is porous, the synovial fluid eventually drains away and, as it does, it helps dissipate the energy forces travelling through the body, protecting joints from wear and tear and impact injuries. At this point the cartilage returns to its solid phase, ready for the cycle to be repeated.

Dr Siavash Soltanahmadi, Research Fellow in the School of Mechanical Engineering at Leeds, who led the research, said: "Scientists and engineers have been trying for years to develop a material that has the amazing properties of cartilage. We have now developed a material for engineering applications that mimics some of the most important properties found in cartilage, and it has only been possible because we have found a way to mimic the way nature does it.

"There are many applications in engineering for a synthetic material that is soft but can withstand heavy loading with minimum wear and tear, such as in bearings. There is potential across engineering for a material that behaves like cartilage."

Earlier attempts at developing a synthetic cartilage system have focused on the use of hydrogels, materials that absorb water. Hydrogels are good at reducing friction but perform poorly when under compressive force.

One of the problems is that it takes time for the hydrogel to return to its normal shape after it has been compressed.

The researchers have overcome this problem by creating a synthetic porous material made of a hydrogel held in a matrix of polydimethylsiloxane or PDMS - a silicone-based polymer. The matrix keeps the shape of the hydrogel.

In the paper, the scientists report that the load-bearing behaviour of the hydrogel held in the PDMS matrix was 14 to 19 times greater than that of the hydrogel on its own. The equilibrium elastic modulus of the composite was 452 kPa over a strain range of 10%-30%, close to the values reported for the modulus of cartilage.

The hydrogel also provided a lubricating layer.

The scientists believe future applications of a new material based on the function of cartilage would challenge many traditional oil-lubricated engineering systems.

Dr Michael Bryant, Associate Professor in the School of Mechanical Engineering, who supervised the research, said: "The ability to use water as an effective lubricant has many applications, from energy generation to medical devices. However, this often requires a different approach when compared to traditional engineering systems, which often use oil-based lubricants and hard-surface coatings.

"This project has helped us to better understand these requirements and develop new tools to address this need."

Credit: 
University of Leeds

Transport phenomena at the nanoscale

image: A pulsed beam of X-rays is incident on a diamond mask (blue), creating a grating of excitation on the sample. A laser beam (thin blue line on the right) is diffracted by the transient grating in the sample, and the diffracted beam (yellow line) is recorded as a function of time, with respect to the initial X-ray beam pulse.

Image: 
Ella Maru Studio (https://scientific-illustrations.com)

Transient grating spectroscopy is an elegant method that uses two laser pulses to activate a medium by creating an interference pattern made of parallel stripes of excitations that can be thermal, electronic, magnetic or even structural. The modulation depth of the pattern and its evolution can be measured by diffracting a third, time-delayed probe beam on the transient grating.

The modulation depth decays as the initial excitation propagates through the material. The distance between the stripes is determined by the wavelength of the pulses used to create the grating, which, in the visible-ultraviolet part of the spectrum, is on the order of hundreds of nanometers.

Transient grating spectroscopy is a powerful tool for monitoring the transport properties of a material, be it heat, electric or magnetic flux, or structure. At a time of miniaturization of devices, the need to reach the regime of nanoscale transport is ever more pressing. Transport properties at the nanometer scale are completely unknown, and are expected to greatly differ from those at the micrometer or larger scales. This calls for the use of short wavelength radiation, and in particular X-rays. The main challenge is to cross two X-ray beams in order to generate a grating with nanometer step size.
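For the conventional two-beam geometry described above, the stripe spacing follows the standard interference relation (the symbols here are generic quantities, not values from this experiment):

\[
\Lambda = \frac{\lambda}{2\sin(\theta/2)}
\]

where \lambda is the excitation wavelength and \theta the crossing angle of the two beams. This is why moving from visible-ultraviolet light (hundreds of nanometers) to hard X-rays (fractions of a nanometer) can shrink the grating period by roughly three orders of magnitude.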

Now, an international team of scientists has exploited the so-called Talbot effect to create the interference pattern with hard X-ray beams at a subnanometer wavelength (0.17 nm). The collaboration includes the LSU and LACUS at EPFL (Majed Chergui), PSI (Cris Svetina), MIT (Keith Nelson), the FERMI free electron laser in Trieste (Claudio Masciovecchio), the Université Jean-Monnet-Saint-Étienne (Jérémy Rouxel), and the European Laboratory for Non-Linear Spectroscopy in Florence (Renato Torre), among others. The scientists used the Swiss X-ray Free Electron Laser (SwissFEL) at PSI.

The results are published in Nature Photonics, showing that the transient excitation grating decays in tens of femtoseconds to picoseconds, revealing the material's phonon response. The researchers probed the grating using an optical pulse at 400 nm. This is the first demonstration of hard X-ray transient grating spectroscopy, which opens the way to exciting and novel developments.

"Hard X-ray transient grating is uniquely suited for investigating nanoscale transport phenomena in bulk and nanostructured materials, disordered materials, and even in liquids," says Majed Chergui. "Future experiments can open the field to applications in the characterization of materials, especially for nano-electronics, nano-optics, and nano-magnetism."

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Romantic relationships mitigate effects of trauma on alcohol use among college students

Students who have been exposed to interpersonal trauma —  physical assault, sexual assault or unwanted sexual experiences — prior to college are more likely to engage in risky alcohol use. But romantic relationships mitigate these effects of trauma on a student’s drinking behavior, according to a new study led by Virginia Commonwealth University researchers.

The study investigates whether romantic relationships might play a role in mitigating or exacerbating the effects of trauma exposure on alcohol use among college students. It found that students who experienced interpersonal trauma during college consumed more alcohol than those without interpersonal trauma exposure, and that their drinking was more pronounced for those in a relationship with a partner with higher levels of alcohol use. It also found that a student’s satisfaction in their romantic relationship did not change the association between interpersonal trauma and alcohol use.

Previous research has found that college students who have been exposed to interpersonal trauma are more likely to engage in risky alcohol use. Yet not everyone who experiences interpersonal trauma goes on to misuse alcohol, raising questions about what factors might play a role in the interaction of trauma and drinking.

The study, “A Longitudinal Study of the Moderating Effects of Romantic Relationships on the Associations Between Alcohol Use and Trauma in College Students,” will be published in a forthcoming issue of the journal Addiction. It explores whether three aspects of romantic relationships — relationship status, relationship satisfaction and partner alcohol use — change the associations between interpersonal trauma and alcohol use.

“These findings are important because they help elucidate the ways that romantic relationships can improve or undermine health habits, particularly concerning alcohol consumption,” said lead author Rebecca Smith, a doctoral student in the Department of Psychology in the College of Humanities and Sciences. “A better understanding of the ways that social relationships can influence health behaviors might encourage people to carefully consider the people with whom they spend time. Moreover, these findings help us better understand alcohol use risk and protective factors across the lifespan, which can be used to inform prevention and treatment programs.”

Jessica Salvatore, Ph.D., an assistant professor in the Department of Psychology and the senior author on the study, said the findings “underscore the double-edged role that relationships and partners have on health behaviors in college.”

“On the one hand, we found that involvement in a committed relationship buffered the effects of interpersonal trauma exposure on students’ alcohol use,” she said. “On the other, we found that involvement with a heavier drinking partner amplified the association between exposure and alcohol use.”

Smith said she was surprised that relationship satisfaction was not a significant moderator of the associations between interpersonal trauma and alcohol use.

“Based on previous research suggesting that involvement in satisfying relationships is protective against stress and problematic drinking, we had hypothesized that high relationship satisfaction would buffer against the effects of interpersonal trauma on alcohol use,” she said.

The study relied on data collected through Spit for Science, a universitywide project at VCU in which student volunteers provide information on alcohol, substance use, emotional health and more, and contribute DNA samples that provide insight into the role of genetics. The study involved nearly 9,000 students who participated in Spit for Science between 2011 and 2014.

Participants completed baseline assessments during the fall of their freshman year and were invited to complete follow-up assessments every spring thereafter. Participants were included in the study if they completed surveys at baseline and at least one follow-up assessment.

“Each year, participants answered questions about stressful life events they may have experienced, their quantity and frequency of alcohol consumption, and their romantic relationships,” Smith said. “This allowed us to look at the interplay between interpersonal trauma, alcohol use and romantic relationship characteristics over time.”

The study’s findings could be valuable for efforts to increase awareness and education for college students about the ways in which our social ties can promote or undermine health behaviors, like alcohol use, Smith said.

Additionally, she said, the findings could be applied as part of treatment to reduce unsafe drinking.

“We know from previous research that exposure to interpersonal trauma is associated with risky alcohol use, so romantic partners can be included in treatment planning and aftercare to help trauma survivors cope with traumatic events in healthier ways and reduce engagement in risky drinking behaviors,” she said.

In addition to Smith and Salvatore, the study’s co-authors include Danielle Dick, Ph.D., Distinguished Commonwealth Professor in the Departments of Psychology and Human and Molecular Genetics at VCU; Ananda Amstadter, Ph.D., associate professor in the Virginia Institute for Psychiatric and Behavioral Genetics at VCU; Nathaniel Thomas, a doctoral student in the Department of Psychology; and the Spit for Science Working Group.

Journal

Addiction

DOI

10.1111/add.15490

Credit: 
Virginia Commonwealth University

How we know whether and when to pay attention

image: Fast reactions are based on estimates of whether and when events will occur.

Image: 
Max Planck Institute for Empirical Aesthetics

Fast reactions to future events are crucial. A boxer, for example, needs to respond to her opponent in fractions of a second in order to anticipate and block the next attack. Such rapid responses are based on estimates of whether and when events will occur. Now, scientists from the Max Planck Institute for Empirical Aesthetics (MPIEA) and New York University (NYU) have identified the cognitive computations underlying this complex predictive behavior.

How does the brain know when to pay attention? Every future event carries two distinct kinds of uncertainty: Whether it will happen within a given time span, and if so, when it will likely occur. Until now, most research on temporal prediction has assumed that the probability of whether an event will occur has a stable effect on anticipation over time. However, this assumption has not been empirically proven. Furthermore, it is unknown how the human brain combines the probabilities of whether and when a future event will occur.

An international team of researchers from MPIEA and NYU has now investigated how these two different sources of uncertainty affect human anticipatory behavior. Using a simple but elegant experiment, they systematically manipulated the probabilities of whether and when sensory events will occur and analyzed human reaction time behavior. In their recent article in the journal Proceedings of the National Academy of Sciences (PNAS), the team reports two novel results. First, the probability of whether an event will occur has a highly dynamic effect on anticipation over time. Second, the brain's estimations of whether and when an event will occur take place independently.

"Our experiment taps into the basic ways we use probability in everyday life, for example when driving our car," explains Matthias Grabenhorst of the Max Planck Institute for Empirical Aesthetics. "When approaching a railroad crossing, the probability of the gates closing determines our overall readiness to hit the brakes. This is intuitive and known."

Georgios Michalareas, also of the MPIEA, adds: "We found, however, that this readiness to respond drastically increases over time. You become much more alert, although the probability of the gates closing objectively does not change." This dynamic effect of whether an event will occur is independent of when it will happen. The brain knows when to pay attention based on independent computations of these two probabilities.
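One common way to make this intuition concrete is the hazard rate: the chance that the event happens right now, given that it has not happened yet. The short sketch below is only an illustrative calculation under an assumed uniform distribution of event times, not the authors' model.

import numpy as np

def hazard(t, p, horizon=10.0):
    """Hazard rate at elapsed time t for an event that occurs with overall
    probability p, uniformly distributed over [0, horizon] if it occurs."""
    f = 1.0 / horizon                   # density of possible event times
    F = np.clip(t / horizon, 0.0, 1.0)  # cumulative probability so far
    return p * f / (1.0 - p * F)

# The overall probability p never changes, yet the moment-to-moment hazard
# (and hence the readiness to respond) climbs as time passes without the
# event having occurred.
for t in (0.0, 3.0, 6.0, 9.0):
    print(t, round(hazard(t, p=0.5), 3))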

The research team's findings indicate that the human brain dynamically adjusts its readiness to respond based on separate probability estimates of whether and when events occur. The results of this study add significantly to our understanding of how the human brain predicts future events in order to interact accordingly with the environment.

Credit: 
Max-Planck-Gesellschaft

Using exoplanets as dark matter detectors

COLUMBUS, Ohio - In the continuing search for dark matter in our universe, scientists believe they have found a unique and powerful detector: exoplanets.

In a new paper, two astrophysicists suggest dark matter could be detected by measuring the effect it has on the temperature of exoplanets, which are planets outside our solar system.

This could provide new insights into dark matter, the mysterious substance that can't be directly observed, but which makes up roughly 80% of the mass of the universe.

"We believe there should be about 300 billion exoplanets that are waiting to be discovered," said Juri Smirnov, a fellow at The Ohio State University's Center for Cosmology and Astroparticle Physics.

"Even finding and studying a small number of them could give us a great deal of information about dark matter that we don't know now."

Smirnov co-authored the paper with Rebecca Leane, a postdoctoral researcher at the SLAC National Accelerator Laboratory at Stanford University. It was published today (April 22, 2021) in the journal Physical Review Letters.

Smirnov said that when the gravity of exoplanets captures dark matter, the dark matter travels to the planetary core where it "annihilates" and releases its energy as heat. The more dark matter that is captured, the more it should heat up the exoplanet.
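A schematic way to see how captured dark matter would show up in a planet's temperature is a simple energy balance (an illustrative relation assuming the planet radiates roughly as a blackbody, not the detailed model of the paper):

\[
T_{\mathrm{eq}} \approx \left(\frac{L_{\mathrm{abs}} + \Gamma_{\mathrm{DM}}}{4\pi R^{2}\sigma}\right)^{1/4}
\]

where L_abs is any absorbed starlight, Gamma_DM the heating power released by dark matter annihilation, R the planet's radius and sigma the Stefan-Boltzmann constant. For a rogue planet far from any star, the dark matter term would stand out more clearly against the remaining internal heat.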

This heating could be measured by NASA's James Webb Space Telescope, an infrared telescope scheduled to launch in October that will be able to measure the temperature of distant exoplanets.

"If exoplanets have this anomalous heating associated with dark matter, we should be able to pick it up," Smirnov said.

Exoplanets may be particularly useful in detecting light dark matter, Smirnov said, which is dark matter with a lower mass. Researchers have not yet probed light dark matter by direct detection or other experiments.

Scientists believe that dark matter density increases toward the center of our Milky Way galaxy. If that is true, researchers should find that the closer planets are to the galactic center, the more their temperatures should rise.

"If we would find something like that, it would be amazing. Clearly, we would have found dark matter," Smirnov said.

Smirnov and Leane propose one type of search that would involve looking close to Earth at gas giants - so-called "Super Jupiters" - and brown dwarfs for evidence of heating caused by dark matter. One advantage of using planets like this as dark matter detectors is that they don't have nuclear fusion, like stars do, so there is less "background heat" that would make it hard to find a dark matter signal.

In addition to this local search, the researchers suggest a search for distant rogue exoplanets that are no longer orbiting a star. The lack of radiation from a star would again cut down on interference that could obscure a signal from dark matter.

One of the best parts of using exoplanets as dark matter detectors is that it doesn't require any new types of instrumentation such as telescopes, or searches that aren't already being done, Smirnov said.

As of now, researchers have identified more than 4,300 confirmed exoplanets and an additional 5,695 candidates are currently under investigation. Gaia, a space observatory of the European Space Agency, is expected to identify tens of thousands more potential candidates in the next few years.

"With so many exoplanets being studied, we will have a tremendous opportunity to learn more than ever before about dark matter," Smirnov said.

Credit: 
Ohio State University

Individuals in lower-income US counties or counties with high support for former President Trump continue to be less likely to socially distance

image: County-level distance traveled has been averaged by month and normalized to pre-COVID-19 levels. Negative values represent greater physical distancing. Dates range from March 9, 2020 to January 17, 2021. Based on physical distancing data from 15-17 million anonymized cell phone users per day in 3,037 US counties from Unacast. County-level characteristics were obtained from the American Community Survey and MIT Election Data and Science Lab.

Image: 
American Journal of Preventive Medicine

Ann Arbor, April 22, 2021 - Using nearly a year of anonymous geolocation data from 15-17 million cell phone users in 3,037 United States counties, investigators have found that residents of counties with lower income per capita or greater Republican orientation engaged in significantly less social distancing throughout the study period, from March 2020 through January 2021. Their findings are reported in the American Journal of Preventive Medicine, published by Elsevier.

The associations persisted after adjusting for a variety of county-level demographic and socioeconomic characteristics. Other county-level characteristics, such as the share of Black and Hispanic residents, were also associated with reduced distancing at various points during the study period.

"We started this project in April 2020 because we wanted to understand the social, economic, and political factors that drive people to engage in social distancing. We ended up tracking these factors for almost a year," explained lead investigator Nolan M. Kavanagh, MPH, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.

"A year is a long time to prepare policy responses and educate the public," he said. "Yet, the same kinds of communities that struggled to physically distance early on continue to struggle now."

To measure social distancing, the investigators used county-level averages of distance traveled per person, based on cell phone location data. They looked at the percentage change in average movement since the start of the pandemic relative to four pre-COVID-19 reference weeks. Statistical analysis was used to estimate the association between a variety of county-level demographic, socioeconomic, and political characteristics and physical distancing. Socioeconomic status was based on income per capita, and political orientation was based on the 2016 vote share for President Trump. Other county-level characteristics examined included the percentage of male, Black and Hispanic residents; the share of residents over age 65; the share of foreign-born residents; the share of the workforce in industries most affected by COVID-19, such as retail, transportation and health, educational and social services; and the share of rural residential plots. These county-level characteristics were chosen based on their expected contribution to a community's ability to physically distance.
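A minimal sketch of that percentage-change normalization is below. The column names, units and exact baseline window are assumptions for illustration rather than the investigators' code.

import pandas as pd

def percent_change_from_baseline(daily, baseline_start="2020-02-10",
                                 baseline_end="2020-03-08"):
    """Express each county-day's average distance traveled as a percent
    change from that county's mean over pre-COVID-19 reference weeks.
    `daily` needs columns: county, date, mean_distance_km."""
    daily = daily.copy()
    daily["date"] = pd.to_datetime(daily["date"])
    in_baseline = daily["date"].between(baseline_start, baseline_end)
    baseline = (daily[in_baseline]
                .groupby("county", as_index=False)["mean_distance_km"]
                .mean()
                .rename(columns={"mean_distance_km": "baseline_km"}))
    out = daily.merge(baseline, on="county")
    out["pct_change"] = 100 * (out["mean_distance_km"] / out["baseline_km"] - 1)
    return out  # negative values indicate greater physical distancing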

Investigators found a sharp reduction in average movement among US counties at the start of the COVID-19 pandemic and declaration of a national emergency in March 2020. Social distancing was greatest from late March to early June and then returned to baseline levels by June 2020. Distancing began to increase again in early September. However, even as a national trend evolved, the investigators found substantial variability in social distancing across counties.

While the share of racial and ethnic minorities, immigration, rurality, and employment in transportation were correlated with changes in average movement on many days, the single most consistent predictor of engagement in distancing across the study was higher per capita income. The single most consistent predictor of lack of engagement with distancing across the study was share of vote for President Trump.

Other county level-characteristics varied over time in their degree of association with physical distancing. During the early months of the study, counties with greater shares of Black and Hispanic residents were less likely to engage in social distancing. These adjusted racial and ethnic differences closed during the summer months before re-emerging in the fall. Similarly, rural counties were less likely to engage in distancing early on; by the end of the summer, rurality became the strongest negative predictor of physical distancing.

The investigators suggest a number of barriers that may underlie these findings. For example, lower-income or gig jobs may be incompatible with working from home, and lower income households may not have the necessary liquidity to shop in bulk, requiring more trips for groceries and essential household items. They observe that partisanship has dramatically shaped the government and public response to COVID-19 in the US. These findings show that political differences have continued to shape social distancing behavior, months into the pandemic and extending beyond the 2020 general election. As a result, both low-income and Republican-leaning communities are at greater risk for COVID-19.

"These results suggest that policy responses to the pandemic have failed to level the playing field," said Mr. Kavanagh. "We have not addressed the challenges to physical distancing faced by low-income Americans, such as working from home. And messages by political and public health leaders have not reached populations who may have different beliefs about disease risk. Analyses such as this study that monitor disparities over time can help us target public health and economic interventions to the communities that need them the most."

Credit: 
Elsevier