
Discovery of the role of a key gene in the development of ALS

image: INRS Professor Kessen Patten, specialist in amyotrophic lateral sclerosis (ALS) and holder of the Anna Sforza Djoukhadjian Research Chair.

Image: 
Christian Fleury

Amyotrophic lateral sclerosis, or ALS, attacks nerve cells known as motor neurons in the brain and spinal cord, gradually leading to paralysis. The loss of function of an important gene, C9orf72, may affect communication between motor neurons and muscles in people with this disease. These findings were revealed by the team of Dr Kessen Patten of the Institut national de la recherche scientifique (INRS) in the prestigious journal Communications Biology.

A mutation in the C9orf72 gene is the primary genetic cause of ALS. It consists of an unusually large expansion of a six-base DNA sequence (GGGGCC), from fewer than 20 copies in a healthy person to more than 1,000 copies. The mutation, which partly results in a loss of gene function, may be responsible for 40% to 50% of hereditary ALS cases and 5% to 10% of cases without family history.

Dr Patten's team investigated this gene's loss of function in genetically modified zebrafish models. In this work, led by PhD student Zoé Butti, the group observed symptoms similar to those of ALS, namely motor disorders, muscle atrophy, loss of motor neurons, and premature death.

Synaptic transmission

The study showed the effect of the loss of function induced by the mutation of the C9orf72 gene on communication between motor neurons and muscles. "This synaptic dysfunction is observed in all people with the disease and occurs before the death of motor neurons," noted the researcher and holder of the Anna Sforza Djoukhadjian Research Chair.

The research group also revealed the gene's influence on the protein TDP-43 (transactive response DNA binding protein 43), which plays an important role in ALS. The C9orf72 gene may affect where TDP-43 is located within the cell. "In approximately 97% of ALS patients, the TDP-43 protein is depleted from the nucleus and forms aggregates in the cytoplasm, rather than remaining in the nucleus as it does in healthy people. We want to investigate this relationship between the two proteins further," explained Professor Patten.

Now that the team has developed a model, it will be able to test therapeutic molecules. The aim is to find a drug to restore the synaptic connection between neurons and muscles. The work may also point to a therapeutic target for rectifying the abnormality of the TDP-43 protein.

Credit: 
Institut national de la recherche scientifique - INRS

In-situ structural evolution of Zr-doped Na3V2(PO4)2F3 coated by N-doped carbon for SIBs

image: Analysis of the nanostructure and in-situ structural evolution of Zr-doped NVPF completely coated with nitrogen-doped carbon.

Image: 
Journal of Energy Chemistry

Na3V2(PO4)2F3 (NVPF), a cathode material used in sodium-ion batteries (SIBs), features ultrafast Na+ migration and high structural stability because of its three-dimensional open framework. However, the poor intrinsic electronic conductivity of NVPF often leads to high polarization, low Coulombic efficiency, and unsatisfactory rate performance, which hinder its commercial application.

Recently, a group led by Prof. Shuangqiang Chen and Prof. Yong Wang from Shanghai University synthesized zirconium-doped NVPF nanoparticles coated with a nitrogen-doped carbon layer and demonstrated a synergistic effect on the overall electrochemical performance. Specifically, the optimized NVPF-Zr-0.02/NC electrode delivered a high reversible capacity (119.2 mA h g⁻¹ at 0.5 C), superior rate capability (98.1 mA h g⁻¹ at 20 C), and excellent cycling performance (90.2% capacity retention over 1000 cycles at 20 C). In situ XRD characterization of the NVPF-Zr-0.02/NC electrode was performed to monitor the real-time structural evolution in different charge/discharge states. The results confirmed the presence of several intermediates with new phases, following a step-wise Na-extraction/intercalation mechanism with reversible multiphase changes. In addition, NVPF-Zr-0.02/NC//hard carbon full cells demonstrated a high reversible capacity of 99.8 mA h g⁻¹ at 0.5 C, an average output voltage of 3.5 V, a high energy density of ~194 Wh kg⁻¹, and good cycling stability, indicating excellent potential for practical application.
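
As a rough plausibility check on these figures (not a calculation from the paper), specific energy is specific capacity times average voltage; the sketch below assumes the reported ~194 Wh kg⁻¹ is normalized to more mass than the cathode active material alone, which is what the quoted capacity and voltage imply:

```python
# Back-of-envelope check of the reported full-cell figures.
# Assumption: specific energy = specific capacity x average voltage.
capacity_mAh_per_g = 99.8   # reversible capacity at 0.5 C (cathode basis)
avg_voltage_V = 3.5         # average output voltage of the full cell

# mAh/g x V = mWh/g = Wh/kg, so on a cathode-mass basis:
cathode_basis_Wh_per_kg = capacity_mAh_per_g * avg_voltage_V
print(f"cathode-basis specific energy: {cathode_basis_Wh_per_kg:.0f} Wh/kg")  # ~349

# The reported ~194 Wh/kg is lower, consistent with normalizing the same
# energy over a larger mass (e.g., cathode plus hard-carbon anode):
implied_mass_factor = cathode_basis_Wh_per_kg / 194
print(f"implied normalization mass: ~{implied_mass_factor:.1f}x cathode mass")
```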

"Such attempts provide meaningful guidance and reference for practical SIBs with high capacity, long cycle life, and good structural stability," said Prof. Chen.

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Breakthrough for tracking RNA with fluorescence

image: Researchers at Chalmers University of Technology have developed a method to label mRNA molecules and follow their path through cells in real time under a microscope, without affecting their properties or subsequent activity.

Image: 
Chalmers University of Technology

Researchers at Chalmers University of Technology, Sweden, have succeeded in developing a method to label mRNA molecules, and thereby follow, in real time, their path through cells, using a microscope - without affecting their properties or subsequent activity. The breakthrough could be of great importance in facilitating the development of new RNA-based medicines.

RNA-based therapeutics offer a range of new opportunities to prevent, treat and potentially cure diseases. But currently, the delivery of RNA therapeutics into cells is inefficient. For new therapeutics to fulfil their potential, the delivery methods need to be optimised. Now, a new method, recently presented in the highly regarded Journal of the American Chemical Society, can provide an important piece of the puzzle in overcoming these challenges and take development a major step forward.

"Since our method can help solve one of the biggest problems for drug discovery and development, we see that this research can facilitate a paradigm shift from traditional drugs to RNA-based therapeutics," says Marcus Wilhelmsson, Professor at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology, and one of the main authors of the article.

Making mRNA fluorescent without affecting its natural activity

The research behind the method was carried out in collaboration with chemists and biologists at Chalmers and the biopharmaceutical company AstraZeneca, through their joint research centre FoRmulaEx, as well as with a research group at the Pasteur Institute in Paris.

The method involves replacing one of the building blocks of RNA with a fluorescent variant which, apart from that feature, maintains the natural properties of the original base. The fluorescent units were developed with the help of specially designed chemistry, and the researchers have shown that the modified base can be used to produce messenger RNA (mRNA) without affecting the mRNA's ability to be translated into protein at natural speed - something never before accomplished. The fluorescence furthermore allows the researchers to follow functional mRNA molecules in real time, watching under a microscope as they are taken up into cells.

A challenge when working with mRNA is that the molecules are very large and charged, but at the same time fragile. They cannot get into cells directly and must therefore be packaged. The method that has proven most successful to date uses very small droplets known as lipid nanoparticles to encapsulate the mRNA. There is still a great need to develop new and more efficient lipid nanoparticles - something which the Chalmers researchers are also working on. To be able to do that, it is necessary to understand how mRNA is taken up into cells. The ability to monitor, in real time, how the lipid nanoparticles and mRNA are distributed through the cell is therefore an important tool.

"The great benefit of this method is that we can now easily see where in the cell the delivered mRNA goes, and in which cells the protein is formed, without losing RNA's natural protein-translating ability," says Elin Esbjörner, Associate Professor at the Department for Biology and Biotechnology and the second lead author of the article.

Crucial information for optimising drug discovery

Researchers in this area can use the method to gain greater knowledge of how the uptake process works, thus accelerating and streamlining the new medicines' discovery process. The new method provides more accurate and detailed knowledge than current methods for studying RNA under a microscope.

"Until now, it has not been possible to measure the natural rate and efficiency with which RNA acts in the cell. This means that you get the wrong answers to the questions you ask when trying to develop a new drug. For example, if you want an answer to what rate a process takes place at, and your method gives you an answer that is a fifth of the correct, drug discovery becomes difficult," explains Marcus Wilhelmsson.

On the way to utilisation - straight onto IVA's 100 list

When the researchers realised what a difference their method could make and how important the new knowledge is for the field, they made their results available as quickly as possible. Recently, the Royal Swedish Academy of Engineering Sciences (IVA) included the project in its annual 100 list and also highlighted it as particularly important for increasing societal resilience to crises. To ensure useful commercialisation of the method, the researchers have submitted a patent application and are planning for a spin-off company, with the support of the business incubator Chalmers Ventures and the Chalmers Innovation Office.

Credit: 
Chalmers University of Technology

Prevalence of COVID-19 among hospitalized infants varies with levels of community transmission

How common COVID-19 is among infants may depend on how widely the pandemic virus is circulating in a community, a new study finds.

Published online June 30 in the journal Pediatrics, the study found that rates of infection with the virus that causes COVID-19 were higher among infants hospitalized not for COVID-19, but for evaluation of a potential serious bacterial infection (SBI), during periods of high COVID-19 circulation in New York City. The study also found rates of COVID-19 positivity in this age group were lower when infection rates in the city were low.

Led by researchers from NYU Langone Health, the study also examined the clinical course of the infection in young infants and found that the most common presentation of COVID-19 was a fever without other symptoms.

"Enhancing our knowledge of how COVID-19 infection affects young infants is important for informing clinical practice, and for planning public health measures such as vaccination distribution," says Vanessa N. Raabe, MD, assistant professor in NYU Langone's Departments of Medicine and Pediatrics, in the Division of Pediatric Infectious Diseases, and one of the study's principal investigators.

New York City was the early epicenter of COVID-19 in the United States, with more than 190,000 reported infections during the peak of the NYC epidemic between March and May in 2020. Three percent of the reported cases were in children under 18 years of age, although these numbers may underestimate the true incidence given the lack of adequate testing. Most children infected with the disease were asymptomatic or had mild symptoms. However, cases of severe illness have been reported and some reports suggest young infants may be at higher risk for severe disease than older children.

Young babies are often treated with antibiotics in the hospital when they run a fever until doctors can make sure they don't have a serious bacterial infection, such as meningitis or a bloodstream infection, say the study authors.

"Because fever is a common symptom of COVID-19 in children, clinicians must consider COVID-19 as a potential cause of fever and not solely rely on laboratory or imaging results to guide decision-making on whether or not to test hospitalized infants for COVID-19," says Dr. Raabe.

The current study analyzed data from infants less than 90 days of age admitted for SBI evaluation at NYU Langone Health hospitals and NYC Health + Hospitals/Bellevue between March and December 2020. Among 148 infants, 15 percent tested positive for COVID-19; two of the 22 infants with COVID-19 required ICU admission, and both were discharged safely. Specifically, the team found that only 3 percent of infants tested positive during periods of low community circulation, compared with 31 percent during periods of high circulation.

The team also found a relatively low incidence (6 percent) of infection with other commonly occurring viruses among the hospitalized infants, whether or not they had COVID-19. "This likely reflects community-wide decreases in other respiratory viruses reported in New York during the study period due to enhanced infection control practices, like social distancing and mask wearing, at the height of the pandemic," says Raabe.

The researchers recommend that clinicians continue to assess young infants presenting with fevers for bacterial infections, regardless of COVID-19 status, given the potentially severe consequences if such infections go untreated.

"It may be intuitive that what is happening in children reflects conditions in the surrounding community, but we find it reassuring that the evidence confirms this relationship," says lead author Michal Paret, MD, a fellow in the Department of Pediatrics, Division of Pediatric Infectious Diseases at NYU Langone. "The epidemiology of COVID-19 continues to evolve with the emergence of variant viruses and vaccination implementation. In the face of these changes physicians need to continue studying this age group, with the goal of determining ultimately whether a selective or universal testing strategy best serves the health of infants long-term."

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

'There may not be a conflict after all' in expanding universe debate

image: A red giant star in the constellation Camelopardalis emits a shell of gas as a layer of helium around its core begins to fuse. Such events help scientists calculate how fast the universe is expanding.

Image: 
ESA/NASA

Our universe is expanding, but our two main ways to measure how fast this expansion is happening have resulted in different answers. For the past decade, astrophysicists have been gradually dividing into two camps: one that believes that the difference is significant, and another that thinks it could be due to errors in measurement.

If it turns out that errors are causing the mismatch, that would confirm our basic model of how the universe works. The other possibility is a loose thread which, when pulled, would suggest that some fundamental new physics is needed to stitch our model back together. For several years, each new piece of evidence from telescopes has seesawed the argument back and forth, giving rise to what has been called the 'Hubble tension.'

Wendy Freedman, a renowned astronomer and the John and Marion Sullivan University Professor in Astronomy and Astrophysics at the University of Chicago, made some of the original measurements of the expansion rate of the universe that resulted in a higher value of the Hubble constant. But in a new review paper accepted to the Astrophysical Journal, Freedman gives an overview of the most recent observations. Her conclusion: the latest observations are beginning to close the gap.

That is, there may not be a conflict after all, and our standard model of the universe does not need to be significantly modified.

The rate at which the universe is expanding is called the Hubble constant, named for UChicago alum Edwin Hubble, SB 1910, PhD 1917, who is credited with discovering the expansion of the universe in 1929. Scientists want to pin down this rate precisely, because the Hubble constant is tied to the age of the universe and how it evolved over time.

A substantial wrinkle emerged in the past decade when results from the two main measurement methods began to diverge. But scientists are still debating the significance of the mismatch.

One way to measure the Hubble constant is by looking at very faint light left over from the Big Bang, called the cosmic microwave background. This has been done both in space and on the ground with facilities like the UChicago-led South Pole Telescope. Scientists can feed these observations into their 'standard model' of the early universe and run it forward in time to predict what the Hubble constant should be today; they get an answer of 67.4 kilometers per second per megaparsec.

The other method is to look at stars and galaxies in the nearby universe, and measure their distances and how fast they are moving away from us. Freedman has been a leading expert on this method for many decades; in 2001, her team made one of the landmark measurements using the Hubble Space Telescope to image stars called Cepheids. The value they found was 72. Freedman has continued to measure Cepheids in the years since, reviewing more telescope data each time; however, in 2019, she and her colleagues published an answer based on an entirely different method using stars called red giants. The idea was to cross-check the Cepheids with an independent method.

Red giants are very large and luminous stars that always reach the same peak brightness before rapidly fading. If scientists can accurately measure the actual, or intrinsic, peak brightness of the red giants, they can then measure the distances to their host galaxies, an essential but difficult part of the equation. The key question is how accurate those measurements are.

The first version of this calculation in 2019 used a single, very nearby galaxy to calibrate the red giant stars' luminosities. Over the past two years, Freedman and her collaborators have run the numbers for several different galaxies and star populations. "There are now four independent ways of calibrating the red giant luminosities, and they agree to within 1% of each other," said Freedman. "That indicates to us this is a really good way of measuring the distance."

"I really wanted to look carefully at both the Cepheids and red giants. I know their strengths and weaknesses well," said Freedman. "I have come to the conclusion that that we do not require fundamental new physics to explain the differences in the local and distant expansion rates. The new red giant data show that they are consistent."

University of Chicago graduate student Taylor Hoyt, who has been making measurements of the red giant stars in the anchor galaxies, added, "We keep measuring and testing the red giant branch stars in different ways, and they keep exceeding our expectations."

The value of the Hubble constant Freedman's team gets from the red giants is 69.8 km/s/Mpc--virtually the same as the value derived from the cosmic microwave background experiment. "No new physics is required," said Freedman.
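
The stakes of these small numerical differences are easier to see by converting each value into a "Hubble time," 1/H0, a rough proxy for the age of the universe (the true age also depends on the universe's matter and dark-energy content). A minimal sketch of the unit conversion:

```python
# Convert a Hubble constant in km/s/Mpc into a Hubble time (1/H0) in
# billions of years -- a rough proxy for the age of the universe.
KM_PER_MPC = 3.0857e19       # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    h0_per_s = h0_km_s_mpc / KM_PER_MPC  # H0 in units of 1/s
    return 1.0 / h0_per_s / SECONDS_PER_GYR

for h0 in (67.4, 69.8, 72.0):  # CMB prediction, red giants, Cepheids
    print(f"H0 = {h0:4.1f} km/s/Mpc  ->  1/H0 = {hubble_time_gyr(h0):.1f} Gyr")
# 67.4 -> 14.5 Gyr, 69.8 -> 14.0 Gyr, 72.0 -> 13.6 Gyr
```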

The calculations using Cepheid stars still give higher numbers, but according to Freedman's analysis, the difference may not be troubling. "The Cepheid stars have always been a little noisier and a little more complicated to fully understand; they are young stars in the active star-forming regions of galaxies, and that means there's potential for things like dust or contamination from other stars to throw off your measurements," she explained.

To her mind, the conflict can be resolved with better data.

Next year, when the James Webb Space Telescope is expected to launch, scientists will begin to collect those new observations. Freedman and collaborators have already been awarded time on the telescope for a major program to make more measurements of both Cepheid and red giant stars. "The Webb will give us higher sensitivity and resolution, and the data will get better really, really soon," she said.

But in the meantime, she wanted to take a careful look at the existing data, and what she found was that much of it actually agrees.

"That's the way science proceeds," Freedman said. "You kick the tires to see if something deflates, and so far, no flat tires."

Some scientists who have been rooting for a fundamental mismatch might be disappointed. But for Freedman, either answer is exciting.

"There is still some room for new physics, but even if there isn't, it would show that the standard model we have is basically correct, which is also a profound conclusion to come to," she said. "That's the interesting thing about science: We don't know the answers in advance. We're learning as we go. It is a really exciting time to be in the field."

Credit: 
University of Chicago

Researchers home in on the best software for detecting microRNAs in plants

Almost twenty years ago, the process of RNA silencing, in which small fragments of RNA inactivate a portion of a gene during protein synthesis, was discovered in plants. These fragments--called microRNAs (abbreviated as miRNAs)--have since been shown to be essential at nearly every stage of growth and development in plants, from the production of flowers, stems, and roots to the ways plants interact with their environment and ward off infection.

The detection and characterization of miRNAs is an active field of research. In the decade following their discovery in plants, over 1,000 bioinformatic tools were used to identify miRNAs and map out potential places to look for them.

In a new study published in the journal Applications in Plant Sciences, researchers set out to simplify the process of miRNA discovery and characterization by testing eight of the most commonly used miRNA applications, evaluating each based on accuracy, sensitivity, speed, and the amount of computer memory used by the software.

The sheer number of applications available can make studying miRNAs a daunting task. The majority of these tools (77%) were also developed and tested with animal systems in mind, making it unclear whether their utility can be extended for the detection and analysis of miRNAs in plants.

"The biosynthetic pathways of miRNA in plants and animals are different," said Dr. Qi You, a lecturer at the Agricultural College of Yangzhou University and senior author of the study. "At the same time, the stem-loop structure of plant miRNA precursors is larger than that of animals. Plant miRNAs will be modified by methylation, but animal miRNAs will not."

Things get even trickier when considering that individuals of the same plant species can have varying genome sizes, making it difficult to develop a single standardized approach. Additionally, some programs only look for miRNAs that are already known to exist, while others comb through genomes to find new additions to the list.

Finally, while most tools use search criteria to directly identify miRNA sequences in a genome, others are designed to search for miRNAs in their early developmental stages (precursor miRNA), when more base pairs are attached at either end before a molecular cleaving process trims them down to size.

You and her colleagues took eight miRNA applications (one that was developed for use in animals, and two that are exclusively used to locate precursor miRNA) and rigorously tested them on four different plant species, each with varying genome sizes: thale cress (Arabidopsis thaliana), rice (Oryza sativa), maize (Zea mays), and wheat (Triticum aestivum).

While all eight programs had similar performance in terms of accuracy, there were some obvious winners and losers for all other metrics scored, analyzed both together and separately.
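
The scoring itself is standard classification bookkeeping; a minimal sketch of how one tool's output might be scored against a reference set of known miRNAs (the paper's exact procedure is not described here, and the names and numbers below are purely illustrative):

```python
import time

def score_tool(predicted: set, known: set) -> dict:
    """Score one miRNA-detection tool against a reference set of known miRNAs."""
    tp = len(predicted & known)   # known miRNAs the tool recovered
    fn = len(known - predicted)   # known miRNAs the tool missed
    fp = len(predicted - known)   # calls absent from the reference set
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

# Illustrative reference set and one tool's calls:
known_mirnas = {"miR156", "miR159", "miR166", "miR172"}
tool_calls = {"miR156", "miR166", "miR172", "novel-1"}

t0 = time.perf_counter()
metrics = score_tool(tool_calls, known_mirnas)
metrics["runtime_s"] = time.perf_counter() - t0  # speed is timed per run
print(metrics)  # sensitivity 0.75, precision 0.75
```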

First, the software developed for miRNA detection in animals scored low on sensitivity and, on average, took longer to run than most of the other programs. It was particularly bad at identifying known miRNAs in maize and produced a high ratio of false to true positives in wheat, indicating it likely isn't suitable for reliable use in plants.

Two programs stood out from the rest for their high sensitivity and low run times. The first, miRExpress--a program developed in 2009 for the detection of putative novel miRNAs--had the highest sensitivity for three of the four species tested and used the least amount of memory. The runner-up was the program sRNAbench, which had the highest sensitivity for wheat and similar run times.

Of the two, sRNAbench has many of the same functions and utility as the program developed for animal systems tested in this study, making it a solid stand-in for use in plants.

Ultimately, however, there's no one-size-fits-all option when it comes to choosing the best program, and researchers should choose based on their needs and available resources, You said.

"Researchers should consider the size of the genome of the tested species, the size of the sample data, and the configuration of their own computer equipment."

For labs that might not have access to high-performance computing or that don't have ample computer memory space, the authors recommend miRExpress, which maintains high accuracy rates while being less resource-intensive.

For those with more RAM, sRNAbench is the next step up, with high accuracy and applicability to several species with varying genome sizes. Similarly, detecting precursor miRNA requires large amounts of RAM, for which the authors recommend the program miRkwood.

Credit: 
Botanical Society of America

Scientists risk overestimating numbers of wild bonobos

image: Bonobos, like other great apes, build sleeping platforms each night. These nests are used by scientists to estimate numbers of apes in the wild.

Image: 
MPI of Animal Behavior/ Barbara Fruth

There might be fewer bonobos left in the wild than we thought. For the last 40 years, scientists have estimated the abundance of endangered bonobos by counting the numbers of sleeping nests left by the apes in forests of the Congo Basin. Now, researchers from the Max Planck Institute of Animal Behavior report that the time it takes sleeping nests to decay has lengthened by 17 days over the last 15 years as a result of declining rainfall in the Congo Basin. The study warns that longer nest decay times have serious implications for ape conservation: failure to account for these changes would lead to overestimating population density by up to 60 percent, subsequently jeopardizing conservation of these endangered great apes in the wild.

Bonobos (Pan paniscus) are a species of great ape, similar to chimpanzees, found only in the rainforests of the Congo Basin--the planet's second-largest forest area and one of Earth's "green lungs."

The study, conducted by the LuiKotale Bonobo Project as part of an ongoing collaboration between scientists at Liverpool John Moores University, the Centre for Research and Conservation, and the Max Planck Institute of Animal Behavior, aimed to assess the influence of weather on the decomposition of bonobo sleeping platforms, also called nests. In the LuiKotale research site in the Democratic Republic of the Congo, the scientists studied 15 years of climate data, including rainfall, and 1,511 bonobo nests observed from construction to disappearance. "While climate change is known to be impacting Central African rainforests, a lack of real data from the Congo Basin meant that we didn't know how this was affecting the region and bonobos," says Barbara Fruth, Group Leader at the Max Planck Institute of Animal Behavior in Konstanz, Germany, who is senior author on the study. "Our study provides evidence that the world's largest freshwater reservoir is suffering from climate change," she says.

How to convert nest counts into ape counts

This troubling trend has added importance for bonobos, and great apes in general, because their populations are estimated not by counting actual apes, but rather the nests left by the primates each night. Knowing how fast these nests disappear is essential for converting nest counts to ape counts. The study's results showed a steady decline of rainfall across years and the impact of this on bonobo nest decay times. Less rain meant that nests persisted for longer in the forest. Further, the scientists observed bonobos using more solid construction types in response to more unpredictable storms.
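
The conversion is simple survey arithmetic: ape density is nest density divided by the nest decay time and the nest construction rate, so any error in the decay time propagates directly into the population estimate. A minimal sketch with invented numbers (the study's actual decay times are not quoted in this article):

```python
def ape_density(nest_density_per_km2: float,
                decay_time_days: float,
                nests_per_ape_per_day: float = 1.0) -> float:
    """Standing-crop nest conversion: apes per square kilometer."""
    return nest_density_per_km2 / (decay_time_days * nests_per_ape_per_day)

nests_km2 = 100.0           # hypothetical nest density from a survey
old_decay = 90.0            # hypothetical decay time from older calibrations
new_decay = old_decay + 17  # nests now persist ~17 days longer, per the study

biased = ape_density(nests_km2, old_decay)      # using the stale decay time
corrected = ape_density(nests_km2, new_decay)   # using the updated one
print(f"overestimate: {100 * (biased / corrected - 1):.0f}%")  # ~19% here
```

The bias scales with how far the assumed decay time lags the true one, which is how, with the study's actual figures, overestimates can reach the 60 percent reported above.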

The connection between nest decay and climate change is relevant to conservation of all great apes because nest counts are used as the "gold standard for estimating their populations," says first author Mattia Bessone, a PhD student at Liverpool John Moores University who is working on validating cutting-edge methods for assessing threatened species. "As climate change continues to affect both the process of nest decay and ape nest-building behaviour, great ape nest decomposition times are likely only to increase further in future years. We stress the absolute necessity of taking into account the specific impact of climate on nests in areas where these are being used to survey great apes. Failure to do so will invalidate biomonitoring estimates of global significance and subsequently jeopardize the conservation of great apes in the wild."

Credit: 
Max-Planck-Gesellschaft

Cardio health decline tied to midlife wealth

image: Andrew Sumarsono, M.D.

Image: 
UT Southwestern Medical Center

DALLAS - June 30, 2021 - A relative decline in wealth during midlife increases the likelihood of a cardiac event or heart disease after age 65, while an increase in wealth between ages 50 and 64 is associated with lower cardiovascular risk, according to a new study in JAMA Cardiology.

Although the association between socioeconomic status and cardiovascular outcomes is well established, little research has been done to determine whether longitudinal changes in wealth are associated with cardiovascular health. In the study, Andrew Sumarsono, M.D., an assistant professor of internal medicine at UT Southwestern, along with colleagues from Harvard-affiliated Brigham and Women's Hospital Heart & Vascular Center and the London School of Economics, investigated the cardiovascular toll that changes in wealth can take in the U.S., where there is a 10- to 15-year difference in life expectancy between the population's richest 1 percent and the poorest 1 percent.

Examining a cohort of more than 5,500 adults without cardiovascular disease, they found that middle-aged participants who experienced upward wealth mobility - defined as relative increases in the total value of assets excluding primary residence - had lower cardiovascular risk after age 65 compared with peers of similar age. Conversely, participants who experienced downward wealth mobility in the latter parts of their careers had higher cardiovascular risk later in life. Cardiovascular events counted as outcomes included acute myocardial infarction, heart failure, cardiac arrhythmia, stroke, and cardiac-related death.

"We already know that wealth relates to health, but we show that wealth trajectories also matter. This means that the cardiovascular risk associated with wealth is not permanent and can be influenced," says Sumarsono, a faculty member in the Division of Hospital Medicine.

The researchers estimate a 1 percent swing in cardiovascular risk for every $100,000 gained or lost by individuals. Notably, participants who started in the top 20 percent of wealth and experienced downward wealth mobility still had cardiovascular risk similar to that of those who remained in the top quintile. However, those who started in the bottom fifth of wealth and experienced upward wealth mobility had lower cardiovascular risk than those who remained in the bottom quintile. The investigators suggest this may indicate a potential legacy of protection among the wealthiest, but not the poorest. These findings linking wealth change to downstream cardiovascular events were similar across all racial and ethnic subgroups.

"We found that irrespective of one's baseline wealth, upward wealth mobility relative to peers in late-middle age was associated with lower risk of a new cardiac event or death after age 65. This suggests that upward wealth mobility may offset some of the risk associated with past economic hardship," Sumarsono says. "We also found the inverse was true - that people who experienced downward wealth mobility relative to one's peers faced a higher risk of a new cardiac event or death after 65, potentially offsetting some of the benefit associated with prior economic thriving.

"We live in a system where people can experience catastrophic losses in wealth from situations beyond their control and that opportunities to accrue wealth are not equally available across racial or socioeconomic groups," Sumarsono adds. "Policies that build resilience against large wealth losses and that address these opportunity gaps should be prioritized and may be considered a public health measure to improve overall health while also potentially narrowing racial, socioeconomic, and cardiovascular health disparities."

Credit: 
UT Southwestern Medical Center

Multitalented filaments in living cells

video: Researchers at Göttingen University measured which physical effects determine the properties of individual intermediate filaments, and which features emerge only through the interaction of many filaments in networks.

Image: 
Dr Markus Osterhoff

The cells that make up our bodies are constantly exposed to a wide variety of mechanical stresses. For example, the heart and lungs have to withstand lifelong expansion and contraction, our skin has to be as resistant to tearing as possible whilst retaining its elasticity, and immune cells are very squashy so that they can move through the body. Special protein structures, known as "intermediate filaments", play an important role in these characteristics. Researchers at Göttingen University have now succeeded for the first time in precisely measuring which physical effects determine the properties of the individual filaments, and which specific features only occur through the interaction of many filaments in networks. The results were published in PNAS.

One of the most important systems that cells use to ensure their stability, elasticity and resistance to mechanical stress is the cytoskeleton: an intricate network of proteins and filaments, predominantly formed by three types of thread-like protein structures, each of which has different functions and properties. The intermediate filaments belong to this group of protein structures. These filaments form networks that can be subjected to a tremendous force without being damaged: they are the shock absorbers of the cells. At the same time, intermediate filaments can serve as an inner tether during powerful deformations, preventing the cell from being torn apart.

To investigate these properties, the Göttingen team created artificial networks of intermediate filaments in the laboratory and used the movement of small embedded spheres to study how the entire network behaves. In fact, there are overlapping effects in the networks: the stretching behaviour of the individual filaments; and the force and frequency with which the filaments interact at points where they cross (intersections). For this reason, the researchers investigated these aspects separately by first stretching individual filaments to determine the forces required for the stretching. They then brought two of the filaments into contact with each other in a criss-cross pattern and pulled on the point of intersection by moving one of the filaments. By arranging the filaments as if they were a "microscopic violin", they were able to determine the exact strength and frequency with which the filaments bind together. They were also able to replicate these results with computer simulations. In addition, the team observed that the networks transform over an unexpectedly long period of time and slowly "age" over the course of a week, because the filaments become longer and longer or join together to form bundles.

"All these observations increase our understanding of how our cells manage to be so incredibly robust and yet flexible," explains first author Anna Schepers from the Institute for X-ray Physics at Göttingen University. "In addition, a clearer picture of intermediate filaments helps us to understand how and why the mechanical properties of cells change, for example, during wound healing or in metastasising cancer cells," adds the research lead, Professor Sarah Köster.

Credit: 
University of Göttingen

Physicists observationally confirm Hawking's black hole theorem for the first time

There are certain rules that even the most extreme objects in the universe must obey. A central law for black holes predicts that the area of their event horizons -- the boundary beyond which nothing can ever escape -- should never shrink. This law is Hawking's area theorem, named after physicist Stephen Hawking, who derived the theorem in 1971.

Fifty years later, physicists at MIT and elsewhere have now confirmed Hawking's area theorem for the first time, using observations of gravitational waves. Their results appear in Physical Review Letters.

In the study, the researchers take a closer look at GW150914, the first gravitational wave signal detected by the Laser Interferometer Gravitational-wave Observatory (LIGO), in 2015. The signal was a product of two inspiraling black holes that generated a new black hole, along with a huge amount of energy that rippled across space-time as gravitational waves.

If Hawking's area theorem holds, then the horizon area of the new black hole should not be smaller than the total horizon area of its parent black holes. In the new study, the physicists reanalyzed the signal from GW150914 before and after the cosmic collision and found that indeed, the total event horizon area did not decrease after the merger -- a result that they report with 95 percent confidence.

Their findings mark the first direct observational confirmation of Hawking's area theorem, which has been proven mathematically but never observed in nature until now. The team plans to test future gravitational-wave signals to see if they might further confirm Hawking's theorem or be a sign of new, law-bending physics.

"It is possible that there's a zoo of different compact objects, and while some of them are the black holes that follow Einstein and Hawking's laws, others may be slightly different beasts," says lead author Maximiliano Isi, a NASA Einstein Postdoctoral Fellow in MIT's Kavli Institute for Astrophysics and Space Research. "So, it's not like you do this test once and it's over. You do this once, and it's the beginning."

Isi's co-authors on the paper are Will Farr of Stony Brook University and the Flatiron Institute's Center for Computational Astrophysics, Matthew Giesler of Cornell University, Mark Scheel of Caltech, and Saul Teukolsky of Cornell University and Caltech.

An age of insights

In 1971, Stephen Hawking proposed the area theorem, which set off a series of fundamental insights about black hole mechanics. The theorem predicts that the total area of a black hole's event horizon -- and all black holes in the universe, for that matter -- should never decrease. The statement was a curious parallel of the second law of thermodynamics, which states that the entropy, or degree of disorder within an object, should also never decrease.

The similarity between the two theories suggested that black holes could behave as thermal, heat-emitting objects -- a confounding proposition, as black holes by their very nature were thought to never let energy escape, or radiate. Hawking eventually squared the two ideas in 1974, showing that black holes could have entropy and emit radiation over very long timescales if their quantum effects were taken into account. This phenomenon was dubbed "Hawking radiation" and remains one of the most fundamental revelations about black holes.

"It all started with Hawking's realization that the total horizon area in black holes can never go down," Isi says. "The area law encapsulates a golden age in the '70s where all these insights were being produced."

Hawking and others have since shown that the area theorem works out mathematically, but there had been no way to check it against nature until LIGO's first detection of gravitational waves.

Hawking, on hearing of the result, quickly contacted LIGO co-founder Kip Thorne, the Feynman Professor of Theoretical Physics at Caltech. His question: Could the detection confirm the area theorem?

At the time, researchers did not have the ability to pick out the necessary information within the signal, before and after the merger, to determine whether the final horizon area did not decrease, as Hawking's theorem would require. It wasn't until several years later, with the development of a technique by Isi and his colleagues, that testing the area law became feasible.

Before and after

In 2019, Isi and his colleagues developed a technique to extract the reverberations immediately following GW150914's peak -- the moment when the two parent black holes collided to form a new black hole. The team used the technique to pick out specific frequencies, or tones of the otherwise noisy aftermath, that they could use to calculate the final black hole's mass and spin.

A black hole's mass and spin are directly related to the area of its event horizon, and Thorne, recalling Hawking's query, approached them with a follow-up: Could they use the same technique to compare the signal before and after the merger, and confirm the area theorem?

The researchers took on the challenge, and again split the GW150914 signal at its peak. They developed a model to analyze the signal before the peak, corresponding to the two inspiraling black holes, and to identify the mass and spin of both black holes before they merged. From these estimates, they calculated their total horizon areas -- roughly 235,000 square kilometers, or roughly nine times the area of Massachusetts.

They then used their previous technique to extract the "ringdown," or reverberations of the newly formed black hole, from which they calculated its mass and spin, and ultimately its horizon area, which they found was equivalent to 367,000 square kilometers (approximately 13 times the Bay State's area).
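
Those areas follow directly from the Kerr horizon geometry, A = 8π(GM/c²)²(1 + √(1 − χ²)), where χ is the dimensionless spin. The sketch below reproduces the quoted figures using published GW150914 parameter estimates that are not quoted in this article (parent masses of roughly 36 and 29 solar masses, treated here as non-spinning for simplicity, and a 62-solar-mass remnant with χ ≈ 0.67):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def kerr_horizon_area_km2(mass_msun: float, spin: float = 0.0) -> float:
    """Event-horizon area of a Kerr black hole, in square kilometers."""
    m_geom = G * mass_msun * M_SUN / C**2  # GM/c^2 in meters
    area_m2 = 8 * math.pi * m_geom**2 * (1 + math.sqrt(1 - spin**2))
    return area_m2 / 1e6

# Approximate published GW150914 estimates (parent spins neglected here):
parents = kerr_horizon_area_km2(36) + kerr_horizon_area_km2(29)
remnant = kerr_horizon_area_km2(62, spin=0.67)
print(f"parents combined: ~{parents:,.0f} km^2")  # ~235,000 km^2
print(f"remnant:          ~{remnant:,.0f} km^2")  # ~367,000 km^2
```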

"The data show with overwhelming confidence that the horizon area increased after the merger, and that the area law is satisfied with very high probability," Isi says. "It was a relief that our result does agree with the paradigm that we expect, and does confirm our understanding of these complicated black hole mergers."

The team plans to further test Hawking's area theorem, and other longstanding theories of black hole mechanics, using data from LIGO and Virgo, its counterpart in Italy.

"It's encouraging that we can think in new, creative ways about gravitational-wave data, and reach questions we thought we couldn't before," Isi says. "We can keep teasing out pieces of information that speak directly to the pillars of what we think we understand. One day, this data may reveal something we didn't expect."

Credit: 
Massachusetts Institute of Technology

Employed individuals more likely to contract the flu, study shows

image: Don Koh, University of Arkansas

Image: 
University of Arkansas

A University of Arkansas researcher and international colleagues found that employed individuals are, on average, 35.3% more likely to be infected with the flu virus than those who are not employed.

The findings confirm a long-held assumption about one prevalent way illness spreads and could influence government policy on public health and several issues for private companies, from optimal design and management of physical work spaces to policy decisions about sick leave and remote work.

To track influenza incidence, Dongya "Don" Koh, assistant professor of economics in the Sam M. Walton College of Business, and colleagues relied on nationally representative data from the Medical Expenditure Panel Survey, which provides comprehensive health care information about families and individuals, their medical providers and U.S. employers. The survey is the most complete source of data on the cost and use of health care and health insurance coverage.

Koh and his colleagues found significant differences in flu incidence across occupations and industries. Among occupations, for example, people working in sales had a 40.5% higher probability of infection than farmers. Among industries, education, health and social services showed a 52.2% higher probability of infection than mining. The results accounted for individual characteristics, including vaccination status, health insurance and other circumstances.

"Cross-industry differences in flu incidence cannot be fully explained by differences within an industry-specific occupational structure," Koh said. "So we had to look at the extent of human contact and interaction at work as a potential mechanism for contagion."

To do this, the researchers constructed a measure of occupation-specific and industry-specific human exposure and interaction, based on data gleaned from O*NET OnLine, a comprehensive source for the description of jobs, occupational information and workforce development. The researchers found that higher human contact at work was positively associated with higher contagion rates.
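
The article doesn't spell out the estimation details, but the core of such an analysis is a regression of infection rates on a contact-intensity score; a minimal sketch under that assumption, with every number invented for illustration:

```python
import numpy as np

# Illustrative occupation-level data: an O*NET-style contact score (0-100)
# and seasonal flu incidence (%). All values are made up for the sketch.
contact = np.array([30.0, 45.0, 60.0, 75.0, 90.0])  # proximity/interaction at work
incidence = np.array([2.1, 2.6, 3.4, 3.9, 4.8])     # share infected per season

# Ordinary least squares: incidence = intercept + slope * contact
slope, intercept = np.polyfit(contact, incidence, deg=1)
print(f"each extra contact point ~ {slope:.3f} pp of flu incidence")
```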

The effects were larger in years of high aggregate flu incidence and were consistent across firm size, number of jobs and hours worked.

"These results shouldn't surprise anyone," Koh said. "We hope they are relevant for an understanding of the spread of flu and other infectious diseases transmitted via respiratory droplets or close human contact, including SARS and COVID. The fact that contagion risk varies across occupations and industries opens the door for an assessment of nonpharmaceutical policies to combat contagion and possibly pandemics. In this sense, we think these results provide a basis for an organizational policy that both protects workers and optimizes production and efficiency."

Credit: 
University of Arkansas

Jackdaws don't console traumatized mates

video: Nest-box footage of a jackdaw pair allopreening (one bird preening another).

Image: 
Cornish Jackdaw Project

Male jackdaws don't stick around to console their mate after a traumatic experience, new research shows.

Jackdaws usually mate for life and, when breeding, females stay at the nest with eggs while males gather food.

Rival males sometimes visit the nest and attack the lone female, attempting to mate by force.

In the new study, University of Exeter researchers expected males to console their partner after these incidents by staying close and engaging in social behaviours like preening their partner's feathers.

However, males focussed on their own safety - they still brought food to the nest, but they visited less often and spent less time with the female.

"Humans often console friends or family in distress, but it's unclear whether animals do this in the wild," said Beki Hooper, of the Centre for Ecology and Conservation on Exeter's Penryn Campus in Cornwall.

"Previous studies have found consolation-like behaviour in some animals, but there is often another explanation - other than consolation - that could motivate this.

"We designed this study to rule out other explanations, so we could find out whether jackdaws, which have complex social lives and life-long partnerships, would console each other.

"We found that male jackdaws could detect their partner's distress, but did not console them.

"The decreased time males spent in the nest after an attack may be for self-preservation, where they respond to distress in their partner by avoiding the nest, a potential place of danger.

"Further work is needed to figure out why exactly males responded as they did, though."

The research team used both experimental nest "visits" of rival males - playing the sounds of an approaching male to a female alone at the nest - and recordings of males attacking nesting females in the wild.

Females became agitated after such experiences, and their returning mates clearly sensed this as their behaviour changed - decreasing "affiliative" behaviour and visiting the nest less regularly afterwards.

"Jackdaw pairs depend on each other for survival and reproduction, so based on current theory we would expect to see consolation after a traumatic event," said Dr Alex Thornton.

"The absence of consolation is striking, and it challenges common conceptions about where consolation should evolve.

"It also chimes with concerns that current theory may be influenced by anthropomorphic (attributing human characteristics to animals) expectations of how social relationships work.

"To properly understand such social behaviour among animals, we need to take account of the dangers and trade-offs they face in the wild."

Jackdaws are an extremely sociable and intelligent corvid species, with a similar number of neurons in their brains to those of nonhuman primates.

They show impressive levels of social intelligence, such as individual human recognition, and the ability to socially learn from group members about dangerous people.

The paper, published in the journal Royal Society Open Science, is entitled: "Wild jackdaws respond to their partner's distress, but not with consolation."

Credit: 
University of Exeter

A world first! Visualizing atomic-scale structures with the optical force

image: (a) Schematic of photoinduced force microscopy. (b), (c) Photoinduced force microscopy images of a quantum dot measured at different wavelengths (600 nm, 520 nm). (d) Photoinduced force profiles for the images, reflecting the electronic energy structure designed for photocatalysis.

Image: 
Osaka University

Osaka, Japan - A team of scientists led by the Department of Applied Physics at Osaka University, the Department of Physics and Electronics at Osaka Prefecture University, and the Department of Materials Chemistry at Nagoya University used photoinduced force microscopy to map out the forces acting on quantum dots in three dimensions. By eliminating sources of noise, the team was able to achieve subnanometer precision for the first time ever, which may lead to new advances in photocatalysts and optical tweezers.

Force fields are not the invisible barriers of science fiction but are a set of vectors indicating the magnitude and direction of forces acting in a region of space. Nanotechnology, which involves making and manipulating tiny devices such as quantum dots, sometimes uses lasers to optically trap and move these objects. However, the ability to analyze and handle such small systems requires a better way to visualize the 3D forces acting on them.

Now, a team of researchers at Osaka University, Osaka Prefecture University, and Nagoya University has shown for the first time how photoinduced force microscopy can be used to obtain 3D force field diagrams with subnanometer resolution. "We succeeded in imaging the optical near-field of nanoparticles using a photoinduced force microscope. This measures the optical force between the sample and the probe caused by light irradiation," first author Junsuke Yamanishi says.
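
In the simplest textbook picture, the "optical force" being mapped is the gradient force on a polarizable tip-sample system in a non-uniform optical field; a standard dipole-approximation expression (a general relation, not a formula taken from this study, and with prefactor conventions that vary with how the field amplitude is defined) is

\[
\langle \mathbf{F}_{\mathrm{grad}} \rangle = \frac{1}{4}\,\mathrm{Re}(\alpha)\,\nabla |E_0|^2 ,
\]

where α is the complex polarizability and E₀ the local optical field amplitude. Recording this force while scanning the tip yields the near-field maps described above.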

Laser light was directed onto a quantum dot placed underneath an atomic force microscopy tip. Moving the dot relative to the tip allowed the microscope to map out the 3D photoinduced force field. The team was able to achieve such a high level of precision thanks to a few experimental improvements. They used ultrahigh-vacuum conditions to increase the force sensitivity and employed heterodyne frequency modulation, which involves mixing two additional frequencies, to greatly reduce the impact of thermal heating. "We reduced the photothermal effect with this unique technology and achieved a resolution of less than one nanometer for the first time ever," senior author Yasuhiro Sugawara says.

This research may represent a fundamentally new technology for the design and evaluation of functional nanomaterials. It can also help supplement the toolbox of methods available to scientists working with photocatalysts and optical functional devices to move them using lasers.

Credit: 
Osaka University

What makes vets feel good at work?

image: University of Adelaide veterinary scientists in training.

Image: 
The University of Adelaide

Receiving a simple thank you, spending time with peers and further developing their expertise are all factors that make veterinarians feel good at work, according to a new study by researchers at the University of Adelaide.

In the study published by Vet Record, researchers investigated the positive side of veterinary work and specifically what brings vets pleasure in their job.

Lead author Madeleine Clise, a psychologist and Adjunct Lecturer at the University of Adelaide's School of Psychology says: "At a time in Australia when there are national shortages of vets, particularly in regional areas, and increased publicity about the risks and challenges in the profession, it's important to focus on what can be done to retain those in the profession and attract more people to the field.

"By focusing on what contributes to vets experiencing positive emotions, we can better understand how to improve wellbeing of those who care for our beloved pets, livestock and wildlife."

In a questionnaire completed by 273 Australian veterinarians, participants were asked to provide up to 10 responses to the prompt, 'I derive pleasure from my work as a veterinarian when...'. Over 2500 responses were grouped into themes and sub-themes and categorised using the 'Job Demands-Resources Model', which focuses on both the positive and negative aspects of a job that are indicative of employee wellbeing.

"The results highlight that there is an abundance of factors related to pleasure at work for veterinarians, above and beyond working with and helping animals," Ms Clise said.
"In fact, positive relationships between clients and vets, and vets and their colleagues, was a more frequent response than positive relationships with animals.

"Vets, just like all of us, feel good when they are shown trust and respect. And a simple 'thank you' goes a long way."

Other findings from the study suggest that having opportunities to use and develop their specialised skillsets is highly pleasurable for veterinarians in practice. A positive workplace culture, successful outcomes with patients and opportunities to collaborate with other vets were also highlighted.

Senior author Dr Michelle McArthur, Associate Professor at the University of Adelaide's School of Animal and Veterinary Sciences, says: "Managers and practice managers can use the results to enhance the work environment for employees.

"This could include introducing an informal and formal recognition system and increasing time spent with colleagues.

"Further beneficial changes could include the introduction of a peer supervision or mentoring program to support veterinary expertise and increase connectedness across the profession."

The results also showed that certain positive beliefs about oneself, such as flexibility, a positive attitude and a sense of accomplishment, are associated with pleasure at work.

"So further developing personal resources, for example in the university curriculum or as ongoing professional development, could increase the overall wellbeing of veterinarians," said Dr McArthur.

The researchers hope the results will spark discussion and further focus on the positive aspects of veterinary work, which they say are often overshadowed by the negative.

"Veterinarian work is such a rewarding profession and it's important that we share the many positives with new veterinarians and those in training as reassurance, and to encourage others to join the profession," said Dr McArthur.

Credit: 
University of Adelaide

Using artificial intelligence to overcome mental health stigma

Tsukuba, Japan - Depression is a worldwide problem, with serious consequences for individual health and the economy, and rapid and effective screening tools are thus urgently needed to counteract its increasing prevalence. Now, researchers from Japan have found that artificial intelligence (AI) can be used to detect signs of depression.

In a study published this month in BMJ Open, researchers from the University of Tsukuba have revealed that an AI system using machine learning could predict psychological distress among workers, a risk factor for depression.

Although many questionnaires exist to screen for mental health conditions, individuals may be hesitant to answer questions about their subjective mood truthfully due to the social stigma surrounding mental health. A machine learning system that screens for depression and psychological distress without such data could sidestep this problem, which is what the researchers at the University of Tsukuba set out to build.

"We wanted to see if the AI system could detect psychological distress in a large population from sociodemographic, lifestyle, and sleep factors, without data about subjective states, such as mood," says lead author of the study Professor Shotaro Doki.

To investigate this, the researchers asked a group of researchers and office workers to complete an online survey about sociodemographic, lifestyle, and sleep factors. Then, they developed an AI model that predicted psychological distress according to data from 7251 participants, and compared the results obtained from the AI model with predictions made by a team of 6 psychiatrists.
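
The article does not specify the algorithm used; the sketch below shows the general shape of such a screen, a supervised classifier trained on survey features, with all features, labels, and data invented for illustration:

```python
# Minimal sketch of a distress classifier trained on survey features.
# Assumes scikit-learn; the features and labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 7251  # number of participants in the study

# Hypothetical features: age, sleep hours, exercise days/week, overtime hours
X = rng.normal(size=(n, 4))
# Hypothetical label: 1 = screened positive for psychological distress
y = (X @ np.array([0.2, -0.8, -0.3, 0.6]) + rng.normal(size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```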

"The results were surprising," explains Professor Doki. "We found that even without data about mood, the AI model and the team of psychiatrists produced similar predictions regarding moderate psychological distress."

Furthermore, for participants with severe psychological distress, the predictions made by the AI model were more accurate than the predictions of the psychiatrists.

"This newly developed model appears to easily be able to predict psychological distress among large numbers of workers, without data regarding their subjective mood," says Professor Doki. "This effectively avoids the issue of social stigma concerning mental health in the workplace, and eliminates the risk of inappropriate responses to questions asking about respondents' mood."

Screening tools that do not require individuals to report their subjective mood may therefore be more accurate, and better able to identify individuals who would not otherwise receive treatment. Earlier interventions to treat depression and psychological distress are likely to lessen the severity of mental illness, with important benefits for both individuals and society.

Credit: 
University of Tsukuba