
RUDN University chemist makes catalyst for oxadiazole synthesis 3 times more stable

image: A RUDN University and Shahid Beheshti University (SBU) chemist has proposed a protocol for converting cellulose into a catalyst for the synthesis of oxadiazoles. The new approach makes the catalyst three times more stable than the same catalyst obtained by the traditional method.

Image: 
RUDN University

A RUDN University and Shahid Beheshti University (SBU) chemist has proposed a protocol for converting cellulose into a catalyst for the synthesis of oxadiazoles. The new approach makes the catalyst three times more stable than the same catalyst obtained by the traditional method. The results are published in Carbohydrate Polymers.

One of the directions of green chemistry is the functionalization of biopolymers. Chemists modify polymers obtained from plants and animals, adding functional molecular groups to them to obtain useful substances. For example, catalysts for oxadiazole synthesis are created from cellulose. Oxadiazoles are needed to produce polymers, dyes, medicines, and photographic materials. To make such a catalyst, metals are added to the structure of cellulose. However, during use, and especially at high temperatures, the metal ions leach out, reducing catalytic activity and polluting the environment. The RUDN University and SBU chemist, with his colleagues from Iran, created a method of cellulose functionalization that strengthens the bond between cellulose and the metal ions and prevents them from leaching.

"Cellulose, as an eco-friendly, non-toxic, cost-effective, and renewable biopolymer, was employed in a three-component Betti reaction because of its ability to generate functional ligands on the cellulose that complex with Cu(II)," said Dr. Ahmad Shaabani from RUDN University and SBU.

Chemists used the Betti reaction, which is the addition of functional groups to phenols using aldehydes and aromatic amines. The researchers first obtained dialdehyde cellulose: they mixed 0.5 g of cellulose and 0.01 g of potassium periodate in water, stirred the solution at 85°C for 12 hours, then filtered out the reaction product, washed it with water, and left it to dry under vacuum at 70°C. The resulting dialdehyde was used in the Betti reaction. To do this, the chemists added 0.11 g of benzylamine to the dialdehyde, dissolved the mixture in acetic acid, and stirred it at room temperature for 4 hours. Then naphthol was added, and the mixture was stirred for another day. The resulting product was filtered, washed, and dried under vacuum at 50°C. Finally, a similar procedure was repeated, mixing the functionalized cellulose with copper acetate. The result was a Cu(II)@DAC-Betti catalyst.

RUDN University and SBU chemists studied the obtained cellulose using spectroscopy and nuclear magnetic resonance, and examined its structure under a scanning electron microscope. For comparison, the scientists took a copper-modified cellulose obtained by a standard protocol, without the Betti reaction (Cu(II)@cellulose). The electron microscope images show that, unlike the conventional material, the cellulose obtained through the Betti reaction does not have a smooth structure: it has many outgrowths that hold firmly to each other and to the cellulose fibers. The chemists then compared how firmly the functional elements are held in the cellulose modified by the new and standard methods. It turned out that the Betti reaction makes the modified cellulose three times more stable: at a temperature of 600°C, 17.2% of its mass was preserved, compared with 6.1% for cellulose obtained by the standard method.

"The result showed a high residual mass for Cu(II)@DAC-Betti, which established a high amount of Cu(II) loading in Cu(II)@DAC-Betti in comparison with Cu(II)@cellulose, thus revealing the efficiency of Betti functionalization in complexing Cu(II) ions," said Dr. Ahmad Shaabani from RUDN University and SBU.

Credit: 
RUDN University

Human environmental genome recovered in the absence of skeletal remains

image: Overview of the excavation works of Satsurblia cave in 2017.

Image: 
© Anna Belfer-Cohen

The cave of Satsurblia was inhabited by humans in different periods of the Paleolithic. To date, a single human individual, dated to 15,000 years ago, has been sequenced from that site. No other human remains have been discovered in the older layers of the cave.

The innovative approach used by the international team, led by Prof. Ron Pinhasi and Pere Gelabert with Susanna Sawyer of the University of Vienna, in collaboration with Pontus Skoglund and Anders Bergström of the Francis Crick Institute in London, permits the identification of DNA in samples of environmental material by applying extensive sequencing and large-scale data analysis. This technique has allowed the recovery of an environmental human genome from the BIII layer of the cave, which dates to before the Ice Age, about 25,000 years ago.

This new approach has demonstrated the feasibility of recovering human environmental genomes in the absence of skeletal remains. The analysis of the genetic material has revealed that the SAT29 human environmental genome represents an extinct human lineage that contributed to present-day West Eurasian populations. To validate the results, the researchers compared the recovered genome with the genetic sequences obtained from bone remains of the nearby cave of Dzudzuana, obtaining definitive evidence of genetic similarities. This validates the results and excludes the possibility of modern contamination of the samples.

Along with the identified human genome, other genomes such as wolf and bison have also been recovered from the environmental samples. The sequences have been used to reconstruct the wolf and bison Caucasian population history and will help better understand the population dynamics of these species.

The team now plans to perform further analyses of soil samples from the cave of Satsurblia with the objective of revealing interactions between extinct fauna and humans and the effect of climatic changes on mammalian populations. The ability to recover DNA from soil samples allows us to reconstruct the evolution of whole past ecosystems.

Credit: 
University of Vienna

Training an AI eye on the moon

video: KAUST scientists have developed a machine learning method to explore the surface of the moon.

Image: 
© 2021 KAUST; Anastasia Serin.

A Moon-scanning method that can automatically classify important lunar features from telescope images could significantly improve the efficiency of selecting sites for exploration.

There is more than meets the eye to picking a landing or exploration site on the Moon. The visible area of the lunar surface is larger than Russia and is pockmarked by thousands of craters and crisscrossed by canyon-like rilles. The choice of future landing and exploration sites may come down to the most promising prospective locations for construction, minerals or potential energy resources. However, scanning by eye across such a large area, looking for features perhaps a few hundred meters across, is laborious and often inaccurate, which makes it difficult to pick optimal areas for exploration.

Siyuan Chen, Xin Gao and Shuyu Sun, along with colleagues from The Chinese University of Hong Kong, have now applied machine learning and artificial intelligence (AI) to automate the identification of prospective lunar landing and exploration areas.

"We are looking for lunar features like craters and rilles, which are thought to be hotspots for energy resources like uranium and helium-3 -- a promising resource for nuclear fusion," says Chen. "Both have been detected in Moon craters and could be useful resources for replenishing spacecraft fuel."

Machine learning is a very effective technique for training an AI model to look for certain features on its own. The first problem faced by Chen and his colleagues was that there was no labeled dataset for rilles that could be used to train their model.

"We overcame this challenge by constructing our own training dataset with annotations for both craters and rilles," says Chen. "To do this, we used an approach called transfer learning to pretrain our rille model on a surface crack dataset with some fine tuning using actual rille masks. Previous approaches require manual annotation for at least part of the input images --our approach does not require human intervention and so allowed us to construct a large high-quality dataset."

The next challenge was developing a computational approach that could be used to identify both craters and rilles at the same time, something that had not been done before.

"This is a pixel-to-pixel problem for which we need to accurately mask the craters and rilles in a lunar image," says Chen. "We solved this problem by constructing a deep learning framework called high-resolution-moon-net, which has two independent networks that share the same network architecture to identify craters and rilles simultaneously."
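The two-branch idea can be sketched in miniature. The following is a hypothetical toy sketch, not the team's actual high-resolution-moon-net (whose layers are not described here): two networks with identical architecture but independent weights each map an image to a per-pixel probability mask, one for craters and one for rilles.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """2-D convolution with zero padding so the output matches the input size."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def make_net():
    """One 'network': two 3x3 conv kernels with independently drawn weights."""
    return [rng.normal(size=(3, 3)), rng.normal(size=(3, 3))]

def forward(net, img):
    """Identical architecture for every net: conv -> ReLU -> conv -> sigmoid."""
    h = np.maximum(conv2d(img, net[0]), 0.0)          # ReLU activation
    return 1.0 / (1.0 + np.exp(-conv2d(h, net[1])))   # per-pixel probability

moon_tile = rng.random((16, 16))                 # stand-in for a lunar image patch
crater_net, rille_net = make_net(), make_net()   # same architecture, separate weights
crater_mask = forward(crater_net, moon_tile)     # one mask per feature type
rille_mask = forward(rille_net, moon_tile)
```

Each mask has the same shape as the input image, which is the "pixel-to-pixel" property Chen describes; the real system would of course use trained weights and far deeper networks.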

The team's approach achieved precision as high as 83.7 percent, higher than existing state-of-the-art methods for crater detection.
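For readers unfamiliar with the metric, pixel-wise precision is the fraction of pixels flagged as a feature that truly belong to that feature. A toy example on made-up 4x4 masks:

```python
import numpy as np

# Hypothetical ground-truth and predicted crater masks (1 = crater pixel).
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 1]])
pred  = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]])

tp = np.sum((pred == 1) & (truth == 1))  # correctly flagged crater pixels
fp = np.sum((pred == 1) & (truth == 0))  # false alarms
precision = tp / (tp + fp)
print(precision)  # 4 / 5 = 0.8
```

A precision of 83.7 percent thus means that roughly five of every six pixels the model marks as a crater or rille really are one.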

Credit: 
King Abdullah University of Science & Technology (KAUST)

Near the toys and the candy bars

image: Hebrew University Dr. Yael Bar-Zeev

Image: 
Hadas Parush/Flash 90

Smoking among young teens has become an increasingly challenging and costly public healthcare issue. Despite legislation to prevent the marketing of tobacco products to children, tobacco companies have shrewdly adapted their advertising tactics to circumvent the ban and maintain their access to this impressionable--and growing--market share.

How they do it is the subject of a recent study led by Dr. Yael Bar-Zeev at Hebrew University of Jerusalem (HU)'s Braun School of Public Health and Community Medicine at HU-Hadassah Medical Center. She also serves as Chair of the Israeli Association for Smoking Cessation and Prevention, and teamed up with colleagues at HU and George Washington University. They published their findings in Nicotine and Tobacco Research.

Their study focused on Philip Morris' IQOS products--the popular electronic cigarette that heats tobacco--and specifically how they are marketed in Israeli stores. These heated tobacco products are often touted as being safer for smokers than regular cigarettes. However, the medical establishment has deemed these devices harmful to users' health. While the tobacco companies comply with the letter of the law when it comes to the advertising ban, they target younger consumers in other ways: namely, by placing their products near toys, candy, and slush machines, and in stores that are within walking distance of local elementary and high schools.

"These attempts by the tobacco companies are particularly alarming because we know that being exposed to tobacco products and advertisements at the point-of-sale is associated with significantly higher chances of experimenting with smoking and creating long-lasting smoking habits among youth," Bar-Zeev warned.

IQOS was introduced to the Israeli market in 2016. The tobacco industry repeatedly lobbied Israeli policy makers to regulate these products differently than cigarettes. However, Israel's Ministry of Health determined that IQOS causes significant negative health effects, and the devices are therefore regulated as a tobacco product, subject to the ban on advertising tobacco products in stores, i.e. at "points of sale".

As part of the study, the team performed a concealed audit of 80 point-of-sale locations that sell IQOS or its accompanying tobacco sticks (HEETS) in four of Israel's largest cities: Tel Aviv, Jerusalem, Haifa and Beersheba. They found that, despite overall compliance with the advertisement ban, tobacco companies found ways to circumvent it with special displays and signs that read "WE SELL TOBACCO". Further, a substantial number of stores placed IQOS/HEETS and other tobacco products within 30 cm of merchandise that targets young consumers, such as toys, candy, and slush machines. Nearly 70% of these stores were within 300 meters of high schools, and 40% were within walking distance of elementary schools.

Further findings from this study suggest that IQOS products are being marketed to consumers in higher socio-economic demographics, as evidenced by the higher price tag and the higher-end neighborhoods that carry the product. This reality contradicts IQOS' claims that the product was designed to help smokers quit cigarettes. Past research has shown that it is lower, not higher, socio-economic populations that have higher smoking rates and lower smoking cessation rates.

"The purpose of advertisement bans is to prevent children and youth exposure to all tobacco products. However, tobacco companies prove that they don't care and will do whatever it takes to continue marketing their products and increasing their revenue", shared Bar-Zeev. "Israel's government needs to act decisively, institute a license to sell tobacco, and ban the sale of tobacco products in locations that are in close proximity to schools."

One of the study researchers, Professor Hagai Levine at HU, added, "Israel's Minister of Health Nitzan Horowitz announced that he is committed to advancing tobacco legislation. These findings show that the current legislation has serious loopholes that need to be closed to protect public health here in Israel and, in particular, to protect our youth from being exposed to tobacco product advertisements. This study has implications for smoking legislation globally, as it reveals how the tobacco industry uses marketing tactics designed to circumvent governments' attempts to protect their publics' health."

Credit: 
The Hebrew University of Jerusalem

Rise in Southeast Asia forest clearance increasing greenhouse gases

image: Forests cleared for agriculture on Vietnam mountains

Image: 
Dominick Spracklen

Forest clearance in Southeast Asia is accelerating, leading to unprecedented increases in carbon emissions, according to new research.

The findings, revealed by a research team including University of Leeds academics, show that forests are being cut down at increasingly higher altitudes and on steeper slopes in order to make way for agricultural intensification.

As a result, more than 400 million metric tons of carbon are released into the atmosphere every year as forests are cleared in the region, with that emissions figure increasing in recent years.

The study, "Upward expansion and acceleration of forest clearance in the mountains of Southeast Asia", is published in Nature Sustainability.

Co-author Professor Dominick Spracklen, of Leeds' School of Earth and Environment, said: "Most lowland tropical forests in Southeast Asia have already been cleared for agriculture.

"In the past, mountain forests were often spared from clearance because steep slopes and high elevations made deforestation more difficult.

"Our work shows that deforestation has now moved into these mountain regions and has accelerated rapidly in the past 10 years.

"These mountain forests are amazingly rich in biodiversity and are crucial stores of carbon, so it is worrying to see that the frontier of deforestation is now moving upwards into the mountains of Southeast Asia.

"Loss of these forests will be a devastating blow for nature and will further accelerate climate change."

Southeast Asia contains about half of all tropical mountain forests, which are rich in biodiversity and contain a large amount of the planet's carbon.

The authors found that forest clearance in Southeast Asia's mountains has accelerated during the 21st century, accounting for a third of total forest loss in the region. New plantations primarily drove deforestation at high elevations.

Analysing high-resolution satellite data, the researchers found that average annual forest loss in the region was 3.22 million hectares per year during 2001-2019, with 31% occurring on mountains.

Over the last decade the average altitude of forest loss increased by 150 m and advanced onto steeper slopes that have high forest carbon density relative to the lowlands.

These shifts led to an unprecedented annual forest carbon loss of 424 million metric tons of carbon per year, with the rate accelerating in recent years.

Co-author Professor Joseph Holden, from Leeds' School of Geography, said: "Forested mountains are critical zones for biodiversity, future climate resilience, water supplies and carbon storage.

"The loss of forests at higher elevations in mountain regions of southeast Asia over the last 20 years is therefore of major concern, particularly given that these regions are also concentrated zones of sensitive species and where carbon stocks are high.

"This research shows the value of high resolution satellite data for detecting change, and also highlights that the international community needs to continue to work hard to support forest conservation and carbon management."

The research was led by Associate Professor Zhenzhong Zeng at the Southern University of Science and Technology (SUSTech) in Shenzhen, China.

Combining forest loss data with a forest biomass carbon map, they discovered that carbon loss resulting from forest clearance was mainly in the lowlands in the 2000s, for example in Indonesia.

In the 2010s, however, lowland forest carbon loss decreased while mountain forest carbon loss in regions like Myanmar and Laos increased significantly.

Credit: 
University of Leeds

Songbirds like it sweet!

image: A New Holland honeyeater (Phylidonyris novaehollandiae) feeding on Wilson's grevillea (Grevillea wilsonii), a favorite honeyeater food endemic to Western Australia.

Image: 
Photograph by Gerald Allen (Macaulay Library 271643651).

Humans can easily identify sweet-tasting foods - and with pleasure. However, many carnivorous animals lack this ability, and whether birds, descendants of meat-eating dinosaurs, can taste sweet was previously unclear. An international team of researchers led by the Max Planck Institute for Ornithology, including Dr Simon SIN from the Research Division for Ecology and Biodiversity, The University of Hong Kong (HKU), has now shown that songbirds, a group containing over 4,000 species, can sense sweetness regardless of their primary diets. The study highlights a specific event in the songbird ancestors that allowed their umami (savoury) taste receptor to recognise sugar. This ability has been conserved in the songbird lineage, influencing the diet of nearly half of all birds living today.

Bitter, salty, sweet, sour and umami are the five basic tastes that we humans perceive. The sense of taste has an enormous influence on our diet - what we think tastes good often ends up on our plates. The rest of the animal kingdom is no different, as taste reliably helps to distinguish what is nutritious from what is poisonous. But what exactly do other animals taste?

It is well known that the sweet taste receptor is widespread among mammals. Birds, however, descend from carnivorous dinosaurs and lack an essential subunit of this receptor - presumably leaving most unable to detect sugar. The only known exception is the hummingbirds, which re-purposed their umami taste receptor to recognise carbohydrates.

But are all other birds unable to taste sugars?

First, the researchers from the international team systematically studied the diets of birds. Certain songbird lineages, such as sunbirds, sugarbirds, and honeycreepers, are known to regularly consume large quantities of nectar. However, the team found an above-average number of other songbird species across the entire radiation that occasionally consume nectar or fruit. "This was the first hint that we should concentrate on a range of songbirds, not only the nectar-specialised ones, when searching for the origins of avian sweet taste," explains Dr Maude BALDWIN, an evolutionary biologist from the Max Planck Institute for Ornithology who led the research. Indeed, their behavioural experiments showed that both a nectar specialist and a grain-eating songbird preferred sugar water to regular water.

The researchers dug deeper and found that the umami receptors of nectar specialist honeyeaters, as well as those of other songbirds with varying diets, also respond to sugar. They concluded that songbirds really do sense sweetness and, like hummingbirds, use the umami receptor to do so.

To identify the origin of this ability, the researchers reconstructed ancestral umami receptors at different locations on the songbird family tree. It turned out that the early ancestors of songbirds evolved the ability to sense sugar, even before they radiated out of Australia and spread across the planet. "We were very surprised by this result. Sweet perception emerged very early within the songbird radiation and then persisted even in species that do not rely primarily on sugary food," says Dr Baldwin.

In addition to the timing of the sensory change, the researchers were able to uncover its molecular basis. By comparing the sugar-indifferent and sugar-responding receptor sequences, they identified the modifications enabling sweet perception. Interestingly, these exact changes coincide only slightly with those seen in the distantly-related hummingbirds, even though similar areas of the receptor are modified. So, over evolutionary time, these distant bird groups converged on the same solution of re-purposing their umami taste receptors to sense sugar. However, each group modified the receptors in distinct ways to achieve the same outcome. "It is intriguing that hummingbirds and songbirds convergently evolved to taste sugar, but via different ways. This is an important study that shows how different evolutionary pathways could lead to the same adaptation," says Dr Sin.

Based on their findings, the scientists suspect that the new sensory percept of ancestral songbirds had far-reaching effects on their subsequent evolution. In Australia, where songbirds evolved, many different sugar sources are common, including insect secretions and tree sap. Sugary food sources may have helped songbirds to spread to other continents and successfully occupy a variety of ecological niches.

Future studies now aim to learn how sweet perception has coevolved with other physiological traits, such as changes in digestion and metabolism, across bird evolution.

Credit: 
The University of Hong Kong

Danish student solves how the Universe is reflected near black holes

image: A disk of glowing gas swirls into the black hole "Gargantua" from the movie Interstellar. Because space curves around the black hole, it is possible to look round its far side and see the part of the gas disk that would otherwise be hidden by the hole. Our understanding of this mechanism has now been increased by Danish master's student at NBI, Albert Sneppen

Image: 
interstellar.wiki/CC BY-NC License.

In the vicinity of black holes, space is so warped that even light rays may curve around them several times. This phenomenon may enable us to see multiple versions of the same thing. While this has been known for decades, only now do we have an exact, mathematical expression for it, thanks to Albert Sneppen, a master's student at the Niels Bohr Institute. The result, which is even more useful for realistic black holes, has just been published in the journal Scientific Reports.

You have probably heard of black holes -- the marvelous lumps of gravity from which not even light can escape. You may also have heard that space itself and even time behave oddly near black holes; space is warped.

In the vicinity of a black hole, space curves so much that light rays are deflected, and very nearby light can be deflected so much that it travels several times around the black hole. Hence, when we observe a distant background galaxy (or some other celestial body), we may be lucky to see the same image of the galaxy multiple times, albeit more and more distorted.

Galaxies in multiple versions

The mechanism is shown in the figure below: A distant galaxy shines in all directions -- some of its light comes close to the black hole and is lightly deflected; some light comes even closer and circles the hole a single time before escaping down to us, and so on. The closer to the edge of the hole we look, the more versions of the same galaxy we see.

How much closer to the black hole do you have to look from one image to see the next image? The result has been known for over 40 years, and is some 500 times (for the math aficionados, it is more accurately the "exponential function of two pi", written e^(2π)).
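The quoted factor can be checked in a couple of lines:

```python
import math

# The separation factor between successive images around a non-rotating
# black hole is e^(2*pi), loosely quoted in the text as "some 500 times".
factor = math.exp(2 * math.pi)
print(round(factor, 2))  # 535.49
```

So "some 500" is a round-number shorthand for about 535.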

Calculating this is so complicated that, until recently, we had not yet developed a mathematical and physical intuition as to why it happens to be this exact factor. But using some clever, mathematical tricks, master's student Albert Sneppen from the Cosmic Dawn Center -- a basic research center under both the Niels Bohr Institute and DTU Space -- has now succeeded in proving why.

"There is something fantastically beautiful in now understanding why the images repeat themselves in such an elegant way. On top of that, it provides new opportunities to test our understanding of gravity and black holes," Albert Sneppen clarifies.

Proving something mathematically is not only satisfying in itself; indeed, it brings us closer to an understanding of this marvelous phenomenon. The factor "500" follows directly from how black holes and gravity work, so the repetitions of the images now become a way to examine and test gravity.

Spinning black holes

As a completely new feature, Sneppen's method can also be generalized to apply not only to "trivial" black holes, but also to black holes that rotate. Which, in fact, they all do.

"It turns out that when the black hole rotates really fast, you no longer have to get closer to the black hole by a factor of 500, but significantly less. In fact, each image is now only 50, or 5, or even down to just 2 times closer to the edge of the black hole," explains Albert Sneppen.

Having to look 500 times closer to the black hole for each new image means that the images are quickly "squeezed" into one annular image, as seen in the figure on the right. In practice, the many images will be difficult to observe. But when black holes rotate, there is more room for the "extra" images, so we can hope to confirm the theory observationally in the not-too-distant future. In this way, we can learn not just about black holes, but also about the galaxies behind them:

The light's travel time increases with the number of times it has to go around the black hole, so the images become increasingly "delayed". If, for example, a star explodes as a supernova in a background galaxy, one would be able to see this explosion again and again.

Credit: 
University of Copenhagen - Faculty of Science

India national school meal program linked to improved growth in children of beneficiaries

July 12, New Delhi - Women who received free meals in primary school have children with improved linear growth, according to a new study by researchers at the International Food Policy Research Institute (IFPRI).

India is home to the highest number of undernourished children and the largest school feeding program in the world--the Mid-Day Meal (MDM) scheme--yet long-term program benefits on nutrition are unknown. As school feeding programs target children outside the highest-return "first 1000-days" window spanning from conception until a child's second birthday, they have not been a focal point in the global agenda to address stunting. School meals benefit education and nutrition in participants, but no studies have examined whether benefits carry over to their children.

"Findings from previous evaluations of India's MDM scheme have shown a positive association with beneficiaries' school attendance, learning achievement, hunger and protein-energy malnutrition, and resilience to health shocks such as drought--all of which may have carryover benefits to children born to mothers who participated in the program," says study co-author, Harold Alderman.

The study, "Intergenerational nutrition benefits of India's national school feeding program", co-authored by University of Washington's Suman Chakrabarti and IFPRI's Samuel Scott, Harold Alderman, Purnima Menon and Daniel Gilligan, was published in Nature Communications. The authors used nationally representative data on mothers and their children spanning 1993 to 2016 to assess whether MDM supports intergenerational improvements in child linear growth. Further, they suggest a potential pathway through which school feeding programs may have intergenerational effects on child nutrition outcomes.

The study found that investments made in school meals in previous decades were associated with improvements in future child linear growth. "Our findings suggest that intervening during the primary school years can make important contributions to reducing future child stunting, particularly given the cumulative exposure that is possible through school feeding programs," explains study co-author Suman Chakrabarti.

Study results also show that school meals may contribute to education, later fertility decisions, and access to health care, reducing the risk of undernutrition in the next generation. "School feeding programs such as India's MDM scheme have the potential for stimulating population-level stunting reduction as they are implemented at scale and target multiple underlying determinants of undernutrition in vulnerable groups," explains study co-author Samuel Scott.

Importantly, further research is required to understand whether improving the quality or quantity of meals provided and extending the program beyond primary school might further enhance its benefits.

Credit: 
International Food Policy Research Institute

Progress towards new treatments for tuberculosis

image: Neutrophil Extracellular Traps (NETs) are released by a type of white blood cell (neutrophils) when infected with Mycobacterium tuberculosis, the bacterium responsible for causing tuberculosis.

Image: 
WEHI

Boosting the body's own disease-fighting immune pathway could provide answers in the desperate search for new treatments for tuberculosis.

Tuberculosis still represents an enormous global disease burden and is one of the top 10 causes of death worldwide.

Led by WEHI's Dr Michael Stutz and Professor Marc Pellegrini and published in Immunity, the study uncovered how cells infected with tuberculosis bacteria can die, and that using new medicines to enhance particular forms of cell death decreased the severity of the disease in a preclinical model.

At a glance

Researchers were able to demonstrate that cells infected with tuberculosis bacteria have functional apoptosis cell death pathways, and showed their importance in combatting severe disease.

Using preclinical models, researchers were able to pinpoint the critical apoptotic pathways as targets for new therapies, whereby infected cells can be killed by drugs called IAP inhibitors.

The study demonstrated that host-directed therapies were viable for infections such as tuberculosis, which is important in the era of extensive antibiotic resistance.

Fighting antibiotic resistance

Tuberculosis is caused by bacteria that infect the lungs, spreading from person to person through the air. A challenge in the fight against tuberculosis is that the bacteria that cause the disease have developed resistance to most antibiotic treatments, leading to a need for new treatment approaches.

Tuberculosis bacteria grow within immune cells in the lungs. One of the ways that cells protect against these 'intracellular' pathogens is to undergo a form of cell death called apoptosis - destroying the cell as well as the microbes within it.

Using preclinical models, researchers sequentially deleted key apoptosis effectors to demonstrate their roles in controlling tuberculosis infections. This showed that a proportion of tuberculosis-infected cells can die by apoptosis - opening up new opportunities for controlling the disease.

Using host-directed therapies to reduce disease burden

Dr Stutz said researchers then tested new drugs that force cells to die. This revealed that a drug-like compound that inhibits cell death-regulatory proteins called IAPs could promote death of the infected cells.

"When we treated our infection models with this compound, we were able to significantly reduce the amount of tuberculosis disease," he said.

"The longer the treatment was used, the greater the reduction of disease."

The research team was able to replicate these results using various different IAP inhibitors.

"Excitingly, many of these compounds are already in clinical trials for other types of diseases and have proven to be safe and well-tolerated by patients," Dr Stutz said.

"We predict that if these compounds were progressed for treating tuberculosis, they would be most effective if used alongside existing antibiotic treatments."

Opening the door to new treatment methods

Professor Marc Pellegrini said that until now, antibiotics have been the only treatment for tuberculosis, and their usefulness is increasingly limited by growing antibiotic resistance.

"Unlike antibiotics, which directly kill bacteria, IAP inhibitors kill the cells that the tuberculosis bacteria need to survive," he said.

"The beauty of using a host-directed therapy is that it doesn't directly target the microbe, it targets a host process. By targeting the host rather than the microbe, the chances of resistance developing are incredibly low."

The team hope the research will lead to better treatments for tuberculosis.

"This research increases our understanding of the types of immune responses that are beneficial to us, and this is an important step towards new treatments for tuberculosis, very few of which have been developed in the last 40 years," Dr Stutz said.

"We have demonstrated that host-directed therapies are viable for infections such as tuberculosis, which is particularly important in the era of extensive antibiotic resistance."

Credit: 
Walter and Eliza Hall Institute

Sea-level rise may worsen existing Bay Area inequities

image: An aerial view of East Palo Alto, which borders San Francisco Bay. New research shows about half the households in East Palo Alto are at risk of financial instability from existing social factors or anticipated flooding through 2060. (Image credit: Wikimedia Commons)

Image: 
Wikimedia Commons

Rather than waiting for certainty in sea-level rise projections, policymakers can plan now for future coastal flooding by addressing existing inequities among the most vulnerable communities in flood zones, according to Stanford research.

Using a methodology that incorporates socioeconomic data on neighborhood groups of about 1,500 people, scientists found that several coastal communities in San Mateo County, California - including half the households in East Palo Alto - are at risk of financial instability from existing social factors or anticipated flooding through 2060. Even with coverage from flood insurance, these residents would not be able to pay for damages from flooding, which could lead to homelessness or bankruptcy among people who are essential to the diversity and economic function of urban areas. The paper was published in the journal Earth's Future on July 12.

"These are workers that make a city run, they're the heart and soul of an urban operation. If you displace a significant majority too far outside the urban area, the functionality of that city crumbles," said senior co-author Jenny Suckale, an assistant professor of geophysics at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "How can we make sure that we provide a future to these communities that does not entail their disintegration?"

Flood damage estimates are typically calculated by civil engineers in terms of monetary damage to physical structures. With their new model, called the Stanford Urban Risk Framework (SURF), the researchers bring a human-centered approach to risk assessment that focuses on the residents most likely to lose their livelihoods when water inundates their homes. While every household within the projected flood plain will be burdened by flood damages, the socioeconomic context determines how harmful the costs will be. In several coastal communities in San Mateo County, more than 50 percent of households will be facing financial instability.

"If you just look at the dollar amount, you're missing one major component of the problem," Suckale said. "What might be a nuisance in some communities is life-changing in other communities - it's really about the proximity to a tipping point."

The researchers determined which communities are near a fiscal tipping point by calculating their social risk, or financial instability, a metric intended to complement existing assessments of monetary risk from hazards. They overlaid coastal flood maps and building footprints with structural information, incorporated projected annual damages from sea-level rise and estimated household discretionary income based on labor and economic data in order to calculate losses based on census block groups - geographical units used by the U.S. Census Bureau to publish demographic estimates.
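The tipping-point idea described above can be sketched in a few lines of code. This is a minimal illustration, not the published SURF model: the function names, dollar figures, and the simple income-minus-essentials formula are hypothetical stand-ins for the study's detailed flood maps, building footprints, and census block group data.

```python
# Hypothetical sketch of the "fiscal tipping point" comparison in SURF:
# a household is socially at risk when expected flood damages exceed
# what it can actually absorb, not when damages are large in absolute terms.

def discretionary_income(income, essential_costs):
    """Income left over after essentials (housing, food, transport, etc.)."""
    return max(income - essential_costs, 0.0)

def socially_at_risk(annual_flood_damage, income, essential_costs):
    """True when expected annual flood damages exceed discretionary income."""
    return annual_flood_damage > discretionary_income(income, essential_costs)

# Two households facing the same $6,000 expected annual flood damage:
wealthy = socially_at_risk(6000, income=250000, essential_costs=120000)
modest = socially_at_risk(6000, income=60000, essential_costs=55000)
print(wealthy, modest)  # the same dollar loss is a nuisance for one
                        # household and destabilizing for the other
```

Aggregating this household-level flag over census block groups is what lets the approach surface communities where more than half of households sit past the tipping point, even when county-wide averages look unremarkable.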

"It was surprising to see in the data how much more lower-income households were affected as a proportion of their income and just how unsustainable it is for those types of households to absorb these costs," said lead study author Avery Bick, a PhD student with the Norwegian Institute for Nature Research who worked on the project as a graduate student at Stanford.

Despite uncertainty over the magnitude of future climate change, researchers agree that sea-level rise will increase coastal flooding - a hazard that residents from Foster City to East Palo Alto have already experienced in the past several decades. Many of the neighborhoods with the highest social risk comprise single-parent households and are more racially diverse than the San Mateo County average, according to the research.

"Climate change isn't just about getting hotter or sea-level rise - it's literally going to change the entire fabric of society, especially if we continue to ignore it," Bick said. "This gave me a sense of how vulnerable the social fabric is to change and how we need to be proactive; otherwise it will change in favor of those that have more resources."

The researchers collaborated with local stakeholders to develop an equitable approach to sea-level rise adaptation planning. Through conversations with organizations like the San Mateo County Office of Sustainability and the U.S. Army Corps of Engineers, as well as community-based groups like Climate Resilient Communities, El Concilio and North Fair Oaks Community Alliance, they learned that what was missing from the climate discourse was a framing of dollar-amount damage relative to what people are able to pay. The team's approach involved a mind shift, from the amount of money you lose to the value of the goods and services you can no longer purchase because of the disaster, said study co-author Derek Ouyang, a geophysics lecturer at Stanford Earth.

"Any investment now can be directly linked to the resilience that prepares communities for future climate hazards, whatever they are," Ouyang said.

Because San Mateo County includes both very wealthy and low-income residents, averaging the costs of flooding in comparison to the income of its residents at the county scale "makes it look like you don't have much of a problem," Suckale said. By assessing impacts at a smaller scale, however, the researchers were able to highlight the areas of concern in a way that is more directly useful to policymakers.

The co-authors hope this new quantitative method for assessing social risk on a census block group scale can be used in other regions vulnerable to coastal flooding or for understanding different climate hazards through an equitable lens.

"I think it's useful to have a metric of social risk that is entirely agnostic of the actual hazard, because then we can just improve the ability of the household to absorb the disruption, no matter what it is," Suckale said.

Credit: 
Stanford University

People given 'friendly' bacteria in nose drops protected against meningitis

Led by Professor Robert Read and Dr Jay Laver from the NIHR Southampton Biomedical Research Centre and the University of Southampton, the work is the first of its kind.

Together they inserted a gene into a harmless type of a bacteria, that allows it to remain in the nose and trigger an immune response. They then introduced these bacteria into the noses of healthy volunteers via nose drops.

The results, published in the journal Science Translational Medicine, showed a strong immune response against the bacteria that cause meningitis, as well as long-lasting protection.

Meningitis occurs in people of all age groups but mainly affects infants, young children and the elderly. Meningococcal meningitis is a bacterial form of the disease, causing 1,500 cases a year in the UK. It can lead to death in as little as four hours after symptoms start.

Around 10% of adults carry N. meningitidis in the back of their nose and throat with no signs or symptoms. However, in some people it can invade the bloodstream. That can lead to life-threatening conditions including meningitis and blood poisoning (septicaemia).

The 'friendly' bacteria Neisseria lactamica (N. lactamica) also lives in some people's noses naturally. By occupying the nose, it protects from a severe type of meningitis. It does so by denying a foothold to its close cousin Neisseria meningitidis (N. meningitidis).

The new data build on the team's previous work aiming to exploit this natural phenomenon. That study showed nose drops of N. lactamica prevented N. meningitidis from settling in 60% of participants.

For those people, N. lactamica had locked out its deadly cousin. That drove work to make N. lactamica even more effective in displacing N. meningitidis.

The team did so by handing it one of N. meningitidis' key weapons: a 'sticky' surface protein that grips the cells lining the nose. By inserting a copy of the gene for this protein into N. lactamica's DNA, the team enabled it to produce that protein too - levelling the playing field.

As well as inducing a stronger immune response, the modified bacteria persisted longer: they remained for at least 28 days, with most participants (86%) still carrying them at 90 days, and caused no adverse symptoms.

The results of the study, which was funded by the Medical Research Council, are promising for this new way of preventing life-threatening infections, without drugs. It's an approach that could be critical in the face of growing antimicrobial resistance.

Dr Jay Laver, Senior Research Fellow in Molecular Microbiology at the University of Southampton, commented: "Although this study has identified the potential of our recombinant N. lactamica technology for protecting people against meningococcal disease, the underlying platform technology has broader applications.

"It is theoretically possible to express any antigen in our bacteria, which means we can potentially adapt them to combat a multitude of infections that enter the body through the upper respiratory tract. In addition to the delivery of vaccine antigens, advances in synthetic biology mean we might also use genetically modified bacteria to manufacture and deliver therapeutic molecules in the near future."

Prof Read, Director of the NIHR Southampton Biomedical Research Centre said: "This work has shown that it is possible to protect people from severe diseases by using nose drops containing genetically modified friendly bacteria. We think this is likely to be a very successful and popular way of protecting people against a range of diseases in the future."

Journal

Science Translational Medicine

DOI

10.1126/scitranslmed.abe8573

Credit: 
University of Southampton

Training helps teachers anticipate how students with learning disabilities might solve problems

North Carolina State University researchers found that a four-week training course made a substantial difference in helping special education teachers anticipate different ways students with learning disabilities might solve math problems. The findings suggest that the training would help instructors more quickly identify and respond to a student's needs.

Published in the Journal of Mathematics Teacher Education, researchers say their findings could help teachers in special education develop strategies to respond to kids' math reasoning and questions in advance. They also say the findings point to the importance of mathematics education preparation for special education teachers - an area where researchers say opportunities are lacking.

"Many special education programs do not include a focus on mathematics for students with disabilities, and few, if any, focus on understanding the mathematical thinking of students with disabilities in particular," said the study's first author Jessica Hunt, associate professor of mathematics education and special education at NC State. "This study was based on a course experience designed to do just that - to heighten teacher knowledge of the mathematical thinking of students with learning disabilities grounded in a stance of neurodiversity."

In the study, researchers evaluated the impact of a four-week course on 20 pre-service special education teachers. Researchers wanted to know if the course impacted the educators' ability to anticipate the mathematical reasoning of students with learning disabilities, and help teachers adjust tasks to make them more accessible. The course also emphasized neurodiversity, which defines cognitive differences as a natural and beneficial outgrowth of neurological and biological diversity.

"Neurodiversity says that all human brains are highly variable, with no average or 'normal' learners," Hunt said. "This means that we all have strengths and challenges, and as humans we use what makes sense to us to understand the world. It's a way to challenge pervasive deficit approaches to looking at disability, and to instead use an asset-based approach that positions students with learning disabilities as mathematically capable."

Before and after the course, the teachers took a 40-question assessment. In the test, researchers asked teachers to use words, pictures or symbols to describe a strategy that elementary school students with learning disabilities might use to solve a problem. They compared teachers' responses to see how well they anticipated students' thinking, and also how they might modify tasks for students.

After the course, researchers saw more anticipation of what they called "implicit action," which is using strategies like counting, halving, grouping, or predicting the number of people sharing a certain item to solve a problem. It's often represented by pictures or words. Before the course, many teachers used "static representations," in which they used mathematical expressions to show solutions. While static representations are abstract representations of solutions, researchers argued implicit actions can better reflect how students with learning disabilities might actually work through a problem.

They found teachers' use of implicit action increased from 32 percent of answers before the course to 82 percent after, while static representation decreased from 50 percent of answers to 17 percent. The totals don't add up to 100 percent because some teachers left some answers blank.

"The course helped teachers move from a top-down, one-size-fits-all view of 'this is how you solve these problems,' to an anticipation of how actual students who are learning these concepts for the first time might think through these problems," Hunt said. "That's a very different stance in terms of educating teachers to anticipate student thinking so they can meet it with responsive instruction."

Researchers also tracked how teachers modified math problems to make them more accessible to students before and after taking the course. After participating in the course, more teachers changed the problem type, a shift seen in 50 percent of answers.

"The benefit of anticipating students' thinking is to help teachers to be responsive and support students' prior knowledge as they're teaching, which is a really hard thing to do," Hunt said. "It's even harder if you don't yet appreciate what that thinking could be."

Credit: 
North Carolina State University

A third of teens, young adults reported worsening mental health during pandemic

COLUMBUS, Ohio -- As typical social and academic interaction screeched to a halt last year, many young people began experiencing declines in mental health, a problem that appeared to be worse for those whose connections to family and friends weren't as tight, a new study has found.

In June 2020, researchers invited participants in an ongoing study of teenage boys and young men in urban and Appalachian Ohio to complete a survey examining changes to mood, anxiety, closeness to family and friends, and other ways the pandemic affected their lives. The study, co-led by researchers at The Ohio State University and Kenyon College, appears in the Journal of Adolescent Health.

Nearly a third of the 571 participants reported that their mood had worsened or their anxiety had increased between March 2020 and June 2020. The study found that worsening mood and increased anxiety during the pandemic were more likely in those with higher socioeconomic status, those who felt decreasing closeness to friends and family and those who were older. Self-reported increases in anxiety were more common among those with a history of depression and/or anxiety.

One example of feedback from a participant: "A return to a much more introverted, anxious and sedentary lifestyle, after recently making attempts to become more social, outgoing and level-headed."

The research team said the study shines light on those who could be most vulnerable to mental health struggles during a pandemic, and potentially during other situations in which they find themselves isolated from their typical social interaction.

"Though serious cases of COVID-19 have been rare among young people, the pandemic appears to have taken another toll on them," said study senior author Amy Ferketich, a professor of epidemiology at Ohio State.

Eleanor Tetreault, the study's lead author and a recent graduate of Kenyon College, said the existing relationships formed within the ongoing Buckeye Teen Health Study provided an opportunity to quickly assess any perceived changes in mood or anxiety at the onset of the pandemic.

Though the findings about worsening mental health are concerning, Tetreault said there were some surprising positive themes that emerged as she and fellow researchers dived into the respondents' answers to open-ended survey questions.

"The group that had really positive experiences talked about the opportunity for self-exploration, having more time to sit and think or get more connected to their family -- at this age, most people are just going, going, going all the time and all the sudden they had this period of time where they could slow down," said Tetreault, who completed a Pelotonia Summer Research Internship at Ohio State's Comprehensive Cancer Center in 2020.

Though the researchers can't be sure what contributed to the worsening mood and anxiety among some respondents, they do have theories.

Being cooped up with parents who were struggling to work from home and manage the stress of the pandemic could be distressing to young people, Ferketich said, adding that those whose home lives weren't stable to begin with would be hardest hit. Participants from higher socioeconomic groups may have been more likely to have parents who were able to work from home and were more likely to report worsening mental health in the first months of the pandemic.

And though the break from the usual routine "might have been kind of nice at first, it did seem that for some people that changed over time, leading them more toward social isolation, anxiety and depression," Tetreault said.

Though pandemics are rare, the findings from the study don't apply just to a global crisis, she said.

"I think this could apply to any kind of really big change or change in routine for an adolescent or group of adolescents. It highlights the importance of finding ways to maintain social connection, and to help young people maintain those connections when normal social interactions are disrupted."

Credit: 
Ohio State University

Study shows mental health, support, not just substance misuse key in parental neglect

LAWRENCE -- Substance use disorder has long been considered a key factor in cases of parental neglect. But new research from the University of Kansas shows that such substance abuse does not happen in a vacuum. When examining whether parents investigated by Child Protective Services engaged in neglectful behaviors over the past year, a picture emerges that suggests case workers should look at substance misuse within the context of other factors, like mental health and social supports, to better prevent child neglect and help families.

KU researchers analyzed data of parents investigated for neglectful behavior toward children aged 2-17 and gauged the level of their substance use as well as if they met the criteria for clinical depression. Researchers also studied whether parents had positive social supports such as friends or family, help with children or financial assistance. The results showed that the relationship between parental substance use behaviors and neglect behaviors varied depending on whether the parent was also experiencing clinical depression in the past year and the types of social supports present in their life. For example, substance use disorder among parents with no co-occurring clinical depression contributes to higher annual neglect frequencies compared to substance use disorder among parents with co-occurring clinical depression.

"Substance use may matter differently across different contexts. When a parent is already experiencing clinical levels of depression, does substance misuse exacerbate already present neglect behaviors? Nobody really knows; the evidence is mixed," said Nancy Kepple, associate professor of social welfare at KU and lead author of the study. "This study is a part of building a case that it's not one single story when it comes to thinking about how parental substance use is associated with neglectful behaviors."

The study, co-written with recent KU doctoral graduate Amittia Parker, was published in the journal Children and Youth Services Review.

The study analyzed data from 3,545 parents of children from Wave 4 of the National Survey of Child and Adolescent Well-Being. Parents in the survey reported their levels of substance use as well as symptoms of depression and data on different types of social supports. Previously, little research had been done on the interaction of substance use, clinical depression and social support for parental neglect, as substance use has been viewed as the primary factor in such behaviors. Neglect is a difficult topic to study, Kepple said, as it is the omission of a behavior -- providing care and basic needs for a child -- as opposed to enacting physical or emotional harm.

Findings showed that the presence of clinical depression and varying types of social support alter the established relationship between substance use disorder and child neglect, suggesting that treatment should look beyond simply promoting abstinence among parents misusing alcohol and other drugs, the researchers said.

For parents without clinical depression, the types of social support alone did not explain neglectful behaviors. But, for their counterparts, it did. Parents experiencing clinical depression were associated with lower neglect frequency when they had people present in their life who they perceived could help raise their children, but those who reported having more friends to spend time with socially had higher rates of neglect.

"For parents who have clinical depression, their substance use does not seem to have as large of an effect if they have social supports that can provide tangible resources to help care for the child," Kepple said. "Interestingly, having more people to spend time with and who can pull parents out of their home may create opportunities for neglect. People in our lives can pull us away from our responsibilities as much as they can help us navigate through challenges."

The relationship between substance use and social supports is more complicated for parents with no co-occurring clinical depression. Social companionship could be protective or risky, depending on the type of substance use behaviors a parent reported. For example, the study found neglect rates were comparable among parents reporting no people in their lives who provided opportunities for recreational activities, regardless of substance use behaviors. In contrast, researchers observed higher neglect risk among parents reporting one to two sources of social companionship who also reported either harmful/risky substance use or a substance use disorder. Yet reporting three or more sources of social companionship increased neglect risk only for the subsample of parents reporting a past-year substance use disorder.

Kepple said future research will further examine the types of social interactions parents have with individuals within their social networks and how that influenced neglectful or harmful behaviors. She also plans to work with parents in recovery from prior substance use disorder to understand how their experiences in recovery services and communities have affected their parenting.

Study results show the importance of not relying on a single factor when making determinations about services or treatment for parents who have neglected or are at risk of neglecting their children. To better serve families, researchers argue, it is necessary to evaluate the big picture, including factors like clinical depression, social supports and substance use. That may require more time, resources and clinical thinking, but the data support modern interventions that provide comprehensive services supporting the recovery and well-being of parents in order to address neglect behaviors.

"Neglect is highly contextual," Kepple said. "There are lots of reasons it might be occurring, and that's what we need to understand and further explore. We can't just say 'there's substance misuse, that's a problem,' or 'they have social support, that's good.' When you break these things down, context matters. These findings suggest an individualized plan is likely the best plan, given the complex interactions that are occurring among different risk and protective factors. If systems mandate a parent remain abstinent from alcohol or substance use without addressing underlying mental health or social supports needs, we are not addressing the whole picture."

Credit: 
University of Kansas

Genetic analysis to help predict sunflower oil properties

Skoltech researchers and their colleagues from the University of Southern California have performed genetic analysis of a Russian sunflower collection and identified genetic markers that can help predict the oil's fatty acid composition. The research was published in BMC Genomics.

Genomic selection, which helps quickly create new crop varieties, has been a much-discussed topic worldwide for the last 10 years. DNA sequencing and extensive genotyping have been applied to obtain genetic profiles of crops. When analyzed and compared to field data, those profiles help identify genetic markers for traits of interest to farming and predict the properties and value of a crop based on its genetic profile alone.
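The core of genomic selection described above - predicting a trait from a genetic profile alone - can be illustrated with a toy additive marker model. This is a simplified sketch: the marker names, effect sizes, and baseline below are hypothetical, and real pipelines estimate effects for thousands of SNPs jointly with methods such as ridge regression or GBLUP rather than summing a handful of hand-picked markers.

```python
# Toy additive model for genomic prediction (hypothetical values):
# predicted trait = population mean + sum(marker effect * allele dosage).

# Estimated additive effect of each marker allele on oleic acid content (%)
marker_effects = {"FAD2_snp1": 12.0, "FAD2_snp2": 5.5, "snpX": -1.3}
baseline_oleic = 30.0  # assumed population mean oleic acid content, in percent

def predict_oleic(genotype):
    """Predict oleic acid content from allele dosages (0, 1, or 2)
    before the plant is ever grown."""
    return baseline_oleic + sum(
        marker_effects[marker] * dose for marker, dose in genotype.items()
    )

line_a = {"FAD2_snp1": 2, "FAD2_snp2": 1, "snpX": 0}  # high-oleic candidate
line_b = {"FAD2_snp1": 0, "FAD2_snp2": 0, "snpX": 2}  # conventional line
print(predict_oleic(line_a))  # 59.5
print(predict_oleic(line_b))  # 27.4
```

Ranking candidate lines by such predictions is what lets breeders discard unpromising material before a field trial, which is the speed-up genomic selection offers over classical breeding.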

"Our work is the first large-scale study of the Russian sunflower genetic collection and one of the first attempts to create new varieties using genomic selection. Predicting what a plant will be like before actually planting it - an idea that seemed utterly unrealistic until recently - has become commonplace in many countries thanks to technological advances. Classical breeding can hardly cope with the challenges posed by the global climate change, growing human needs, and evolving food quality requirements. To get a head start, we should turn to genetics," Alina Chernova, Skoltech PhD and lead author of the study, notes.

This long-term research project has been carried out by a joint team led by Skoltech professor Philipp Khaitovich and featuring scientists from Skoltech, the University of Southern California, Vavilov All-Russian Institute of Plant Genetic Resources, and Pustovoit All-Russian Research Institute of Oil Crops, joined by breeders from the seed-producing company Agroplasma.

The team looked at accessions from two major Russian sunflower gene banks and Agroplasma's collection. Their genetic analysis covered 601 lines of cultivated sunflower to check genetic diversity against the global collection and compare the results with chemical tests of oil obtained from these lines. Bioinformatic analysis revealed genetic markers linked to the oil's fatty acid composition.

"The reason we chose the sunflower is that it is a key source of vegetable fats, and Russia is the world's leading supplier of sunflower oil. You can vary the oil's fatty acid composition - which was the focus of our research - to obtain oils with different properties suitable for roasting, dressings or industrial uses," Skoltech PhD student and study co-author Rim Gubaev says.

"Thanks to this project, we have gained valuable insights and built a team of like-minded people keen on helping breeders to introduce genetics in their work. We have founded Oil Gene - it's a startup that will focus on practical tasks and provide genomic selection services," Gubaev adds.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)