
A future ocean that is too warm for corals might have half as many fish species

image: Researchers found that areas with diverse corals still tended to have more diverse fishes, suggesting that coral diversity begets fish diversity.

Image: 
Davide Seveso

Predicting the potential effects of coral loss on fish communities globally is a fundamental task, especially considering that reef fishes provide protein to millions of people. A new study led by the University of Helsinki predicts how fish diversity will respond to declines in coral diversity and shows that future coral loss might cause a more than 40% reduction in reef fish diversity globally.

Corals increasingly bleach and often die when the water warms. What happens to fish if there are no alternative reefs to swim to? The few fish species that feed on corals will inevitably starve, but the rest might persist in alternative rocky habitat. Until now, it has been hard to do the larger-scale studies needed to project which fish will remain in a world without corals. A new study led by Giovanni Strona at the University of Helsinki finds that global projections of fish diversity without corals are as low as small-scale experiments suggest.

An international team of marine biologists started by mapping tropical fish and coral diversity across the world's oceans for every square degree of latitude and longitude. These unprecedented maps showed what marine biologists have long known: fish and coral diversity vary widely, with many more species in the Indo-Pacific "coral triangle" than in the western Atlantic and eastern Pacific. Marine diversity hotspots have long been explained by the way that latitude, habitat, temperature, and geography affect speciation and extinction rates among corals and fishes alike. After controlling for factors that drive diversity in general, the authors found that areas with diverse corals still tended to have more diverse fishes, suggesting that coral diversity begets fish diversity.

"This is not particularly surprising given that corals provide a unique food source for some species, as well a three-dimensional habitat that many species use for shelter. And the fish that depend on corals may be prey for fish that don't depend directly on corals," says the lead author Giovanni Strona from the University of Helsinki.

After fitting the relationship between coral diversity and fish diversity, the authors ran a simple thought experiment. They simulated global coral extirpation by extrapolating the fish-versus-coral association down to the point where no coral species were left. That extrapolation suggests that around 40% of the world's tropical reef fishes would disappear should corals disappear. As in smaller-scale experiments, this is a much bigger loss than the set of species known to depend directly or even indirectly on coral, suggesting that coral reef food webs will begin to unravel if corals go extinct. The unravelling is expected to be more intense in some places than others: the Central Pacific is expected to lose more than 60% of its reef fish, compared with only 10% in the Western Atlantic.
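As a rough illustration of the arithmetic behind that thought experiment (the study itself used a far richer statistical model that controls for environment and biogeography), a fitted line can be extrapolated to the point of zero coral species. The sketch below is hypothetical Python with made-up richness values, not the authors' code:

```python
# Minimal sketch of the extrapolation logic; the richness values are
# hypothetical placeholders, not data from the study.
import numpy as np

coral_richness = np.array([5, 20, 60, 120, 250, 400])     # coral species per grid cell
fish_richness = np.array([315, 335, 425, 535, 805, 1095])  # fish species per grid cell

# Fit the fish-versus-coral association and extrapolate it to zero corals
slope, intercept = np.polyfit(coral_richness, fish_richness, 1)
fish_without_corals = intercept           # predicted fish richness at 0 coral species
current_mean_fish = fish_richness.mean()

loss = 1 - fish_without_corals / current_mean_fish
print(f"Projected loss of reef fish diversity if corals vanish: {loss:.0%}")
```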

"We first devised a statistical model to disentangle the effect of environment, biogeography and history on fish and coral diversity that accurately predicting local-scale fish diversity as a response to several environmental variables such as water temperature, pH and salinity and coral diversity," Strona explains.

"Besides offering a way to predict fish diversity under novel environmental conditions, the approach offered a tool to explore how fish diversity will vary in response to changes in coral diversity", continues Valeriano Parravicini, a co-leader of the study at the University of Perpignan.

"For anyone who has enjoyed snorkeling on a coral reef, or for the millions of people that depend on reef fishes for food, this thought experiment should be concerning. But it also inspires greater efforts to conserve and restore coral reefs. The benefits of doing so would extend far beyond corals, to fish and other organisms that depend directly or indirectly on corals," says Kevin Lafferty, Senior Scientist with the U.S. Geological Survey, at the University of California, Santa Barbara.

Credit: 
University of Helsinki

Detailed simulation of air flow after sneezing to study the transmission of diseases

video: The video shows the results of the numerical simulation of aerosol dispersion produced by a sneeze. Particles are expelled during the expiration of air and are mainly transported by the action of moving air and gravity. To evaluate the impact of the evaporation of the aqueous fraction, which reduces the size of the particles, the transport of aerosols that had not evaporated (left-hand panel) was compared with those that had evaporated (right-hand panel). The color shows the evaporated water fraction, from 0 (no evaporation) to 1 (total evaporation).

Image: 
©URV

By the beginning of April 2021, the number of people infected during the COVID-19 pandemic had risen to more than 130 million, of whom more than 2.8 million had died. The SARS-CoV-2 virus responsible for COVID-19 is transmitted particularly by droplets or aerosols emitted when an infected person speaks, sneezes or coughs. This is how viruses and other pathogens spread through the environment and transmit infectious diseases when they are inhaled by someone else.

The capacity of these particles to remain suspended in the air and to spread in the environment depends largely on the size and nature of the air flow generated by the expiration of air. As with other airborne infectious diseases such as tuberculosis, common flu or measles, the role played by fluid dynamics is key to predicting the risk of infection by inhaling these particles in suspension.

In a coughing event that lasts 0.4 seconds and has a maximum exhaled air speed of 4.8 m/s, the flow first generates a turbulent stream of air that is hotter and more humid than the surrounding environment. Once the expiration is over, the stream turns into a puff of air that rises because of buoyancy as it dissipates.

The particles transported by this flow form clouds whose trajectories depend on particle size. The dynamics of the largest particles (diameters above 50 microns) are governed by gravity: they describe parabolic trajectories with a limited horizontal range. Despite their limited ability to remain in suspension and their limited reach, their viral load can be high because of their size.

In contrast, the smallest particles (diameters below 50 microns) are transported by the air flow. These aerosols can remain in suspension for longer and spread over a greater area. The largest particles remain in the air for a few seconds, while the smallest can remain suspended for up to a few minutes. Even though their viral load is smaller, these aerosols can get through face masks and move from room to room, for example through ventilation systems; the retention percentage of face masks decreases as particles get smaller.
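A back-of-the-envelope settling calculation helps to show why particle size matters so much. The sketch below uses Stokes' law for small spheres in still air; it is an approximation for illustration only, not taken from the study, and it ignores evaporation, turbulence and room air currents, which strongly affect the smallest aerosols:

```python
# Rough illustration only: Stokes settling of respiratory droplets in still air.
# Ignores evaporation, turbulence and room air currents.
g = 9.81              # gravitational acceleration, m/s^2
rho_drop = 1000.0     # droplet density (approximately water), kg/m^3
mu_air = 1.8e-5       # dynamic viscosity of air, Pa*s
release_height = 1.6  # approximate mouth height, m

for diameter_um in (100, 50, 10):
    d = diameter_um * 1e-6
    v = rho_drop * g * d**2 / (18 * mu_air)  # Stokes terminal velocity, m/s
    t = release_height / v                   # time to fall to the floor, s
    print(f"{diameter_um:>3} um droplet: settles at {v:.4f} m/s, "
          f"reaches the floor in about {t:.0f} s")
```

In this idealized picture, droplets of around 100 microns fall out of the air within a few seconds, while 10-micron aerosols take many minutes, consistent with the qualitative behaviour described above.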

The behaviour of the particle cloud depends on the size of the particles and can be complicated by the effects of evaporation, which gradually reduces the diameter of the droplets.

With the support of the Consortium of University Services of Catalonia, the research group from the URV's Department of Mechanical Engineering, led by Alexandre Fabregat and Jordi Pallarés, in conjunction with researchers from Utah State University and the University of Illinois, has used high-performance numerical simulations to study in unprecedented detail the process of aerosol dispersion generated by a cough or a sneeze. The level of detail was so high that it required considerable computing power, with numerous processors of a supercomputer working simultaneously.

The results indicate that the plume of air produced by the expiration carries particles smaller than 32 microns above the height of emission, generating a cloud with a great capacity to remain in suspension and to be dispersed by air currents over a significant distance. The largest particles have a limited range, which is not changed by the effect of evaporation during their fall to the ground. Assuming typical viral loads for infectious diseases, the results were used to draw a map of the concentration of viral particles around the infected person after they had coughed or sneezed.

This research has been published as two scientific articles in the journal Physics of Fluids with the titles "Direct numerical simulation of the turbulent flow generated during a violent expiratory event" and "Direct numerical simulation of turbulent dispersion of evaporative aerosol clouds produced by an intense expiratory event". Both articles were featured on the front cover because of their scientific impact.

Credit: 
Universitat Rovira i Virgili

Monkeys also learn to communicate

video: Infant marmoset calls.

Image: 
Video: Kurt Hammerschmidt

Language distinguishes us humans; we learn it through experience and social interactions. Especially in the first year of life, human vocalizations change dramatically, becoming more and more language-like. In our closest relatives, non-human primates, vocal development was previously thought to be largely predetermined and completed within the first few weeks after birth. In a behavioral study now published, researchers from the German Primate Center, the University of Tübingen and the Rockefeller University New York were able to show that the infantile development of vocalizations in common marmosets also includes an extended flexible phase, without which language development in humans would not be possible. The common marmoset is therefore a suitable animal model to better understand the evolution of early infant speech development (Science Advances).

Human language changes dramatically in the first year after birth. It evolves from preverbal, not-yet-language-like calls such as laughter or crying, to pre-linguistic vocalizations, to a babbling phase in which utterances become increasingly language-like and complex. Here, several factors are considered particularly critical for vocal development, including maturation, learning, and early social interactions with parents.

In contrast, it was assumed for decades that phonation in monkeys develops exclusively as a result of physical growth and maturation and that it is independent of learning processes or external factors such as social interaction. For example, previous studies showed that deafness or social isolation due to the absence of parents has little or no effect on the vocal development of nonhuman primates, and that vocal development in most monkey species is completed within a few weeks after birth, so these animals appear to be equipped with the adult vocal repertoire by the juvenile stage. "One of the reasons for these findings is probably that previous work on vocal development in non-human primates has mainly focused on the first weeks after birth and ignored possible changes associated with later growth in the following months until adulthood," says Yasemin Gültekin, first author of the study and a scientist at the German Primate Center.

A team led by Yasemin Gültekin and Steffen Hage from the German Primate Center - Leibniz Institute for Primate Research, The Rockefeller University New York and the University of Tübingen closely studied the vocal development of common marmosets, a social primate species, from early infancy until sexual maturity at 15 months of age. During this period, the vocalization behavior of the animals kept at the Rockefeller University was recorded with microphones every month. In total, nearly 150,000 vocalizations of six marmosets were recorded and analyzed. "Our results show that, similar to the first months of human life, the vocalization behavior of marmosets changes through different developmental stages from the first weeks after birth to adulthood," says Kurt Hammerschmidt, who analyzed the data of the study at the German Primate Center.

The team found that all species-specific vocalization types were already present in the first month after birth and that their developmental changes in acoustic structure can be largely explained by physical maturation. These results are in agreement with previous work suggesting that the acoustic structure of vocalizations is innate and does not require learning through auditory or social feedback. "While changes in acoustic structure could be explained mainly by physical growth or maturation, we found that the way these vocalizations are used flexibly during development points to experiential learning mechanisms, which are one of the key features in human language development," says Gültekin, lead author of the study.

"Our work provides an important building block to better understand the evolutionary foundations of early human language development. It sets the stage for future studies on how social interactions can influence speech development," concludes Yasemin Gültekin.

Credit: 
Deutsches Primatenzentrum (DPZ)/German Primate Center

Severe cannabis intoxication and rates of ingestion in children rise after legalization

Significantly higher rates of child intensive care admissions for unintentional cannabis poisonings have been seen following legalization of the drug in Canada.

Researchers from The Hospital for Sick Children (SickKids), based in Toronto, found a four-fold increase in unintentional poisonings in children under the age of 12 and a three-fold increase in intensive care admissions for severe cannabis poisoning in the first two years following cannabis legalization.

However, the overall number of visits per month for cannabis intoxications to the SickKids Emergency Department (ED) remained consistent when comparing the pre- and post-legalization periods. The findings were published in the peer-reviewed journal Clinical Toxicology.

Led by Dr Yaron Finkelstein, Staff Physician, Paediatric Emergency Medicine and Clinical Pharmacology and Toxicology at SickKids, the study compared cannabis-related ED visits, hospitalizations and intensive care unit (ICU) admissions at SickKids during pre- and post-legalization periods to analyze the unintentional impacts of the legislation.

"While uncommon in adults, cannabis intoxication can have significant negative impacts on young children including behavioural changes, seizures, respiratory depression, problems with coordination and balance, and even coma. As different formulations of cannabis continue to be legalized, it is important for everyone who has cannabis in their home, to be aware of the potential harms to children and ensure cannabis products are safely stored," notes Finkelstein, Senior Scientist, Child Health Evaluative Sciences at SickKids.

Measuring admissions for cannabis intoxication to SickKids over a 12-year period, from January 1, 2008 to December 31, 2019, the study identified that a higher proportion of intoxicated children were admitted to the ICU following legalization (13.6%, versus 4.7% before legalization).

The study determined that the increases in severe intoxications from cannabis were primarily due to exposure of young children to cannabis edibles, which have become increasingly accessible and popular. Edible cannabis products are both highly concentrated and visually attractive to young children, making ingestion the most consequential route of paediatric exposure. Inconsistencies and difficulties in determining the exact formulation and potency of the edible ingested can also make it challenging for health-care providers to anticipate the severity and duration of the effects of cannabis exposure.

The study team, including researchers and trainees from across SickKids, hopes that by raising awareness of the potential dangers of unintentional cannabis poisonings, the findings will encourage the public to be even more careful when storing cannabis products within the home, particularly edibles that can often be mistaken by children for regular food and candy.

"As the COVID-19 pandemic has presented more opportunities for families to be at home, it is even more important to ensure that substances, such as cannabis, are stored out of the reach of young children. There are simple actions everyone can take to help prevent unintentional poisonings and keep children safe, including keeping cannabis products in a locked container, away from other food and drinks" adds Finkelstein, who is also a Professor, Departments of Paediatrics and Pharmacology and Toxicology at the University of Toronto.

Credit: 
Taylor & Francis Group

Fecal records show Maya population affected by climate change

image: Fecal records from lake sediment show that Maya lived in the area for longer than previously believed.

Image: 
Andy Breckenridge

A McGill-led study has shown that the size of the Maya population in the lowland city of Itzan (in present-day Guatemala) varied over time in response to climate change. The findings, published recently in Quaternary Science Reviews, show that both droughts and very wet periods led to important population declines.

These results are based on a relatively new technique that measures stanols (organic molecules found in human and animal faecal matter) in sediment taken from the bottom of a nearby lake. Measurements of stanols were used to estimate changes in population size and to examine how they align with information about climate variability and changes in vegetation drawn from other biological and archaeological sources.

By using the technique, the researchers were able to chart major Maya population changes in the area over a period starting 3,300 years before the present (BP). They were also able to identify shifts in settlement patterns that took place over the course of hundreds of years that are associated with changes in land use and agricultural practices.

They discovered, moreover, that the land had been settled earlier than previously suggested by archaeological evidence.

New tool provides surprising information about human presence in Maya lowlands

The evidence from faecal stanols suggests that humans were present on the Itzan escarpment about 650 years before the archaeological evidence confirms it. It also shows that the Maya continued to occupy the area, albeit in smaller numbers, after the so-called "collapse" between 800-1000 AD, when it had previously been believed that drought or warfare caused the entire population to desert the area. There is further evidence of a large population spike around the same time as a historical record of refugees fleeing the Spanish attack of 1697 AD on the last Maya stronghold in the southern Maya lowlands (Nojpeten, or modern-day Flores in Guatemala) - something that had not been known before.

Estimates of ancient population size in the Maya lowlands have traditionally been obtained through ground inspection and excavation. To reconstruct population dynamics, archaeologists locate, map, and count residential structures, and they excavate them to establish dates of occupation. They compare population trends at the site and regional levels. And they then use techniques such as pollen analysis and indicators of soil erosion into lakes to reconstruct the ecological changes that took place at the same time.

"This research should help archaeologists by providing a new tool to look at changes that might not be seen in the archaeological evidence, because the evidence may never have existed or may have since been lost or destroyed," said Benjamin Keenan, a PhD candidate in the Department of Earth and Planetary Sciences at McGill, and the first author on the paper. "The Maya lowlands are not very good for preserving buildings and other records of human life because of the tropical forest environment."

Maya population size affected by both droughts and wet periods

The faecal stanols from the sediment in Laguna Itzan confirm that the Maya population in the area declined due to drought at three different periods: between 90-280 AD, between 730-900 AD, and during the much less well studied drought between 1350-950 BC. The researchers also found that the population declined during a very wet period from 400-210 BC, something which has received little attention until now. The population decline in response to both dry and wet periods shows that there were climatic effects on population at both climate extremes, not only during dry periods.

"It is important for society generally to know that there were civilisations before us that were affected by and adapted to climate change," said Peter Douglas, an assistant professor in the Department of Earth and Planetary Sciences and the senior author on the paper. "By linking evidence for climate and population change we can begin to see a clear link between precipitation and the ability of these ancient cities to sustain their population."

The research also suggests that the Maya people may have adapted to environmental issues such as soil degradation and nutrient loss by using techniques such as the application of human waste (also known as night soil) as a fertiliser for crops. This is suggested by a relatively low amount of fecal stanols in the lake sediment at a time when there is archaeological evidence for the highest human populations. One explanation for this is that human waste was applied to soils as fertilizer and therefore the stanols were not washed into the lake.

Credit: 
McGill University

Buttoned up biomolecules

Increasing our understanding of cellular processes requires information about the types of biomolecules involved, their locations, and their interactions. This requires the molecules to be labeled without affecting physiological processes (bioorthogonality). This works when the markers are very quickly and selectively coupled using small molecules and "click chemistry". In the journal Angewandte Chemie, a team of researchers has now introduced a novel type of click reaction that is also suitable for living cells and organisms.

As an example, labeling biomolecules allows for the localization and characterization of tumors when an antibody that binds to specific molecules in the tumor cells is injected. A dye is then also injected. The antibodies and the dye are both equipped with small molecular groups that have almost no influence on cellular processes. When they encounter their counterpart, they bind immediately and specifically to each other with no side reactions--as easily as the two parts clicking together. This is where the term click chemistry comes from. The dye only remains attached to tumor cells, making them detectable.

The most well-known click chemistry reaction is the azide-alkyne reaction. An azide group reacts with an alkyne group to form a five-membered ring. However, this reaction requires a toxic copper catalyst, making it unsuitable for living systems. An alternative is the use of cyclic alkynes, in which the triple bond is under so much strain that the reaction works without a catalyst. Yet the ring structure can make these reagents unsuitable for some applications.

A team headed by Justin Kim at the Dana-Farber Cancer Institute and Harvard Medical School (Boston, USA) has now developed an alternative click reaction with linear, terminal alkynes, which works rapidly and is catalyst-free under complex physiological conditions. After a precise analysis of the electronic interactions in alkynes and tests with a variety of substituents, the team found that certain alkynes with halogens on both sides of the triple bond are reactive enough. The trick was to balance the different influences of the individual substituents so that the alkynes were sufficiently activated (push-pull activation) to react without a catalyst while remaining safe from attack by cellular components. For the other half of the click unit the team chose to use N,N-dialkylhydroxylamines (organic compounds containing both nitrogen and oxygen) instead of the conventional azides. The resulting reaction products (enamine-N-oxides) are biocompatible.

These new click reactions (retro-Cope eliminations) are very fast. The products are formed regioselectively, and the components are sufficiently stable and can easily be introduced to biomolecules. This broadens the spectrum of bioorthogonal coupling reactions for cellular labeling in living systems.

Credit: 
Wiley

Floods may be nearly as important as droughts for future carbon accounting

Plants play an essential role in curbing climate change, absorbing about one-third of the carbon dioxide emitted from human activities and storing it in soil so it doesn't become a heat-trapping gas. Extreme weather affects this ecosystem service, but when it comes to understanding carbon uptake, floods are studied far less than droughts - and they may be just as important, according to new research.

In a global analysis of vegetation over more than three decades, Stanford University researchers found that photosynthesis - the process by which plants take up carbon dioxide from the atmosphere - was influenced primarily by floods and heavy rainfall nearly as often as by droughts in many locations. The paper, published in Environmental Research Letters on June 29, highlights the importance of incorporating plant responses to heavy rainfall in modeling vegetation dynamics and soil carbon storage in a warming world.

"These wet extremes have basically been ignored in this field and we're showing that researchers need to rethink it when designing schemes for future carbon accounting," said senior study author Alexandra Konings, an assistant professor of Earth system science in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "Specific regions might be much more important for flood impacts than previously thought."

More photosynthesis in combination with other factors can enable greater amounts of carbon to be stored in the soil over the long term, according to the researchers. To estimate the presence of photosynthesis, they analyzed plant greenness according to publicly available satellite data from 1981 to 2015.

Because the field of carbon accounting is dominated by research on drought impacts, the co-authors were surprised to find that photosynthesis was affected by flooding so frequently - in about half the regions in the analysis. While drought is known to decrease photosynthesis, wet extremes can either decrease or accelerate the process.

"I think the drought side is probably something that many of us understand clearly because we can see soils drying out - we know that plants need water to be able to function normally," said lead study author Caroline Famiglietti, a PhD student in Earth system science.

Using statistical analysis, the researchers divided the globe into regions and isolated periods during which the plants' photosynthetic activity wouldn't have resulted from other factors, such as temperature or sunlight changes. They then used several long-term soil moisture datasets to determine which locations were more sensitive to extreme wet events than to extreme dry events and found that many regions in central Mexico, eastern Africa and northern latitudes should be targeted for further investigation.
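Conceptually, the region-by-region comparison boils down to asking whether greenness anomalies respond more strongly to the wettest months or to the driest months. The sketch below shows that idea on simulated data; it is not the study's actual pipeline or datasets:

```python
# Conceptual sketch on simulated data (not the study's datasets or method):
# for each region, compare the response of greenness anomalies to the wettest
# versus the driest soil-moisture months.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_months = 3, 420  # e.g. monthly records over 1981-2015

for region in range(n_regions):
    soil_moisture = rng.normal(size=n_months)
    greenness = 0.3 * soil_moisture + rng.normal(scale=0.5, size=n_months)

    wet = soil_moisture > np.quantile(soil_moisture, 0.9)  # wettest 10% of months
    dry = soil_moisture < np.quantile(soil_moisture, 0.1)  # driest 10% of months

    wet_effect = abs(greenness[wet].mean() - greenness.mean())
    dry_effect = abs(greenness[dry].mean() - greenness.mean())
    label = "wet-extreme sensitive" if wet_effect > dry_effect else "dry-extreme sensitive"
    print(f"Region {region}: wet effect {wet_effect:.2f}, dry effect {dry_effect:.2f} -> {label}")
```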

"Everything that is observed in this master dataset reflects the behavior of the broader climate system," Famiglietti said. "This paper identified something surprising, but it didn't answer all the questions we still have."

In a warmer world, extreme weather is projected to become more intense, extensive and persistent, but the mechanisms controlling drought responses in plants are much better understood than extreme wet responses. The findings suggest an opportunity to address "a big component of the uncertainty in future climate change and its links to ecosystem carbon storage," according to Konings.

"If we can better understand these processes, we can improve modeling and better prepare for the future," Famiglietti said.

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences

Have a pandemic plan? Most people did not

image: Steven Woods, University of Houston psychology professor and director of the Cognitive Neuropsychology of Daily Life Laboratory, is corresponding author of the study.

Image: 
University of Houston

Since the onset of the COVID-19 pandemic last year, medical experts have stressed the importance of having a plan in the event of a positive test result. Where should you self-isolate? Do you have personal protective equipment for family members? Who should you notify about your diagnosis? An overwhelming 96% of healthy, educated adults surveyed by University of Houston researchers in the early stages of the pandemic did not have a comprehensive plan in mind, while 62% didn't have a plan at all.

"What that suggests is that it was difficult even for very high functioning people to digest and use all the complex information that was quickly emerging about COVID. They were largely unprepared and unsure how to proceed," said Steven Woods, UH psychology professor and corresponding author of the study published in the Journal of Clinical and Experimental Neuropsychology.

The World Health Organization declared COVID-19 a pandemic on March 11, 2020. Woods and Michelle A. Babicz, first author and UH clinical psychology doctoral student, spoke to 217 participants by phone between April 23 and May 21, 2020. Survey participants completed standard measures of neurocognition, health literacy, intelligence, personality and anxiety, while also answering questions about their COVID-19 information seeking skills, knowledge and adherence to recommendations from the Centers for Disease Control and Prevention, such as wearing masks and social distancing.

"The surprising outcome confirms the importance of building basic health literacy skills, because people's ability to understand numbers and medical terms was associated with how effectively they looked for credible COVID-19 information on the internet, how much they learned about COVID-19, and how they used that information to keep themselves and others safe," explained Woods, who runs the Cognitive Neuropsychology of Daily Life Laboratory in the UH College of Liberal Arts and Social Sciences.

Researchers point out that if these healthy individuals had a challenging time absorbing COVID-related information, then such challenges may be even greater in people with limited educational opportunities or with neurocognitive disorders, such as Alzheimer's disease or a brain injury, due to low health literacy and impaired memory.

"People with lower neurocognitive ability may be at higher risk for acquiring and using misinformation about COVID-19, which could have downstream implications for both personal and public health," said Babicz.

The researchers offer an array of effective techniques for individuals to improve their ability to learn and remember health information, including spacing -- processing new information over time rather than cramming it in all at once. In addition, they suggest using flash cards to test knowledge recall, and elaboration -- the practice of building a story around what one has learned.

"The findings may also help with the development and targeting of information campaigns as new public health crises inevitably emerge," Babicz added. "We suggest these campaigns use language and constructs that are accessible to persons with low levels of health literacy, perhaps through community-based participatory research approaches."

Credit: 
University of Houston

COVID-19 in Europe and travel: Researchers show the important role of newly introduced lineages in COVID-19 resurgence after last summer

Following the first wave of SARS-CoV-2 infections in spring 2020, Europe experienced a resurgence of the virus starting late summer. Although it appears clear that travel had a significant impact on the circulation of the virus, it remains challenging to assess how it may have restructured and reignited the epidemic in the different European countries.

In a new study published in the journal Nature this June 30th, 2021, Philippe Lemey - Rega Institute, KU Leuven, Simon Dellicour - SpELL, Spatial Epidemiology Lab, Université Libre de Bruxelles, and their collaborators, built a phylogeographic model to assess how newly introduced viral lineages, as opposed to persisting ones, contributed to the resurgence of COVID-19 in Europe. Their model was informed using epidemiological, mobility, and viral genomic data from ten European countries (Belgium, France, Germany, Italy, Netherlands, Norway, Portugal, Spain, United Kingdom, Switzerland).

Their analyses show that in the majority of the countries under investigation, more than half of the lineages circulating at the end of the summer resulted from new introductions since June 15. The researchers also show that the success of transmission of the newly introduced lineages was predicted by the local incidence of COVID-19: in countries that experienced a relatively higher summer incidence (e.g. Spain, Portugal, Belgium and France), the introduction events led to proportionately fewer active transmission chains after August 15.

Their results also indicate, for instance, that introductions in the UK were particularly successful in establishing local transmission chains, with a considerable fraction of introductions originating from Spain.

"Imagine a fire: if there are already quite a few outbreaks in a forest, lighting a few more will not change the fate of the forest; the fire will spread anyway. On the opposite, if there are only a few sporadic fire spots, then lighting new ones can accelerate and increase the violence of the overall fire to come" explains Simon Dellicour - author of the article, FNRS Research Associate at the ULB.

These results illustrate the threat of viral spread via international travel, a threat that must be carefully considered by strategies to control the current spread of variants that are more transmissible and/or evade immunity.

The pandemic exit strategy offered by vaccination programs is a source of optimism that also sparked proposals by EU member states to issue vaccine passports in a bid to revive travel and rekindle the economy. In addition to implementation challenges and issues of fairness, there are risks associated with such strategies when immunization is incomplete, as likely will be the case for the European population this summer.

The authors of the study conclude that conditions similar to those demonstrated in their study could provide fertile ground for viral dissemination and resurgence, which may now also involve the spread of variants that evade immune responses triggered by vaccines and previous infections. They hope that a well-coordinated, unified implementation of European strategies to mitigate the spread of SARS-CoV-2 will reduce the chances of future waves of infection.

Credit: 
Université libre de Bruxelles

Fairer finance could speed up net zero for Africa by a decade

Levelling up access to finance so that poorer countries can afford the funds needed to switch to renewable energy could see regions like Africa reaching net zero emissions a decade earlier, according to a study led by UCL researchers.

Access to finance (credit) is vital for the green energy transition needed to reduce global greenhouse gas emissions, as laid out in the Paris Agreement. But access to low-cost finance is uneven, with the cost of securing capital to help reach net zero differing substantially between regions.

Modelling created for the study, "Higher cost of finance exacerbates a climate investment trap in developing economies", published in Nature Communications, shows the road to decarbonisation for developing economies is disproportionately impacted by differences in the weighted average cost of capital (WACC). This is a financial ratio used to calculate how much a company or organisation pays to finance its operations, whether through debt, equity or both. The lower the value, the more easily the company or government can access funds.
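As a concrete example, the standard textbook WACC weights the cost of equity and the tax-adjusted cost of debt by their shares of total financing. The figures below are purely illustrative and not taken from the study:

```python
# Illustrative WACC calculation (textbook formula; the numbers are hypothetical).
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    total = equity + debt
    return (equity / total) * cost_of_equity + (debt / total) * cost_of_debt * (1 - tax_rate)

# A project financed 40% with equity at 12% and 60% with debt at 8%, with a 25% tax rate:
project_wacc = wacc(equity=40, debt=60, cost_of_equity=0.12, cost_of_debt=0.08, tax_rate=0.25)
print(f"WACC = {project_wacc:.1%}")  # 8.4%: the return the project must earn to cover its financing
```

A higher WACC raises that hurdle rate, which is why otherwise identical renewable projects can be far harder to finance in developing economies.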

In the case of Africa, and a scenario where global warming this century is kept at 2°C, researchers calculated that current unfavourable WACC values will stunt the region's green electricity production by 35%.

The study is a collaboration between researchers from the UCL Institute of Sustainable Resources and the UCL Energy Institute, both of which sit within UCL's Bartlett Faculty of the Built Environment. It makes the case for policy interventions to lower WACC values for low-carbon technologies by 2050. This would allow Africa to reach net-zero emissions approximately 10 years earlier than if reductions in the cost of capital are not considered, so 2058 rather than 2066.

It describes the 'climate investment trap' that developing economies are faced with when climate-related investments remain chronically insufficient. These regions of the world already pay a high cost of finance for low-carbon investments, delaying the energy system transition and the reduction of emissions. Yet, unchecked climate change would lead to greater impacts in these regions, raising the cost of capital and discouraging investment even further. The trap is so binding in itself that poorer countries will struggle to escape it - especially in the aftermath of COVID-19 and its impact on their economies.

While developing economies require the bulk of low-carbon investment, and developed countries are where most financial capital is concentrated, the former currently appear to be left out of the main sustainable finance efforts and initiatives.

The authors of the report suggest that radical changes are needed, such as helping to underwrite the perceived greater risks of low-carbon investments in such regions, so that capital is more equitably distributed and all regions, not just those in the global north, can afford to work towards net zero at the rate needed to tackle climate change for the benefit of everyone.

International organisations such as the IMF, investors and policymakers can all take responsibility for lowering the costs of capital in Africa.

Lead author Dr Nadia Ameli (UCL Institute for Sustainable Resources) said: "Our research shows how earlier action to improve financing conditions could have a significant impact on the speed and timing of the transition to renewable energy in lower and middle-income countries which, in turn, will significantly help to protect our planet.

"We don't believe it is fair that regions where people are already losing their lives and livelihoods because of the severe impacts of climate change also have to pay a high cost of finance to switch to renewables. Radical changes in finance frameworks are needed to better allocate capital to the regions that most need it. We should take the opportunity to reframe international market finance, where lower cost of capital for developing economies would allow for low-carbon development at a more internationally equitable cost. The sooner we act the better."

Co-author Professor Michael Grubb (UCL Institute for Sustainable Resources) said: "There is a growing belief that, with the dramatic decline in the global average cost of renewables, it will be much easier for the developing world to decarbonise. Our analysis shows that major obstacles remain, particularly given the difficulties that many of these countries have in accessing capital on the same terms. Appropriate international financial support remains vital to accelerate global decarbonisation."

Co-author Dr Hugues Chenet (UCL Institute for Sustainable Resources) said: "Our analysis shows the additional difficulty for least developed economies to access capital to finance their decarbonisation. Sadly, we see that these countries are literally left aside by new sustainable finance policy frameworks such as the European Union Sustainable Finance Action Plan, which ignores parts of the world that need it most."

Co-author Dr Matt Winning (UCL Institute for Sustainable Resources) said: "Developing economies are often disadvantaged when it comes to decarbonisation simply because they are just that: developing. Even with abundant renewable resources, capital costs in developing regions can be higher simply because of the risks involved in investing there. Our results show that policies to achieve a more level playing field for climate finance for those economies can make a significant impact."

Earlier this month UCL and University of Exeter academics joined with the International Centre for Climate Change and Development (ICCCAD) to launch a 1.5 Degree Charter to highlight to global leaders how breaching the 1.5°C target for warming this century, outlined in the Paris Agreement, will cost far more than paying poorer nations to help reach it.

Credit: 
University College London

'Plugging in' to produce environmentally friendly bioplastics

Bioplastics -- biodegradable plastics made from biological substances rather than petroleum -- can be created in a more economical and environmentally friendly way from the byproducts of corn stubble, grasses and mesquite agricultural production, according to a new study by a Texas A&M AgriLife Research scientist.


A bioenergy sorghum crop is harvested near College Station. (Texas A&M AgriLife photo)

This new approach involves a "plug-in" preconditioning process, a simple adjustment for biofuel refineries, said Joshua Yuan, Ph.D., AgriLife Research scientist, professor and chair of Synthetic Biology and Renewable Products in the Texas A&M College of Agriculture and Life Sciences Department of Plant Pathology. These "plug-in" technologies allow for optimization of sustainable, cost-effective lignin -- the key component of bioplastics used in food packaging and other everyday items.

The $2.4 million project is funded by the U.S. Department of Energy's Energy Efficiency and Renewable Energy Bioenergy Technologies Office. The research has recently been published in Nature Communications.

Yuan and researchers are submitting next-phase requests for additional project funding.

An adaptable process

Joshua Yuan, Ph.D. (Texas A&M AgriLife photo)

Efficient extraction and use of lignin is a major challenge for biofuel refineries, Yuan said.

"Our process takes five conventional pretreatment technologies and modifies them to produce biofuel and plastics together at a lower cost."

Yuan's research builds on previous work investigating enhanced extraction methods for lignin.

The new method, named "plug-in preconditioning processes of lignin," or PIPOL, can be directly added into current biorefineries and is not cost prohibitive, Yuan said. PIPOL is designed to integrate dissolving, conditioning and fermenting lignin, turning it into energy and making it easily adaptable to biorefinery designs.

Bioeconomy 'a federal priority'

Yuan said the bioeconomy and biomanufacturing sectors are a federal priority as the White House Office of Science and Technology Policy points to bioeconomy infrastructure, innovation, products, technology and data to enhance U.S. economic growth.


A high-yielding perennial sorghum forage hybrid can be used as a feedstock to create bioplastics in a more economical and environmentally friendly way. (Texas A&M AgriLife photo by Kay Ledbetter)

The bioeconomy supports some 285,000 jobs and generates $48 billion in annual revenue.

"Innovation is the key to achieving growth and a more widespread use of biodegradable plastics. Lignocellulosic biorefinery commercialization is hindered by limited value-added products from biomass, lack of lignin utilization for fungible products and overall low-value output with ethanol as primary products," he said. "This recent discovery will make significant strides to overcome some of these challenges."

Yuan also touted the research for its environmentally friendly aspects.

"We are producing over 300 million tons of plastics each year," he said. "It's critical to replace those with biodegradable plastics. This work provides a path to produce bioplastics from common agriculture waste like [that from production of] corn and other grasses and wood.

"We think this research is very industrially relevant and could only help enable the biorefinery and polymer industries to [attain] greater efficiencies and economic opportunity."

The role of agriculture byproducts

AgriLife Research and the College of Agriculture and Life Sciences share a commitment to seek solutions through science to solve environmental challenges. Their research has already found that sustainable products such as mesquite and high-tonnage sorghum can be used as feedstock for biofuel production.

Agricultural byproducts such as corn stubble and other grasses are alternative feedstock sources for biofuel plants, Yuan said. These create potential new revenue streams for farmers, as well as for the transportation sector that moves harvested feedstock and byproduct crops to refinery operations.

"We have shown that bioplastics from lignocellulosic biorefineries can be more economically beneficial, which opens new avenues to use agricultural waste to produce biodegradable plastics," Yuan said. "The discovery will mitigate global climate changes via replacing fossil fuel and nondegradable plastics by renewable and biodegradable plastics."

Credit: 
Texas A&M AgriLife Communications

University of Cincinnati screening program contributes to increase in HIV diagnoses

image: Michael Lyons, M.D., of the Department of Emergency Medicine at the University of Cincinnati College of Medicine.

Image: 
Colleen Kelley/UC Creative + Brand

Newly published research shows that a screening program in the University of Cincinnati Medical Center Emergency Department helped detect an outbreak of HIV among persons who inject drugs in Hamilton County, Ohio, from 2014-18.

The study was published in PLOS ONE.

The results of the study highlight UC contributions to public health surveillance as yet another reason why emergency departments should be screening for undiagnosed HIV infections, according to Michael Lyons, MD, associate professor in the Department of Emergency Medicine at the UC College of Medicine.

"The importance of emergency department screening has been established for over 20 years," says Lyons. "Diagnosing people as early as possible allows for changes in behavior to stop spreading the illness and treatment that improves their health and makes them much less infectious to others."

Since 2015, an increasing number of HIV outbreaks among those who inject drugs have been reported in the United States. The study found this single testing site contributed 20% of new HIV diagnoses regionally during a period of rapidly increasing HIV infection in that group.

An Early Intervention Program (EIP) was founded at UC in 1998, the first program of its kind in the country. The EIP offers HIV intervention/prevention counseling, testing, linkage to care and many other services to assist individuals.

"Everyone understands that early diagnosis of HIV is critical for individuals and public health," says Lyons. "Everyone knows that when you screen, that data goes into surveillance systems. This study highlights that a contribution to surveillance is an important public health outcome. In this case, it helped public health authorities to identify a serious HIV outbreak and trigger a Centers for Disease Control and Prevention investigation and response planning. That's not to say that we solved this public health problem, but your chances of helping are a lot better if you know about the crisis than if you don't."

The screening is done through a blood test or an oral swab. The blood test method is the most common, and the sample is sent to the lab, with results returned typically in 90 minutes. If the patient has been discharged, the emergency staff follows up with them later. The health department is notified of all positive results for surveillance and to facilitate partner counseling and referral services.

Lyons says the screening program at UC follows a few different program models. They have publicly funded health promotion advocates who are adjunct health care workers in the department who help with screening. There are also integrated screening workflows in the electronic health record where the nurses and providers are prompted to order testing as well.

Lyons and the other researchers would like to see this study have an impact on public health policy as well as the frequency of HIV screening in emergency departments across the country.

"I hope that public health and policymakers continue to realize that understanding disease epidemiology through surveillance data is essential for fighting infectious disease and that emergency department data is very important to those surveillance efforts," says Lyons. "I also hope that emergency departments are even more motivated to expand HIV screening given their role in monitoring trends in epidemiology that guide public health response."

Study authors included representatives from the Centers for Disease Control and Prevention, the Ohio Department of Health and Hamilton County Public Health.

Credit: 
University of Cincinnati

Genetic risks for nicotine dependence span a range of traits and diseases

Some people casually smoke cigarettes for a while and then stop without a problem, while others develop long-term, several-packs-per-day habits. A complex mix of environmental, behavioral and genetic factors appears to raise the risk of nicotine dependence.

Studies of groups of twins suggest that 40 to 70 percent of the risk is heritable. Until recently, however, studies have only explained about 1 percent of the observed variation in liability to nicotine dependence, using a genetic score based on how many cigarettes a person smokes per day.

A new study led by psychologists at Emory University offers a new model for examining this genetic risk. It leveraged genome-wide association studies for a range of traits and disorders correlated with nicotine dependence and explained 3.6 percent of the variation in nicotine dependence.

The journal Nicotine & Tobacco Research published the finding.

Higher polygenetic scores for risk of schizophrenia, depression, neuroticism, self-reported risk-taking, high body mass index and alcohol use disorder, along with a higher number of cigarettes smoked per day, were all indicators of a higher risk for nicotine dependence, the study found. And polygenetic scores associated with higher educational attainment lowered the risk for nicotine dependence, the results showed.

"If you look at the joint effect of all of these characteristics, our model accounts for nearly 4 percent of the variation in nicotine dependence, or nearly four times as much as what we learn when relying solely on a genetic index for the number of cigarettes someone smokes daily," says Rohan Palmer, senior author of the study and assistant professor in Emory's Department of Psychology, where he heads the Behavioral Genetics of Addiction Laboratory.

"What we're finding," Palmer adds, "is that to better leverage genetic information, we need to go beyond individual human traits and disorders and think about how risk for different behaviors and traits are interrelated. This broader approach can give us a much better measure for whether someone is at risk for a mental disorder, such as nicotine dependence."

"All of the traits and diseases we looked at are polygenic, involving multiple genes," adds Victoria Risner, first author of the study, who did the work as an Emory undergraduate majoring in neuroscience and behavioral biology. "That means that millions of genetic variants likely go into a complete picture for all of the heritable risks for nicotine dependence."

The researchers hope that others will build on their multi-trait, polygenetic model and continue to boost the understanding of the risk for such complex disorders. "The more we learn, the closer we can get to one day having a genetic test that clinicians can use to inform their assessment of someone's risk for nicotine dependence," Palmer says.

Although the hazards of smoking are well established, about 14 percent of Americans report daily use of tobacco. Around 500,000 people die each year in the United States from smoking or exposure to smoke, and another 16 million live with serious illnesses caused by tobacco use, including cancer, cardiovascular disease and pulmonary disease. While the toxic chemicals produced during smoking and vaping are what cause harmful health effects, it's the addictive component of nicotine that hooks people on these habits.

Risner worked on the current paper for her honors thesis. "Nicotine dependence was interesting to me because the vaping scene was just arriving while I was an undergraduate," she says. "I saw some of my own friends who were into vaping quickly becoming dependent on it, while some others who were using the same products didn't. I was curious about the genetic underpinnings of this difference."

The project leveraged genome-wide association studies for a range of traits and disorders. The researchers then looked for matching variants in genetic data from a nationally representative sample of Americans diagnosed with nicotine dependence. The results showed how polygenetic scores for the different traits and disorders either raised or lowered the risk for that dependence. The number of cigarettes smoked per day, self-perceived risk-taking and educational attainment were the most robust predictors.
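Conceptually, the modelling step amounts to entering several polygenic scores as joint predictors of nicotine dependence and asking how much of the variance they explain together. The sketch below uses simulated data and is not the study's actual data or pipeline:

```python
# Conceptual sketch on simulated data (not the study's data or code):
# combine several polygenic scores as joint predictors and report the
# variance in nicotine-dependence liability they explain together (R^2).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
traits = ["cigarettes_per_day", "schizophrenia", "depression",
          "risk_taking", "educational_attainment"]
X = rng.normal(size=(n, len(traits)))  # hypothetical polygenic scores, one column per trait

# Simulated liability: small positive effects, education protective, plus noise
weights = np.array([0.15, 0.08, 0.07, 0.06, -0.06])
y = X @ weights + rng.normal(scale=1.0, size=n)

model = LinearRegression().fit(X, y)
print(f"Variance explained jointly (R^2): {model.score(X, y):.3f}")
```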

The multi-trait, polygenetic model offers a road map for future studies. A clearer picture of heritability for nicotine dependence, for instance, may be gained by adding more risk associations to the model (such as nicotine metabolism) and clusters of polygenic traits (such as anxiety along with neuroticism).

"As we continue to zero in on who is most at risk for becoming nicotine dependent, and what inter-related factors, whether genetic or environmental, may raise their risk, that could help determine what intervention might work best for an individual," Palmer says.

"Just a few decades ago, it was not well understood that nicotine dependence could have a genetic component," Risner says. "Genetic studies may help reduce some of the stigma society has against substance use disorders, while also making treatment more accessible."

Risner graduated from Emory in 2019 and is now in medical school at the University of North Carolina, Chapel Hill. This summer, she's applying the coding and analytical skills she learned at Emory to conduct research into genetic factors that may raise the risk for pre-term births.

Credit: 
Emory Health Sciences

New treatment options for deadliest of cancers

A new way to target a mutant protein which can cause the deadliest of cancers in humans has been uncovered by scientists at the University of Leeds.

The mutated form of the RAS protein has been referred to as the "Death Star" because of its ability to resist treatments and is found in 96% of pancreatic cancers and 54% of colorectal cancers.

RAS is a protein important for health but in its mutated form it can be switched on for longer, leading to the growth of tumours.

One drug has already been approved for treatment but it can only tackle a small subset of the total number of cancers driven by RAS.

Now a team from the University of Leeds' School of Molecular and Cellular Biology has gone further and found a new way to target the protein to pave the way for a greater range of treatments for more patients.

Lead author of the report, Dr Darren Tomlinson, of the Astbury Centre for Structural and Molecular Biology, said: "The RAS protein has been referred to as the Death Star with good reason and that's because it's spherical and impenetrable, essentially preventing drugs binding and inhibiting it. We've identified a further chink in the Death Star that can be used to develop new drugs beyond the ones already in development."

The researchers used the School of Molecular and Cellular Biology's own patented Affimer biotechnology platform to pinpoint druggable "pockets" on the protein to allow effective treatment to take place.

The study was funded by the Wellcome Trust, the Medical Research Council, the Technology Strategy Board and Avacta and is published today (30 June 2021) in the journal, Nature Communications.

Dr Tomlinson added: "This work opens up the door for hundreds of other disease targets. We could effectively probe any protein involved in any disease for druggable pockets in the future."

Co-first author of the report and PhD student, Amy Turner, from the School of Molecular and Cellular Biology, said: "Because it causes 20-30% of all known cancers, RAS really is the Holy Grail of therapeutic targets. The fact that it has previously been termed 'undruggable' has allowed us to demonstrate the huge impact that our Affimer technology can have when it comes to treating challenging pathologies. We have already identified small molecules that bind to RAS, so it will be very exciting to be involved in developing these over the next few years."

The researchers say work on expanding more ways to target RAS is still in its early stages but they believe their discovery could lead to new treatments, putting Leeds at the forefront of the fight against cancer.

Credit: 
University of Leeds

Researchers discuss common errors in internet energy analysis to develop best practices

When it comes to understanding and predicting trends in energy use, the internet is a tough nut to crack. So say energy researchers Eric Masanet, of UC Santa Barbara, and Jonathan Koomey, of Koomey Analytics. The two just published a peer-reviewed commentary in the journal Joule discussing the pitfalls that plague estimates of the internet's energy and carbon impacts.

The paper describes how these errors can lead well-intentioned studies to predict massive energy growth in the information technology (IT) sector, which often doesn't materialize. "We're not saying the energy use of the internet isn't a problem, or that we shouldn't worry about it," Masanet explained. "Rather, our main message is that we all need to get better at analyzing internet energy use and avoiding these pitfalls moving forward."

Masanet, the Mellichamp Chair in Sustainability Science for Emerging Technologies at UCSB's Bren School of Environmental Science & Management, has researched energy analysis of IT systems for more than 15 years. Koomey, who has studied the subject for over three decades, was for many years a staff scientist and group leader at Lawrence Berkeley National Lab, and has served as a visiting professor at Stanford University, Yale University and UC Berkeley. The article, which has no external funding source, arose out of their combined experiences and observations and was motivated by the rising public interest in internet energy use. Although the piece contains no new data or conclusions about the current energy use or environmental impacts of different technologies and sectors, it raises some important technical issues the field currently faces.

Masanet and Koomey's work involves gathering data and building models of energy use to understand trends and make predictions. Unfortunately, IT systems are complicated and data is scarce. "The internet is a really complex system of technologies and it changes fast," Masanet said. What's more, in the competitive tech industry, companies often guard energy and performance data as proprietary trade secrets. "There's a lot of engineering that goes into their operations," he added, "and they often don't want to give that up."

Four fallacies

This feeds directly into the first of four major pitfalls the two researchers identified: oversimplification. Every model is a simplification of a real-world system. It has to be. But simplification becomes a pitfall when analysts overlook important aspects of the system. For example, models that underestimate improvements to data center efficiency often overestimate growth in their energy use.

Some simplification is understandable, said Koomey, since often researchers simply don't have enough data. But too much simplification runs the risk of producing inaccurate results, he stressed.
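
To see how much an overlooked efficiency trend can matter, consider a toy compounding calculation (the growth and efficiency rates below are assumed purely for illustration, not taken from the commentary):

    # Toy example of the oversimplification pitfall: ignoring efficiency gains
    # inflates a ten-year energy projection (rates are assumed, not measured).
    years = 10
    demand_growth = 0.20      # assumed annual growth in computing demand
    efficiency_gain = 0.15    # assumed annual drop in energy per unit of computing

    naive = (1 + demand_growth) ** years
    with_efficiency = ((1 + demand_growth) * (1 - efficiency_gain)) ** years
    print(f"naive projection: {naive:.1f}x energy; with efficiency gains: {with_efficiency:.1f}x")

With these assumed rates, a model that ignores efficiency projects roughly a sixfold increase in energy use over a decade, while one that includes it projects only a modest rise.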

The second pitfall is essentially the conflation of internet usage with energy demand: data traffic and energy use are not equivalent. "It seems rational to say that a 20% increase in data traffic would lead to a 20% increase in the energy use of the internet," Masanet said, "but that's not the way the system works." Networks have high fixed energy use, so energy demand doesn't change much when data traffic changes.

Imagine data throughput on the internet as passengers on a train. Most of the energy goes into moving the train. Doubling the number of people on the train won't double the amount of energy the train requires. "So there's this smaller, marginal effect that's well known to network engineers but is not always known among energy analysts," Masanet said.
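
A back-of-the-envelope sketch (with made-up numbers) shows the same fixed-versus-marginal logic:

    # Hypothetical illustration: network energy is mostly a fixed baseline,
    # so doubling traffic raises energy only slightly (numbers are assumed).
    FIXED_KWH_PER_DAY = 1000.0     # assumed baseline to keep the network running
    MARGINAL_KWH_PER_TB = 2.0      # assumed incremental energy per terabyte carried

    def network_energy(traffic_tb):
        return FIXED_KWH_PER_DAY + MARGINAL_KWH_PER_TB * traffic_tb

    base = network_energy(100)     # 100 TB per day
    doubled = network_energy(200)  # traffic doubles...
    print(f"traffic +100% -> energy +{(doubled / base - 1) * 100:.0f}%")  # ...energy rises only ~17%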

The pace and nature of changes in internet technologies and data demand bring about the third pitfall: projecting too far into the future. In a retrospective study published in 2020, Masanet, Koomey and their colleagues found that earlier projections had overestimated data center energy growth; those projections didn't foresee large increases in IT virtualization or shifts of workloads to the cloud.

Not only do we develop new and improved technologies, but industry structures and consumer demands often change as well. Just five years ago, for instance, few people could have predicted the massive amounts of processing power now devoted to bitcoin mining. That said, the researchers caution against extrapolating such early growth trends too far into the future. "When the internet was growing rapidly in the late 1990s, some analysts projected that IT would account for half of U.S. electricity use within a decade," Koomey said.

Given all this uncertainty, it's no wonder that analysts can miss the mark in their predictions. IT changes so rapidly that projections simply won't be accurate beyond a few years, Masanet said. In contrast, projecting decades out is common in other domains of energy analysis, where it can be crucial for planning power grid capacity or transportation infrastructure. Carrying those long time horizons over to IT, a sector that changes far more rapidly and unpredictably, can create unrealistic expectations about what energy forecasts can deliver.

The final pitfall the duo identified stemmed from a lack of proper scope: overgeneralization. When data is scarce, it's tempting to apply growth rates from one part of a system to the system as a whole. Masanet offered the rise of cloud computing as one example. Although the energy use of many cloud companies grew rapidly over the last decade, this wasn't the whole picture for data centers. The energy use of traditional data centers fell concurrently as that part of the sector shrank, keeping the overall energy use of data centers in check during that same time period.
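
A toy aggregate calculation (the numbers are invented for illustration, not drawn from the study) shows how applying one segment's growth rate to the whole sector overstates the total:

    # Overgeneralization pitfall: extrapolating the fast-growing cloud segment
    # to all data centers ignores the shrinking traditional segment (assumed numbers).
    cloud_start, cloud_end = 20.0, 60.0              # TWh/yr, hypothetical
    traditional_start, traditional_end = 80.0, 45.0  # TWh/yr, hypothetical

    actual_total = cloud_end + traditional_end                                    # 105 TWh
    naive_total = (cloud_start + traditional_start) * (cloud_end / cloud_start)   # 300 TWh
    print(f"actual sector total: {actual_total:.0f} TWh; naive extrapolation: {naive_total:.0f} TWh")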

Similarly, while the rise in streaming video may drive up energy use for data centers, it could reduce home energy use by decreasing the number of TV set-top boxes, Koomey explained.

"You've got to look at the whole system and avoid extrapolating from just one part," Masanet said.

Going forward

In addition to dealing with a dearth of data and a complex system, tech companies and analysts don't have any standards for reporting internet energy use. Automobiles have miles per gallon -- the agreed-upon efficiency metric in the U.S. -- but there's no analogue for data centers yet. One reason is that every data center is different: it's difficult to compare a center primarily engaged in scientific computing with another that mostly handles web hosting, Masanet pointed out.

Congress recently passed the Energy Act of 2020, which has provisions for data centers. "It's a positive sign that we're moving toward having those benchmarks that could enable more reporting from companies, at least in the U.S.," Masanet said.

"One thing the research community can do is help develop these metrics so that if companies do want to report and still stay confidential, they can have standard, agreed-upon, scientific metrics to use," he added.

"The world needs better IT energy predictions, and the analysis community needs to get a lot better at producing these, ourselves included," Masanet continued. "We've encountered these pitfalls in our own work.

"Now we need to recognize them and figure out how to avoid them in the future so that the we all can provide more rigorous outputs, because those outputs are becoming more and more important."

Koomey emphasized the importance of exercising restraint when confronting complex systems with persistent data gaps. While it can be appealing to make assumptions when data doesn't exist, that's not the best approach, he said. It's better to collect more data, acknowledge caveats and remain modest when making claims.

"Our goal is to promote accurate analysis of information technology, so that policymakers can make judgments based on reality rather than misconceptions," he said. "Data on IT electricity use will always lag behind reality because much relevant data are closely-guarded secrets, and these systems change so quickly. Analysts need to accept these inherent limitations and not make strong claims based on speculation or too many assumptions."

Credit: 
University of California - Santa Barbara