
Children with asymptomatic malaria a 'hidden risk' to disease control efforts

People infected with malaria who show no symptoms pose a hidden risk to efforts to control the disease, having been found responsible for most malaria infections in mosquitoes, according to a study published in The Lancet Infectious Diseases.

Researchers from the Infectious Diseases Research Collaboration (IDRC), London School of Hygiene & Tropical Medicine (LSHTM), Radboud university medical center and University of California, San Francisco, found asymptomatic children in the Uganda study were the biggest source of malaria parasites transmitted to mosquitoes. This could provide a new opportunity for control efforts by targeting this infectious reservoir.

Malaria presents a major global health threat, with 94% of cases occurring on the African continent, according to the WHO World Malaria Report 2020. The parasite is transmitted to humans through the bite of an infected female Anopheles mosquito. The predominant and most deadly species, Plasmodium falciparum, accounts for over 75% of malaria mortality worldwide and is highly prevalent in Uganda.

Malarial parasites depend on a life cycle in which they constantly move back and forth between humans and mosquitoes. Successfully interrupting transmission of the disease can involve clearing parasites from human 'hosts' using anti-malaria drugs.

Nagongera sub-county (Tororo district) in eastern Uganda has historically had very high malaria transmission, but following intensive control efforts - insecticide-treated bednets, indoor residual spraying (IRS) with insecticides and access to antimalarial drugs - infections, or at least symptomatic cases, have declined remarkably.

The research team aimed to investigate patterns of malaria infection and understand more about transmission in the area. The study involved two years of regularly testing more than 500 people for evidence of malaria parasites. The genetic make-up of parasites was determined, as well as their ability to infect mosquitoes.

The researchers found that individuals who were asymptomatic were unknowingly responsible for most mosquito infections in the study. People with symptomatic infections were responsible for less than 1% of mosquito infections and appeared to play a negligible role in sustaining transmission.

School-aged children (5-15 years) were responsible for over half (59%) of the infectious reservoir, followed by children under 5 (26%) and people aged 16 years and older (16%). Surprisingly, the researchers found that just four children were linked to 60% of the infected mosquitoes studied.

Co-author Dr John Rek from IDRC said: "These findings are a real eye opener in the fight against malaria. We found that infections in school-aged children drive malaria transmission. Some children harboured billions of malaria parasites in their bloodstream without experiencing symptoms."

LSHTM co-author Professor Sarah Staedke said: "School-aged children are an important reservoir of malaria parasites that could be easily targeted for control interventions, such as chemoprevention through intermittent preventive treatment.

"This would benefit individual children, may reduce malaria transmission, and could help sustain malaria gains if intense vector control measures are interrupted."

Understanding transmission among asymptomatic cases is particularly important in areas where malaria control has been successful but there is a risk of resurgence when control measures are relaxed or withdrawn. Asymptomatic children who keep malaria circulating at relatively low levels could be enough to make infections rebound quickly if control efforts are not maintained. In other parts of Uganda, when intense malaria control with indoor residual spraying was halted, infections rebounded within weeks.

Principal author Professor Teun Bousema, from Radboud university medical center, said: "Our study demonstrates that even when malaria appears under control, there is a reservoir of infected individuals who can sustain the spread of this deadly disease. Unless their infections are targeted, malaria can quickly return."

Overall, these findings provide evidence that asymptomatic infections are an important source of onward transmission to mosquitoes. Many malaria infections that contribute to transmission are initially below the level detectable by conventional diagnostic tests, including microscopy and rapid diagnostic tests.

Professor Moses Kamya, IDRC co-author of the study, said: "Only through focused interventions, ideally supplemented by highly sensitive testing, can we target the reservoirs of infection in school-age children."

The study authors acknowledge limitations of the study, including the length of time between participant sample collection and mosquito feeds, meaning they did not routinely measure infectiousness in the first few weeks of asymptomatic infections. They also noted that study participants had exceptionally good access to care, whereas in other settings people with symptomatic malaria might develop more transmissible infections if treatment is not administered quickly.

Credit: 
London School of Hygiene & Tropical Medicine

Convergent mechanism of aging discovered

image: Andrea Annibal uses the mass spectrometer to investigate various metabolites in long-lived worms and mice.

Image: 
Link/Max Planck Institute for Biology of Ageing, 2021

Several different causes of ageing have been discovered, but the question remains whether there are common underlying mechanisms that determine ageing and lifespan. Researchers from the Max Planck Institute for Biology of Ageing and the CECAD Cluster of Excellence in Ageing Research at the University of Cologne have now come across folate metabolism in their search for such basic mechanisms. Its regulation underlies many known ageing signalling pathways and leads to longevity. This may provide a new possibility to broadly improve human health during ageing.

In recent decades, several cellular signalling pathways have been discovered that regulate the lifespan of an organism and are thus of enormous importance for ageing research. When researchers altered these signalling pathways, this extended the lifespan of diverse organisms. However, the question arises whether these different signalling pathways converge on common metabolic pathways that are causal for longevity.

The search begins in the roundworm

The scientists started their search in the roundworm Caenorhabditis elegans, a well-known model organism for ageing research. "We studied the metabolic products of several long-lived worm lines. Among other things, our analyses revealed clear changes in the metabolites and enzymes of the folate cycle in all worm lines. Since folate metabolism plays a major role in human health, we wanted to further pursue its role in longevity", explains Andrea Annibal, lead author of the study.

A common mechanism for longevity

Folates are essential vitamins important for the synthesis of amino acids and nucleotides - the building blocks of our proteins and DNA. "We tuned down the activity of specific enzymes of folate metabolism in the worms. Excitingly, the result was an increase in lifespan of up to 30 percent", says Annibal. "We also saw that in long-lived strains of mice, folate metabolism is similarly tuned down. Thus, the regulation of folate metabolism may underlie not only the various longevity signalling pathways in worms, but also in mammals."

"We are very excited by these findings because they reveal the regulation of folate metabolism as a common shared mechanism that affects several different pathways of longevity and is conserved in evolution", adds Adam Antebi, director at the Max Planck Institute for Biology of Ageing. "Thus, the precise manipulation of folate metabolism may provide a new possibility to broadly improve human health during ageing." In future experiments, the group aims to find out the mechanism by which the folate metabolism affects longevity.

Credit: 
Max-Planck-Gesellschaft

Developing countries pay steep economic & health costs because of high car air pollution

In a study published in the journal Environment International, the University of Surrey led an international team of air pollution experts in monitoring pollution hotspots in 10 global cities: Dhaka (Bangladesh); São Paulo (Brazil); Guangzhou (China); Medellín (Colombia); Cairo (Egypt); Addis Ababa (Ethiopia); Chennai (India); Sulaymaniyah (Iraq); Blantyre (Malawi); and Dar-es-Salaam (Tanzania).

Surrey's Global Centre for Clean Air Research (GCARE) set out to investigate whether the amount of fine air pollution particles (PM2.5) drivers inhaled is connected to the duration drivers spend in pollution hotspots and socio-economic indicators such as gross domestic product (GDP).

Across all the cities in the study, researchers found that drivers only needed to spend a short amount of time in high-pollution hotspots to inhale a significant amount of PM2.5. For example, drivers in Guangzhou and Addis Ababa spent 26 and 28 per cent of their commute in hotspot areas respectively, yet these stretches contributed 54 and 56 per cent of the total air pollution inhaled on their trip.
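The disproportion between time share and dose share implies that hotspot concentrations are several times higher than along the rest of the route. A minimal back-of-the-envelope sketch (assuming a constant breathing rate, with an illustrative concentration ratio that is not reported in the study) makes the arithmetic concrete:

```python
def hotspot_dose_fraction(time_frac, conc_ratio):
    """Fraction of total inhaled PM2.5 dose accumulated in hotspots.

    Assumes a constant breathing rate, so inhaled dose is proportional
    to concentration x time. `conc_ratio` is the hotspot concentration
    relative to the rest of the route (an assumed, illustrative value).
    """
    hotspot_dose = time_frac * conc_ratio
    other_dose = 1.0 - time_frac  # rest of route at baseline concentration
    return hotspot_dose / (hotspot_dose + other_dose)

# Guangzhou-like figures: 26% of commute time in hotspots.
# A hotspot concentration roughly 3.3x the rest of the route
# reproduces the reported ~54% dose share.
print(round(hotspot_dose_fraction(0.26, 3.34), 2))
```

Under this simple model, a hotspot only around three times more polluted than the rest of the route is enough to account for the majority of the inhaled dose despite occupying a quarter of the trip.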

The researchers found that the cities where drivers were exposed to the highest levels of PM2.5 pollution - Dar-es-Salaam, Blantyre and Dhaka - also experienced higher death rates per 100,000 commuting car population per year. The low PM2.5 levels in Medellín, São Paulo and Sulaymaniyah corresponded with very low death rates.

The study assessed economic losses by measuring a city's death rate caused by in-car PM2.5 exposure against its GDP per capita. It found that, for most cities, lower GDP was directly linked to greater economic losses caused by in-car PM2.5 exposure - with Cairo and Dar-es-Salaam impacted the most (losses of 8.9 and 10.2 million US dollars per year, respectively).

The team also found that, except for Guangzhou, cities with higher GDP per capita had fewer hotspot areas along an average route, thus decreasing the risk to drivers.

Professor Prashant Kumar, Principal Investigator of CArE-Cities Project, Associate Dean (International) and Founding Director of GCARE at the University of Surrey, said: "Our global collaborative project has confirmed that air pollution disproportionately affects developing countries. Many countries are caught in a vicious cycle where their low GDP leads to higher pollution exposure rate for drivers, which leads to poorer health outcomes, which further damages the economy of those cities. This is discouraging news - but it should galvanise the international community to find and deploy measures that mitigate the health risks faced by the world's most vulnerable drivers."

Professor Shi-Jie Cao, a collaborative partner from the Southeast University, said: "If we are ever to make a world where clean air is available to all, it will take a truly global collaborative effort - such as CArE-Cities. We hope to continue to work closely with Surrey and other global partners, sharing knowledge and expertise that will make a cleaner future a reality."

Professor Adamson Muula, a collaborative partner from formerly University of Malawi and now Head of Public Health at the Kamuzu University of Health Sciences (KUHeS), said: "If developing countries are to not be left behind in the struggle against air pollution and climate change, it is important that we build the capacity and knowledge to gather on-the-ground data. This project is a small but a significant step in the right direction for Malawians; a direction which will lead to better decisions and cleaner air for Malawi."

Credit: 
University of Surrey

How do we know where things are?

image: In Experiment 1, the frame moves left and right but instead of seeing the locations of the blue and red edges where they are when they flash, they always appear with the blue flash on the left and separated by the width of the frame, as if the frame were not moving. When the frame moves more than its width as shown here, the red edge is physically to the left of the blue when they flash at the end of the frame's motion, and yet the blue still appears to the left of red, separated again by almost the width of the frame. See Movie 1 (http://movie-usa.glencoesoftware.com/video/10.1073/pnas.2102167118/video-1).

Image: 
Figure by P.Cavanagh

Our eyes move three times per second. Every time we move our eyes, the world in front of us flies across the retina at the back of our eyes, dramatically shifting the image the eyes send to the brain; yet, as far as we can tell, nothing appears to move. A new study provides new insight into this process known as "visual stabilization". The results are published in the Proceedings of the National Academy of Sciences.

"Our results show that a framing strategy is at work behind the scenes all the time, which helps stabilize our visual experience," says senior author Patrick Cavanagh, a research professor in psychological and brain sciences at Dartmouth and a senior research fellow in psychology at both Glendon College and the Centre for Vision Research at York University. "The brain has its own type of steadycam, which uses all sorts of cues to stabilize what we see relative to available frames, so that we don't see a shaky image like we do in handheld movies taken with a smartphone. The visual world around us is the ultimate stable frame but our research shows that even small frames work: the locations of a test within the frame will be perceived relative to the frame as if it were stationary. The frame acts to stabilize your perception."

One such example is when someone waves goodbye to you from the window of a moving bus. Their hand appears to move up and down relative to the window rather than following the snake-like path it actually traces out from the moving bus. The bus window acts as a frame, and the motion of the waving hand is seen relative to that frame.

The study consisted of two experiments that tested how a small square frame moving on a computer monitor affected participants' judgments of location. The experiments were conducted both in person, with eight individuals including two of the authors, and online due to the COVID-19 pandemic, with 274 participants recruited from York University, of whom 141 provided complete data. The data were very similar for both groups.

In Experiment 1, a white, square frame moves left and right, back and forth, across a grey screen and the left and right edges of the square flash when the square reaches the end of its path: the right edge flashes blue at one end of the travel and the left edge flashes red at the other, as shown in the figure. Participants were asked to adjust a pair of markers at the top of the screen to indicate the distance they saw between the flashed edges. Experiment 1 had two conditions. The first condition, demonstrated in Movie 1, evaluated how far apart the outer left and right edges of the square frame appeared. In this example, the frame travel is longer than the frame size so the red flash is physically to the left of the blue flash. When the frame is moving slowly at the start, we see the flashes where they really are, with red to the left. However, once the frame is moving fast enough, blue is seen left of red. This is where they would be, if the frame were actually stationary. The moving frame fools us by stabilizing our judgments of location. Further on in the movie, the frame briefly fades out to reveal that the red flash is actually to the left of the blue flash, which has been the case the entire time.

The second condition assessed the travel of the frame's physical edge. The left edge flashes at each end of the frame's travel and the distance the frame travels is seen as the space between the two flashes.

The data from both conditions of Experiment 1 demonstrated that participants perceived the flashed edges of the frame as if it were stable even though it was clearly moving, illustrating what the researchers call the "paradoxical stabilization" produced by a moving frame.

Experiment 2 again demonstrated the stabilizing power of a moving frame (see Movie 2) by flashing a red disc and a blue disc at the same location within a moving frame. The square frame moves back and forth from left to right while the disc flashes red and blue in alternation. As in Experiment 1, participants were asked to indicate the perceived separation between the red and blue discs. Even though there is no physical separation between the discs, the moving frame creates the appearance that the two discs are located to the left and right of their true locations, relative to the frame where they flashed. In other words, participants perceived the location of the discs relative to the frame, as if it were stationary, and this was true across a wide range of frame speeds, sizes, and path lengths.

"By using flashes inside a moving frame, our experiments triggered a paradoxical form of visual stabilization, which made the flashes appear in positions where they were never presented," says Cavanagh. "Our results demonstrate a 100% stabilization effect triggered by the moving frames - the motion of the frame has been fully discounted. These data are the first to show a frame effect that matches our everyday experience where, each time our eyes move, the motion of the scene across our retinas has been fully discounted making the world appear stable."

"In the real-world, the scene in front of us acts as the anchor to stabilize our surroundings," Cavanagh says. Discounting the motion of the world as our eye move makes a lot of sense, as most scenes (i.e. house, workplace, school, outdoor environment) are not moving, unless an earthquake is occurring.

"Every time our eyes move, there's a process that blanks out the massive blur caused by the eye movement. Our brain stitches this gap together so that we don't notice the blank, but it also uses the motion to stabilize the scene. The motion is both suppressed and discounted so that we can keep track of the location of objects in the world," says Cavanagh.

Based on the study's results, the research team plans to explore visual stabilization further using brain imaging at Dartmouth.

Credit: 
Dartmouth College

Corticosteroids may be an effective treatment for COVID-19 complications in children

Corticosteroids may be an effective treatment for children who develop a rare but serious condition after COVID-19 infection.

This is the finding of an international study of 614 children, published in the New England Journal of Medicine, led by Imperial College London.

All children in the study developed a serious disorder following COVID-19 infection. This condition, called multi-system inflammatory syndrome in children (MIS-C), is thought to affect 1 in 50,000 children with SARS-CoV-2 infection.

The new disorder, which is also called paediatric inflammatory multi-system syndrome temporally associated with SARS-CoV-2 infection (PIMS-TS), affects children of all ages but is more common in older children and teenagers. The disorder generally occurs 2-6 weeks after infection with the SARS-CoV-2 virus.

The illness is characterised by persistent high fever, often accompanied by abdominal pain, vomiting, red eyes and red rash. Severely affected children have developed heart inflammation, with shock and failure of multiple organs.

Fortunately, with optimal treatment the majority of affected children have recovered well. However, worldwide most reports suggest a fatality rate of 2-4%.

An important concern has been that some affected children have developed inflammation of their arteries that supply the heart with blood (called coronary arteries), resulting in widening of these arteries. This is also known to happen in another condition called Kawasaki disease.

The new study, supported by the EU's Horizon 2020 programme, investigated two initial treatments for this condition: a type of steroid called corticosteroids (such as methylprednisolone) and antibody treatment (called immunoglobulin). The antibodies come from human blood and have been shown to reduce inflammation in the body. The study also compared initial treatment with steroids and immunoglobulin together.

The study involved hundreds of doctors worldwide uploading information about patient outcomes onto an online database, and was not a randomised controlled trial (see Notes to Editors).

All three treatments (immunoglobulin, immunoglobulin combined with corticosteroids, and corticosteroids alone) resulted in more rapid resolution of inflammation, as measured by the level of C-reactive protein (CRP), a protein that indicates inflammation levels in the body.

The CRP fell by half approximately one day sooner in those receiving treatment. There were no clear differences between the three treatments in the rate of recovery from organ failure, or in progression to organ failure.

The number of fatal cases (2%) was too low to enable comparison between treatments, but death was included in a combined assessment with organ failure, which found no significant differences between the three treatments.

However, when analysis was restricted to the 80% of children who met the World Health Organization's criteria for MIS-C, there was evidence of a lower rate of organ support or death at 2 days in those receiving steroids alone as initial treatment, compared to immunoglobulin alone.

Dr Elizabeth Whittaker, one of the authors of the study from Imperial's Department of Infectious Disease, and one of the first doctors in the world to identify this condition, together with colleagues at Imperial College and Imperial College Healthcare NHS Trust, said: "The finding that outcome is similar for patients treated with steroids alone as with those treated with steroids and immunoglobulin or immunoglobulin alone suggests that steroids may be a cheaper and more available alternative to immunoglobulin. Corticosteroids are cheap and available worldwide, whereas immunoglobulin is expensive and there is a worldwide shortage of it. This is a particular problem in many low- and middle-income countries."

However the authors stress there is insufficient data to establish that all three treatments are equivalent in preventing coronary artery aneurysms. Around 6 per cent of children in the study suffered a coronary artery aneurysm.

Professor Michael Levin, from the Department of Infectious Disease at Imperial, who led the study, said: "The study has been a real example of international collaboration and the willingness of paediatricians in many countries to share their data and experience to enable important questions as to optimal treatment to be answered. Our finding, that treatments with immunoglobulin, steroids or a combination of both agents all result in more rapid resolution of inflammation (and have similar rates of progression to organ failure or recovery from critical illness), will be of great value to paediatricians worldwide in their treatment of children with this new disorder. As immunoglobulin is unavailable or in short supply in many countries, and is expensive, the findings of this study may provide some reassurance for those who only have access to corticosteroids, particularly in those countries with more limited resources.

However it is important to note that our study does not yet provide a definitive answer as to whether any of the treatments lowers the risk of coronary artery aneurysms, as the numbers with this complication were too low. The study is continuing to enrol patients and our planned further analysis with larger numbers of patients should provide answers to this question."

Credit: 
Imperial College London

Like your olives bitter? Molecular breeding can make them even better!

image: The availability of a high-quality genome paves the way for molecular techniques to enhance olive quality and yield.

Image: 
NJAU academic journals

Olives, well known for their characteristic bitter taste, are in high demand owing to the popularity of the oil derived from them. The health benefits of olive oil are well documented, ranging from antiviral and anti-cancer to even anti-hypertensive effects. These benefits are attributed to "oleuropein," the most abundant secoiridoid found in olives.

An efficient way to enhance the quality of plant products is to use molecular methods to manipulate their genes and enhance their yield. With olives, however, this remains a challenge because of a lack of sufficient genome data.

So far, the genomes of two European olive varieties have been sequenced. But to fully decipher the sequence composition of the genetic material, computationally assembling these genomes is essential. This has been difficult to do with olives, owing to the high number of repetitive sequences in their genomes and their complex nature.

A team of scientists from China, led by Dr. Guodong Rao, sought to address this challenge by using the latest sequencing technologies to assemble these genomes. Dr. Rao explains, "In our study, a chromosome level high-quality olive genome was obtained, which largely improved the previous version of the genomes." Their study has been published in Horticulture Research.

For their study, the scientists first collected European olive leaf samples and extracted their DNA. They then used advanced sequencing technologies called "Oxford Nanopore third-generation sequencing" and "Hi-C" to obtain the sequences, and applied seven different assembly strategies to produce the final genome. The final genome underwent phylogenetic analysis and comparison with other related plant species.

The team's findings led to the successful identification of nine gene families with 202 genes involved in the biosynthesis of oleuropein, which is twice as many as were known until now! The scientists also revealed that a part of olive DNA is similar to that of soybean and sunflower. They found olives are genetically closest to the oleaster plant, also called the Russian olive.

Dr. Rao highlights the applications of this study, "Through the high-quality genome map we determined, it is possible to cultivate olive varieties and olive oil of higher quality in the future."

Hopefully, this research will find its way into olive breeding soon!

Credit: 
Nanjing Agricultural University The Academy of Science

Scientists demonstrate promising new approach for treating cystic fibrosis

image: Protein expression (green) in mouse lung treated with an oligonucleotide plus OEC, correcting splicing defect in cells.

Image: 
Kreda Lab, UNC School of Medicine

CHAPEL HILL, NC - UNC School of Medicine scientists led a collaboration of researchers to demonstrate a potentially powerful new strategy for treating cystic fibrosis (CF) and possibly a wide range of other diseases. It involves small nucleic acid molecules called oligonucleotides that can correct some of the gene defects underlying CF that are not addressed by existing modulator therapies. The researchers used a new delivery method that overcomes traditional obstacles to getting oligonucleotides into lung cells.

As the scientists reported in the journal Nucleic Acids Research, they demonstrated the striking effectiveness of their approach in cells derived from a CF patient and in mice.

"With our oligonucleotide delivery platform, we were able to restore the activity of the protein that does not work normally in CF, and we saw a prolonged effect with just one modest dose, so we're really excited about the potential of this strategy," said study senior author Silvia Kreda, PhD, an associate professor in the UNC Department of Medicine and the UNC Department Biochemistry & Biophysics, and a member of the Marsico Lung Institute at the UNC School of Medicine.

Kreda and her lab collaborated on the study with a team headed by Rudolph Juliano, PhD, Boshamer Distinguished Professor Emeritus in the UNC Department of Pharmacology, and co-founder and Chief Scientific Officer of the biotech startup Initos Pharmaceuticals.

About 30,000 people in the United States have CF, an inherited disorder in which gene mutations cause the functional absence of an important protein called CFTR. Absent CFTR, the mucus lining the lungs and upper airways becomes dehydrated and highly susceptible to bacterial infections, which occur frequently and lead to progressive lung damage.

Treatments for CF now include CFTR modulator drugs, which effectively restore partial CFTR function in many cases. However, CFTR modulators cannot help roughly ten percent of CF patients, often because the underlying gene defect is of the type known as a splicing defect.

CF and splicing defects

Splicing is a process that occurs when genes are copied out - or transcribed - into temporary strands of RNA. A complex of enzymes and other molecules then chops up the RNA strand and re-assembles it, typically after deleting certain unwanted segments. Splicing occurs for most human genes, and cells can re-assemble the RNA segments in different ways, so different versions of a protein can be made from a single gene. However, defects in splicing can lead to many diseases - including CF, when the CFTR gene transcript is mis-spliced.

In principle, properly designed oligonucleotides can correct some kinds of splicing defects. In recent years the U.S. Food and Drug Administration has approved two "splice switching oligonucleotide" therapies for inherited muscular diseases.

In practice, though, getting oligonucleotides into cells, and to the locations within cells where they can correct RNA splicing defects, has been extremely challenging for some organs.

"It has been especially difficult to get significant concentrations of oligonucleotides into the lungs to target pulmonary diseases," Kreda said.

Therapeutic oligonucleotides, when injected into the blood, have to run a long gauntlet of biological systems designed to keep the body safe from viruses and other unwanted molecules. Even when oligonucleotides get into cells, most are trapped within vesicles called endosomes and are sent back outside the cell or degraded by enzymes before they can do their work.

A new delivery strategy

The strategy developed by Kreda, Juliano, and their colleagues overcomes these obstacles by adding two new features to splice switching oligonucleotides: Firstly, the oligonucleotides are connected to short, protein-like molecules called peptides that are designed to help them to distribute in the body and get into cells. Secondly, there is a separate treatment with small molecules called OECs, developed by Juliano and Initos, which help the therapeutic oligonucleotides escape their entrapment within endosomes.

The researchers demonstrated this combined approach in cultured airway cells from a human CF patient with a common splicing-defect mutation.

"Adding it just once to these cells, at a relatively low concentration, essentially corrected CFTR to a normal level of functioning, with no evidence of toxicity to the cells," Kreda said.

The results were much better with OECs than without, and improved with OEC dose.

There is no mouse model for splicing-defect CF, but the researchers successfully tested their general approach using a different oligonucleotide in a mouse model of a splicing defect affecting a reporter gene. In these experiments, the researchers observed that the correction of the splicing defect in the mouse lungs lasted for at least three weeks after a single treatment - hinting that patients taking such therapies might need only sporadic dosing.

The researchers now plan further preclinical studies of their potential CF treatment in preparation for possible clinical trials.

Credit: 
University of North Carolina Health Care

Several persistent chemicals were found in fetal organs

image: Pauliina Damdimopoulou and Richelle Duque Björvang at the Department of Clinical Science, Intervention and Technology, Karolinska Institutet. Photo: Lasse Schullström

Image: 
Lasse Schullström

Researchers at Karolinska Institutet in Sweden found industrial chemicals in the organs of fetuses conceived decades after many countries had banned the substances. In a study published in the journal Chemosphere, the researchers urge decision makers to consider the combined impact of the mix of chemicals that accumulate in people and nature.

"These are important findings that call for regulators to consider the collective impact of exposure to multiple chemicals rather than evaluating just one chemical at a time," says first author Richelle Duque Björvang, PhD student at the Department of Clinical Science, Intervention and Technology at Karolinska Institutet.

The researchers studied concentrations of 22 persistent organic pollutants (POPs). These are toxic chemicals that remain in the environment for long periods of time and accumulate in humans through food, drinking water and air particles. EU member states and many other countries have signed a treaty that prohibits or restricts manufacturing and use of these chemicals.

In the study, the researchers examined samples of fetal fat tissue, liver, heart, lung and brain from 20 pregnancies that for various reasons had ended in stillbirth in the third trimester in 2015-2016. The researchers identified at least 15 of the 22 POPs in every organ. Four chemicals were found in all tissues in all fetuses. The most pervasive chemicals were:

HCB, a pesticide previously used to protect food crops from fungi;

DDE, a metabolite of DDT, an insect killer used in the mid-1900s;

Variants of PCBs, chemicals formerly used in a range of electrical products.

Most current methods for estimating fetal exposure to chemicals rely on maternal blood and placenta samples as proxies. The new study found that, for some of the chemicals, the concentrations in the fetal tissues exceeded those found in the maternal blood and placenta. This can be explained by the fact that these chemicals tend to accumulate in fat tissue due to their structure. However, levels in fetal liver and lung also exceeded those found in the mother. Some pesticides - PeCB, α-HCH, γ-HCH and oxychlordane - were detected in fetal tissue even when they were not quantified in maternal blood samples or the placenta. According to the researchers, these latest findings suggest that blood and placenta samples may give a misleading picture of the diversity and concentration of chemicals that babies are exposed to during early development.

This study only investigated the presence and concentration of the various chemicals but not their links to potential health risks. However, the researchers point out that several previous studies have linked early life exposure to POPs to adverse health outcomes such as low birth weight, gestational diabetes, ADHD, infertility, obesity and reduced sperm production. For example, the European Food Safety Authority (EFSA) recently revised their risk assessment of dioxins and dioxin-like PCBs, and concluded that the dietary intake in Europe is currently at a level that can disrupt fertility in men.

"Getting an accurate picture of chemical exposure in early human development is critical to assessing both short and long-term health consequences for future generations," says last author Pauliina Damdimopoulou, researcher in the Department of Clinical Science, Intervention and Technology at Karolinska Institutet. "Therefore, we believe today's approaches estimating fetal chemical exposure, for example in birth cohort studies, need to be updated to better reflect the likelihood that for some chemicals, fetal exposure is actually greater than what the blood and placenta samples show."

Thirteen of the pregnancies also had data from an earlier study on PFAS (chemicals used in frying pans, food packaging and firefighting foam). By combining these data, the researchers were able to assess the proportion of chemicals in each type of tissue. While pesticides and PCBs were significantly overrepresented in fat tissue, more than half of the chemicals in the fetal lung, brain, liver and heart were attributable to PFAS. Overall, the highest concentrations of the chemical mix were found in fat tissue and the lowest in the brain. The study also found that the relative exposure of baby boys was higher than that of baby girls.

"Studies conducted in the 1960s and 1970s, when POPs were widely in use, found higher levels compared to ours," Richelle Duque Björvang says. "This shows that political action leading to restrictions in the use of chemicals has an impact on population exposures, although in the case of persistent chemicals it will take multiple generations to get rid of the exposure."

The researchers recognize that the study has some limitations, including a relatively modest sample size and that it only included fetuses who had died in the womb late in the pregnancy. Thus, it may not be fully representative of babies born alive.

Credit: 
Karolinska Institutet

Can biodegradable polymers live up to the hype?

As consumers and corporations alike become more environmentally conscious, the chemical industry is working to find solutions to the plastic waste crisis. One idea is to use biodegradable polymers known as polyhydroxyalkanoates (PHA) as replacements for traditional plastic packaging and other materials. A feature article in Chemical & Engineering News, the weekly newsmagazine of the American Chemical Society, explores the possibilities and pitfalls of PHA.

PHA is not a new human invention; this class of polymers can be found in nature and is used to store cellular energy, writes Senior Editor Alex Tullo. Commercially, it is manufactured through the industrial fermentation of sugars or lipids. As cities around the world ban single-use plastic products, such as straws and bags, companies are working to commercialize PHA as a viable alternative. The main selling point is rapid biodegradability in a variety of environments. Demand has increased for PHA in recent years, with several companies opening or planning commercial plants in the U.S. and beyond. In addition, major food and beverage brands are planning to switch their packaging to PHA-based materials soon.

Despite its much-touted promise, there's reason to believe PHA might be too good to be true. Several companies have tried and failed to bring it to market in recent years, and PHA is much more expensive than its traditional plastic counterparts. Beyond that, some experts have published findings saying the biodegradability of PHA is overstated, and that the rapid degradation time is based on optimized laboratory conditions rather than real-world ones. However, PHA's boosters say that it's still a better alternative to non-biodegradable plastics, and that the industry may be on the cusp of a breakthrough.

Credit: 
American Chemical Society

SNMMI Image of the Year: PET imaging measures cognitive impairment in COVID-19 patients

image: A: COVID-19-related spatial covariance pattern of cerebral glucose metabolism overlaid onto an MRI template. Voxels with negative region weights are color-coded in cool colors, and regions with positive region weights in hot colors. B: Association between the expression of COVID-19-related covariance pattern and the Montreal Cognitive Assessment (MoCA) score adjusted for years of education. Each dot represents an individual patient. C: Results of a statistical parametric mapping analysis. Upper row illustrates regions that show significant increases of normalized FDG uptake in COVID-19 patients at 6-months follow-up compared to the subacute stage (paired t test, p < 0.01, false discovery rate-corrected). Bottom row depicts regions that still show significant decreases of normalized FDG uptake in COVID-19 patients at 6-months follow-up compared to the age-matched control cohort at an exploratory statistical threshold (two-sample t test, p < 0.005).

Image: 
G Blazhenets et al., Department of Nuclear Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg.

Reston, VA--The effects of COVID-19 on the brain can be accurately measured with positron emission tomography (PET), according to research presented at the Society of Nuclear Medicine and Molecular Imaging (SNMMI) 2021 Annual Meeting. In the study, newly diagnosed COVID-19 patients, who required inpatient treatment and underwent PET brain scans, were found to have deficits in neuronal function and accompanying cognitive impairment, and in some, this impairment continued six months after their diagnosis. The detailed depiction of areas of cognitive impairment, neurological symptoms and comparison of impairment over a six-month time frame has been selected as SNMMI's 2021 Image of the Year.

Each year, SNMMI chooses an image that best exemplifies the most promising advances in the field of nuclear medicine and molecular imaging. The state-of-the-art technologies captured in these images demonstrate the capacity to improve patient care by detecting disease, aiding diagnosis, improving clinical confidence, and providing a means of selecting appropriate treatments. This year, the SNMMI Henry N. Wagner, Jr., Image of the Year was chosen from more than 1,280 abstracts submitted to the meeting and voted on by reviewers and the society leadership.

"As the SARS-CoV-2 pandemic proceeds, it has become increasingly clear that neurocognitive long-term consequences occur not only in severe COVID-19 cases, but in mild and moderate cases as well. Neurocognitive deficits like impaired memory, disturbed concentration and cognitive problems may persist well beyond the acute phase of the disease," said Ganna Blazhenets, PhD, a post-doctoral researcher in Medical Imaging at the University Medical Center Freiburg, in Freiburg, Germany.

To study cognitive impairment associated with COVID-19, researchers carried out a prospective study on recently diagnosed COVID-19 patients who required inpatient treatment for non-neurological complaints. A cognitive assessment was performed, followed by imaging with 18F-FDG PET if at least two new neurological symptoms were present. By comparing COVID-19 patients to controls, the Freiburg group established a COVID-19-related covariance pattern of brain metabolism with most prominent decreases in cortical regions. Across patients, the expression of this pattern showed a very high correlation with the patients' cognitive performance.
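Covariance-pattern analyses of this kind are often built on a principal component decomposition of regional uptake values, with each subject's "expression" of the pattern given by a projection score. The sketch below illustrates the general idea on synthetic data; the region counts, noise levels and cognitive scores are invented for illustration, and the study's actual method may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (all numbers illustrative): regional FDG uptake for
# 20 patients and 20 controls across 30 brain regions.
n_regions = 30
pattern_true = rng.normal(0, 1, n_regions)   # a spatial pattern of altered metabolism
severity = rng.uniform(0.5, 2.0, 20)         # per-patient pattern expression
controls = rng.normal(1.0, 0.05, size=(20, n_regions))
patients = (rng.normal(1.0, 0.05, size=(20, n_regions))
            + np.outer(severity, 0.05 * pattern_true))

# Derive a covariance pattern from the combined, mean-centered data:
# the first principal component of subject-by-region variation.
data = np.vstack([patients, controls])
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pattern = vt[0]

# Each subject's "expression" of the pattern = projection onto it.
expression = centered @ pattern

# Correlate patient expression scores with a synthetic cognitive score
# (standing in for MoCA, adjusted scores in the real study).
cognition = 30 - 5 * severity + rng.normal(0, 0.5, 20)
r = np.corrcoef(expression[:20], cognition)[0, 1]
print(round(abs(r), 2))
```

On this synthetic data the pattern-expression score tracks the simulated cognitive score closely, which is the kind of tight association the Freiburg group reports between pattern expression and patients' cognitive performance.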

Follow-up PET imaging was performed six months after the initial COVID-19 diagnosis. Imaging results showed a significant improvement in the neurocognitive deficits in most patients, accompanied by an almost complete normalization of the brain metabolism.

"We can clearly state that a significant recovery of regional neuronal function and cognition occurs for most COVID-19 patients based on the results of this study. However, it is important to recognize that evidence of longer-lasting deficits in neuronal function and accompanying cognitive deficits is still measurable in some patients six months after manifestation of the disease," noted Blazhenets. "As a result, post-COVID-19 patients with persistent cognitive complaints should be presented to a neurologist and possibly allocated to cognitive rehabilitation programs."

"18F-FDG PET is an established biomarker of neuronal function and neuronal injury," stated SNMMI's Scientific Program Committee chair, Umar Mahmood, MD, PhD. "As shown in the Image of the Year, it can be applied to unravel the neuronal correlates of cognitive decline in patients after COVID-19. Since 18F-FDG PET is widely available, it may aid in the diagnostic work-up and follow-up of patients with persistent cognitive impairment after COVID-19."

Credit: 
Society of Nuclear Medicine and Molecular Imaging

Model helps analyze decision-making on adopting Type 2 diabetes medical guidelines

image: Eunice E. Santos, the dean of the School of Information Sciences at the University of Illinois Urbana-Champaign, led a research team that developed a new computational framework for analyzing how best to communicate about new medical guidelines to encourage their adoption.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Health care workers often don't adopt new guidelines for best practices in medical care until well after those guidelines are established. A team of researchers led by Eunice E. Santos, the dean of the School of Information Sciences at the University of Illinois Urbana-Champaign, has developed a new computational modeling and simulation framework to analyze decision-making and identify effective dissemination strategies for medical guidelines.

The research team examined guidelines for Type 2 diabetes that were established in 2012 and were still not adopted years later. The researchers found that health care workers' specialties, patient volume and experience were among the factors that affected acceptance of individualized glycemic-control guidelines.

The team developed a novel computational framework that incorporates the interactions and influences among health care workers, along with other intricacies of medical decision-making, to simulate and analyze a wide range of real-world scenarios. Researchers introduced the Culturally Infused Agent Based Model (CI-ABM) and reported their findings in the cover article for the June issue of the IEEE Journal of Biomedical and Health Informatics.

Their research highlights that models and simulations of human behavior must take into account factors such as sociocultural context and complex social interactions; without these, the models can lead to a profound misunderstanding of human decision-making, they said.

"One of the major challenges is capturing the decision-making of the actors and the factors that influence them. This is especially true when the agents are human beings (e.g., health care workers), where their behavior is uncertain and the information about the factors that influence their decision-making is often incomplete and/or contradictory," they wrote.

The modeling system they developed incorporates social networks and cultural influences that guide decision-making, and it captures how beliefs evolve over time due to personal and external factors. It provides the ability to model real-world events that involve incomplete, imprecise and conflicting information, and it provides a way to handle uncertainty in human behavior. These aspects of their computational model led to better analysis and prediction of guideline-dissemination behaviors, the researchers said.
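As a rough caricature of the general idea (not the CI-ABM itself, which additionally models sociocultural context, uncertainty and conflicting information), an agent-based diffusion simulation can be sketched in a few lines. All parameters here are invented for illustration:

```python
import random

random.seed(1)

# 100 health care workers, each with a "belief" in the new guideline.
N = 100
belief = [random.uniform(0.0, 0.3) for _ in range(N)]  # initial openness
adopted = [False] * N

# Random social network: each agent listens to 5 peers.
peers = [random.sample(range(N), 5) for _ in range(N)]

# A few agents receive the guideline directly (e.g. from the ADA).
for i in random.sample(range(N), 10):
    belief[i] = 0.9

for step in range(50):
    new_belief = belief[:]
    for i in range(N):
        if adopted[i]:
            continue                       # adoption is absorbing
        peer_mean = sum(belief[j] for j in peers[i]) / len(peers[i])
        # Word-of-mouth: belief drifts toward the peer average.
        new_belief[i] += 0.1 * (peer_mean - belief[i])
        if new_belief[i] > 0.5:
            adopted[i] = True
            new_belief[i] = 1.0            # adopters advocate the guideline
    belief = new_belief

print(sum(adopted), "of", N, "agents adopted the guideline")
```

Running such a simulation under different network structures and seeding strategies is, in miniature, how a framework like this lets researchers compare dissemination scenarios against survey data.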

Santos and her colleagues used the model to analyze the dissemination of a Type 2 diabetes guideline that recommends individualizing glycemic goals for patients. Diabetes care guidelines since 2012 have emphasized individualizing glycemic goals based on patient factors such as age, hypoglycemia risk and overall health. But it isn't known how many doctors have adopted this guideline.

The researchers used two 2015 surveys that focused on challenges faced by doctors in individualizing the glycemic goals of their patients. The surveys included doctors from diverse backgrounds and a range of specialties - including endocrinology, family medicine and geriatrics - experience levels and practice types.

In their simulation, some of the doctors received guideline recommendations from the American Diabetes Association. Best practices also spread through word-of-mouth. The team compared the results of the simulations with the answers given on the surveys. The researchers found that including sociocultural factors and information about social interactions of health care workers in their model increased the accuracy of predicting guideline-adoption behaviors of various demographic groups. In addition, by including sociocultural information, the model helps to identify factors that drive guideline-adoption behavior.

The framework also allows policymakers to study the effect of different barriers to disseminating medical guideline information, identify the factors contributing to guideline adoption and create targeted strategies to improve communication about the guidelines, they said.

The modeling system will help policymakers test different strategies and analyze their effects, the researchers said. It provides a way to capture the effect of unique factors - for example, when modeling guideline dissemination for infectious diseases, it can help analyze the effects of incorporating information about the novelty and mortality of infectious diseases, as well as the impact of changes in social networks due to lockdowns.

The team of researchers included Suresh Subramanian and Vairavan Murugappan, both doctoral students in Illinois' School of Information Sciences; John Korah, a computer science professor at California State Polytechnic University; Elbert S. Huang and Neda Laiteerapong, both professors of medicine at The University of Chicago Medicine; and Ali Cinar, a chemical and biological engineering professor at the Illinois Institute of Technology.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Algorithm reveals the mysterious foraging habits of narwhals

image: Researchers tagging a narwhal in East Greenland.

Image: 
Carsten Egevang.

An algorithm can predict when narwhals hunt - behaviour it was once nearly impossible to gain insight into. Mathematicians and computer scientists at the University of Copenhagen, together with marine biologists in Greenland, have made progress in gathering knowledge about this enigmatic Arctic whale at a time when climate change is putting it under pressure.

The small whale, known for its distinctively spiraled tusk, is under mounting pressure due to warming waters and the subsequent increase in Arctic shipping traffic. To better care for narwhals, we need to learn more about their foraging behaviour - and how it may change as a result of human disturbances and global warming. Biologists know almost nothing about this. Because narwhals live in isolated Arctic regions and hunt at depths of up to 1,000 meters, it is very difficult - sometimes impossible - to gain any insight whatsoever.

Ironically, artificial intelligence may be the answer to the mystery of their natural behaviours. An interdisciplinary collaboration between mathematicians, computer scientists and marine biologists from the University of Copenhagen and the Greenland Institute of Natural Resources demonstrates that algorithms can be used to map the foraging behavior of this enigmatic whale.

"We have shown that our algorithm can actually predict that when narwhals emit certain sounds, they are hunting prey. This opens up entirely new insights into the life of narwhals," explains Susanne Ditlevsen, a professor at UCPH's Department of Mathematical Sciences who has helped marine biologists in Greenland with the processing of data for several years.

"It is crucial to gain more insight into where and when narwhals hunt for food as sea ice recedes. If they are disturbed by shipping traffic, it matters whether this is in the middle of an important foraging area. Finding out, however, is incredibly difficult. Here, artificial intelligence seems to be able to make a huge difference and, to a great extent, provide us with knowledge that could not otherwise have been obtained," says cetacean researcher Mads Peter Heide-Jørgensen, a professor at the Greenland Institute of Natural Resources and adjunct professor at the University of Copenhagen. He adds:

"In a situation where narwhals are in deep water, in the middle of Baffin Bay during December, we currently have no way of finding out where or when they are foraging. Here, artificial intelligence seems to be the way forward."

Algorithm maps clicks and buzzes

Until now, the best way to learn about the hunting patterns of narwhals has been to collect acoustic data using measuring instruments attached to their bodies. Like bats, narwhals orient themselves using echolocation: by making clicking sounds, they probe their surroundings. As they begin to hunt, the interval between the clicks shortens until they become a buzz.
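In principle, buzzes can be picked out of a recorded click train simply by looking at inter-click intervals. A minimal sketch of that idea (the 20 ms threshold and the minimum run length are illustrative assumptions, not values from the study):

```python
# Flag "buzz" episodes as runs of closely spaced clicks.
def find_buzzes(click_times, max_ici=0.02, min_clicks=5):
    """Return (start, end) times of runs of >= min_clicks clicks whose
    inter-click interval (ICI) is at most max_ici seconds."""
    buzzes, run_start, run_len = [], None, 1
    for prev, cur in zip(click_times, click_times[1:]):
        if cur - prev <= max_ici:
            if run_start is None:
                run_start = prev
            run_len += 1
        else:
            if run_start is not None and run_len >= min_clicks:
                buzzes.append((run_start, prev))
            run_start, run_len = None, 1
    if run_start is not None and run_len >= min_clicks:
        buzzes.append((run_start, click_times[-1]))
    return buzzes

# Regular exploratory clicks (0.5 s apart), then a rapid buzz (5 ms apart).
clicks = [i * 0.5 for i in range(10)] + [5.0 + i * 0.005 for i in range(20)]
print(find_buzzes(clicks))  # one buzz, starting at t = 5.0 s
```

The difficulty the researchers faced is exactly that such acoustic detections are often unavailable, which is what motivated predicting the buzzes from movement data instead.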

While the buzzing sounds are therefore interesting to researchers, it is impossible to collect acoustic data in many places. Furthermore, recording these sounds is highly data-intensive and time consuming to analyze manually.

As a result, the researchers set out to investigate whether, by using artificial intelligence, they could find a pattern in the way the whales move and the buzzes they emit. In the future, this would make it possible to rely only on measurements of animal movements from an accelerometer, a simple-to-use technology familiar to us from our smartphones.

"The major challenge was that these whales have very complex movement patterns, which can be tough to analyze. This becomes possible only with the use of deep learning, which could learn to recognize both the various swimming patterns of whales as well as their buzzing sounds. The algorithm then discovered connections between the two," explains Assistant Professor Raghavendra Selvan of the Department of Computer Science.

The researchers trained the algorithm using large quantities of data collected from five narwhals in Scoresby Sound fjord in East Greenland.

Now, the researchers hope to add to the algorithm by characterizing different types of buzzing sounds in order to identify the precise buzzing sounds that lead to a catch. This can be achieved by collecting data in which biologists give whales a temperature pill that detects temperature drops in their stomachs as they consume cold fish or squid.

Credit: 
University of Copenhagen - Faculty of Science

Sweeping analysis concludes there's no cheating old age

image: A chimpanzee researchers have named Duane watches the sunrise in Uganda.

Image: 
Florian Moellers

DURHAM, N.C. - Special diets, exercise programs, supplements and vitamins -- everywhere we look there is something supposed to help us live longer. Maybe they work: average human life expectancy has gone from a meager 40-ish years to a whopping 70-something since 1850. Does this mean we are slowing down death?

A new study comparing data from nine human populations and 30 populations of non-human primates says that we are probably not cheating the reaper. The researchers say the increase in human life expectancy is more likely the statistical outcome of improved survival for children and young adults, not slowing the aging clock.

"Populations get older mostly because more individuals get through those early stages of life," said Susan Alberts, professor of Biology and Evolutionary Anthropology at Duke University and senior author of the paper. "Early life used to be so risky for humans, whereas now we prevent most early deaths."

The research team, comprising scientists from 14 different countries, analyzed patterns of births and deaths in the 39 populations, looking at the relationship between life expectancy and lifespan equality.

Lifespan equality tells us how much the age of death varies in a population. If everyone tends to die at around the same age -- for instance, if almost everyone can expect to live a long life and die in their 70s or 80s -- lifespan equality is very high. If death could happen at any age -- because of disease, for example -- lifespan equality is very low.
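As a toy illustration of the two quantities (the study uses formal demographic measures; here lifespan equality is proxied by the negative logarithm of the coefficient of variation of ages at death):

```python
import math
import statistics

def life_expectancy(ages_at_death):
    # Life expectancy at birth, approximated as the mean age at death.
    return statistics.mean(ages_at_death)

def lifespan_equality(ages_at_death):
    # Higher value = deaths concentrated at similar ages.
    cv = statistics.stdev(ages_at_death) / statistics.mean(ages_at_death)
    return -math.log(cv)

# Population A: nearly everyone dies old, at similar ages.
a = [72, 75, 78, 80, 81, 83, 85, 86, 88, 90]
# Population B: deaths spread across all ages (high early-life mortality).
b = [1, 5, 12, 25, 33, 41, 55, 62, 70, 78]

print(life_expectancy(a), lifespan_equality(a))  # high, high
print(life_expectancy(b), lifespan_equality(b))  # low, low
```

Population A has both the higher life expectancy and the higher lifespan equality, which is the coupling between the two quantities the study examines across populations.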

In humans, lifespan equality is closely related to life expectancy: people from populations that live longer also tend to die at a similarly old age, while people from populations with shorter life expectancies tend to die at a wider range of ages.

To understand if this pattern is uniquely human, the researchers turned to our closest cousins: non-human primates. What they found is that the tight relationship between life expectancy and lifespan equality is widespread among primates as well as humans. But why?

In most mammals, the risk of death is high at very young ages, relatively low in adulthood, and then increases again after the onset of ageing. Could higher life expectancy be due to individuals ageing more slowly and living longer?

The primate populations tell us that the answer is probably no. The main sources of variation in the average age of death in different primate populations were infant, juvenile and young adult deaths. In other words, life expectancy and lifespan equality are not driven by the rate at which individuals senesce and become old, but by how many kids and young adults die for reasons unrelated to old age.

Using mathematical modeling, the researchers also found that small changes in the rate of ageing would drastically alter the relationship between life expectancy and lifespan equality. Changes in the parameters representing early deaths, on the other hand, led to variations very similar to what was observed.

"When we change the parameters representing early deaths, we can explain almost all of the variation among populations, for all of these species," said Alberts. "Changes in the onset of aging and rate of ageing do not explain this variation."

These results support the 'invariant rate of ageing' hypothesis.

"The rate of ageing is relatively fixed for a species," said Alberts. "That's why the relationship between life expectancy and lifespan equality is so tight within each species."

The researchers point out that there is some individual variation within species in the rate of ageing and in the onset of senescence, but this variation is confined to a fairly narrow range, unlike death rates at younger ages.

"We can't slow down the rate at which we're going to age," Alberts said. "What we can do is prevent those babies from dying."

This work was supported by a grant from the National Institute on Aging (P01AG031719), with additional support provided by the Max Planck Institute of Demographic Research and the Duke University Population Research Institute.

Credit: 
Duke University

Osteoporosis: New approach to understanding bone strength pays dividends

image: The osteoporosis research team included Larry Mesner (from left), Gina Calabrese, Basel Al-Barghouthi and Charles Farber.

Image: 
UVA Health

Osteoporosis researchers at the UVA School of Medicine have taken a new approach to understanding how our genes determine the strength of our bones, allowing them to identify several genes not previously known to influence bone density and, ultimately, our risk of fracture.

The work offers important insights into osteoporosis, a condition that affects 10 million Americans, and it provides scientists potential new targets in their battle against the brittle-bone disease.

Importantly, the approach uses a newly created population of laboratory mice that allows researchers to identify relevant genes and overcome limitations of human studies. Identifying such genes has been very difficult but is key to using genetic discoveries to improve bone health.

"Genome-wide association studies have revolutionized the identification of regions of the human genome that influence bone mineral density. However, there are challenges to using this information to help patients, such as identifying the specific genes involved. Additionally, such studies have focused only on bone mineral density, although many other aspects of bone contribute to bone strength and risk of fracture but cannot be measured in humans," said Charles Farber, PhD, of UVA's Center for Public Health Genomics and Department of Public Health Sciences. "The ability to use mice in a novel way has allowed us to begin to overcome the challenges associated with human genome-wide association studies."

UNDERSTANDING OSTEOPOROSIS AND BONE STRENGTH

Genome-wide association studies have identified more than 1,000 locations on our chromosomes where genes are found that influence bone mineral density (BMD), a strong predictor of how likely an individual is to experience a bone fracture. But bone mineral density is only one factor in bone strength. Farber and his colleagues wanted to get a more complete picture.

They created a resource by collecting information on 55 different skeletal characteristics in hundreds of mice and then used an approach called systems genetics to analyze the data. The analysis identified a total of 66 genes that contribute to BMD, including 19 not previously linked to BMD.

Of the 19, the researchers were able to determine that two, SERTAD4 and GLT8D2, likely affect bone mineral density through cells that form bone called osteoblasts. This ability to determine the cell types that genes use to perform biological processes is one of the great strengths of systems genetics analysis, the researchers say.

The scientists also found that another gene, QSOX1, plays an important role in determining the mass and strength of the outer, "cortical" layer of bone. This type of bone makes up 80% of our skeleton and is vital for bone strength and weight bearing.

In addition to providing new insights into osteoporosis, the new findings highlight the tremendous potential of using mice to identify important genes in humans, Farber says.

"The information we generated from mice can be used in the future to evaluate these newly identified genes as potential drug targets," said Basel Al-Barghouthi, of UVA's Center for Public Health Genomics, who led the analysis. "Furthermore, these approaches can be applied across a wide range of diseases."

Credit: 
University of Virginia Health System

Icebergs drifting from Canada to southern Florida

image: These 3D perspective views of the seafloor bathymetry from multibeam sonar offshore of South Carolina show numerous grooves carved by drifting icebergs. As iceberg keels plow into the seafloor, they dig deep grooves that push aside boulders and piles of sand and mud along their tracks. Sediment cores from nearby buried iceberg scours were used to determine when these icebergs travelled south along the coast.

Image: 
Jenna Hill, U.S. Geological Survey, Pacific Coastal & Marine Science Center

Woods Hole, MA (June 16, 2021) -- Woods Hole Oceanographic Institution (WHOI) climate modeler Dr. Alan Condron and United States Geological Survey (USGS) research geologist Dr. Jenna Hill have found evidence that massive icebergs from roughly 31,000 years ago drifted more than 5,000 km (more than 3,000 miles) along the eastern United States coast, from Northeast Canada all the way to southern Florida. These findings were published today in Nature Communications.

Using high-resolution seafloor mapping, radiocarbon dating and a new iceberg model, the team analyzed about 700 iceberg scours ("plow marks" left on the seafloor by the keels of icebergs dragging through marine sediment) from Cape Hatteras, North Carolina, to the Florida Keys. The discovery of icebergs in this area opens a door to understanding the interactions between icebergs, glaciers and climate.

"The idea that icebergs can make it to Florida is amazing," said Condron. "The appearance of scours at such low latitudes is highly unexpected not only because of the exceptionally high melt rates in this region, but also because the scours lie beneath the northward flowing Gulf Stream."

"We recovered marine sediment cores from several of these scours, and their ages align with a known period of massive iceberg discharge known as Heinrich Event 3. We also expect that there are younger and older scour features that stem from other discharge events, given that there are hundreds of scours yet to be sampled," added Hill.

To study how icebergs reached the scour sites, Condron developed a numerical iceberg model that simulates how icebergs drift and melt in the ocean. The model shows that icebergs can only reach the scour sites when massive amounts of glacial meltwater (or glacial outburst floods) are released from Hudson Bay. "These floods create a cold, fast-flowing, southward coastal current that carries the icebergs all the way to Florida," says Condron. "The model also produces 'scouring' on the seafloor in the same places as the actual scours."
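The core intuition - an iceberg travels far only if the current carrying it is fast and the water cold enough that it melts slowly - can be caricatured in a few lines. All speeds, melt rates and sizes below are invented for illustration, not values from Condron's model:

```python
# Toy drift-and-melt sketch (the actual model resolves ocean circulation,
# wind drag, Coriolis forces, and spatially varying temperatures).
def drift(days, current_speed_km_day, melt_rate_m_day, length_m):
    """Distance traveled before the berg melts away, under a constant
    current speed and constant melt rate (both hypothetical numbers)."""
    distance = 0.0
    for _ in range(days):
        if length_m <= 0:
            break
        distance += current_speed_km_day
        length_m -= melt_rate_m_day
    return distance, max(length_m, 0)

# A large berg in a fast, cold meltwater coastal current survives long
# enough to travel thousands of kilometres...
print(drift(days=365, current_speed_km_day=40, melt_rate_m_day=2, length_m=1000))
# ...whereas with the rapid melting of warm subtropical water it does not.
print(drift(days=365, current_speed_km_day=40, melt_rate_m_day=20, length_m=1000))
```

In the first (cold-water) scenario the berg is still intact after a year of southward drift; in the second it melts away after covering a fraction of the distance, which is why the scours far to the south point to a cold, fast meltwater current.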

The ocean water temperatures south of Cape Hatteras are about 20-25°C (68-77°F). According to Condron and Hill, for icebergs to reach the subtropical scour locations in this region, they must have drifted against the normal northward direction of flow -- the opposite direction to the Gulf Stream. This indicates that the transport of icebergs to the south occurs during large-scale, but brief periods of meltwater discharge.

"What our model suggests is that these icebergs get caught up in the currents created by glacial meltwater, and basically surf their way along the coast. When a large glacial lake dam breaks, and releases huge amounts of fresh water into the ocean, there's enough water to create these strong coastal currents that basically move the icebergs in the opposite direction to the Gulf Stream, which is no easy task," Condron said.

While this freshwater is eventually transported northward by the Gulf Stream, mixing with the surrounding ocean would have made the meltwater considerably saltier by the time it reached the northernmost parts of the North Atlantic. Those areas are considered critical for controlling how much heat the ocean transports northward to Europe. If these regions become flooded with fresh water, the amount of heat transported north by the ocean could weaken significantly, increasing the chance that Europe could get much colder.

The routing of meltwater into the subtropics -- far south of these critical regions -- implies that the influence of meltwater on global climate is more complex than previously thought, according to Condron and Hill. Understanding the timing and circulation of meltwater and icebergs through the global oceans during glacial periods is crucial for deciphering how past changes in high-latitude freshwater forcing influenced shifts in climate.

"As we are able to make more detailed computer models, we can actually get more accurate features of how the ocean actually circulates, how the currents move, how they peel off, and how they spin around. That actually makes a big difference in terms of how that freshwater is circulated and how it can actually impact climate," Hill added.

Key Takeaways:

The discovery of icebergs in this area opens a door to understanding the interactions between icebergs/glaciers and climate.

Evidence suggests that there may be hundreds of undiscovered scours that range in age.

A newly developed iceberg computer model helped the researchers understand the timing and circulation of meltwater and icebergs through the global oceans during glacial periods, which is crucial for deciphering how past changes in high-latitude freshwater forcing influenced shifts in climate.

Credit: 
Woods Hole Oceanographic Institution