
Why doesn't deep-brain stimulation work for everyone?

image: Brain networks corresponding to functions such as vision (blue) and attention (green) mingle and share information in structures deep inside the brain, as seen, for example, in the bottom right corner of this color-coded composite MRI image. Researchers at Washington University School of Medicine in St. Louis have mapped nine functional networks in the deep-brain structures of 10 healthy people, an accomplishment that could lead to improvements in deep-brain stimulation therapy for severe cases of Parkinson's disease and other neurological conditions.

Image: 
Scott Marek

People with severe Parkinson's disease or other neurological conditions that cause intractable symptoms such as uncontrollable shaking, muscle spasms, seizures, obsessive thoughts and compulsive behaviors are sometimes treated with electric stimulators placed inside the brain. Such stimulators are designed to interrupt aberrant signaling that causes the debilitating symptoms. The therapy, deep-brain stimulation, can provide relief to some people. But in others, it can cause side effects such as memory lapses, mood changes or loss of coordination, without much improvement of symptoms.

Now, a study from researchers at Washington University School of Medicine in St. Louis may help explain why the effects of deep-brain stimulation can vary so much - and points the way toward improving the treatment. The stimulators typically are implanted in structures known as the thalamus and the basal ganglia that are near the center of the brain. These structures, the researchers found, serve as hubs where the neurological networks that control movement, vision and other brain functions cross paths and exchange information. Each person's functional networks are positioned a bit differently, though, so electrodes placed in the same anatomical spot may influence different networks in different people - alleviating symptoms in one but not in another, the researchers said.

The findings are published Dec. 10 in Neuron.

"The sites we target for deep-brain stimulation were discovered serendipitously," said co-author Scott Norris, MD, an assistant professor of neurology and of radiology who treats patients with movement disorders. "Someone had a stroke or an injury in a specific part of the brain, and suddenly their tremor, for example, got better, and so neurologists concluded that targeting that area might treat tremor. We've never really had a way to personalize treatment or to figure out if there are better sites that would be effective for more people and have fewer side effects."

What neurosurgeons lacked were individualized maps of brain functions in the thalamus and basal ganglia. These structures connect distant parts of the brain and have been linked to neurological and psychiatric conditions such as Parkinson's disease, Tourette's syndrome and obsessive-compulsive disorder. But their location deep inside the brain means that mapping is technically challenging and requires enormous amounts of data.

Nico Dosenbach, MD, PhD, an assistant professor of neurology and the study's senior author - along with co-first authors Deanna Greene, PhD, an assistant professor of psychiatry and of radiology, and Scott Marek, PhD, a postdoctoral researcher, and other colleagues - set out to create individual maps of the functional networks in the basal ganglia and thalamus. Such maps, they reasoned, might provide clues to why people with neurological and psychiatric conditions exhibit such a wide range of symptoms, and why electrodes placed in those structures produce variable results.

"Deep-brain stimulation is a very invasive treatment that is only done for difficult, severe cases," said Greene, who specializes in Tourette's syndrome. "So it is difficult to grapple with the fact that such an invasive treatment may only help half the people half the time."

Using data from a group of Washington University scientists who scanned themselves at night as part of the so-called Midnight Scan Club, the researchers analyzed 10 hours of MRI brain scan data on each of 10 individuals. From this, they created 3D maps color-coded by functional network for each individual. One of the functional networks is devoted to vision, two relate to movement, two involve paying attention, three relate to goal-directed behaviors, and the last network is the default network, which is active when the brain is at rest.

The researchers discovered that each functional network followed its own path through the deep structures of the brain, intermingling with other networks at defined meeting spots. Some of these spots - such as the motor integration zone, where a movement and a goal-directed network come together - were located in much the same place in all 10 people. The locations of other networks and their points of intersection varied more from person to person.

"I showed a neurosurgeon where we'd found the motor integration zone, and he said, 'Oh, that's where we put the electrodes for essential tremor, and it always works,'" said Dosenbach, who is also an assistant professor of occupational therapy, of pediatrics and of radiology. People with essential tremor experience uncontrollable shaking. "That's interesting because there isn't a lot of variability among people in terms of where the motor integration zone is located. So then we looked at a spot they target to treat Parkinson's disease. We saw that there was a great deal of variation across people in terms of what functional networks are represented there, and deep-brain stimulation is only about 40% to 50% successful there."

The findings suggest that the outcome of deep-brain stimulation may reflect how successfully a neurosurgeon taps into the correct functional network - and avoids tapping into the wrong one.

"Historically, our understanding of deep-brain stimulation has been based on averaging data across many people," Norris said. "What this study suggests is that a particular patient may do better if the wire is placed in relation to their personal functional brain map rather than in context of the population average. A personalized functional map - as opposed to an anatomical map, which is what we use today - could help us place a wire in the exact place that would provide the patient with the most benefit."

The researchers now are studying the relationship between the locations of an individual's functional networks and the outcomes of deep-brain stimulation to identify the networks that provide relief when stimulated, and those that cause side effects. In the future, the researchers hope to dig deeper into the networks with therapeutic effects, looking for other spots that might provide even better results than the traditional sites of electrode placement.

"A lot of what I do is basic science - understanding how the brain works," Dosenbach said. "But now we can make a map and give it to neurosurgeons and potentially improve treatment for these devastating conditions. We still have to prove the hypothesis that deep-brain stimulation outcomes are linked to functional networks, and translation will take time, but this really could make a difference in people's lives."

Credit: 
Washington University School of Medicine

Chiton mollusk provides model for new armor design

image: The chiton mollusk, which is about 1 to 2 inches long, has a series of eight large plates and is ringed by a girdle of smaller, more flexible scales. The mollusk is the inspiration behind a 3D-printed armor design.

Image: 
Virginia Tech

The motivations for using biology as inspiration to engineering vary based on the project, but for Ling Li, assistant professor of mechanical engineering in the College of Engineering, the combination of flexibility and protection seen in the chiton mollusk was all the motivation necessary.

"The system we've developed is based on the chiton, which has a unique biological armor system," Li said. "Most mollusks have a single rigid shell, such as the abalone, or two shells, such as clams. But the chiton has eight mineralized plates covering the top of the creature, and around its base it has a girdle of very small scales, assembled like fish scales, that provide flexibility as well as protection."

Li's work, which was featured in the journal Nature Communications Dec. 10, is the result of a collaboration with researchers from various institutions, including the Massachusetts Institute of Technology; the Dana-Farber Cancer Institute at Harvard Medical School; California State University, Fullerton; the Max Planck Institute of Colloids and Interfaces in Germany; and the Wyss Institute for Biologically Inspired Engineering at Harvard University.

Because the mechanical design of the chiton's girdle scales had not been studied in depth before, the team of researchers needed to start with basic material and mechanical analysis of the mollusk before using that information as the bio-inspiration for the engineering research.

"We studied this biological material in a very detailed way. We quantified its internal microstructure, chemical composition, nano-mechanical properties, and three-dimensional geometry. We studied the geometrical variations of the scales across multiple chiton species, and we also investigated how the scales assemble together through 3D tomography analysis," Li said.

The team then developed a parametric 3D modeling methodology to mimic the geometry of individual scales. They assembled individual scale units on either flat or curved substrates, varying the scales' sizes, orientations, and geometries, and used 3D printing to fabricate the bio-inspired scale armor models.

"We produced the chiton scale-inspired scale assembly directly with 3D multi-material printing, which consists of very rigid scales on top of a flexible substrate," Li explained. With these physical prototypes of controlled specimen geometries and sizes, the team conducted direct mechanical testing on them with controlled loading conditions. This allowed the researchers to understand the mechanisms behind the dual protection-flexibility performance of the biological armor system.

The way the scale armor works is that when in contact with a force, the scales converge inward upon one another to form a solid barrier. When not under force, they can "move" on top of one another to provide varying amounts of flexibility dependent upon their shape and placement.

"The strength comes from how the scales are organized, from their geometry," Li said. "Reza's [Mirzaeifar, assistant professor of mechanical engineering] team has done an amazing job by using computational modeling to further reveal how the scale armor becomes interlocked and rigid when the external load reaches a critical value."

The design of place-specific armor takes into account the size of the scales used. Smaller scales, such as those around the girdle of the chiton, are more useful for regions requiring maximum flexibility, while larger scales are used for areas requiring more protection. "Working with Reza, our next step is to expand the space so we can design tailored armor for different body locations. The flexibility vs. protection needs of the chest, for example, will be different than for the elbow or knee, so we would need to design the scale assembly accordingly in terms of scale geometry, size, orientation, etc."

The work being featured began with Department of Defense funding when Li was a graduate research assistant at the Massachusetts Institute of Technology. Since he arrived at Virginia Tech in 2017, the work has continued without sponsorship as part of his start-up funding.

"We started with a pretty pure motivation - looking for multifunctional biological materials," Li said. "We wanted to integrate flexibility and protection and that's very hard to achieve with synthetic systems. We will continue with our research to explore the design space beyond the original biological model system and conduct testing under different load conditions."

Li admits the process, which has taken multiple years, is long, but the work is unique in its two-step approach: conducting the fundamental biological materials research first, then the bio-inspired engineering research.

"Having that level of familiarity with the subject has been very useful to the design and modeling of the armor," Li said. "I think this type of bio-inspired armor will represent a significant improvement to what is currently available."

Credit: 
Virginia Tech

Dementia study reveals how proteins interact to stop brain signals

Fresh insights into damaging proteins that build up in the brains of people with Alzheimer's disease could aid the quest for treatments.

A study in mice reveals how the two proteins work together to disrupt communication between brain cells.

Scientists observed how proteins - called amyloid beta and tau - team up to hamper key genes responsible for brain messaging. By changing how genes are expressed in the brain, the proteins can affect its normal function.

These changes in brain function were completely reversed when genetic tools were used to reduce the presence of tau, researchers at the University of Edinburgh found.

The study focused on the connection points between brain cells - known as synapses - that allow chemical and electrical messages to flow and are vital to healthy brain function.

Stopping the damage that the two proteins cause to synapses could help scientists prevent or reverse dementia symptoms, the researchers say.

In both the mouse model and in brain tissue from people with Alzheimer's disease, the team found clumps of amyloid beta and tau proteins in synapses.

When both amyloid beta and tau were present in the brain, genes that control the function of synapses were less active. And some of the genes that control the immune system in the brain were more active.

Related to this increased immune system activity, the scientists observed immune cells called microglia that had engulfed synapses in the brains of mice. This adds to findings from recent studies suggesting that these immune cells consume synapses during Alzheimer's disease.

Alzheimer's disease is the most common form of dementia, affecting some 850,000 people in the UK - a figure predicted to rise to more than one million by 2025. It can cause severe memory loss and there is currently no cure.

Lead researcher Professor Tara Spires-Jones, of the UK Dementia Research Institute at the University of Edinburgh, said: "More work is needed to take what we've learned in this study and find therapeutics - but this is a step in the right direction, giving us new targets to work towards."

Credit: 
University of Edinburgh

Greenland ice losses rising faster than expected

image: The midnight sun casts a golden glow on an iceberg and its reflection in Disko Bay, Greenland. Much of Greenland's annual mass loss occurs through calving of icebergs such as this.

Image: 
Ian Joughin, University of Washington

Greenland is losing ice seven times faster than in the 1990s and is tracking the Intergovernmental Panel on Climate Change's high-end climate warming scenario, which would see 40 million more people exposed to coastal flooding by 2100.

A team of 96 polar scientists from 50 international organisations have produced the most complete picture of Greenland ice loss to date. The Ice Sheet Mass Balance Inter-comparison Exercise (IMBIE) Team combined 26 separate surveys to compute changes in the mass of Greenland's ice sheet between 1992 and 2018. Altogether, data from 11 different satellite missions were used, including measurements of the ice sheet's changing volume, flow and gravity.

The findings, published today in Nature, show that Greenland has lost 3.8 trillion tonnes of ice since 1992 - enough to push global sea levels up by 10.6 millimetres. The rate of ice loss has risen from 33 billion tonnes per year in the 1990s to 254 billion tonnes per year in the last decade - a seven-fold increase within three decades.

The assessment, led by Professor Andrew Shepherd at the University of Leeds and Dr Erik Ivins at NASA's Jet Propulsion Laboratory in California, was supported by the European Space Agency (ESA) and the US National Aeronautics and Space Administration (NASA).

In 2013, the Intergovernmental Panel on Climate Change (IPCC) predicted that global sea levels will rise by 60 centimetres by 2100, putting 360 million people at risk of annual coastal flooding. But this new study shows that Greenland's ice losses are rising faster than expected and are instead tracking the IPCC's high-end climate warming scenario, which predicts 7 centimetres more.

Professor Shepherd said: "As a rule of thumb, for every centimetre rise in global sea level another six million people are exposed to coastal flooding around the planet."

"On current trends, Greenland ice melting will cause 100 million people to be flooded each year by the end of the century, so 400 million in total due to all sea level rise."

"These are not unlikely events or small impacts; they are happening and will be devastating for coastal communities."
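The quoted figures can be cross-checked with back-of-envelope arithmetic. The sketch below is illustrative only; the global ocean surface area (~3.61 × 10^8 km²) is an assumed round value, not a number taken from the study.

```python
# Back-of-envelope checks on the ice-loss figures quoted above.
# Assumption (not from the study): global ocean surface area ~3.61e8 km^2.

OCEAN_AREA_M2 = 3.61e14  # assumed ocean surface area in square metres

def ice_loss_to_sea_level_mm(mass_tonnes):
    """Convert a mass of melted ice (tonnes) into global mean sea-level rise (mm)."""
    volume_m3 = mass_tonnes  # ~1 tonne of meltwater occupies ~1 cubic metre
    return volume_m3 / OCEAN_AREA_M2 * 1000.0  # metres -> millimetres

# 3.8 trillion tonnes lost since 1992:
print(f"{ice_loss_to_sea_level_mm(3.8e12):.1f} mm")  # ~10.5 mm, close to the 10.6 mm quoted

# 33 Gt/yr in the 1990s vs 254 Gt/yr in the last decade:
print(f"{254 / 33:.1f}x")  # ~7.7x, the "seven-fold increase"

# Rule of thumb: ~6 million people exposed per centimetre of sea-level rise,
# applied to ~67 cm by 2100 (60 cm central estimate + 7 cm high-end extra):
print(f"{67 * 6} million people")  # 402 million, i.e. the "400 million in total"
```

The numbers line up with the study's headline figures, which is the point of the rule of thumb Professor Shepherd describes.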

The team also used regional climate models to show that half of the ice losses were due to surface melting as air temperatures have risen. The other half was due to increased glacier flow, triggered by rising ocean temperatures.

Ice losses peaked at 335 billion tonnes per year in 2011 - ten times the rate of the 1990s - during a period of intense surface melting. Although the rate of ice loss has since dropped to an average of 238 billion tonnes per year, this remains seven times higher than in the 1990s; the figure also does not include all of 2019, which could set a new high due to widespread summer melting.

Dr Ivins said: "Satellite observations of polar ice are essential for monitoring and predicting how climate change could affect ice losses and sea level rise".

"While computer simulation allows us to make projections from climate change scenarios, the satellite measurements provide prima facie, rather irrefutable, evidence."

"Our project is a great example of the importance of international collaboration to tackle problems that are global in scale."

Guðfinna Aðalgeirsdóttir, Professor of Glaciology at the University of Iceland and lead author of the Intergovernmental Panel on Climate Change's sixth assessment report, who was not involved in the study, said:

"The IMBIE Team's reconciled estimate of Greenland ice loss is timely for the IPCC. Their satellite observations show that both melting and ice discharge from Greenland have increased since observations started."

"The ice caps in Iceland had a similar reduction in ice loss in the last two years of their record, but this last summer was very warm here and resulted in higher loss. I would expect a similar increase in Greenland mass loss for 2019."

"It is very important to keep monitoring the big ice sheets to know how much they raise sea level every year."

Credit: 
University of Leeds

Report discusses potential role of coffee in reducing risk of Alzheimer's and Parkinson's

A new report from the Institute for Scientific Information on Coffee (ISIC) highlights the potential role of coffee consumption in reducing the risk of neurodegenerative disorders such as Alzheimer's and Parkinson's diseases [1-3].

For the first time in history, most people can expect to live into their 60s and beyond; however, with increasing age, the risk of disease and disability rises [4,5]. The number of people affected by Alzheimer's disease is estimated to increase globally from today's 47 million to 75 million in 2030 and to 132 million in 2050 [6].

Parkinson's disease, the second most common age-related neurodegenerative disorder, affects 7 million people globally [7]. Research has suggested that lifestyle may be an important factor in the risk of neurodegenerative conditions, for which there is currently no curative treatment [8-10].

The new report, authored by Associate Professor Elisabet Rothenberg, Kristianstad University, discusses the role of dietary components, including coffee and caffeine, in reducing the risk of neurodegenerative disorders.

The report considers the mechanisms involved in the positive associations between coffee and Alzheimer's and Parkinson's diseases, which are not yet well understood. The roles of caffeine and of other plant-based compounds present in coffee, such as phytochemicals and polyphenols, are of particular academic interest [11-13].

Key research findings highlighted in the report include:

Dietary pattern may have an impact on the risk of developing neurodegenerative disorders [5,6]

Coffee consumption may help reduce the risk of neurodegenerative conditions or relieve their symptoms [1-3]

Considering Parkinson's disease, men might benefit more from coffee consumption than women, possibly because oestrogen may compete with caffeine [9,10]

Further research is required for a better understanding of these associations [11-13]

Credit: 
Kaizo

Was Earth's oxygenation a gradual, not step-wise, process -- driven by internal feedbacks?

The oxygenation of Earth's surface - which transformed the planet into a habitable haven for all life as we know it - may have been the consequence of global biogeochemical feedbacks, rather than the product of discrete planetary-scale biological and tectonic revolutions as previously proposed, according to a new study. The findings have implications for the understanding of life's evolution on Earth and perhaps other planets. More than two billion years ago, oxygen levels in Earth's atmosphere and oceans began to rise, and with them came the evolution of progressively complex animal life. The progressive path to oxygenation is widely considered to have occurred over three major steps - the "Great Oxidation Event," the "Neoproterozoic Oxygenation Event" and the "Paleozoic Oxygenation Event." While several biological and geological explanations for these stepwise increases in oxygen have been advanced, the geologically rapid yet rare oxygenation events do not correspond to known external tectonic or evolutionary processes. Now, Lewis Alcott and colleagues suggest that Earth's stepwise oxygenation may be better explained by internal feedbacks within long-term biogeochemical processes. Using a theoretical model, Alcott et al. identified a set of internal biogeochemical feedbacks involving the global phosphorus, carbon and oxygen cycles, which are capable of driving rapid shifts in ocean and atmospheric O2 levels. Driven only by the gradual oxygenation of Earth's surface over time, the model produced the same three-step pattern observed in the geological record and did not require any biological advances beyond simple photosynthetic cyanobacteria. The findings dramatically increase the possibility of high-O2 worlds existing elsewhere in the universe, the authors suggest.

Credit: 
American Association for the Advancement of Science (AAAS)

Lower BMI means lower diabetes risk, even among non-overweight people

Lower body mass index (BMI) is consistently associated with reduced type II diabetes risk among people with varied family histories, genetic risk factors and weights, according to a new study published this week in PLOS Medicine by Manuel Rivas of Stanford University and colleagues.

Weight-loss interventions have shown demonstrable benefit for reducing the risk of type II diabetes in high-risk and pre-diabetic individuals but have not been well studied in people at lower risk of diabetes. In the new study, researchers examined the association between BMI, diabetes family history and genetic risk factors affecting type II diabetes or BMI. They used data on 287,394 unrelated individuals of British ancestry recruited to the UK Biobank from 2006 to 2010, when aged between 40 and 69.

Nearly 5% of the participants had a diagnosis of type II diabetes, and diabetes prevalence was confirmed to be associated with higher BMI, a family history of type II diabetes and genetic risk factors. Moreover, a 1 kg/m2 reduction in BMI was associated with a 1.37-fold reduction (95% CI 1.12-1.68) in type II diabetes risk among non-overweight individuals with a BMI of less than 25 and no family history of diabetes, similar to the effect of BMI reduction in obese individuals with a family history (1.21-fold, 95% CI 1.13-1.29).
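To put the fold-change figures in more familiar terms, a quick conversion (illustrative arithmetic only, not from the paper) turns an x-fold risk reduction into a percentage reduction:

```python
# Convert an x-fold risk reduction into a percentage risk reduction:
# an x-fold reduction means the risk is divided by x.

def fold_to_percent_reduction(fold):
    return (1.0 - 1.0 / fold) * 100.0

# 1.37-fold reduction per 1 kg/m^2 of BMI (non-overweight, no family history):
print(f"{fold_to_percent_reduction(1.37):.0f}%")  # -> 27%

# 1.21-fold reduction per 1 kg/m^2 of BMI (obese, with family history):
print(f"{fold_to_percent_reduction(1.21):.0f}%")  # -> 17%
```

So each unit of BMI corresponds to a risk reduction of roughly a fifth to a quarter in both groups, which is why the authors describe the effects as similar.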

"These findings suggest that all individuals can substantially reduce their type II diabetes risk through weight loss," the authors say. However, they also caution that the results must be taken with a grain of salt, since they did not study actual weight-loss interventions. Although the new analysis "can determine that lower lifetime BMI is protective against diabetes, that does not necessarily imply weight loss later in life, after carrying excess weight for decades, would have the same result," they say.

Credit: 
PLOS

Stardust from red giants

Around 4.5 billion years ago, an interstellar molecular cloud collapsed. At its centre, the Sun was formed; around that, a disc of gas and dust appeared, out of which the Earth and the other planets would form. This thoroughly mixed interstellar material included exotic grains of dust: "Stardust that had formed around other suns," explains Maria Schoenbaechler, a professor at the Institute of Geochemistry and Petrology at ETH Zurich. These dust grains only made up a small percentage of the entire dust mass and were distributed unevenly throughout the disc. "The stardust was like salt and pepper," the geochemist says. As the planets formed, each one ended up with its own mix.

Thanks to extremely precise measurement techniques, researchers are nowadays able to detect the stardust that was present at the birth of our solar system. They examine specific chemical elements and measure the abundance of different isotopes - the different atomic flavours of a given element, which all share the same number of protons in their nuclei but vary in the number of neutrons. "The variable proportions of these isotopes act like a fingerprint," Schönbächler says: "Stardust has really extreme, unique fingerprints - and because it was spread unevenly through the protoplanetary disc, each planet and each asteroid got its own fingerprint when it was formed."

Studying palladium in meteorites

Over the past ten years, researchers studying rocks from the Earth and meteorites have been able to demonstrate these so-called isotopic anomalies for more and more elements. Schoenbaechler and her group have been looking at meteorites that were originally part of asteroid cores that were destroyed a long time ago, with a focus on the element palladium.

Other teams had already investigated neighbouring elements in the periodic table, such as molybdenum and ruthenium, so Schoenbaechler's team could predict what their palladium results would show. But their laboratory measurements did not confirm the predictions. "The meteorites contained far smaller palladium anomalies than expected," says Mattias Ek, postdoc at the University of Bristol who made the isotope measurements during his doctoral research at ETH.

Now the researchers have come up with a new model to explain these results, as they report in the journal Nature Astronomy. They argue that stardust consisted mainly of material that was produced in red giant stars. These are aging stars that expand because they have exhausted the fuel in their core. Our sun, too, will become a red giant four or five billion years from now.

In these stars, heavy elements such as molybdenum and palladium were produced by what is known as the slow neutron capture process. "Palladium is slightly more volatile than the other elements measured. As a result, less of it condensed into dust around these stars, and therefore there is less palladium from stardust in the meteorites we studied," Ek says.

The ETH researchers also have a plausible explanation for another stardust puzzle: the higher abundance of material from red giants on Earth compared to Mars or Vesta or other asteroids further out in the solar system. This outer region saw an accumulation of material from supernova explosions.

"When the planets formed, temperatures closer to the Sun were very high," Schoenbaechler explains. This caused unstable grains of dust, for instance those with an icy crust, to evaporate. The interstellar material contained more of this kind of dust, which was destroyed close to the Sun, whereas stardust from red giants was less prone to destruction and hence became concentrated there. It is conceivable that dust originating in supernova explosions also evaporates more easily, since it is somewhat smaller. "This allows us to explain why the Earth has the largest enrichment of stardust from red giant stars compared to other bodies in the solar system," Schoenbaechler says.

Credit: 
ETH Zurich

No 'clouded' judgments: Geostationary satellite an alternative to monitor land surfaces

image: Seasonal variations in vegetation index observed by Himawari-8 (middle) and Suomi-NPP (bottom) at the Takayama in-situ observation site, and downward-facing camera images taken at the site (top).

Image: 
Tomoaki Miura and Kazuhito Ichii

Satellite remote sensing has widely been used to monitor and characterize the spatial and temporal changes of the Earth's vegetative cover. Satellites used in these analyses have conventionally been polar-orbiting satellites, which orbit from "pole to pole" and obtain only one to two images of the Earth per day. The utility of these polar-orbiting satellites has, however, often been limited because frequently occurring clouds block their view of the land surface.

New-generation geostationary satellites present an opportunity to observe land surfaces in a more efficient manner. Being in geostationary orbit, the Advanced Himawari Imager (AHI) sensor onboard Himawari-8, for example, can obtain multi-band color images over Japan every 10 minutes, increasing the chance of obtaining "cloud-free" observations. In a new study published in Scientific Reports, an international team of researchers, including Tomoaki Miura from University of Hawaii, Shin Nagai and Mika Takeuchi from Japan Agency for Marine-Earth Science and Technology, Kazuhito Ichii from Chiba University, and Hiroki Yoshioka from Aichi Prefectural University, examined this possibility and the utility of Himawari-8 AHI geostationary satellite data for capturing seasonal vegetation changes in Central Japan.
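Why more frequent imaging raises the odds of a clear view can be sketched with a toy probability model (an illustration under simplifying assumptions, not the study's methodology): if each snapshot is cloud-free independently with probability q, then n snapshots per day give a 1 - (1 - q)^n chance of at least one clear view.

```python
# Toy model: probability of at least one cloud-free observation per day.
# Assumptions (not from the study): each observation is independently
# cloud-free with probability q = 0.05; real cloud cover is correlated in
# time, so this overstates the benefit, but the trend is the point.

def p_clear_day(q, n):
    """Chance of >= 1 cloud-free view out of n observations per day."""
    return 1.0 - (1.0 - q) ** n

q = 0.05
print(f"{p_clear_day(q, 2):.2f}")    # polar orbiter, ~2 obs/day   -> 0.10
print(f"{p_clear_day(q, 144):.3f}")  # AHI, every 10 min (144/day) -> 0.999
```

Even with heavily correlated cloud cover in reality, the same qualitative gap is what the study reports: far more cloud-free days for the geostationary sensor than for the polar orbiter.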

Their study found that Himawari-8 AHI acquired approximately 26 times more observations than the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS), one of the latest polar-orbiting satellite sensors, for the year 2016. As a result, there were a larger number of days with "cloud-free" observations with Himawari-8 AHI than with S-NPP VIIRS. The study demonstrated that the AHI geostationary sensor obtained one cloud-free observation every 4 days on average, whereas the VIIRS polar-orbiting sensor obtained one cloud-free observation only every 7 to 16 days. Owing to this larger number of cloud-free observations, the AHI "vegetation index," a satellite measure of vegetation greenness, captured the temporal changes of vegetation from leaf expansion to leaf fall continuously throughout the growing season, corresponding to the vegetation phenology observed with in situ time-lapse digital images (Figure 1). There were, however, several periods when even AHI was unable to obtain cloud-free observations due to persistent cloud cover during the summer and fall seasons.

"Detailed vegetation seasonal information from the Himawari-8 geostationary satellite can be useful for many applications such as short-term drought monitoring and assessing the impact of heavy rainfall events," said Prof Miura, the lead author of the study. "This study has shown that the Himawari-8 meteorological satellite can be used to monitor land surfaces and vegetation. With new-generation geostationary satellites, we may begin to see various types of vegetation changes that could not be seen with previous satellites. The new findings contribute to understanding land-atmosphere carbon dioxide budgets," said Prof Ichii of Chiba University, a co-author of this study.

The Himawari-8 AHI geostationary satellite also acquires multi-band color images over the tropical Southeast Asia region every 10 minutes. It is expected that AHI geostationary sensor data would contribute to improving our understanding of vegetation dynamics and the effect of climate change in this cloud-prone tropical region.

Credit: 
Chiba University

Scientists discover a novel method to combat antibiotic-resistant bacteria

image: AMPs target and kill bacteria in such variable ways that few bacteria ever become resistant to these molecules; this makes AMPs uniquely suited to treating antibiotic-resistant bacteria, also called 'superbugs'. However, as of now, no one has been able to artificially create effective AMPs for use as antibiotics. The researchers' discovery offers a way to overcome this limitation and has huge potential for treating and preventing infections in post-surgery wounds, diabetic patients and those with weakened immune systems.

Image: 
Unilever

We humans are constantly battling bacteria and other microbes. In this war against the microbial world, we added antibiotics to our arsenal in the late 1920s. Suddenly, fighting off infections became so easy that we thought we had won the war.

We could not have been more wrong.

Over the years, bacteria have evolved so many clever ways of protecting themselves against antibiotics that the World Health Organization (WHO) now fears we may soon slip back into a situation similar to the pre-antibiotic era. The death toll from antimicrobial resistance is estimated to rise to 10 million deaths annually by 2050, with India carrying one of the largest burdens of drug-resistant pathogens worldwide. To compound this problem, the global pipeline for developing next-generation antibiotics is precariously thin.

In the context of this alarming public health threat, scientists from the Institute for Stem Cell Science and Regenerative Medicine (inStem) and Unilever joined forces to develop innovative strategies to deal with antimicrobial resistance. Together, the team probed the cellular mechanisms that regulate the release of antimicrobial peptides (AMPs), which are natural antibiotics produced by skin cells to fight off bacteria. AMPs target and kill bacteria in such variable ways that few bacteria ever develop resistance to them, thus making AMPs uniquely suited to treating antibiotic-resistant bacterial infections. The scientists' work led to the discovery of a new signalling pathway in skin cells that controls the long-term release of AMPs from these cells. By tweaking this pathway, researchers can induce AMP release from skin cells without any exposure to bacteria! This has tremendous potential in preventing and treating infections for post-surgery wounds, and for diabetic patients and those with weakened immune systems.

Apart from their role as natural antibiotics, AMPs are also known to be involved in wound healing in the skin. This fact spurred Dr. Amitabha Majumdar (Unilever R&D) to hypothesise that the same machinery used to release AMPs during wound healing could be harnessed to control AMP release from skin cells for treating or preventing infections. To test this, Dr. Majumdar contacted Dr. Colin Jamora of the Joint IFOM-inStem Research Laboratory at inStem's Centre for Inflammation and Tissue Homeostasis (CITH) - a group that works extensively on the mechanisms of wound healing in the skin.

When the joint team of scientists probed the cellular mechanism regulating AMP release, they discovered a new signalling pathway for long-term release of AMPs from skin cells. Usually, AMPs are released to fight off bacterial infections when direct contact occurs between skin epidermal cells and bacteria, and this process is triggered by a reduction in the levels of a protein called caspase-8.

Interestingly, the researchers found that reducing caspase-8 via molecular techniques is also enough to trigger the release of stored AMPs from skin cells. Just by modulating caspase-8 levels in the skin, AMP release can be controlled to prevent a whole spectrum of infections; this may be especially useful for diabetics and patients with weakened immune systems, who are highly susceptible to bacterial, yeast, fungal, and viral infections in post-surgery wounds.

"This fruitful collaboration illustrates how partnerships between academic institutions and industry benefit consumers and society", say Dr. Jamora and Dr. Majumdar.

Credit: 
National Centre for Biological Sciences

A study demonstrates the efficiency of a screening strategy to detect liver diseases

image: Estimates of survival of the cost-effectiveness model in the diagnosis of fibrosis.

Image: 
UPF

Non-alcoholic fatty liver disease (NAFLD) and alcohol-related liver disease (ALD) are currently the leading causes of chronic liver disease, liver cancer and liver-related mortality in developed countries.

Patients in advanced stages have a poor prognosis with regard to survival and quality of life. Similarly to cancer, liver diseases in their early stages are asymptomatic, and therefore, most patients are diagnosed in advanced stages. However, there is a lack of screening strategies in the field of public health for the detection of liver fibrosis.

A study published recently in the Journal of Hepatology, drafted within the LiverScreen European consortium with the participation of the Centre for Research in Health Economics (CRES-UPF) and Hospital Clínic de Barcelona, together with experts from various international universities, explores the cost-effectiveness of transient elastography (TE), a non-invasive test that uses ultrasound to evaluate the elasticity and stiffness of the liver, as a screening method for detecting liver fibrosis in primary care.

"The implementation of a screening programme to detect liver fibrosis, in primary care centres, is a highly cost-effective intervention. This approach could prove a valuable public health strategy. In fact, the UK's National Institute for Health and Care Excellence (NICE) already recommends it in patients at risk", says Miquel Serra-Burriel.

A study in patients from six countries

To carry out their research, the researchers included a total of 6,295 participants from six countries (France, Spain, Denmark, UK, Germany and Hong Kong). They correspond to seven independent prospective studies carried out previously (one cohort per country, except Spain with two), in which TE was used to diagnose liver fibrosis.

The cohorts of the different countries consisted of people of different ages and risk factors, such as high alcohol consumption. The cohorts from Denmark, France and the UK were designed to obtain biopsies to confirm liver fibrosis. "We used data from a subset of patients who had undergone a liver biopsy. After defining the best indicators, we applied them to our cohorts to evaluate the prevalence of significant fibrosis and accuracy of diagnosis", the CRES-UPF researcher affirms.

Then, these results were used to adjust an economic model that compares two different ways to detect significant fibrosis: the TE pathway compared to the standard care pathway, based on liver enzyme tests.

"The goal of using non-invasive detection of fibrosis by TE as a public health intervention was to achieve earlier, more reliable identification of the patient, refer her in a timely manner to the specialist, give her the appropriate treatment and include her in monitoring programmes", Miquel Serra-Burriel asserts.

A cost-effective public health intervention

The study shows that the detection of liver fibrosis using optimized algorithms is a highly cost-effective public health intervention, especially in the early stages of fibrosis, with a 12% probability of cost saving. "As expected, when we focus on patients with risk factors for chronic liver disease, including patients with diabetes, obesity and hazardous alcohol consumption, the screening programme is even more cost-effective", Miquel Serra-Burriel adds.

According to the authors of the study, implementing TE liver screening would require investing between 2,500 euros (at-risk population) and 6,500 euros (general population), adjusted for purchasing power parity (PPP), to gain one additional quality-adjusted year of life.
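The "euros per quality-adjusted year of life" figure is an incremental cost-effectiveness ratio (ICER): the extra cost of the screening pathway divided by the extra QALYs it yields compared with standard care. A minimal sketch, with hypothetical per-patient figures chosen only so the ratio lands on the reported ~2,500 euros/QALY for the at-risk population:

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of a new intervention versus standard care."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical per-patient costs and QALYs (not from the study):
# screening costs 1,250 EUR more and yields 0.5 extra QALYs.
print(icer(cost_new=2250.0, cost_std=1000.0,
           qaly_new=10.5, qaly_std=10.0))  # -> 2500.0 EUR per QALY
```

An intervention is typically judged cost-effective when its ICER falls below a country-specific willingness-to-pay threshold.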

Compared to the subsequent stages of chronic liver disease, significant liver fibrosis screening results in a ten-fold cost-effectiveness improvement, highlighting the importance of early identification, referral and monitoring of these patients.

The authors state that future studies should test whether a two-step approach using serum biomarkers followed by TE would be cost-effective, and analyse the cost savings entailed in this combined system for screening the population.

A multidisciplinary international project

LiverScreen, which involves health professionals, economists, statisticians and quality controllers, has received funding within EIT Health 2018, a project which promotes healthy living, active ageing and improved health in the context of the European Commission's Horizon 2020 research and innovation programme. It has also received a Spanish Plan Nacional I+D+i grant, co-funded by the Carlos III Health Institute and the European Union's ERDF.

In addition to CRES-UPF, in Catalonia LiverScreen also involves institutions such as Hospital Clínic, the IDIBAPS research centre and the Catalan Health Institute (ICS).

Credit: 
Universitat Pompeu Fabra - Barcelona

Silver improves the efficiency of monograin layer solar cells

image: Next-generation lightweight flexible monograin layer solar cell developed by TalTech researchers.

Image: 
Professor Jüri Krustok

As a result of their two-year joint project, materials researchers at Tallinn University of Technology have improved the efficiency of next-generation solar cells by partially substituting copper with silver in the absorber material.

Economic development and the general growth in energy consumption have led to an increased demand for environmentally friendly energy production at lower cost. The most viable options lie in the renewable energy sector. New energy technologies should be clean, low-cost and environmentally friendly, with versatile applications, which makes solar energy the strongest candidate today. TalTech's materials researchers are working on the development of next-generation photovoltaics - monograin layer solar cells.

Senior Researcher at TalTech Laboratory of Photovoltaic Materials Marit Kauk-Kuusik says, "The production of traditional silicon solar cells that started back in the 1950s is still very resource and energy consuming. Our research is focused on the development of the next generation of solar cells, i.e. thin-film solar cells based on compound semiconductors."

A thin-film solar cell consists of several thin layers of semiconductor materials. For efficient thin-film solar cells, a semiconductor with very good light-absorbing properties must be used as the absorber. Silicon is not a suitable absorber for thin-film solar cells: its non-optimal light absorption would require a rather thick absorber layer. TalTech researchers are developing compound semiconductor materials called kesterites (Cu2ZnSn(Se,S)4), which in addition to excellent light absorption contain earth-abundant, low-cost chemical elements (e.g. copper, zinc, tin, sulphur and selenium). To produce kesterites, TalTech researchers use a monograin powder technology that is unique in the world.

"The monograin powder technology we are developing differs from other similar solar cell manufacturing technologies used in the world in terms of its method. Compared to vacuum evaporation or sputtering technologies, which are widely used to produce thin-film structures, the monograin powder technology is less expensive," Marit Kauk-Kuusik says.

Powder growth technology is the process of heating chemical components in a special chamber furnace at 750 degrees Celsius for four days. The mass obtained is then washed and sieved in special machines. The synthesized high-quality microcrystalline powder, monograin powder, is used for the production of solar cells. The powder technology differs from other production methods in particular due to its low cost, since it does not require any expensive high-vacuum equipment.

The monograin powder consists of unique microcrystals that form parallel-connected miniature solar cells in a large module (covered with an ultra-thin buffer layer). This provides major advantages over the previous generation of photovoltaic modules, i.e. silicon-based solar panels: the cells are lightweight, flexible and can be transparent, while being environmentally friendly and significantly less expensive.

The key indicator of photovoltaic quality is efficiency. Efficiency depends not only on the properties of the materials used and the structure of the solar cell, but also on solar radiation intensity, angle of incidence and temperature.

The ideal conditions for achieving the maximum efficiency are in cold sunny mountains, not in a hot desert, as one would expect, because heat does not improve solar cell's efficiency. It is possible to calculate the maximum theoretical efficiency for each solar panel, which, unfortunately, has so far been impossible to achieve in reality, but it is an objective to pursue.
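The temperature point can be made concrete with the standard linear power-temperature model used for photovoltaic panels. The coefficient below is a typical value for silicon modules and is assumed here purely for illustration; the kesterite cells in the article may behave differently:

```python
def panel_power(p_stc, cell_temp_c, gamma=-0.004):
    """Panel output power given its rating p_stc (watts at 25 degC,
    standard test conditions) and the cell temperature, using a linear
    temperature coefficient gamma per degC. gamma = -0.4%/degC is
    typical for silicon modules (assumed, not measured on kesterites)."""
    return p_stc * (1 + gamma * (cell_temp_c - 25))

print(panel_power(300, 65))  # hot desert cell temperature -> ~252 W
print(panel_power(300, 5))   # cold, sunny mountain air -> ~324 W
```

The negative coefficient is why a cold, sunny mountain site can outperform a hot desert at the same irradiance.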

"We have reached the point in our development where partial replacement of copper with silver in kesterite absorber materials can increase efficiency by 2%. This is because copper is highly mobile in nature, causing unstable solar cell efficiency. The replacement of 1% copper with silver improved the efficiency of monograin layer solar cells from 6.6% to 8.7%," Marit Kauk-Kuusik says.

Two TalTech materials research groups, the photovoltaic materials and the optoelectronic materials physics research groups, published the article "The effect of Ag alloying of Cu2(Zn,Cd)SnS4 on the monograin powder properties and solar cell performance" in the high-quality scientific journal Journal of Materials Chemistry A.

The monograin layer solar cell technology is implemented by the Estonian-Austrian joint venture Crystalsol GmbH. In order to commercialize the photovoltaic technology developed by our researchers, the solar cell efficiency should be increased to 15%.

Credit: 
Estonian Research Council

Oxygen shaped the evolution of the eye

image: The evolution of the size of the eye (A) and retina (B). The evolution of structures to supplement retinal oxygen supply is tightly coupled to the evolution of large eyes and a thick retina. The pecten oculi is a vascular structure found in the eyes of birds, the choroid rete mirabile is a gas gland found in the eyes of fishes, and intra-retinal capillaries are found in some mammals, including humans.

Image: 
Christian Damsgaard, AU

Convergent origins of new mechanisms to supply oxygen to the retina were directly linked to concurrent enhancements in the functional anatomy of the eye.

In his "On the Origin of Species", Darwin used the complexity of the eye to argue his theory of natural selection and the eye has continued to fascinate and trouble evolutionary biologists ever since.

In a paper published today in eLife, researchers from Aarhus University teamed up with scientists from eight international institutions to explore the physiological requirements for the evolution of improved eyesight.

They argue that the evolution of high-acuity vision in ancestral animals was constrained by the ability to deliver sufficient oxygen to cells in the retina. Their study uncovered a fascinating pattern: mechanisms to improve retinal oxygen supply capacity evolved in concert with enhanced retinal morphology to improve vision. The model fits across all bony vertebrates, from fish through to birds and mammals. These findings add a new component to our understanding of the evolution of the eye, which has fascinated and troubled evolutionary biologists for centuries.

The rises and falls of retinal oxygen supply

The study took advantage of the diversity in physiology and anatomy among the eyes of 87 animal species, including fishes, amphibians and mammals. By placing these species on the tree of life, the authors unravelled the evolutionary history of the eye from a 425-million-year-old extinct ancestor of modern vertebrates to present-day animals. They identified three distinct physiological mechanisms for retinal oxygen supply that are always associated with improved vision. In fishes, for example, mutations in haemoglobin were associated with the ability to deliver oxygen to the retina at exceptionally high oxygen partial pressures, overcoming the significant diffusion distance to the retinal cells.

The authors show that the origin of this mechanism around 280 million years ago was associated with a dramatic increase in eye size and retinal thickness that is directly linked to improved light sensitivity and spatial resolution. This haemoglobin mechanism was subsequently lost several times, possibly to avoid oxidative damage and gas bubble formation in the eye.

Warm-blooded dinosaurs shaped the vision of mammals

The authors show that increased reliance on vision in mammals was associated with the evolution of capillary beds inside the retina despite the potential trade-off to visual acuity imposed by the bending of light by red blood cells.

Retinal capillaries in mammals originated around 100 million years ago, when dinosaurs evolved endothermy. Endothermy allowed these Mesozoic dinosaurs to hunt at night, which forced the previously nocturnal mammals into a diurnal lifestyle with an increased reliance on vision.

The new model of eye evolution shows that the evolution of intra-retinal capillaries coincided precisely with the improvements in vision around 100 million years ago. Further, it shows that some mammals lost retinal capillaries when they became less reliant on vision (e.g., echolocating bats).

Oxygen and vision go hand in hand

Overall, this analysis shows that the functional morphology of the eye has changed dynamically throughout animal evolution. Changes in eye morphology went hand in hand with parallel changes in retinal oxygen supply, each likely driven by different tradeoffs in oxygen delivery. These tradeoffs appear to have been an acceptable price for the improved visual acuity gained as the retina was allowed to thicken.

Overall, this study shows that adaptations to ensure oxygen delivery to the retina were a physiological prerequisite for the functional evolution of the eye.

Credit: 
Aarhus University

What blocks bird flu in human cells?

image: Microscopic image of human cells infected with flu viruses: The cell nucleus is shown in blue and the viral proteins in green. The viruses that have adapted to human hosts leave the cell nucleus after 14 hours (left), while the viruses that have adapted to bird hosts remain stuck in the nucleus.

Image: 
Selbach Lab, MDC.

Normally, bird flu viruses do not spread easily from person to person. But if this does happen, it could trigger a pandemic. Researchers from the MDC and RKI have now explained in the journal Nature Communications what makes the leap from animals to humans less likely.

Whenever people suddenly become infected with a bird flu virus such as H5N1, H7N9, or H5N6, the World Health Organization (WHO) has to assess the risk: Are these the first signs of a pandemic? Or is it just a few dozen or hundred cases that have only arisen through close contact with infected poultry? Researchers led by Professor Matthias Selbach from the Max Delbrueck Center for Molecular Medicine (MDC) have now found another piece of the puzzle that may be important in this initial assessment. In a paper published in Nature Communications, the researchers explain that avian influenza A viruses (IAVs) are unable to transform infected human cells into effective virus factories because they do not produce enough of the matrix protein M1 following infection. The virus requires this protein to export the many copies of its genetic material from the cell nucleus - a prerequisite for building new viruses.

Not all flu is the same - the name refers to a large family of viruses. Each member of this family is named after two prickly growths on the virus's surface: hemagglutinin (H), which enables the virus to infect human and animal cells where it can multiply, and neuraminidase (N), which helps the virus's offspring to extract themselves from the infected cell. In waterfowl, there are 16 known hemagglutinin subtypes and nine known neuraminidase subtypes. That results in at least 144 possible combinations that are constantly changing and adapting to new hosts - like chickens, for example, but also mammals including horses, pigs, and humans.
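The 144 figure is simply the product of the 16 known hemagglutinin and 9 known neuraminidase subtypes; a quick enumeration of the naming scheme:

```python
from itertools import product

# All combinations of the 16 hemagglutinin (H) and 9 neuraminidase (N)
# subtypes known in waterfowl, named H<h>N<n>:
subtypes = [f"H{h}N{n}" for h, n in product(range(1, 17), range(1, 10))]

print(len(subtypes))  # 144
print(subtypes[:3])   # ['H1N1', 'H1N2', 'H1N3']
assert "H5N1" in subtypes and "H7N9" in subtypes and "H5N6" in subtypes
```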

Such new virus variants are often more dangerous than seasonal flu, because the human immune system has never encountered them before. Some people find themselves defenseless, while the immune system of others reacts so violently that the person's own resistance damages the body. In the worst case scenario, a pandemic could cost millions of lives. The Spanish flu of 1918, for example, claimed more than 50 million victims. Researchers around the world are therefore trying to understand the rules that determine when there is the possibility of a pandemic, and when there is not.

Why are human cells bad virus factories for bird flu?

"Hemagglutinin in humans and birds has a slightly different chemical structure, for example, which makes it more difficult for an avian influenza virus to infiltrate a human cell than a bird's cell," explains Selbach. Boris Bogdanow, a PhD student in Selbach's research group and the lead author of the current study, focused his research specifically on what other natural species barriers exist in flu viruses.

Matthias Selbach's group analyses proteins using quantitative mass spectrometry. In collaboration with the Robert Koch Institute (RKI), Boris Bogdanow and his colleagues infected human pulmonary epithelial cells separately with a bird flu virus and a human flu virus. They then measured the quantity of all newly produced proteins in the mass spectrometer. Postdoctoral researcher Dr. Katrin Eichelbaum had also developed a method that enables the precise differentiation of new and old proteins. "In the first analysis, we did not find any major differences between the two strains," reports Boris Bogdanow. "At first glance, the avian flu virus and the human virus displayed little difference with regard to protein production, which was quite surprising."

But the devil is in the detail, so Bogdanow performed more in-depth analyses to take a closer look at the protein distribution. In doing so, he came across the matrix protein M1, much larger quantities of which were produced in the lung cells infected with the human virus. The M1 protein is responsible, among other things, for exporting the replicated viral RNA from the nucleus of the infected cells and then assembling it with other newly produced viral proteins to form flu virus offspring. Could it be, therefore, that the viral RNA of bird flu viruses in human cells remains trapped in the cell nucleus because too little M1 protein is present?

Another piece of the puzzle

Fluorescence microscopy confirmed these suspicions. The genetic material of the bird flu virus was far less capable of breaking out of the cell nucleus than the RNA of the human flu virus. But why? With the help of the MDC's sequencing platform and Professor Irmtraud Meyer, the researchers discovered a small segment in the viral RNA of the avian flu virus that affects alternative splicing. "We call this a cis-regulatory element," says Bogdanow. "Alternative splicing regulates which proteins are ultimately made from a single gene, because many genes code for more than one protein. When human cells are attacked by bird flu, this element ensures that more M2 rather than M1 protein is produced."

In order to assess the relevance of this result, Professor Thorsten Wolff and his research team from the Robert Koch Institute transferred the cis-regulatory element from the bird virus to the human virus. This did indeed result in the human flu virus replicating less effectively in human lung cells. Selbach's team even conducted a similar experiment with Spanish flu viruses, whose genetic material was isolated in the nineties from graves in the permafrost soil of Alaska. However, they only used a small part of the viral RNA and not the entire virus for the experiment. Nevertheless, they were also able to confirm their theory on the cis-regulatory element for this virus.

"How pathogenic an avian flu virus is and whether or not it has pandemic potential depends, of course, on many factors," says Selbach. "A study on cell cultures cannot cover all these factors. Nevertheless, it might be useful in future to include an analysis of this RNA segment in the risk assessment of avian influenza viruses."

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Research revises classification of acute myeloid leukemia & myelodysplastic syndrome

image: This is first author and presenter Ilaria Iacobucci, Ph.D., of the St. Jude Department of Pathology.

Image: 
St. Jude Children's Research Hospital

Results from a study conducted by St. Jude Children's Research Hospital and the Munich Leukemia Laboratory were presented today as a late-breaking abstract at the American Society of Hematology annual meeting. The study integrates genomic and transcriptomic sequencing to provide the most detailed classification of acute myeloid leukemia (AML) and myelodysplastic syndrome (MDS) to date.

The researchers conducted whole genome sequencing and transcriptome sequencing (RNA-seq) on 598 adults with AML and 706 with MDS. The researchers found evidence to support several different biologic subgroups. They found sets of mutations that cooperate to drive the development of leukemia and predict poor outcomes as well as groups of mutations that indicate positive responses to treatment.

"These diseases have a complex genomic background that we cannot ignore if we want to treat these patients optimally," said first author and presenter Ilaria Iacobucci, Ph.D., of the St. Jude Department of Pathology. "Therapy for AML has changed little in the last five decades, so we need to better understand the nature of genomic changes in order to offer new therapies to these patients."

Our understanding of the genetics of these diseases has advanced dramatically in the last decade due to intensive sequencing efforts. However, most of these studies used a targeted sequencing-based approach or were limited to specific subtypes. Now, by integrating whole genome and transcriptome sequencing, researchers can further unravel the biology of these diseases.

AML and MDS are similar since they both derive from aberrations occurring in hematopoietic stem cells and cause impaired differentiation. However, the diseases differ in immature white blood cell (blast) counts and cell morphology. The researchers wanted to learn more about how the diseases divide into genetic subgroups to inform treatments based on their biology. 

"This work lifts a veil on the constellations of mutations that come together to drive groups of cases that have similar or differing behavior," said co-senior author Charles Mullighan, M.D., MBBS, of the St. Jude Department of Pathology and deputy director of the Comprehensive Cancer Center. "It strongly supports the way the field is moving, that unbiased sequencing is needed to accurately diagnose and risk-stratify our patients."

"For all types of leukemia, we struggle with piecing together information from sequencing approaches to find the best predictors of outcome," Mullighan said. "There is data here that brings clarity to prognosis, which may ultimately help in identifying patients upfront in need of more or less therapy."

The findings demonstrate how different mutation patterns contribute to the growth of leukemia cells. For example, prognosis is generally considered quite good for patients whose disease has an NPM1 mutation. The researchers found that this subgroup can be further divided based on the presence of other mutations to indicate who is most likely to have a favorable response and thus might not require as stringent a treatment regimen.

The work also reveals a new finding regarding AML fueled by RUNX1 mutations. These mutations have only provisionally been considered a distinct subgroup by World Health Organization classification. Yet, the new results show that RUNX1 AML is a distinct subtype with a poor prognosis.

"We all feel the assessment is incomplete when you're just looking at the number of cancer cells under a microscope," said co-senior author Torsten Haferlach, M.D., of Munich Leukemia Laboratory. "We now have the options and ability to move forward, addressing all of the known targets and expression patterns that are present in these cases. Ours is the first study of this size showing this is feasible and applicable for routine clinical use."

Credit: 
St. Jude Children's Research Hospital