
Concordia researchers develop new method to evaluate artificial heart valves

image: Lyes Kadem (left) and Ahmed Darwish.

Image: 
Concordia University

Researchers at Concordia have devised a technique to detect obstructions in a type of mechanical heart valve, an advance they believe will contribute to safer follow-up methods for cardiologists and their patients.

The team led by Lyes Kadem, professor in the Department of Mechanical, Industrial and Aerospace Engineering at the Gina Cody School of Engineering and Computer Science, published their findings in the journal Artificial Organs. PhD candidate Ahmed Darwish was lead author, and Giuseppe Di Labbio, assistant professor Wael Saleh and Othman Smadi of Hashemite University in Jordan contributed.

The researchers used high-tech equipment to look at the flow downstream of a bi-leaflet mechanical heart valve (BMHV). The equipment included a custom-made double-activation left heart duplicator designed and created in their lab by Concordia undergraduate students, a high-speed camera and a laser.

Despite the impressive name, the BMHV is a simple ring with an inner diameter of about 2.5 cm. Two carbon-based leaflets inside the ring open and close as the heart pumps blood out of the left ventricle and into the aortic arch, which sends the blood out into the body.

BMHVs replace damaged aortic valves and are implanted via open-heart surgery. An obstructed BMHV can be catastrophic.

Mapping blood flow

The method the team designed maps simulated blood flow patterns that result from six different heart valve blockages. The researchers pumped a blood-mimicking liquid seeded with particles through the heart duplicator and photographed the particles as they moved.

Using a technique called particle image velocimetry, they were able to determine the flow velocity. It allowed them to simulate what blood flow would look like with the leaflets completely clear of obstruction, when they were partially obstructed and when fully obstructed.
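Particle image velocimetry recovers flow velocity by cross-correlating pairs of particle images taken a short, known time apart: the correlation peak gives the mean particle displacement, and dividing by the time delay gives velocity. A minimal sketch of that principle, with an invented particle pattern and illustrative pixel size and pulse separation (not values from the study):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the mean particle displacement between two interrogation
    windows via FFT-based cross-correlation (the core of PIV)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past half the window size correspond to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shift)  # (rows, cols) displacement in pixels

# Synthetic check: a random particle pattern shifted by a known amount
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
dy, dx = 3, 5                       # known displacement in pixels
frame_b = np.roll(frame_a, (dy, dx), axis=(0, 1))

disp = piv_displacement(frame_a, frame_b)

pixel_size = 50e-6                  # 50 um per pixel (illustrative)
dt = 1e-3                           # 1 ms between laser pulses (illustrative)
velocity = disp * pixel_size / dt   # metres per second
print(disp, velocity)
```

In practice the frames are divided into many small interrogation windows, yielding a full velocity field rather than a single vector.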

“Imagine you are outside a stadium and the crowd is leaving from three gates next to each other,” says Kadem, the Concordia Research Chair for Cardiovascular Engineering and Medical Devices.

“If the gates are open, you will see a uniform distribution of people leaving from all three openings. If one gate is closed, you will see more people leaving from the two others, and none from the one that is closed. Therefore, you will deduce that there is a blockage.”

When applied using phase-contrast magnetic resonance imaging (MRI), the method is both non-invasive and radiation-free, says Kadem. That means doctors can use it for BMHV dysfunction detection and follow-up.

“Currently, ultrasound is the best way to detect valve dysfunction,” he says. “The next step is cinefluoroscopy, which uses radiation. You can’t use this method as a follow-up because it exposes the patient to radiation and increases their risk of cancer.”

Darwish and Kadem note that artificial heart valves are generally safe but not risk-free. The chance of dysfunction ranges from 0.1 per cent to 6 per cent, and a dysfunction can occur anywhere from one hour to 20 years after the valve replaces the organic one. Dysfunctions can be fatal: when one results in an emergency, the mortality rate is 28.6 per cent.

This research is supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

Read the cited paper: Experimental investigation of the flow downstream of a dysfunctional bileaflet mechanical aortic valve.

Journal: Artificial Organs

DOI: 10.1111/aor.13483

Credit: 
Concordia University

People with multiple physical conditions have faster brain decline, higher suicide risk

Having arthritis, or diabetes, or heart disease can change a person's life, getting in the way of daily activities and requiring special diets and medicines.

But what happens when new conditions get stacked on top of that first one, creating a burden of multiple diseases that need daily managing?

As millions of Americans cope with just such a combination of conditions, a new approach to measuring what their lives are actually like has emerged.

Multimorbidity scores can help doctors understand their patients' overall prognosis - and can help researchers identify special risks faced by people with multiple chronic illnesses.

In fact, new research by a University of Michigan team shows that people with higher multimorbidity scores had a much faster decline in their thinking and memory abilities than those with lower scores.

Even though most of the chronic conditions included in the index have no direct relationship to brain health, the higher a person's score, the faster they declined over a 14-year period in their ability to recall words and do simple math.

The results, published online in the Journals of Gerontology: Series A, used data from more than 14,260 people studied multiple times over a decade or more through the Health and Retirement Study based at U-M.

Meanwhile, just months ago, the same index revealed that people with higher scores were more than twice as likely to die by suicide as those with lower scores, and that they had worse mental health-related quality of life in general.

Those findings, made by calculating the multimorbidity index for participants in three long-term studies of more than 250,000 health professionals including dentists, podiatrists, chiropractors and nurses, were published in the Journal of the American Geriatrics Society. They show the mental and physical burden of living with multiple diseases.

U-M researcher and Michigan Medicine primary care physician Melissa Wei, M.D., M.P.H., M.S., has spearheaded the development of the scoring system, called the multimorbidity weighted index or MWI.

Assessing the total impact of a person's health conditions is important because 80% of adults over the age of 65 have more than one condition, and 45% of all adults have more than one, says Wei.

Careful tool development

Over the course of years, she compiled and tested a way to assess what life is like for people with multiple chronic conditions, from glaucoma and heart arrhythmias to multiple sclerosis and a history of knee, hip and spinal disc problems.

But it's not as simple as counting the number of diseases and conditions a person has been diagnosed with, the researchers caution.

Rather, the risk of cognitive decline, suicide or poor mental well-being has to do with the total impact that their unique combination of conditions has on their quality of life.

Because different conditions affect people in different ways, the scoring system takes into account how that happens - and how those effects might interact with one another.

Early this year, Wei and colleagues published a study showing that the risk of dying rose 8% for every single-point rise in MWI score, and that the rise in score tracked closely with the decrease in physical abilities of people with multiple conditions.
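If the 8% figure is interpreted multiplicatively, as in a proportional-hazards model, the risk compounds across points. A back-of-the-envelope sketch under that assumption; the 10-point difference between patients is an invented example, not from the study:

```python
# Hazard ratio of 1.08 per one-point rise in MWI score (the study's 8%);
# comparing two patients 10 points apart is a hypothetical illustration.
hr_per_point = 1.08
point_difference = 10

relative_risk = hr_per_point ** point_difference
print(f"{relative_risk:.2f}")  # prints 2.16: roughly double the risk
```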

That study, also in Journals of Gerontology: Series A, also used Health and Retirement Study data, from 18,174 people over the age of 51 who took part in the study over 11 years.

How to calculate a multimorbidity score

While the MWI scoring approach has been useful in research, Wei and her colleagues now hope that clinicians can use it to help them understand the needs and manage the care of patients with multiple conditions.

A free MWI scoring tool is now available for clinician use at the website ePrognosis, run by the University of California, San Francisco.

Any clinician can enter a few pieces of anonymous information about a patient over age 54 into the calculator and come up with a score for them. The results page also gives a breakdown of how likely people with a similar health profile are to die within the next 10 years, or to experience a decline in their physical functioning in the next four to eight years.

While Wei cautions clinicians not to use the score as the sole indicator of any one patient's prognosis, she hopes that the score can help guide discussions about a range of decisions from preventive care to elective surgery to living arrangements and end-of-life care preferences.

Using multimorbidity research

The research that Wei and colleagues have done on the impacts of high MWI scores across groups of patients could also help guide care.

For instance, the finding that suicide risk rose sharply as MWI score rose could help clinicians think about which patients might be most in need of depression and suicide screening. As patients develop more conditions with age, physicians may want to monitor their mental health more closely, and offer appropriate lifestyle advice and treatment.

"As clinicians, we are more likely to assess suicide risk in people with known depression or other mental health or substance use issues, but we may not automatically consider that those with more 'physical' conditions only could also be at higher risk," says Wei. "Multimorbidity has several downstream consequences. Physical impairments are just the beginning. As conditions accumulate and physical functioning deteriorates, we have found this is closely linked to worse mental health, social health, and eventually premature mortality."

In short, she says, "The association between the MWI score with suicide risk and overall mental well-being warrants attention."

Having a high MWI score, she says, makes someone functionally older than their "calendar" or chronologic age would suggest. Clinicians can use the score to help them think about the "biologic age" of the patient before them based on the life span expectations for people with similar scores.

Using scores clinically could also help providers ensure that patients with high scores receive care management services, or other support to help them live their best life and keep on top of the tests, treatments and lifestyle changes that can help them do so.

"We want patients to have good insight into how the conditions they've developed over the years are affecting their well-being, and be open to communicating with their care teams about how those conditions affect their functioning, quality of life and overall health now and in the future," says Wei. "We also know that social support, and having a strong purpose in life, can protect against some of the detrimental effects of multiple conditions. We need to help patients understand these connections, foster their development early on, and sustain them through each stage of life and changes in health."

Credit: 
Michigan Medicine - University of Michigan

Serotonin linked to somatic awareness, a condition long thought to be imaginary

An international team spearheaded by researchers at McGill University has discovered a biological mechanism that could explain heightened somatic awareness, a condition where patients experience physical discomforts for which there is no physiological explanation.

Patients with heightened somatic awareness often experience unexplained symptoms - headaches, sore joints, nausea, constipation or itchy skin - that cause emotional distress, and are twice as likely to develop chronic pain. The condition is associated with illnesses such as fibromyalgia, rheumatoid arthritis and temporomandibular disorders, and is thought to be of psychological origin.

"Think of the fairy tale of the princess and the pea," says Samar Khoury, a postdoctoral fellow at McGill's Alan Edwards Centre for Research on Pain. "The princess in the story had extreme sensitivity where she could feel a small pea through a pile of 20 mattresses. This is a good analogy of how someone with heightened somatic awareness might feel; they have discomforts caused by a tiny pea that doctors can't seem to find or see, but it's very real."

Thanks to an existing study on genetic association, Samar Khoury and her colleagues might have found the elusive pea capable of explaining somatic awareness.

Their work, recently published in the Annals of Neurology, used data available through the Orofacial Pain: Prospective Evaluation and Risk Assessment cohort and demonstrates that patients who suffer from somatic symptoms share a common genetic variant. The mutation leads to the malfunctioning of an enzyme critical for the production of serotonin, a neurotransmitter with numerous biological functions.

"I am very happy and proud that our work provides a molecular basis for heightened somatic symptoms," says Luda Diatchenko, lead author of the new study and a professor in McGill's Faculty of Dentistry. "We believe that this work is very important to patients because we can now provide a biological explanation of their symptoms. It was often believed that there were psychological or psychiatric problems, that the problem was in that patient's head, but our work shows that these patients have lower levels of serotonin in their blood."

The results of their study have laid the groundwork for the development of animal models that could be used to better characterize the molecular pathways in heightened somatic awareness. Above all, Diatchenko and Khoury hope their work will pave the way for treatment options.

"The next step for us would be to see if we are able to target serotonin levels in order to alleviate these symptoms," says Diatchenko, who holds the Canada Excellence Research Chair in Human Pain Genetics.

Credit: 
McGill University

Astronomers uncover first polarized radio signals from gamma-ray burst

video: Scientists have measured the size of magnetic field patches in a gamma-ray burst jet for the first time by observing polarized radio waves. Individual patches of ordered magnetic fields in the jet each have random directions of polarization. The observed polarization signal is the average from all visible patches (inside the white circle), and is proportional to one over the square root of the number of patches we can see, and thus is much less than the value expected (about 60%) if the field were completely ordered within the white region. As the number of patches in the visible area grows, the measured polarization declines with time.

Image: 
Kitty Yeung (art) and Tanmoy Laskar (animation)

EVANSTON, Ill. -- An international team of astronomers has captured the first-ever polarized radio waves from a distant cosmic explosion.

This explosive event (known as gamma-ray burst GRB 190114C) is part of a class of the most energetic explosions in the universe. It was produced when a star -- much more massive than our sun -- collapsed to form a black hole.

Gamma-ray bursts produce powerful jets that travel close to the speed of light and shine with the incredible luminosity of more than a billion suns combined. Astronomers have struggled to understand how these jets form and why they seem to appear only in gamma-ray bursts -- but not in other explosions, such as ordinary supernovae.

Because these jets are extremely bright at radio wavelengths, the discovery of polarized radio signals may offer new clues to help solve this mystery. Polarization is a property of light that indicates how a magnetic field is organized and structured in a jet.

"We know that only a very tiny fraction (less than 1%) of massive stars form jets when they collapse," said Northwestern University's Raffaella Margutti, who contributed to the study. "But we have not known how they manage to launch these outflows with such extreme properties, and we don't know why only a few stars do this."

"This measurement opens a new window into gamma-ray burst science and the studies of energetic astrophysical jets," said Tanmoy Laskar, a postdoctoral researcher at the University of Bath in the U.K. and lead author of the study. "We would like to understand whether the low level of polarization measured in this event is characteristic of all gamma-ray bursts and, if so, what this could tell us about the magnetic structures in gamma-ray burst jets and the role of magnetic fields in powering jets throughout the universe."

The paper was published last week in the Astrophysical Journal Letters.

The international team included three astrophysicists from Northwestern's Weinberg College of Arts and Sciences: Kate Alexander, Wen-fai Fong and Margutti. All are members of Northwestern's Center for Interdisciplinary and Exploratory Research in Astrophysics (CIERA).

Astronomers have hypothesized that cosmic magnetic fields might flow through the jets, helping them form and providing structural support. However, the physical extent of these magnetic fields, which has implications for the jet-launching mechanism, had never before been measured.

To obtain these measurements, the international team employed a novel trick. They observed the jets in linearly polarized light, which is sensitive to the size of magnetic field patches. Larger magnetic field patches, for example, produce more polarized light.

On January 14, 2019, a flash of gamma rays triggered NASA's Swift satellite, which alerted astronomers of the burst's location in the direction of the constellation Fornax. The astronomers then used the Atacama Large Millimeter/Submillimeter Array (ALMA) telescope in Chile to search for radio waves from the explosion, which occurred more than 4.5 billion years ago in a galaxy 7 billion light-years away.

"Magnetic fields are ubiquitous but notoriously difficult to constrain in our universe," said Fong, an assistant professor of astrophysics. "The fact that we have been able to detect their presence -- let alone in the fastest jets we know of -- is an incredible and storied feat of observation."

The team detected a subtle, but revealing, polarization signal of 0.8%, implying magnetic field patches about the size of our solar system. Next, the researchers will combine this new information with data from X-ray and visible light telescopes.
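Averaging over many randomly oriented magnetic field patches suppresses the net polarization roughly as the fully ordered value (about 60%) divided by the square root of the number of patches in view, so the 0.8% detection implies how many patches are visible. A quick back-of-the-envelope estimate under that scaling:

```python
# Averaging over N randomly oriented patches suppresses polarization:
# p_observed ~= p_ordered / sqrt(N), hence N ~= (p_ordered / p_observed)**2.
p_ordered = 0.60    # expected polarization if the field were fully ordered
p_observed = 0.008  # the 0.8% signal detected with ALMA

n_patches = (p_ordered / p_observed) ** 2
print(round(n_patches))  # prints 5625: thousands of independent patches
```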

"The lower frequency data from the Very Large Array (VLA) in New Mexico helped confirm that we were seeing the light from the jet itself rather than from the interaction of the jet with its environment," said Alexander, a NASA Einstein Fellow who led the VLA observations.

"This is a truly remarkable measurement," said Margutti, "both from the technical side and for its deep scientific implications on the nature of magnetic fields in the most relativistic sources known in our universe."

Credit: 
Northwestern University

State initiative to address disparities in mother's milk for very low birth weight infants

Boston - Researchers at Boston Medical Center initiated a statewide quality improvement initiative to increase mothers' ability to produce and provide milk for very low birth weight infants at discharge, as well as to reduce the racial/ethnic disparities in milk production and provision to these infants. A new study, published June 18th in Pediatrics, indicates that the initiative yielded positive results in improving rates of prenatal human milk education, early milk expression and skin-to-skin care among mothers of very low birth weight infants during the initial hospitalization, but did not lead to sustained improvement in mother's milk provision at hospital discharge.

Mother's milk has many benefits for very low birth weight infants, including a reduction in necrotizing enterocolitis (infection of the intestine), sepsis and chronic lung disease, and improvement in later childhood development. However, mothers of very low birth weight infants, who are born prematurely, often have challenges making milk. In addition, previous research has shown racial/ethnic disparities in mother's milk provision at discharge or transfer, with white mothers having a higher rate of mother's milk provision.

The researchers examined three years of data from 1,670 mother-very low birth weight infant pairs from 10 level 3 neonatal intensive care units in Massachusetts. They found that the quality improvement program significantly improved hospital-based breastfeeding support practices, as well as first milk expression within six hours of birth, and any skin-to-skin care in the first month in all racial/ethnic groups. Although the researchers found no racial/ethnic disparities in provision of mother's milk for the first three weeks of hospitalization, disparities emerged after three weeks.

"Although we were able to show improvement in our process measures and any mother's milk for the first three weeks of hospitalization, these did not lead to sustained improvement in mother's milk provision at discharge," said corresponding author Margaret G. Parker, MD, MPH, a neonatologist at Boston Medical Center and assistant professor of pediatrics at BU School of Medicine. "While we did not find improvements in our main outcome, we did find several successful initiatives that can inform other hospitals looking to address this issue."

The authors note that, given these results, further research is needed on the effect of factors later in hospitalization on provision of mother's milk at discharge. This work was supported by the W.K. Kellogg Foundation.

Credit: 
Boston Medical Center

A forest of nano-mushroom structures keeps this plastic clean and stain-free

PITTSBURGH (June 19, 2019) -- Technologies like solar panels and LEDs require a cover material that repels water, dirt and oil while still letting plenty of light through. There is also interest in new flexible materials so these devices can be incorporated into a variety of creative applications like curtains, clothes, and paper. Researchers from the University of Pittsburgh's Swanson School of Engineering have created a flexible optical plastic that has all of those properties, finding inspiration in a surprising place: the shape of Enoki mushrooms.

The research, "Stain-Resistant, Superomniphobic Flexible Optical Plastics Based on Nano-Enoki Mushrooms," was published in the Journal of Materials Chemistry A (doi:10.1039/C9TA01753D).

The researchers created a plastic sheet whose surface bears tall, thin nanostructures with larger tops, like an Enoki mushroom. Named nano-enoki polyethylene terephthalate (PET), the nanostructured coating makes the plastic sheet superomniphobic, repelling a wide range of liquids while maintaining high transparency. The surface repels not only water but also milk, ketchup, coffee and olive oil. It also has high transparency and high haze, meaning it lets more light through but scatters that light. That makes it ideal for integrating with solar cells or LEDs, and combined with its flexibility and durability, means it could be used in flexible lighting or wearable technology.

"The key thing with these structures is the shape - it keeps liquid on top of the nanostructure. This is the best in the literature so far in terms of high transparency, high haze and high oil contact angle," explains Sajad Haghanifar, lead author of the paper and doctoral candidate in industrial engineering at Pitt. "We show that substances that usually stain and leave residue behind, like mustard and blood, fall completely off the surface, even after they've dried." Videos show how the dried mustard and blood flake off the surface when the surface is tilted.

"The lotus leaf is nature's gold standard in terms of a liquid-repellant and self-cleaning surface," says Paul Leu, PhD, associate professor of industrial engineering, whose lab conducted the research. Dr. Leu holds secondary appointments in mechanical engineering and materials science and chemical engineering. "We compared our nano-enoki PET with a lotus leaf and found that ours was better at repelling more kinds of liquids, including olive oil, blood, coffee, and ethylene glycol. The surfaces not only resist staining from various liquids, but may be adapted for medical applications to resist bacteria or blood clotting."

Credit: 
University of Pittsburgh

Plate tectonics may have driven 'Cambrian Explosion', study shows

The quest to discover what drove one of the most important evolutionary events in the history of life on Earth has taken a new, fascinating twist.

A team of scientists have given a fresh insight into what may have driven the "Cambrian Explosion" - a period of rapid expansion of different forms of animal life that occurred over 500 million years ago.

While a number of theories have been put forward to explain this landmark period, the most credible is that it was fuelled by a significant rise in oxygen levels which allowed a wide variety of animals to thrive.

The new study suggests that such a rise in oxygen levels was the result of extraordinary changes in global plate tectonics.

During the formation of the supercontinent 'Gondwana', there was a major increase in continental arc volcanism - chains of volcanoes often thousands of miles long formed where continental and oceanic tectonic plates collided. This in turn led to increased 'degassing' of CO2 from ancient, subducted sedimentary rocks.

This, the team calculated, led to an increase in atmospheric CO2 and warming of the planet, which in turn amplified the weathering of continental rocks, which supplied the nutrient phosphorus to the ocean to drive photosynthesis and oxygen production.

The study was led by Josh Williams, who began the research as an MSc student at the University of Exeter and is now studying for a PhD at the University of Edinburgh.

During his MSc project he used a sophisticated biogeochemical model to make the first quantification of changes in atmospheric oxygen levels just prior to this explosion of life.

Co-author and project supervisor Professor Tim Lenton, from the University of Exeter's Global Systems Institute said: "One of the great dilemmas originally recognised by Darwin is why complex life, in the form of fossil animals, appeared so abruptly in what is now known as the Cambrian explosion.

"Many studies have suggested this was linked to a rise in oxygen levels - but without a clear cause for such a rise, or any attempt to quantify it."

The model not only predicted a marked rise in oxygen levels due to changes in plate tectonic activity, but also showed that this rise in oxygen - to about a quarter of the level in today's atmosphere - crossed the critical levels estimated to be needed by the animals seen in the Cambrian explosion.

Williams added: "What is particularly compelling about this research is that not only does the model predict a rise in oxygen to levels estimated to be necessary to support the large, mobile, predatory animal life of the Cambrian, but the model predictions also show strong agreement with existing geochemical evidence."

"It is remarkable to think that our oldest animal ancestors - and therefore all of us - may owe our existence, in part, to an unusual episode of plate tectonics over half a billion years ago" said Professor Lenton.

Credit: 
University of Exeter

How bacteria protect themselves from plasma treatment

image: Julia Bandow and Marco Krewing have investigated bacteria under the influence of plasmas.

Image: 
© Daniel Sadrowski

Considering the ever-growing percentage of bacteria that are resistant to antibiotics, interest in the medical use of plasma is increasing. In collaboration with colleagues from Kiel, researchers at Ruhr-Universität Bochum (RUB) investigated whether bacteria may become impervious to plasmas, too. They identified 87 genes of the bacterium Escherichia coli that potentially protect against effective components of plasma. "These genes provide insights into the antibacterial mechanisms of plasmas," says Marco Krewing. He is the lead author of two articles published in the Journal of the Royal Society Interface this year.

A cocktail of harmful components stresses pathogens

Plasmas are created by pumping a gas with energy. Today, plasmas are already used against multi-resistant pathogens in clinical applications, for example to treat chronic wounds. "Plasmas provide a complex cocktail of components, many of which act as disinfectants in their own right," explains Professor Julia Bandow, Head of the RUB research group Applied Microbiology. UV radiation, electric fields, atomic oxygen, superoxide, nitric oxides, ozone, and excited oxygen or nitrogen affect the pathogens simultaneously, generating considerable stress. Typically, the pathogens survive only a few seconds or minutes.

In order to find out whether bacteria may develop resistance against the effects of plasmas, as they do against antibiotics, the researchers analysed the entire genome of the model bacterium Escherichia coli, E. coli for short, to identify existing protective mechanisms. "Resistance means that a genetic change causes organisms to be better adapted to certain environmental conditions. Such a trait can be passed on from one generation to the next," explains Julia Bandow.

Mutants missing single genes

For their study, the researchers made use of so-called knockout strains of E. coli. These are bacteria that are missing one specific gene in their genome, which contains approximately 4,000 genes. The researchers exposed each mutant to the plasma and monitored if the cells kept proliferating following the exposure.

"We demonstrated that 87 of the knockout strains were more sensitive to plasma treatment than the wild type that has a complete genome," says Marco Krewing. Subsequently, the researchers analysed the genes missing in these 87 strains and determined that most of those genes protected bacteria against the effects of hydrogen peroxide, superoxide, and/or nitric oxide. "This means that these plasma components are particularly effective against bacteria," elaborates Julia Bandow. However, it also means that genetic changes that increase the number or activity of the respective gene products could better protect bacteria from the effects of plasma treatment.

Heat shock protein boosts plasma resistance

The research team, in collaboration with a group headed by Professor Ursula Jakob from the University of Michigan in Ann Arbor (USA), demonstrated that this is indeed the case: the heat shock protein Hsp33, encoded by the hslO gene, protects E. coli proteins from aggregation when exposed to oxidative stress. "During plasma treatment, this protein is activated and protects the other E. coli proteins - and consequently the bacterial cell," Bandow points out. An increased volume of this protein alone results in a slightly increased plasma resistance. Considerably stronger plasma resistance can be expected when the levels of several protective proteins are increased simultaneously.

Credit: 
Ruhr-University Bochum

Artificial muscles powered by glucose

image: Researchers at Linköping University have demonstrated that artificial muscles made from polymers can now be powered by energy from glucose and oxygen, just like biological muscles.

Image: 
Thor Balkhed/Linköping University

Artificial muscles made from polymers can now be powered by energy from glucose and oxygen, just like biological muscles. This advance may be a step on the way to implantable artificial muscles or autonomous microrobots powered by biomolecules in their surroundings. Researchers at Linköping University, Sweden, have presented their results in the journal Advanced Materials.

The motion of our muscles is powered by energy that is released when glucose and oxygen take part in biochemical reactions. In a similar way, manufactured actuators can convert energy to motion, but the energy in this case comes from other sources, such as electricity. Scientists at Linköping University, Sweden, wanted to develop artificial muscles that act more like biological muscles. They have now demonstrated the principle using artificial muscles powered by the same glucose and oxygen as our bodies use.

The researchers have used an electroactive polymer, polypyrrole, which changes volume when an electrical current is passed through it. The artificial muscle, known as a "polymer actuator", consists of three layers: a thin membrane layer between two layers of electroactive polymer. This design has been used in the field for many years. It works by the material on one side of the membrane acquiring a positive electrical charge and expelling ions, causing it to shrink. At the same time, the material on the other side acquires a negative electrical charge and takes up ions, which causes the material to expand. The changes in volume cause the actuator to bend in one direction, in the same way that a muscle contracts.

The electrons that cause motion in artificial muscles normally come from an external source, such as a battery. But batteries suffer from several obvious drawbacks: they are usually heavy, and need to be charged regularly. The scientists behind the study decided instead to use the technology behind bioelectrodes, which can convert chemical energy into electrical energy with the aid of enzymes. They have used naturally occurring enzymes, integrating them into the polymer.

"These enzymes convert glucose and oxygen, in the same way as in the body, to produce the electrons required to power motion in an artificial muscle made from an electroactive polymer. No source of voltage is required: it's enough simply to immerse the actuator into a solution of glucose in water", says Edwin Jager, senior lecturer in Sensor and Actuator Systems, in the Department of Physics, Chemistry and Biology at Linköping University. Together with Anthony Turner, professor emeritus, he has led the study.

Just as in biological muscles, the glucose is directly converted to motion in the artificial muscles.

"When we had fully integrated enzymes on both sides of the actuator and it actually moved - well, it was just amazing", says Jose Martinez, a member of the research group.

The next step for the researchers will be to control the biochemical reactions in the enzymes, so that the motion can be reversed over many cycles. They have already demonstrated that the motion is reversible, but only with the help of a small trick. Now they want to create a system that is even closer to a biological muscle. The researchers also want to test the concept with other actuators, such as the "textile muscle", and to apply it in microrobotics.

"Glucose is available in all organs of the body, and it's a useful substance to start with. But it is possible to switch to other enzymes, which would enable the actuator to be used in, for example, autonomous microrobots for environmental monitoring in lakes. The advances we present here make it possible to power actuators with energy from substances in their natural surroundings", says Edwin Jager.

Credit: 
Linköping University

Study: Behavior in kindergarten associated with earnings in adulthood

Researchers have long examined what childhood behavior can tell us about how individuals will fare economically later in life, but the methods used in earlier studies were limited, which tempered their findings. New longitudinal research that addresses these limitations examined the association between six prevalent childhood behaviors in kindergarten and annual earnings at ages 33 to 35 years. The study found that individuals who were inattentive at age 6 had lower earnings in their 30s, after accounting for IQ and family adversity. Among males only, individuals who were physically aggressive or oppositional (e.g., who refused to share materials or blamed others) had lower annual earnings in their 30s, and males who were prosocial (e.g., who shared or helped) had higher later earnings.

The study, in the journal JAMA Psychiatry, was conducted by researchers at Carnegie Mellon University, the University of Montreal, University College Dublin, Observatoire Français des Conjonctures Économiques, Centre pour la Recherche Économique et ses Applications, Statistics Canada, and Université de Bordeaux.

"Our study suggests that kindergarten teachers can identify behaviors associated with lower earnings three decades later," says Daniel Nagin, professor of public policy and statistics at Carnegie Mellon University's Heinz College, who coauthored the study. "Early monitoring and support for children who exhibit high levels of inattention, and for boys who exhibit high levels of aggression and opposition and low levels of prosocial behavior could have long-term socioeconomic advantages for those individuals and society."

The study used data from the Quebec Longitudinal Study of Kindergarten Children, a population-based sample of predominantly White boys and girls born in 1980 or 1981 in Quebec, Canada, who were followed from January 1, 1985, to December 31, 2015. In all, 2,850 children were assessed. The data included behavioral ratings by kindergarten teachers when the children were 5 or 6 years old, as well as 2013 to 2015 government tax returns when the participants were 33 to 35 years old.

The study sought to test the associations between inattention (e.g., lacking concentration, being easily distracted), hyperactivity (e.g., feeling fidgety, moving constantly), physical aggression (e.g., fighting, bullying, kicking), opposition (e.g., disobeying, blaming others, being irritable), anxiety (e.g., worrying about many things, crying easily), and prosociality (e.g., helping someone who has been hurt, showing sympathy) when the children were in kindergarten and later reported annual earnings.

The study addressed limitations of prior research in several ways: it assessed children earlier; it included specific behaviors within a single model, so the results can be incorporated more easily into targeted intervention programs; and it relied on teachers' reports rather than children's self-reports, and on tax records of income rather than adults' self-reported earnings.

Researchers found that boys and girls who were inattentive in kindergarten had lower earnings in their 30s. They also found that boys who were aggressive or oppositional at age 6 had lower earnings, and that boys who were prosocial at age 6 had higher earnings in their 30s.

"Early behaviors are modifiable, arguably more so than traditional factors associated with earnings, such as IQ and socioeconomic status, making them key targets for early intervention," explains Sylvana M. Côté, associate professor of social and preventative medicine at the University of Montreal, who coauthored the study. "If early behavioral problems are associated with lower earnings, addressing these behaviors is essential to helping children--through screenings and the development of intervention programs--as early as possible."

The study's authors acknowledge that they did not account for earnings through the informal economy or for unaccounted accumulation of debt. They also note that because they looked at associations, the study did not reach conclusions about causality.

Credit: 
Carnegie Mellon University

Deep submersible dives shed light on rarely explored coral reefs

video: Researchers dove to the mesophotic zone off the coast of Maui, Hawaii, in a submersible, then collected specimens using a robot arm. They also captured video footage and photos of life that has rarely been seen by humans.

Image: 
Kiyomi Taguchi/University of Washington

Just beyond where conventional scuba divers can go is an area of the ocean that still is largely unexplored. In waters this deep -- about 100 to at least 500 feet below the surface -- little to no light breaks through.

Researchers must rely on submersible watercraft or sophisticated diving equipment to be able to study ocean life at these depths, known as the mesophotic zone. These deep areas span the world's oceans and are home to extensive coral reef communities, though little is known about them because it is so hard to get there.

A collaborative research team from the University of Washington, College of Charleston, University of California Berkeley, University of Hawaii and other institutions has explored the largest known coral reef in the mesophotic zone, located in the Hawaiian Archipelago, through a series of submersible dives. There, they documented life along the coral reef, finding a surprising amount of coral living in areas where light levels are less than 1% of the light available at the surface.
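
For intuition about the light levels involved, a simple exponential (Beer-Lambert) attenuation model can reproduce the "less than 1% of surface light" figure at mesophotic depths. This is a rough sketch, not the researchers' method; the attenuation coefficient `k` is an illustrative value for clear ocean water, not a measurement from this study:

```python
import math

# Sketch: fraction of surface light reaching a given depth, assuming simple
# exponential (Beer-Lambert) attenuation. k is an illustrative coefficient
# for clear ocean water, not a value measured in the study.
def light_fraction(depth_m, k=0.04):  # k in 1/m
    return math.exp(-k * depth_m)

# Around 120 m (roughly 400 ft), well within the mesophotic zone:
print(f"{light_fraction(120):.4f}")  # under 1% of surface light
```

With this illustrative coefficient, less than 1% of surface light remains below roughly 115 m, consistent with the depth range described above.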

Their findings were published April 8 in the journal Limnology and Oceanography.

"Because mesophotic corals live close to the limits of what is possible, understanding their physiology will give us clues of the extraordinary strategies corals use to adapt to low-light environments," said lead author Jacqueline Padilla-Gamiño, an assistant professor in the UW School of Aquatic and Fishery Sciences.

Knowing how these deep coral reefs function is important because they appear to be hotspots for biodiversity, and home to many species found only in those locations, Padilla-Gamiño explained. Additionally, close to half of all corals in the ocean have died in the past 30 years, mostly due to warm water temperatures that stress their bodies, causing them to bleach and eventually die. This has been documented mostly in shallower reefs where more research has occurred. Scientists say that more information about deeper reefs in the mesophotic zone is critical for preserving that habitat.

"Mesophotic reefs in Hawaii are stunning in their sheer size and abundance," said co-author Heather Spalding at College of Charleston. "Although mesophotic environments are not easily seen, they are still potentially impacted by underwater development, such as cabling and anchoring, and need to be protected for future generations. We are on the tip of the iceberg in terms of understanding what makes these astounding reefs tick."

Padilla-Gamiño was on board during two of the team's eight submersible dives off the coast of Maui, which took place from 2010 to 2011. Each dive was a harrowing adventure: researchers spent up to eight hours in cramped quarters in the submersible, which was tossed from the back of a larger boat and then disconnected once it reached the water.

Once in the mesophotic zone, they collected specimens using a robot arm, and captured video footage and photos of life that has rarely been seen by humans.

"It's a really unbelievable place," Padilla-Gamiño said. "What is surprising is that, in theory, these corals should not be there because there's so little light. Now we're finally understanding how they function to be able to live there."

By collecting coral samples and analyzing their physiology, the researchers found that different corals in the mesophotic zone use different strategies to deal with low amounts of light. For example, some species of corals change the amount of pigments at deeper depths, while other species change the type and size of symbionts, which are microscopic seaweeds living inside the tissue of corals, Padilla-Gamiño explained. These changes allow corals to acquire and maximize the light available to perform photosynthesis and obtain energy.

Additionally, the corals at deeper depths are likely eating other organisms like zooplankton to increase their energy intake and survive under very low light levels. They probably do this by filter feeding, Padilla-Gamiño said, but more research is needed to know for sure.

The researchers hope to collect more live coral samples from the mesophotic zone to be able to study in the lab how the symbionts, and the corals they live inside, function.

"The more we can study this, the more information we can have about how life works. This is a remarkable system with enormous potential for discovery," Padilla-Gamiño said. "Our studies provide the foundation to explore physiological flexibility, identify novel mechanisms to acquire light and challenge current paradigms on the limitations of photosynthetic organisms like corals living in deeper water."

Credit: 
University of Washington

High reaction rates even without precious metals

image: Kristina Tschulik, Abdelilah El Arrassi, Niclas Blanc, Mathies Evers and Zhibin Liu (from left).

Image: 
RUB, Kramer

Non-precious metal nanoparticles could one day replace expensive catalysts for hydrogen production. However, it is often difficult to determine what reaction rates they can achieve, especially when it comes to oxide particles. This is because the particles must be attached to the electrode using a binder and conductive additives, which distort the results. With the aid of electrochemical analyses of individual particles, researchers have now succeeded in determining the activity and substance conversion of nanocatalysts made from cobalt iron oxide - without any binders. The team led by Professor Kristina Tschulik from Ruhr-Universität Bochum reports together with colleagues from the University of Duisburg-Essen and from Dresden in the Journal of the American Chemical Society, published online on 30 May 2019.

"The development of non-precious metal catalysts plays a decisive role in realising the energy transition, as only they are cheap and available in quantities sufficient to produce the required amounts of renewable fuels," says Kristina Tschulik, a member of the Cluster of Excellence Ruhr Explores Solvation (Resolv). Hydrogen, a promising energy source, can be obtained by splitting water into hydrogen and oxygen. The limiting factor so far has been the partial reaction in which oxygen is produced.

Better than reaction rates currently achieved in industry

In the current work, the researchers investigated how efficiently cobalt iron oxide particles catalyse oxygen generation, analysing many individual particles one after the other. The chemists let a particle catalyse oxygen generation on the surface of the electrode and measured the resulting current flow, which provides information about the reaction rate. "We have measured current densities of several kiloamps per square metre," says Tschulik. "This is above the reaction rates currently possible in industry."

The team showed that, for particles smaller than ten nanometres, the current flow is dependent on the particle size - the smaller the catalyst particle, the smaller the current. The current is also limited by the oxygen that is produced in the reaction and that diffuses away from the particle surface.
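
As a back-of-the-envelope illustration (not part of the paper), Faraday's law links the measured current density to the amount of oxygen produced. A minimal sketch, assuming the standard four-electron oxygen evolution reaction; the specific current density is an example, not a value reported in the study:

```python
# Illustrative only: convert a current density into an oxygen production
# rate via Faraday's law, assuming the four-electron oxygen evolution
# reaction (2 H2O -> O2 + 4 H+ + 4 e-).
F = 96485.0  # Faraday constant in C/mol

def o2_production_rate(current_density_a_per_m2, n_electrons=4):
    """O2 produced, in mol per second per square metre of electrode area."""
    return current_density_a_per_m2 / (n_electrons * F)

# "Several kiloamps per square metre", e.g. 2 kA/m^2:
rate = o2_production_rate(2000.0)
print(f"{rate:.2e} mol O2 per second per m^2")  # about 5.2e-03
```

The point of the sketch is simply that, per unit area, the current flow is directly proportional to the rate at which oxygen molecules are generated.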

Extremely stable despite high stress

Following the catalysis experiments, the chemists observed the catalyst particles under the transmission electron microscope. "Despite the high reaction rates, i.e. although the particles had created so much oxygen, they hardly changed," summarises Tschulik. "The stability under extreme conditions is exceptional."

The analysis approach used in the current work can also be transferred to other electrocatalysts. "It is essential to find out more about the activities of nanocatalysts in order to be able to efficiently further develop non-precious metal catalysts for the renewable energy technologies of tomorrow," says the Bochum-based chemist. In order to analyse the effect of the particle size on the catalytic activity, it is important to synthesise nanoparticles with defined size. As part of the University Alliance Ruhr, the Bochum team cooperates closely with researchers from the University of Duisburg-Essen led by Professor Stephan Schulz, who produce the catalyst particles.

Credit: 
Ruhr-University Bochum

Research brief: Stabilizing nations' food production through crop diversity

With increasing demand for food from the planet’s growing population and climate change threatening the stability of food systems across the world, University of Minnesota research examined how the diversity of crops at the national level could increase the harvest stability of all crops in a nation.

The research, published Wednesday in the journal Nature, examined 50 years of data (1961-2010) from the Food and Agricultural Organization (FAO) of the United Nations on annual yields of 176 crop species in 91 nations to determine how stable and predictable the food supply is in each country. This is the first research of its kind to examine the relationship between crop diversity and food stability at the scale of nations.

“We found an intriguing pattern — nations that grow more crops tend to have more stable food supplies,” said G. David Tilman, co-author of this study and director of the Cedar Creek Ecosystem Science Reserve in the College of Biological Sciences. “Our analysis also shows that nations with a variety of crops are less likely to experience a severe food shortage.”

That type of shortage is defined as a year in which a nation experiences a 25% or greater decline in the total yield of all of its crops combined. After examining the FAO data, Tilman and Delphine Renard — a postdoctoral researcher at the University of California Santa Barbara — found that:

nations with some of the lowest crop diversities experienced a severe food shortage about every eight years;
countries with some of the highest diversities of crops experienced a severe food shortage about every 100 years;
robust irrigation capabilities in nations also have significant stabilizing effects on crop production, leading to fewer years with severe food shortages.
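
As a sketch of how such shortage years might be flagged, the 25% threshold can be applied to a national yield time series. The study's exact baseline for measuring the decline is not stated here; comparing each year with the previous one is an assumption for illustration, and the yield figures are made up:

```python
# Sketch: flag "severe food shortage" years, i.e. years with a 25% or greater
# decline in a nation's combined crop yield. Comparing against the previous
# year's total is an illustrative assumption; the yield series is invented.

def severe_shortage_years(total_yields, threshold=0.25):
    """Return indices of years whose total yield fell by at least
    `threshold` relative to the preceding year."""
    flagged = []
    for i in range(1, len(total_yields)):
        decline = (total_yields[i - 1] - total_yields[i]) / total_yields[i - 1]
        if decline >= threshold:
            flagged.append(i)
    return flagged

hypothetical_yields = [100, 104, 70, 95, 98, 60]
print(severe_shortage_years(hypothetical_yields))  # -> [2, 5]
```

Applied to 50 years of data per nation, counting such flagged years gives the kind of frequency comparison reported above (roughly every 8 years for low-diversity nations versus every 100 years for high-diversity ones).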

The research suggests that nations that appropriately increase crop diversity may have more stable food supplies. In areas of the world with limited water resources, or where increased irrigation is unaffordable, the researchers suggest greater crop diversity may be particularly useful, as it may allow farmers to stabilize not only the food supply but their income as well.

The research also found:

that the stability of a nation’s food supply depended on the types of crops grown, with grains and legumes seeming to lead to more stable food supplies;
food supply stability depended on having not just more crops, but rather having those crops be more evenly abundant;
heat waves harmed yields and stability and while fertilization greatly increased yields, it did not increase stability.

“Food supplies are expected to become less stable because of climate change,” Tilman said. “We encourage nations around the world to evaluate their crop diversity and determine if there might be some additional crops that would work well for them. Increasing crop diversity is one tactic to prepare for the potential impacts of climate change on crop production.”

Tilman suggests that alongside crop diversity planning, nations should consider the benefits of new drought-tolerant crop varieties, increased irrigation, intercropping and more transparent agricultural trade.

“There are 7.7 billion people on Earth and we all depend on a stable, global food supply,” said Tilman. “These tactics are just one of many we can follow to plan our future and better prepare future generations to live healthily on this planet.”

Future research is needed to understand which types and combinations of crops best suit specific climates and soils to enhance food supply stability.

Credit: 
University of Minnesota

Directed evolution comes to plants

image: Flow chart describing how CRISPR-directed evolution mimics Darwinism.

Image: 
2019 KAUST

A new platform for speeding up and controlling the evolution of proteins inside living plants has been developed by a KAUST-led team.

Previously, this type of directed evolution system was only possible in viruses, bacteria, yeast and mammalian cell lines. The Saudi research--part of KAUST's Desert Agriculture Initiative--has now expanded the technique to rice and other food plants. It means that plant breeders now have an easy way to rapidly engineer new crop varieties capable of withstanding weeds, diseases, pests and other agricultural stresses.

"We expect that our platform will be used for crop bioengineering to improve key traits that impact yield and immunity to pathogens," says group leader Magdy Mahfouz. "This technology should help improve plant resilience under climate change conditions."

To experimentally build their directed evolution platform, Mahfouz and his colleagues used a combination of targeted mutagenesis and artificial selection in the rice plant, Oryza sativa. They took advantage of the gene-editing tool known as CRISPR to generate DNA breaks at more than 100 sites throughout the SF3B1 gene, which encodes a protein involved in the processing of other gene transcripts. After manipulating the DNA of small bundles of rice cells in this way, the researchers then grew the mutated seedlings in the presence of herboxidiene, a herbicide that normally targets the SF3B1 protein to inhibit plant growth and development.

This strategy ultimately yielded more than 20 new rice variants with mutations that conferred resistance to herboxidiene to varying degrees. In collaboration with Stefan Arold's group at the KAUST Computational Bioscience Research Center, Mahfouz and his colleagues then characterized the structural basis of the resistance--showing, for example, how particular mutations helped destabilize herbicide binding to the SF3B1 protein.

Herboxidiene is not widely used in industrial agriculture, but the same basic directed evolution strategy could now be used to design crops resistant to more common weed-killers. The herbicides would then eliminate unwanted surrounding plants while leaving the desired cultivated crop intact.

Breeders could also begin to evolve practically any trait of interest, notes Haroon Butt, a postdoctoral fellow in Mahfouz's lab. "This is a proof-of-principle study with wide applicability," says Butt, the first author of the paper that outlines the technology. "Our platform mimics Darwinism, and the selection pressure involved helps enforce the development of new gene variants and traits that would not be possible by any other known method."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Mapping and measuring proteins on the surfaces of endoplasmic reticulum (ER) in cells

image: Vesamicol derivatives, in which alkyl chains of varying lengths were introduced between a piperazine ring and a benzene ring, were synthesized and evaluated. The binding affinity of these vesamicol derivatives for the sigma-1 receptor was screened. The radioiodine-labeled probe [I-125]2-(4-(3-(4-iodophenyl)propyl)piperazin-1-yl)cyclohexan-1-ol was prepared and evaluated in vitro and in vivo.

Image: 
Kanazawa University

Sigma receptors are proteins found mainly on the surface of the endoplasmic reticulum (ER) in certain cells. Sigma-1 and sigma-2 are the two main classes of these receptors. The sigma-1 receptor is involved in neurological disorders and certain types of cancer. To better understand how the receptor is involved in disease, and whether drugs developed to target it are working, it is important to be able to trace the sigma-1 receptor accurately. Researchers at Kanazawa University have developed a probe that can identify and latch onto the sigma-1 receptor.

The research team led by Kazuma Ogawa had previously developed molecules with such binding potential. However, upon detailed analysis of the sigma-1 receptor structure, they realized that extending the length of these molecules would increase their binding affinity further. The team therefore created molecules of varying lengths and found one probe that bound to the receptor exceptionally well. For measuring and mapping the sigma receptors by nuclear imaging, radiolabeled iodine was introduced into the probe. The resulting structure bound to both sigma-1 and sigma-2 receptors.

Since sigma-1 and sigma-2 receptors are involved in prostate cancer, the team then tested the effects of their newly created molecular probe on prostate cancer cells. These cells carry sigma receptors, so any probe with a high affinity will attach to them and enter the cell. As expected, the probe signal from within the cell was high. When haloperidol--a drug specific to the sigma-1 receptor--was added to the mix, this signal dropped, suggesting competition between the two.

Finally, to assess the affinity of the probe for different tissues within the body, mice with prostatic tumors were used. While the probe readily entered and stayed within the tumors, its presence in muscle and blood was much lower. The probe was thus highly specific for tissues bearing sigma receptors.

This study reports a sophisticated probe that binds to sigma-1 receptors better than previously developed probes. This, coupled with its low uptake in non-target tissues, is a promising step forward in studying changes induced in the sigma-1 receptor in various disorders. The probe can also be used when developing drugs against the sigma-1 receptor, to compare the binding affinities of candidate drugs. "These results provide useful information for developing sigma-1 receptor imaging probes," conclude the researchers.

Credit: 
Kanazawa University