
Protecting cell powerhouse paves way to better treatment of acute kidney injury

image: Drs. Zheng Dong and Qingqing Wei in the Cellular Biology and Anatomy lab at the Medical College of Georgia at Augusta University

Image: 
Phil Jones, Senior Photographer, Augusta University

AUGUSTA, Ga. (Dec. 6, 2018) - For the first time, scientists have described the body's natural mechanism for temporarily protecting the powerhouses of kidney cells when injury or disease means they aren't getting enough blood or oxygen.

Powerhouses, called mitochondria, which provide fuel for our cells, start to fragment, likely as one of the first steps in the kidney cell damage and death that often result from an acute kidney injury, says Dr. Zheng Dong, cellular biologist in the Department of Cellular Biology and Anatomy at the Medical College of Georgia at Augusta University.

Dong and his colleagues appear to have delineated the natural mitochondrial protection pathway in kidney cells and say it's a logical therapeutic target for treating acute kidney injury.

"We know there is a natural protective mechanism. Maybe we need to upregulate it," says Dong, also a senior research career scientist and director of research development at the Charlie Norwood Veterans Affairs Medical Center in Augusta. Dong is senior author of the study published in the Journal of Clinical Investigation.

In fact, drugs that target at least one key part of the pathway already have been studied in patients experiencing anemia - a deficiency in the red blood cells that carry oxygen - because of chronic kidney disease.

The scientists started by examining a large number of microRNAs, small RNAs known to regulate gene expression. They found one, microRNA-668, that was consistently elevated both in patients with an acute kidney injury and in animal models of the condition, which is common in patients in intensive care, particularly older patients.

Mitochondrial fission and fusion are polar opposites but their balance is key to a healthy cell powerhouse. They are governed by two distinct classes of proteins, which emerging evidence suggests are regulated by microRNAs.

Proteins like mitochondrial protein 18 kDa, or MTP18, for example, have already been implicated in powerhouse fission, at least in periods of stress. The MCG scientists and their collaborators have now confirmed it's a direct target of microRNA-668.

But the pathway has at least one earlier point of action: hypoxia-inducible factor-1, or HIF-1, a transcription factor that increases when oxygen levels decrease to help cells adjust by controlling expression of genes that can protect them.

They found that ischemic acute kidney injury induces HIF-1, which upregulates microRNA-668, which suppresses MTP18 and the result is kidney cell protection.

One key discovery was an HIF-1 binding site in the promoter of the microRNA-668 gene, and the related finding that too little HIF-1 reduced the expression of microRNA-668.

"The microRNA-668 gene is a new targeted gene for HIF-1, which may help explain some of HIF-1's protective function," Dong says.

When scientists restricted microRNA-668, more kidney cells died. Conversely, giving a mimic of microRNA-668 - to increase its presence - protected kidney cells. More microRNA-668 also meant less MTP18 and vice versa.

"We don't know what MTP18 does normally, but now we know what it does when stressed," Dong says. "It induces fragmentation of the mitochondria."

They've shown that increased levels of microRNA-668 can prevent most of that damage so the cell can keep functioning, ideally until blood and oxygen are restored. "This is like a temporary mechanism for cell survival," Dong says.

One way physicians might one day improve the odds of mitochondrial and kidney survival is a class of drugs called PHD inhibitors, which have already been studied in chronic kidney disease. PHD - prolyl hydroxylase - is a protein that induces the degradation of protective HIF-1, and Dong suspects PHD inhibitors could benefit patients with acute kidney injury as well. A microRNA-668 mimic, similar to that used in the studies, might one day be another option.

Right now there aren't any targeted therapies for acute kidney injury, says Dong; instead, patients receive supportive care such as hydration, short-term dialysis when needed, and treatment of whatever caused the injury.

With an acute kidney injury, kidney function deteriorates over a few hours or days. It can result from a literal blow to the kidney, in a fall or car accident, or from dehydration in an overzealous student athlete. Otherwise healthy patients usually recover fully and quickly, Dong says.

However, acute kidney injury mostly occurs in people who already have another medical problem like diabetes. In fact, most are in the hospital when it happens, with problems like bleeding or shock, failure of other organs like the heart, even an overdose of over-the-counter nonsteroidal anti-inflammatories for problems like a cold or flu, according to the National Kidney Foundation.

Dong's lab was the first to show that as a class of molecules, microRNAs could play an important role in reducing acute kidney injury. They reported in 2010 that deletion of a key enzyme for microRNA production from kidney tubules made mice resistant to ischemia-induced acute kidney injury, suggesting an important destructive role for at least some microRNAs.

Subsequent work by Dong and colleagues led to identification of specific microRNAs with significant changes in expression in the face of ischemic acute kidney injury. Those studies found some microRNAs definitely promote fission but others seem to help protect kidney cells.

Just what microRNA-668 does has remained largely unknown, apart from another recent report implicating it in protecting human breast cancer cells from radiation therapy.

Credit: 
Medical College of Georgia at Augusta University

Unknown treasure trove of planets found hiding in dust

image: The Taurus Molecular Cloud, pictured here by ESA's Herschel Space Observatory, is a star-forming region about 450 light-years away. The image frame covers roughly 14 by 16 light-years and shows the glow of cosmic dust in the interstellar material that pervades the cloud, revealing an intricate pattern of filaments dotted with a few compact, bright cores -- the seeds of future stars.

Image: 
ESA/Herschel/PACS, SPIRE/Gould Belt survey Key Programme/Palmeirim et al. 2013

"Super-Earths" and Neptune-sized planets could be forming around young stars in much greater numbers than scientists thought, new research by an international team of astronomers suggests.

Observing a sampling of young stars in a star-forming region in the constellation Taurus, researchers found many of them to be surrounded by structures that can best be explained as traces created by invisible, young planets in the making. The research, published in the Astrophysical Journal, helps scientists better understand how our own solar system came to be.

Some 4.6 billion years ago, our solar system was a roiling, billowing swirl of gas and dust surrounding our newborn sun. At the early stages, this so-called protoplanetary disk had no discernable features, but soon, parts of it began to coalesce into clumps of matter - the future planets. As they picked up new material along their trip around the sun, they grew and started to plow patterns of gaps and rings into the disk from which they formed. Over time, the dusty disk gave way to the relatively orderly arrangement we know today, consisting of planets, moons, asteroids and the occasional comet.

Scientists base this scenario of how our solar system came to be on observations of protoplanetary disks around other stars that are young enough to currently be in the process of birthing planets. Using the Atacama Large Millimeter Array, or ALMA, comprising 45 radio antennas in Chile's Atacama Desert, the team performed a survey of young stars in the Taurus star-forming region, a vast cloud of gas and dust located a modest 450 light-years from Earth. When the researchers imaged 32 stars surrounded by protoplanetary disks, they found that 12 of them - 40 percent - have rings and gaps, structures that according to the team's measurements and calculations can be best explained by the presence of nascent planets.

"This is fascinating because it is the first time that exoplanet statistics, which suggest that super-Earths and Neptunes are the most common type of planets, coincide with observations of protoplanetary disks," said the paper's lead author, Feng Long, a doctoral student at the Kavli Institute for Astronomy and Astrophysics at Peking University in Bejing, China.

Some protoplanetary disks appear as uniform, pancake-like objects lacking any features or patterns, while others show concentric bright rings separated by gaps. Because previous surveys focused on the brightest of these objects, which are easier to find, it was unclear how common disks with ring and gap structures really are in the universe. This study presents the results of the first unbiased survey, in that the target disks were selected independently of their brightness - in other words, the researchers did not know whether any of their targets had ring structures when they selected them for the survey.

"Most previous observations had been targeted to detect the presence of very massive planets, which we know are rare, that had carved out large inner holes or gaps in bright disks," said the paper's second author Paola Pinilla, a NASA Hubble Fellow at the University of Arizona's Steward Observatory. "While massive planets had been inferred in some of these bright disks, little had been known about the fainter disks."

The team, which also includes Nathan Hendler and Ilaria Pascucci at the UA's Lunar and Planetary Laboratory, measured the properties of the rings and gaps observed with ALMA and analyzed the data to evaluate possible mechanisms that could cause them. While these structures may be carved by planets, previous research has suggested that they may also be created by other effects. In one commonly suggested scenario, so-called ice lines - changes in the chemistry of the dust particles across the disk that depend on their distance from the host star - together with the star's magnetic field create pressure variations across the disk. These effects can create variations in the disk, manifesting as rings and gaps.

The researchers performed analyses to test these alternative explanations and could not establish any correlations between stellar properties and the patterns of gaps and rings they observed.

"We can therefore rule out the commonly proposed idea of ice lines causing the rings and gaps," Pinilla said. "Our findings leave nascent planets as the most likely cause of the patterns we observed, although some other processes may also be at work."

Since detecting the individual planets directly is impossible because of the overwhelming brightness of the host star, the team performed calculations to get an idea of the kinds of planets that might be forming in the Taurus star-forming region. According to the findings, Neptune-sized gas planets or so-called super-Earths - terrestrial planets of up to 20 Earth masses - should be the most common. Only two of the observed disks could potentially harbor behemoths rivaling Jupiter, the largest planet in the solar system.

"Since most of the current exoplanet surveys can't penetrate the thick dust of protoplanetary disks, all exoplanets, with one exception, have been detected in more evolved systems where a disk is no longer present," Pinilla said.

Going forward, the research group plans to move ALMA's antennas farther apart, which should increase the array's resolution to around five astronomical units (one AU equals the average distance between the Earth and the sun), and to observe at other frequencies that are sensitive to other types of dust.

"Our results are an exciting step in understanding this key phase of planet formation," Long said, "and by making these adjustments, we are hoping to better understand the origins of the rings and gaps."

Credit: 
University of Arizona

Food system organizations must strengthen their operations to safeguard against potential threats

image: Ten factors identified through semi-structured interviews with food system stakeholders in Baltimore, MD that may impact food system organizational resilience, mapped along the resilience curve.

Image: 
Johns Hopkins Center for a Livable Future

Philadelphia, December 6, 2018 - Food systems face growing threats as extreme weather events become more common and more extreme due to climate change. Events such as Hurricane Harvey in Texas and Hurricane Maria in Puerto Rico in 2017 have drawn attention to the havoc natural disasters can wreak. A new study from the Johns Hopkins Bloomberg School of Public Health, published in the Journal of the Academy of Nutrition and Dietetics, highlights characteristics of organizations involved in the food system that may lead them to be more prepared to respond to such disasters, and opportunities for local, state, and federal organizations to improve resilience across the urban food system.

Businesses and organizations involved in growing, distributing, and supplying food must be able to withstand and rebound from acute disruptions such as civil unrest and cyber attacks, as well as those with more gradual impact, such as drought, sea level rise, or funding cuts. Policymakers and researchers are in the early stages of considering ways to improve resilience to both natural and human-generated threats across the food system.

Amelie Hecht, a doctoral candidate in the Department of Health Policy and Management at the Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA, wanted to explore the following issues: what factors may be associated with organization-level food system resilience; how might these factors play out in disaster response; and how do they relate to organizations' confidence in their ability to withstand disruptive events?

The research was performed as part of a larger project led by Roni A. Neff, PhD, Assistant Professor, Center for a Livable Future, Department of Environmental Health & Engineering, Department of Health Policy and Management. Dr. Neff and colleagues interviewed representatives of 26 businesses and organizations in Baltimore that supply, distribute, and promote access to food. The organizations were asked about how they have tried to prevent, minimize, and respond to the effects of disruptive events like snowstorms and civil unrest in the past, and how they plan to address similar challenges in the future.

Researchers identified several factors that influence how resilient an organization is during times of emergency. They found that the organizations able to recover more quickly had ten characteristics in common: formal emergency planning; staff training; reliable staff attendance; redundancy of food supply, food suppliers, infrastructure, location, and service providers; insurance; and post-event learning after a disruptive event. Organizations that were large, well-resourced, and affiliated with national or government partners tended to display more of these characteristics.

The authors conclude that a more resilient food system is needed in order to ensure all people have safe and reliable access to food following both acute and longer-term crises. They highlight several critical areas for targeted intervention by local, state, and federal governments, such as creating opportunities for smaller, less-resourced organizations to share information and pool resources. Further research is needed to add to an emerging understanding of the factors that contribute to resilience in order to help food system organizations, researchers, and government officials identify vulnerabilities in their regional food systems and strategies to improve food system resilience in the face of ongoing and growing threats.

Credit: 
Elsevier

Novel approach for the treatment of cannabis use disorder shows promise in phase 2 trial

Experimental drug reduced cannabis use and withdrawal symptoms compared with placebo

Results of a phase 2 randomised trial of 70 men suggest that an experimental drug that boosts the brain's own cannabis-like chemical may help reduce withdrawal symptoms and cannabis use in men with cannabis dependence or cannabis use disorder.

The findings, published in The Lancet Psychiatry, show for the first time that men with cannabis dependence or cannabis use disorder treated with the fatty acid amide hydrolase (FAAH) inhibitor PF-04457845 used less cannabis and experienced fewer withdrawal symptoms--such as sleep disturbance--at 4-week follow-up compared with those given placebo, and there were no safety concerns.

PF-04457845 works by blocking FAAH, an enzyme that breaks down a principal natural endocannabinoid chemical in the brain called anandamide (that acts on brain cannabinoid receptors like cannabis does). Less FAAH means higher anandamide levels, which may potentially improve mood and reduce anxiety.

"A lot of other drugs have been tested for their ability to reduce cannabis use and withdrawal, but until now none have been consistently shown to work against both withdrawal symptoms and relapse. Furthermore, unlike cannabis or its principal active constituent delta-9 tetrahydrocannabinol (THC), FAAH inhibitors do not appear to have psychoactive or rewarding effects, and are therefore not likely to be abused", says Professor Deepak Cyril D'Souza from Yale University School of Medicine, USA, who led the research.

"PF-04457845 was well tolerated. However, more research is needed to demonstrate that PF-04457845 is safe and effective in a larger sample of treatment-seeking individuals, particularly women, and in other outpatient settings over the long-term." [1]

Cannabis use disorder is characterised by a continued problematic pattern of use despite negative consequences such as social and functional impairment, risky use, tolerance, and withdrawal symptoms. Cannabis withdrawal symptoms include craving for cannabis, irritability, anger, depression, sleep disturbances, and decreases in appetite and weight, which make it difficult to quit. Cannabis use disorder affects around 13 million people worldwide [2]. In the USA, around a third of all current cannabis users meet diagnostic criteria for cannabis use disorder, and more than 250,000 people were admitted for cannabis abuse treatment in 2016 [3]. Long-term recovery is achieved by only a few of those who seek treatment with behavioural interventions like cognitive behavioural therapy and motivational enhancement therapy.

Currently, there are no approved pharmacological treatments for problematic cannabis use. Almost every class of psychotropic drug has been tested for cannabis withdrawal or dependence, but none has been consistently effective or well tolerated. Substitution therapy with THC, the psychoactive compound in cannabis, has shown some promise in reducing withdrawal symptoms but does not prevent relapse and is limited by its psychoactive effects and abuse potential. In mice dependent on THC, blocking the FAAH enzyme reduced cannabis withdrawal syndrome.

In the study, 70 men (aged 18-55 years) with cannabis use disorder were randomised to receive the FAAH inhibitor PF-04457845 (4 mg daily; 46 men) or matching placebo (24 men) for 4 weeks. All participants were admitted to hospital for about one week of the treatment phase to achieve abstinence and cannabis withdrawal. Participants were then discharged to continue the remaining 3 weeks of treatment as outpatients.

Adherence to medication was confirmed by video calling and pill counts, and corroborated by weekly blood concentrations of PF-04457845 and anandamide. Cannabis use was assessed by self-report and urine screening for levels of the THC metabolite THC-COOH. Sleep problems, which feature prominently in cannabis withdrawal, were assessed using questionnaires and polysomnography (a test that records brain waves, blood oxygen level, heart rate, breathing, and eye and leg movements overnight).

At the start of the study, participants were smoking on average more than three cannabis joints a day. Admission to hospital reduced cannabis use to zero in both groups. During the inpatient phase (week 1), men treated with PF-04457845 reported fewer symptoms of cannabis withdrawal including depression, irritability, and anxiety compared with those given placebo (table 2).

At the end of treatment (4 weeks), the PF-04457845 group reported less cannabis use compared to the placebo group (average 0.40 vs 1.27 joints per day), and also had lower levels of THC-COOH in their urine (average concentrations of THC-COOH 266 ng/mL vs 658 ng/mL).

Additionally, improvements in overall sleep (longer sleep times, deeper sleep, and feeling more rested) were noted in the PF-04457845 group compared with placebo. In contrast, reductions in time spent in deep sleep occurred immediately following abstinence in the placebo group, consistent with the sleep disturbances seen in cannabis withdrawal syndrome.

The authors note that withdrawal-induced disturbances of deep sleep could play a key role in relapse, and that treatment via FAAH inhibition might be useful in correcting them, which in turn could facilitate maintenance of abstinence from cannabis.

Adherence to the study medication was 88%, and urinary THC-COOH concentrations correlated with self-reported cannabis use over time. PF-04457845 was well tolerated, and adverse events were mild and similar in both groups (20 [43%] of 46 participants in the PF-04457845 group vs 11 [46%] of 24 participants in the placebo group had an adverse event during the 4-week treatment phase). No serious adverse events were reported. Drop-out rates were similar between the PF-04457845 (8 [17%] of 46 men) and placebo groups (4 [17%] of 24 men).

The authors note some limitations, including that the study did not include women because of a lack of safety and toxicity data at the time, and did not fully assess motivation to quit cannabis use or the functional consequences of problematic cannabis use. In the future, studies will be needed to compare the advantages and disadvantages of direct agonists like THC with FAAH inhibitors.

Writing in a linked Comment, Dr Tony George from the University of Toronto and Centre for Addiction and Mental Health, Ontario, Canada, says FAAH might prove to be a safe and effective treatment approach but several questions remain to be answered: "No assessments of cannabis related functional impairment...were done, and thus the effect on functional outcomes achieved during this FAAH inhibitor trial is not clear...The population studied seemed not to include adults with psychiatric comorbidity, but it will be important to include these patients in future studies as they seem to be at much higher risk for the initiation and maintenance of cannabis use disorder. Finally, the endurability of FAAH inhibition needs to be rigorously tested with sufficient follow-up assessment periods (eg, 3-6 months after treatment)."

He concludes: "Most pharmacotherapy trials in addiction have sought to develop medications as adjuncts to behavioural interventions. The development of FAAH inhibitors as putative pharmacotherapies for cannabis use disorder should therefore make use of behavioural supports in both abstinence initiation and relapse-prevention designs. In particular, the use of cognitive-behavioural therapy in combination with contingency management could be the optimal approach to testing of putative cannabis pharmacotherapies, because they are most effective in achieving initial abstinence, facilitating the study of relapse-prevention efficacy, which might be the most sensitive test for medications development."

Credit: 
The Lancet

Artificial synapses made from nanowires

image: Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes that increases the wire's conductivity.

Image: 
Forschungszentrum Jülich

Scientists from Jülich together with colleagues from Aachen and Turin have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired "neuromorphic" processors, able to take over the diverse functions of biological synapses and neurons.

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers on the other hand rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

"With today's semiconductor technology, these functions are to some extent already achievable. These systems are however suitable for particular applications and require a lot of space and energy," says Dr. Ilia Valov from Forschungszentrum Jülich. "Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient," explains the researcher from Jülich's Peter Grünberg Institute.

For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning.

In order to create these properties, scientists at Forschungszentrum Jülich and RWTH Aachen University used a single zinc oxide nanowire, produced by their colleagues from the polytechnic university in Turin. Measuring approximately one ten-thousandth of a millimeter in size, this type of nanowire is over a thousand times thinner than a human hair. The resulting memristive component not only takes up a tiny amount of space, but also is able to switch much faster than flash memory.

Nanowires offer promising novel physical properties compared to other solids and are used among other things in the development of new types of solar cells, sensors, batteries and computer chips. Their manufacture is comparatively simple. Nanowires result from the evaporation deposition of specified materials onto a suitable substrate, where they practically grow of their own accord.

In order to create a functioning cell, both ends of the nanowire must be attached to suitable metals, in this case platinum and silver. The metals function as electrodes and, when an appropriate electric current is applied, release ions. The metal ions spread over the surface of the wire and build a bridge that alters its conductivity.

Components made from single nanowires are, however, still too isolated to be of practical use in chips. Consequently, the next step planned by the Jülich and Turin researchers is to produce and study a memristive element composed of a larger, relatively easy to generate group of several hundred nanowires, which should offer additional functionalities.

Credit: 
Forschungszentrum Juelich

Gender gaps in political perspectives among college students

image: Meredith Worthen is an associate professor in the Department of Sociology, OU College of Arts and Sciences

Image: 
University of Oklahoma

NORMAN--A University of Oklahoma sociologist, Meredith Worthen, has published a new study in the journal Sexuality Research and Social Policy on sexuality and gender gaps in political perspectives among lesbian, gay, bisexual, mostly heterosexual and heterosexual college students in the southern United States. Worthen confirms a clear "sexuality gap" between exclusive heterosexuals and all others, as well as gender gaps among mostly heterosexual and lesbian, gay and bisexual students, though some gaps run in the opposite direction from what was expected.

"This study fills the gaps in the research, expands our knowledge about sexuality and gender gaps in political attitudes and contributes to new ways of thinking about the perspectives of mostly heterosexual and lesbian, gay and bisexual people," said Worthen, associate professor in the OU College of Arts and Sciences. "This study works toward a deeper understanding of ways college students can promote political change and advocate for social justice."

Overall, Worthen proposes that social justice perspectives may be more common among lesbian, gay and bisexual people as a group, and especially among lesbian and bisexual women due to their oppressed identities. She suggests that these patterns may lead to more liberal lesbian, gay and bisexual political views and contribute to sexuality and gender gaps in political perspectives. In this study, liberal refers to liberal ideology, feminist identity and attitudes toward the death penalty and abortion.

The study found a distinct "lavender liberalism" among mostly heterosexual, lesbian, gay and bisexual college students. Exclusive heterosexuals, on the other hand, are significantly less liberal. Research indicates mostly heterosexual individuals are a growing and visible group on college campuses, so this study's inclusion of mostly heterosexuals as a distinct group that differs from exclusive heterosexuals helps fill a gap in the existing literature.

Overall, these findings support the stereotype that "all gays are liberal." When Worthen explored other sexuality gaps among mostly heterosexual and LGB respondents, the findings were less consistent. However, among the results, there is evidence of a bisexual woman consciousness that relates to liberalism among bisexual college women. Previous literature on the sexuality gap in political perspectives between lesbian and gay people and bisexual people indicates that lesbian and gay people are more liberal than bisexual people; the present findings do not support this and instead indicate that bisexual people are more liberal than gay and lesbian people. This finding has important implications for future work that centers bisexual women in conversations about political attitudes and liberal ideology.

Credit: 
University of Oklahoma

Diabetes drug liraglutide linked to lower risk of cardiovascular events

image: This is Björn Pasternak, senior researcher at the Department of Medicine, Solna, Karolinska Institutet, Sweden.

Image: 
Stefan Zimmerman

Real world data from a large Nordic study shows that use of liraglutide, a drug for type 2 diabetes, is associated with a lower risk of myocardial infarction, stroke or cardiovascular death. The study, led by researchers from Karolinska Institutet in Sweden, is published in The Lancet Diabetes & Endocrinology.

The number of patients with type 2 diabetes is increasing rapidly in the world. Cardiovascular disease is a serious complication of diabetes and represents a major cause of mortality in this patient group.

Liraglutide, a diabetes medication, became available for clinical use in 2009. This drug is a glucagon-like peptide 1 receptor agonist that lowers blood sugar and reduces body weight. A large clinical trial published previously showed that liraglutide reduced the risk of major cardiovascular events among patients with diabetes who had established cardiovascular disease or were at high cardiovascular risk. It has been unclear if these findings also translate to cardiovascular benefit in the broad patient population seen in routine clinical practice.

The current study was a collaborative project between researchers at Karolinska Institutet in Sweden, Statens Serum Institut in Denmark, NTNU in Norway and the Swedish National Diabetes Register. The researchers used several nationwide registers with information on prescription drugs, diseases and other data from more than 46,000 patients in Sweden and Denmark, 2010-2016.

Around 23,000 patients initiating treatment with liraglutide were compared with the same number of patients initiating treatment with another diabetes drug, DPP4 inhibitors. The main outcome in the study was major cardiovascular events, defined as myocardial infarction, stroke, or cardiovascular death.

The rate of major cardiovascular events was 14.0 per 1,000 person-years among patients using liraglutide and 15.4 per 1,000 among patients using DPP4 inhibitors, a statistically significant difference. This corresponded to 5 fewer major cardiovascular events per 1,000 patients followed up for 3 years.
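
As a rough back-of-envelope check of that figure (a simplification, since it treats the event rates as constant over three years of follow-up rather than using the study's actual cumulative incidence analysis), the absolute difference can be scaled up in a few lines of Python:

# Back-of-envelope check: absolute difference in event rates, scaled to
# 3 years of follow-up per 1,000 patients. Assumes roughly constant rates.
rate_liraglutide = 14.0 / 1000   # major cardiovascular events per person-year
rate_dpp4 = 15.4 / 1000          # major cardiovascular events per person-year
years = 3
fewer_events_per_1000 = (rate_dpp4 - rate_liraglutide) * years * 1000
print(f"~{fewer_events_per_1000:.1f} fewer events per 1,000 patients over {years} years")
# prints ~4.2, in the same ballpark as the reported figure of about 5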

Use of liraglutide was also associated with reduced risk of cardiovascular death and any cause of death. In a subgroup analysis, patients with a history of major cardiovascular disease appeared to benefit most from treatment with liraglutide, although this was not a statistically significant difference compared with patients without such history.

"Our study provides support for the cardiovascular effectiveness of liraglutide among a broader unselected group of patients, providing important confirmatory evidence from routine clinical practice. We believe it may be of interest to drug regulators, clinical guidelines, physicians, and patients," says last author Björn Pasternak, senior researcher at the Department of Medicine, Solna, Karolinska Institutet, and affiliated with Statens Serum Institut.

Credit: 
Karolinska Institutet

Infections during childhood increase the risk of mental disorders

A new study from iPSYCH shows that the infections children contract during childhood increase the risk of mental disorders during childhood and adolescence. This knowledge expands our understanding of the role of the immune system in the development of mental disorders.

High temperatures, sore throats and infections during childhood can increase the risk of also suffering from a mental disorder as a child or adolescent. This is shown by the first study of its kind to follow all children born in Denmark between 1 January 1995 and 30 June 2012. The researchers looked at all infections treated from birth and at the subsequent risk of childhood and adolescent psychiatric disorders.

"Hospital admissions with infections are particularly associated with an increased risk of mental disorders, but so too are less severe infections that are treated with medicine from the patient's own general practitioner," says Ole Köhler-Forsberg from Aarhus University and Aarhus University Hospital's Psychoses Research Unit. He is one of the researchers behind the study.

The study showed that children who had been hospitalised with an infection had an 84 per cent increased risk of suffering a mental disorder and a 42 per cent increased risk of being prescribed medicine to treat mental disorders. Furthermore, the risk for a range of specific mental disorders was also higher, including psychotic disorders, OCD, tics, personality disorders, autism and ADHD.

"This knowledge increases our understanding of the fact that there is a close connection between body and brain and that the immune system can play a role in the development of mental disorders. Once again research indicates that physical and mental health are closely connected," says Ole Köhler-Forsberg.

Highest risk following an infection

The study has just been published in JAMA Psychiatry and is part of the Danish iPSYCH psychiatry project.

"We also found that the risk of mental disorders is highest right after the infection, which supports the idea that the infection plays some part in the development of the mental disorder," says Ole Köhler-Forsberg.

It therefore appears that infections and the inflammatory reaction that follows can affect the brain and be part of the process of developing severe mental disorders. This can, however, also be explained by other causes, such as some people having a genetically higher risk of suffering more infections and mental disorders.

The new knowledge could inform further studies of the immune system and the role of infections in the development of the wide range of childhood and adolescent mental disorders for which the researchers have shown a correlation. This is the assessment of the senior researcher on the study, Research Director Michael Eriksen Benrós from the Psychiatric Centre Copenhagen at Copenhagen University Hospital.

"The temporal correlations between the infection and the mental diagnoses were particularly notable, as we observed that the risk of a newly occurring mental disorder was increased by 5.66 times in the first three months after contact with a hospital due to an infection, and was also increased more than twofold within the first year," he explains.

Michael Eriksen Benrós stresses that the study could in the long term lead to increased focus on the immune system and how infections play a role in childhood and adolescent mental disorders.

"It can have consequences for treatment, and the new knowledge can be used in making the diagnosis when new psychiatric symptoms occur in a young person. But first and foremost it corroborates our increasing understanding of how closely the body and brain are connected," he says.

Credit: 
Aarhus University

Eliminating microglia prevents heightened immune sensitivity after stress

Philadelphia, December 4, 2018 -- Using an animal model of chronic stress, researchers at The Ohio State University have shown that the immune cells of the brain, called microglia, hold unique signatures of chronic stress that leave the animal more sensitive to future stressful experiences, evident as increased anxiety and immune responses. Eliminating microglia so that these “stress memories” could not be maintained did not prevent the increased anxiety in response to later stress but did prevent the hypersensitive immune response.

The study, published in Biological Psychiatry, indicates that eliminating the microglia can reverse some aspects of stress sensitization, which lasts for over 3 weeks after chronic stress ends. The increased anxiety behavior, which was not prevented by elimination of the microglia, may have resulted from stress signatures maintained in neurons, which also persist for weeks after chronic stress.

“It is remarkable that memories of stress are not only stored in nerve cells, but also in the microglia, the immune cells of the brain. It is not the case that these immune cells can generate a representation of the stressful events. However, the microglia appear to be primed to produce a heightened immune response long after the stressful events that sensitized them have passed,” said John Krystal, MD, Editor of Biological Psychiatry.

Co-senior authors of the study, John Sheridan, PhD, and Jonathan Godbout, PhD, study how chronic stress makes a person more vulnerable to events later in life that otherwise might not have caused stress. Using the same mouse model of chronic stress called repeated social defeat (RSD), they had previously shown that over 3 weeks after the stress ended, when the anxiety and the inflammatory response had diminished, they could recall both the behavioral and inflammatory responses with even just a brief exposure to the stressor. “This recall response indicated that the initial exposure to repeated social defeat resulted in sensitization of both neural and microglial populations that responded to less intense exposure to the stressor,” said Dr. Sheridan.

In this study, when the animals were briefly exposed to a stressful event 24 days after RSD, the sensitized microglia recruited large amounts of inflammatory cells called monocytes to the brain, a process that increases the chance that anxiety will return in previously stressed-out mice. This recruitment process depended on the presence of microglia, as it was prevented when the microglia were missing.

“Overall, microglia-specific priming can be reversed, but the effectiveness of this approach depends on the context in which you are testing,” said Dr. Godbout. Stress sensitization involves hyperactive behavioral and immune responses, but only the immune component was prevented by eliminating and repopulating microglia.

Credit: 
Elsevier

Negative views of flexible working prevalent

Flexible working often draws negative views from other employees: a third of all UK workers believe those who work flexibly create more work for others, while a similar proportion believe their own careers will suffer if they use flexible working arrangements, according to new research.

This is the main finding from Dr Heejung Chung from the University of Kent who set out to analyse data from the 2011 Work-Life Balance Survey conducted by the government. Specifically she wanted to examine whether stigma against flexible workers exists, who is most likely to hold such beliefs and who is most likely to suffer from it.

The research also found that the majority of respondents who held negative views of flexible workers were male, while women, and especially mothers, were the most likely to suffer from such stereotypes.

Furthermore, almost one in five workers (18%) said they had experienced a direct negative career consequence as a result of working flexibly. This perhaps accounts for the very low uptake of the right to request flexible working since it was made law in 2003 and expanded to cover all workers in 2014.

It was women, especially mothers working part-time or reduced hours, rather than full-time workers who work flexibly - i.e. teleworking or on flexitime - who reported that their careers had been negatively affected by working flexibly. On the other hand, men, especially fathers (almost half of respondents), were more likely to report that their own jobs were negatively affected by others working flexibly.

Commenting on the research, Dr Chung, from the School of Social Policy, Sociology and Social Research at Kent, said: 'It is clear there are still many people who view flexible working as a negative and for different reasons. This has major implications for how employers introduce and offer flexible working arrangements in their organisation, especially as the government looks to increase the rights of workers to request flexible working.'

'A simple introduction and expansion of the right to request flexible working will not be enough. We need to challenge our prevalent organisational cultures, which privilege work above everything else, with long hours considered to be synonymous with productivity and commitment. Such change is crucial, especially if flexible working is to help reduce the gender wage gap.'

Credit: 
University of Kent

Understanding the rise of the modern far right using Marx and Lacan

As the end of 2018 approaches, a year that marked the 200th anniversary of the birth of the German philosopher Karl Marx, new research drawing on core concepts coined by Karl Marx and the French psychiatrist Jacques Lacan offers a fresh perspective on the rise of the far right.

We live in an age of proliferating far-right groups, supported by a variety of media outlets that sympathize with their ideology. Researchers are curious as to how, in this day and age and in light of recent history, we got to where we are.

New research presented in the article, "Mystified Consciousness: Rethinking the Rise of the Far Right with Marx and Lacan" by Claudia Leeb from Washington State University published in De Gruyter's journal Open Cultural Studies, posits several arguments suggesting that we must turn to thinkers Marx and Lacan and the philosophical concepts they coined to understand the rise of the far right. In the article, Leeb uses the theory of psychoanalysis to explain why white working classes - in the US and throughout the world - seem to have turned to the far right instead of forming an anti-capitalist emancipatory proletariat.

Marx and Lacan's concepts have largely been ignored in literature on the rise of the far right, but Leeb's article draws them together to argue that not only economic, but also psychological factors, brought about its rise.

According to psychoanalytic theorists, our identities are fundamentally incomplete or non-whole, which generates both a desire for whole identities and a fear that we will remain incomplete. Furthermore, in the ideologies of neo-liberal capitalist societies, we are only considered complete and whole if we have achieved economic success. However, since achieving economic success has become so difficult for most people in neo-liberal capitalist societies such as the United States, the article posits that large groups of citizens may have a heightened feeling of being incomplete and thus inadequate. White, male working-class Americans, says Leeb, embrace the ideology of the far right to fulfil the unconscious yearning to be whole again. This ideology provides them with fantasies that compensate for feeling non-whole or inadequate, such as achieving the American Dream of economic success, finding fulfillment in an afterlife through religion, hatred of ethnic minorities and disdain for women.

The far right fantasy is that of being more whole than "the Other". By branding certain groups of people, such as Muslims, immigrants and women, as limited or non-whole, the far right displaces the anxieties of the white male working classes onto others, allowing them to feel superior and therefore whole and "great again".

"The article provides an alternative explanation for the far right that the mystification and division in the working classes has to do with the expression of white, masculine supremacy, more so than economic dislocation," says political scientist Laurie Naranch from Siena College in Albany, New York.

The recent resurgence of far right extremism demonstrates just how much more study and research must occur in this field if it is to be curtailed.

Credit: 
De Gruyter

Ibrutinib plus rituximab superior to standard treatment for patients with chronic leukemia

San Diego - An interim analysis of a large phase 3 clinical trial found that the combination of ibrutinib plus rituximab was superior to standard treatment for patients age 70 and younger with previously untreated chronic lymphocytic leukemia (CLL). The trial met its primary endpoint of an improvement in progression-free survival (the length of time patients live before their disease worsens). The combination also improved overall survival, the trial's secondary endpoint. In general, patients in the ibrutinib-rituximab arm were less likely to experience serious side effects than those in the standard treatment arm. Until now, the standard treatment for previously untreated CLL has been a six-month course of FCR, which combines the chemotherapy drugs fludarabine and cyclophosphamide with rituximab.

The data and safety monitoring board overseeing the trial, known as E1912, recommended that these results be released immediately given their significance to public health. The findings were presented as a late-breaking abstract at the American Society of Hematology (ASH) annual meeting on December 4, 2018. The trial was sponsored by the National Cancer Institute (NCI), part of the National Institutes of Health, and designed by researchers with the ECOG-ACRIN Cancer Research Group.

"These results are practice-changing and immediately establish ibrutinib and rituximab as the new standard of care for the initial treatment of CLL in patients age 70 and younger," said lead investigator Tait Shanafelt, M.D., a professor of hematology at the Stanford University School of Medicine in Palo Alto, California. "The E1912 trial showed that the combination of ibrutinib and rituximab not only provided better leukemia control, it also prolonged life and had fewer side effects."

"These definitive results show why large trials like this, that test new therapies in an effort to achieve clinically meaningful benefit for patients, are so important," said Richard F. Little, M.D., of the Cancer Therapy Evaluation Program at NCI.

The study was conducted through NCI's National Clinical Trials Network. Pharmacyclics LLC provided ibrutinib and clinical trial support funding under a cooperative research and development agreement with NCI and a separate agreement with ECOG-ACRIN.

CLL is one of the most common types of leukemia in adults. It typically occurs during or after middle age and rarely occurs in individuals under the age of 40. Ibrutinib and rituximab are targeted treatments. Ibrutinib interferes with the survival of lymphocytic leukemia cells, and rituximab enhances the ability of the body's immune system to destroy the cells. Ibrutinib is approved by the U.S. Food and Drug Administration for the treatment of some blood cancers, including CLL.

The trial enrolled 529 patients between January 2014 and June 2016. Those enrolled in the trial were adults age 70 and younger who had never received treatment for CLL and required treatment. Patients were randomly assigned to receive either the ibrutinib-rituximab combination or FCR.

The first planned interim analysis for progression-free survival was performed in September 2018. With a median follow-up of 33.4 months, the hazard ratio for progression-free survival favored the ibrutinib group over the FCR group (HR=0.352). This means that, at any given time, the risk of disease progression was reduced by about two-thirds (65 percent) for patients in the ibrutinib group compared with the FCR group. This observed improvement in progression-free survival exceeded the trial design target. Overall survival was also superior for patients in the ibrutinib arm.
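
For readers unfamiliar with hazard ratios, the reported two-thirds reduction follows directly from the ratio under the usual proportional-hazards reading; a minimal illustration in Python:

# Interpreting the reported hazard ratio for progression-free survival.
# The relative reduction in the instantaneous risk of progression is 1 - HR,
# assuming the hazards stay proportional over the follow-up period.
hazard_ratio = 0.352
relative_reduction = 1 - hazard_ratio
print(f"Risk of progression reduced by about {relative_reduction:.0%}")
# prints about 65%, i.e. roughly the two-thirds reduction described above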

In accordance with the data and safety monitoring board's recommendation, the outcome has been disclosed to all patients participating in the study and their physicians. Patients who are receiving ibrutinib in the trial can continue therapy as long as it remains effective. All patients assigned to FCR have completed treatment and are continuing to be monitored per standard of care. Quality of life was rigorously measured in both arms, and the data are awaiting analysis.

Findings from another NCI-supported trial on ibrutinib in patients with CLL were also presented at the ASH meeting and published in The New England Journal of Medicine. The A041202 trial--an international phase 3 clinical trial coordinated by the Alliance for Clinical Trials in Oncology--demonstrated that ibrutinib produces superior progression-free survival compared with standard chemoimmunotherapy (bendamustine plus rituximab) in previously untreated patients with CLL who are age 65 and older. The study found that adding rituximab to ibrutinib did not improve progression-free survival beyond ibrutinib alone.

"These two NCI-funded trials have collectively established ibrutinib-based therapy as the first line treatment for CLL patients of any age," Dr. Little said.

Credit: 
ECOG-ACRIN Cancer Research Group

Consumption of children's antibiotics varies widely globally

4 December 2018

Researchers analyzing the sales of oral antibiotics for children in 70 high- and middle-income countries found that consumption varies widely from country to country, with little correlation between a country's wealth and the types of antibiotics consumed. Of concern is the relatively low use of amoxicillin, an antibiotic used to treat the most common childhood infections. In addition, the review found that in a quarter of all countries, sales of antibiotics that should only be used for specific indications, the 'Watch' antibiotics, accounted for 20% of total antibiotic consumption. This is of concern since there is a higher risk of bacteria developing resistance to 'Watch' antibiotics.

In 2017, the World Health Organization (WHO) grouped antibiotics into three categories - Access, Watch, and Reserve - with recommendations on when each category should be used to ensure antibiotics are available when needed, and that the right antibiotics are prescribed for the right infections. This categorization is designed to enhance treatment outcomes, reduce the development of drug-resistant bacteria, and preserve the effectiveness of 'last-resort' antibiotics when all others fail.

While the report finds the consumption of 'Access' antibiotics made up on average 76% of child-appropriate antibiotic formulations across all countries, the use of amoxicillin in community practice is relatively low (median 31%). Categorized by WHO as an 'Access' antibiotic, amoxicillin should be used as first choice for most common antibiotic treatment indications encountered in community practice.
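
To illustrate the kind of metric the researchers built on the WHO grouping, the short sketch below computes an 'Access' share from sales figures broken down by AWaRe category; the volumes are invented for the example and are not taken from the study.

# Hypothetical example of an AWaRe-based metric: the share of a country's
# child-appropriate antibiotic sales that falls in the 'Access' group.
# All figures below are made up for illustration.
sales_by_aware_group = {
    "Access": 620_000,   # first-choice antibiotics such as amoxicillin
    "Watch": 290_000,    # antibiotics to be limited to specific indications
    "Reserve": 10_000,   # last-resort antibiotics
}
total_sales = sum(sales_by_aware_group.values())
access_share = sales_by_aware_group["Access"] / total_sales
print(f"Access share: {access_share:.0%}")  # about 67% in this made-up example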

Dr Julia Bielicki, Senior Lecturer at St George's, University of London, and study lead said: "This is the first attempt at developing simple metrics of global child community antibiotic use based on the WHO's grouping. The data can be used by countries to assess their antibiotic use patterns for young children. Countries with low Access percentages can identify opportunities for greater use of these antibiotics. Unnecessary use of Watch antibiotics is more clearly identifiable."

The research was supported by GARDP, the Global Antibiotic Research and Development Partnership. Dr Manica Balasegaram, Executive Director of GARDP, said: "WHO strongly encourages use of 'Access' antibiotics to treat the majority of infections for children and adults as they are affordable, generally less toxic and less likely to drive future antibiotic resistance. Providing country policymakers with evidence on what antibiotics are being prescribed in their country is an important first step to help countries tackle inappropriate prescribing of antibiotics. This in turn will help countries deliver their National Action Plan on antimicrobial resistance and ensure antibiotics remain available and effective for generations to come."

"Consumption of oral antibiotic formulations for young children according to the WHO AWaRe groups; an analysis of sales data from 70 middle and high-income countries" was published in Lancet Infectious Diseases on 3 December 2018.

Credit: 
Drugs for Neglected Diseases Initiative

Personalised ultrasound scan showing atherosclerosis helps reduce cardiovascular risk

A new randomised trial of over 3000 people, published in The Lancet, finds that sharing pictorial representations of personalised scans showing the extent of atherosclerosis (vascular age and plaque in the arteries) with patients and their doctors results in a decreased risk of cardiovascular disease one year later, compared with people receiving the usual information about their risk.

Smoking cessation, physical activity, statins, and antihypertensive medication to prevent cardiovascular disease are among the most evidence-based and cost-effective interventions in health care. However, low adherence to medication and lifestyle changes mean that these types of prevention efforts often fail.

"Cardiovascular disease is the leading cause of death in many countries, and despite a wealth of evidence about effective prevention methods from medication to lifestyle changes, adherence is low," says Professor Ulf Näslund, Umea University (Sweden). "Information alone rarely leads to behaviour change and the recall of advice regarding exercise and diet is poorer than advice about medicines. Risk scores are widely used, but they might be too abstract, and therefore fail to stimulate appropriate behaviours. This trial shows the power of using personalised images of atherosclerosis as a tool to potentially prompt behaviour change and reduce the risk of cardiovascular disease." [1]

3532 individuals taking part in the Västerbotten County (Sweden) cardiovascular prevention programme were included in the study and underwent vascular ultrasound investigation of the carotid arteries. About half (1749) were randomly selected to receive the pictorial representation of their carotid ultrasound, and the other half (1783) did not receive the pictorial information.

Participants aged 40 to 60 years with one or more cardiovascular risk factors were eligible to participate. All participants underwent blood sampling, a survey of clinical risk factors, and ultrasound assessment of carotid intima-media thickness and plaque formation. Each person in the intervention group received a pictorial representation of plaque formation in their arteries, along with a gauge ranging from green to red to illustrate their biological (vascular) age compared with their chronological age. They then received a follow-up call from a nurse after 2-4 weeks to answer any questions. The same pictorial presentation of the ultrasound result was also sent to their primary care doctor, so the intervention targeted both patients and their doctors.
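As a rough illustration of the gauge described above, here is a minimal Python sketch that maps the gap between a person's estimated vascular age and their chronological age onto a green-to-red band. The cut-offs are invented for this sketch; the trial's publication, not this example, defines the actual presentation used.

def gauge_colour(vascular_age, chronological_age):
    """Map the vascular-age gap onto an illustrative green-to-red gauge.

    The thresholds below are assumptions for this sketch, not the trial's.
    """
    gap = vascular_age - chronological_age
    if gap <= 0:
        return "green"   # arteries look no older than the person
    if gap <= 5:
        return "yellow"  # mildly elevated vascular age
    if gap <= 10:
        return "orange"  # clearly elevated vascular age
    return "red"         # markedly elevated vascular age

# Example: a 52-year-old whose ultrasound suggests a vascular age of 60.
print(gauge_colour(60, 52))  # "orange"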

Both groups received information about their cardiovascular risk factors and a motivational health dialogue to promote a healthier lifestyle and, if needed according to clinical guidelines, pharmacological treatment.

At one-year follow-up (3175 participants completed follow-up), cardiovascular risk scores were recalculated and differed between the two groups: the Framingham Risk Score decreased in the intervention group but increased in the control group (-0.58 vs +0.35), and SCORE increased twice as much in the control group as in the intervention group (0.27 vs 0.13).
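Read as simple arithmetic (and setting aside the paper's adjusted analyses), the between-group differences in one-year change implied by those figures are roughly as follows:

# Values as reported in the press release; negative favours the intervention.
framingham_change = {"intervention": -0.58, "control": +0.35}
score_change = {"intervention": 0.13, "control": 0.27}

print(round(framingham_change["intervention"] - framingham_change["control"], 2))  # -0.93
print(round(score_change["intervention"] - score_change["control"], 2))            # -0.14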

Total and LDL cholesterol also improved in both groups, but the reduction was greater in the intervention group than in the control group. A graded effect was also noted, with the strongest effect among participants with the highest baseline risk.

"The differences at a population level were modest, but important, and the effect was largest among those at highest risk of cardiovascular disease, which is encouraging. Imaging technologies such as CT and MRI might allow for a more precise assessment of risk, but these technologies have a higher cost and are not available on an equitable basis for the entire population. Our approach integrated an ultrasound scan, and a follow up call with a nurse, into an already established screening programme, meaning our findings are highly relevant to clinical practice," says Prof Näslund [1].

Importantly, the effect of the intervention did not differ by education level, suggesting that this type of risk communication might help to reduce the social gap in health. The findings come from a middle-aged population with low to moderate cardiovascular disease risk.

Further research is needed to understand whether the results are sustainable beyond one year, and whether the intervention will lead to a reduction of cardiovascular disease in the long-term. Formal cost-effectiveness analyses will be done after 3-year follow-up.

Writing in a linked Comment, Dr Richard Kones, Umme Rumana and Alberto Morales Salinas, Cardiometabolic Research Institute (USA), say:

"Despite advances in cardiovascular therapies, coronary heart disease remains the leading cause of death in almost all countries. Two of the most remarkable recent treatments, percutaneous coronary intervention and the availability of proprotein convertase subtilisin/ kexin type 9 inhibitor drugs, have revolutionised cardiology practice. Although life-saving and now essential therapies, whether they will be able to reduce the incidence and associated morbidity and mortality of coronary heart disease remains unlikely since the increase in prevalence of obesity and diabetes is raising the background level of cardiovascular risk... Although there are proven methods of lowering cardiovascular risk and these are generally being better used generally in high-income countries, poor adherence and uneven availability and access in low income and middle-income countries still pose serious challenges... About less than half of all patients taking medications are adherent, which substantially increases morbidity and mortality. Non-adherence to medication accounts for 33-69% of all hospital admissions in the USA, and, among patients with coronary heart disease, the extent of low adherence is related to the number of adverse cardiovascular events. Poor adherence is multifactorial and can broadly be grouped into categories related to patients, physicians and therapies, communication, health-care systems, socioeconomic factors, and unpredictable negative effects of the internet. One of the most pertinent factors is patient-related perceived risk and motivation. Despite the many methods that have been proposed, effectiveness in improving adherence and outcomes has been relatively disappointing. It is in this context that the randomised controlled trial by Ulf Näslund and colleagues in The Lancet is relevant."

Credit: 
The Lancet

Solving 21st-century problems requires skills that few are trained in, scientists find

From companies trying to resolve data security risks to coastal communities preparing for rising sea levels, solving modern problems requires teamwork that draws on a broad range of expertise and life experiences. Yet individuals receive little formal training to develop the skills that are vital to these collaborations.

In a new scientific report published in Psychological Science in the Public Interest, an interdisciplinary team of researchers identifies the essential cognitive and social components of collaborative problem solving (CPS) and shows how integrating existing knowledge from a variety of fields can lead to new ways of assessing and training these abilities.

The report, authored by Arthur C. Graesser (University of Memphis), Stephen M. Fiore (University of Central Florida), Samuel Greiff (University of Luxembourg), Jessica Andrews-Todd (Educational Testing Service), Peter W. Foltz (Pearson and University of Colorado), and Friedrich W. Hesse (Leibniz-Institut für Wissensmedien and University of Tübingen), is accompanied by a commentary from cognitive development expert Mary Gauvain (University of California, Riverside).

"CPS is an essential skill in the workforce and the community because many of the problems faced in the modern world require teams to integrate group achievements with team members' idiosyncratic knowledge," the authors of the report say.

As societies and technologies become increasingly complex, they generate increasingly complex problems. Devising efficient, effective, and innovative solutions to these problems requires CPS skills that most students lack. According to a 2015 assessment of more than 500,000 15-year-old students conducted by the Organisation for Economic Co-operation and Development, only 8% of students around the world showed strong CPS skills.

"The experiences of students in and out of the classroom are not preparing them for these skills that are needed as adults," Graesser and colleagues write.

According to the report, a unique set of cognitive and social skills supports the core aspects of CPS, including:

Shared understanding: Group members share common goals when solving a new problem.

Accountability: The contributions that each member makes are visible to the rest of the group.

Differentiated roles: Group members draw on their specific expertise to complete different tasks.

Interdependency: Group members depend on the contributions of others to solve the problem.

One reason for the lack of CPS training is a deficit in evidence-based standards and curricula. Secondary school curricula typically focus on teaching task- and discipline-specific knowledge, placing little emphasis on developing students' ability to communicate and collaborate effectively.

"Students rarely receive meaningful instruction, modeling, and feedback on collaboration," the researchers note.

When students do receive training relevant to CPS, it is often because they participate in extracurricular activities such as band, sports, student newspapers, and volunteer activities. Even then, the collaborative competencies are not directly relevant to problem solving. The authors argue that it is time to make CPS activities a core part of the curriculum.

Although considerable psychological, educational, and management research has examined factors that contribute to effective learning, teamwork, and decision making, research that directly examines how to improve collaborative problem solving is scarce.

According to the authors, "we are nearly at ground zero in identifying pedagogical approaches to improving CPS skills."

Developing and implementing effective CPS training stands to have significant societal impacts across a wide range of domains, including business, science, education, technology, environment, and public health. In a project funded by the National Science Foundation, for example, Fiore and other research team members are training students to collaborate across a range of disciplines -- including environmental science, ecology, biology, law, and policy -- to identify ways to address social, business, and agricultural effects of rising sea levels in Virginia's Eastern Shore.

"It's exciting to engage in real world testing of methods developed in laboratory studies on teamwork, to see how feedback on collaboration, and reflection on that feedback to improve teamwork strategies, can improve students' problem solving," Fiore explains.

Identifying the necessary components of this kind of training and determining how to translate those components across a variety of real-world settings will, itself, require interdisciplinary cooperation among researchers, educators, and policymakers.

In the commentary, Gauvain emphasizes that achieving a comprehensive understanding of CPS requires taking a developmental perspective and she notes that psychological scientists will be essential in this endeavor. Graesser and colleagues agree:

"When psychological scientists collaborate with educational researchers, computer scientists, psychometricians, and educational experts, we hope to move forward in addressing this global deficit in CPS," they conclude.

Credit: 
Association for Psychological Science