
Study reveals a short bout of exercise enhances brain function

Most people know that regular exercise is good for your health. New research shows it may make you smarter, too.

Neuroscientists at Oregon Health & Science University (OHSU) in Portland, working with mice, have discovered that a short burst of exercise directly boosts the function of a gene that increases connections between neurons in the hippocampus, the region of the brain associated with learning and memory.

The research is published online in the journal eLife.

"Exercise is cheap, and you don't necessarily need a fancy gym membership or have to run 10 miles a day," said co-senior author Gary Westbrook, M.D., senior scientist at the OHSU Vollum Institute and Dixon Professor of Neurology in the OHSU School of Medicine.

Previous research in animals and in people shows that regular exercise promotes general brain health. However, it's hard to untangle the overall benefits of exercise to the heart, liver and muscles from the specific effect on the brain. For example, a healthy heart oxygenates the whole body, including the brain.

"Previous studies of exercise almost all focus on sustained exercise," Westbrook said. "As neuroscientists, it's not that we don't care about the benefits on the heart and muscles but we wanted to know the brain-specific benefit of exercise."

So the scientists designed a study that specifically measured the brain's response to single bouts of exercise in otherwise sedentary mice, which were placed for short periods on running wheels. The mice ran a few kilometers in two hours.

The study found that short-term bursts of exercise - the human equivalent of a weekly game of pickup basketball, or 4,000 steps - promoted an increase in synapses in the hippocampus. Scientists made the key discovery by analyzing genes whose expression increased in single neurons activated during exercise.

One particular gene stood out: Mtss1L. This gene had been largely ignored in prior studies in the brain.

"That was the most exciting thing," said co-lead author Christina Chatzi, Ph.D.

The Mtss1L gene encodes a protein that causes bending of the cell membrane. Researchers discovered that when this gene is activated by short bursts of exercise, it promotes small growths on neurons known as dendritic spines - the site at which synapses form.

In effect, the study showed that an acute burst of exercise is enough to prime the brain for learning.

In the next stage of research, scientists plan to pair acute bouts of exercise with learning tasks to better understand the impact on learning and memory.

Credit: 
Oregon Health & Science University

The neuroscience of autism: New clues for how condition begins

image: In green, a normal radial glial cell and its neuronal progeny superimposed on a cortex scaffold illustration. In purple, a Memo1-deficient cell and its progeny. MEMO1 mutations are associated with autism and epilepsy.

Image: 
Anton Lab, UNC School of Medicine

CHAPEL HILL, N.C. - UNC School of Medicine scientists have unveiled how a particular gene helps organize the scaffolding of brain cells called radial progenitors, which is necessary for the orderly formation of the brain. Previous studies have shown that this gene is mutated in some people with autism.

The discovery, published in Neuron, illuminates the molecular details of a key process in brain development and adds to the scientific understanding of the biological basis of autism spectrum disorder (ASD), a condition linked to brain development and estimated to affect about one in 59 children born in the United States.

"This finding suggests that ASD can be caused by disruptions occurring very early on, when the cerebral cortex is just beginning to construct itself," said study senior author Eva S. Anton, PhD, professor of cell biology and physiology at the UNC School of Medicine and member of the UNC Neuroscience Center and the UNC Autism Research Center.

The cerebral cortex - which in humans is responsible for higher brain functions including perception, speech, long-term memory, and consciousness - is relatively large and dominant compared to other brain structures.

How the cortex constructs itself in the developing brain of a human or other mammal is far from fully understood. But scientists know that early in cortical development, precursor cells called radial glial cells (RGCs) appear at the bottom of the developing cortex in a regularly spaced, tiled pattern. Each RGC sprouts a single stalk-like structure, called a basal process, that extends to the top of the cortex. Collectively, these RGCs and their basal processes form a scaffold, much like the scaffolds of a construction site.

RGCs divide to form young cortical neurons, and these baby neurons climb the scaffold to find their proper places in the developing brain. The cortex, thanks to this scaffolding system, normally develops a highly regular structure with six distinct layers of neurons required for the normal formation of functional neural cortical circuits.

Anton and colleagues discovered that a gene encoding a protein called Memo1 is needed to organize the tiled radial glial cell scaffold. Mutations in the Memo1 gene have also been found in some people with autism and are suspected of causing the condition. To explore Memo1's role in brain development and autism, Anton's team first engineered mice in which the Memo1 gene is deleted early in brain development in RGCs.

They found that the resulting RGC scaffold was disrupted. Each RGC's stalk-like basal process formed too many branches and no longer formed a guiding scaffold, resulting in neuronal misplacement and disorganized layers. The scientists traced this ill effect, in part, to unstable microtubules, which normally help reinforce the scaffold structure and serve as railways for the internal traffic of key molecules necessary for RGC function.

Intriguingly, studies of the brains of children with autism have found patches of similar neuronal disorganization. The scientists then analyzed MEMO1 gene mutations reported recently in individuals with autism behaviors and intellectual disabilities. They discovered that the human MEMO1 mutation resulted in a shortened form of the Memo1 protein, which can disrupt RGC development.

Further supporting the autism connection, Anton and his colleagues discovered that mice lacking Memo1 in their RGCs behaved abnormally, showing a lack of explorative activity similar to that seen in some people with autism.

The findings overall suggest that Memo1-associated autism may be wired into the brain much earlier in development than other forms of autism, which have their origins in disrupted neuronal differentiation and connectivity.

"For disorders of brain development such as ASD, it is important to understand the origins of the problem even if we are still far away from being able to correct developmental disruptions occurring in utero," Anton said. "We need this foundational knowledge if we are to truly get to the root causes of these conditions and eventually develop better diagnostic or therapeutic strategies."

Anton and colleagues are continuing to evaluate MEMO1 in cortical development and autism. As more human mutations are identified in this gene family and other ASD genes, they plan to shift from experiments in mice to the study of human brain organoids - a kind of mini brain that can be grown from patient-derived stem cells carrying ASD-related mutations.

Credit: 
University of North Carolina Health Care

Gut microbes protect against flu virus infection in mice

image: This visual depicts the findings of Bradley and Finsterbusch et al. who identify lung stroma as the target of microbiota-driven signals that set the interferon signature in these cells. Antibiotic treatment reduces gut microbiota and the lung stromal interferon signature and facilitates early influenza virus replication in lung epithelia, effects that can be reversed by fecal transplantation.

Image: 
Bradley and Finsterbusch et al./Cell Reports

Commensal gut microbes stimulate antiviral signals in non-immune lung cells to protect against the flu virus during early stages of infection, researchers report July 2nd in the journal Cell Reports. Enhanced baseline type I interferon (IFNα/β) signaling, which drives antiviral responses, reduced flu virus replication and weight loss in mice, but this protective effect was attenuated by antibiotic treatment.

"This study supports that taking antibiotics inappropriately not only promotes antibiotic resistance and wipes out the commensals in your gut that are useful and protective, but it may also render you more susceptible to viral infections," says senior study author Andreas Wack of the Francis Crick Institute in the UK. "In some countries, the livestock industry uses antibiotics a lot, prophylactically, so treated animals may become more susceptible to virus infections."

IFNα/β signalling plays a central role in the immune defense against viral infections. These pathways are fine-tuned to elicit antiviral protection while avoiding tissue damage due to inflammation. This trade-off is apparent in individuals with a genetic variant that results in high interferon production. They can mount enhanced immune responses against viruses, but the flip side is that they show signs of chronic auto-inflammation. It has not been clear exactly how IFNα/β signalling strikes the right balance, maximizing antiviral protection while minimizing excessive inflammation.

To address this question, Wack and his team used mice with enhanced baseline IFNα/β signalling due to a mutation that increases expression levels of the IFNα/β receptor. These mice were more resistant to influenza virus infection, with less weight loss, lower virus gene expression eight hours after infection, and reduced influenza virus replication two days later. Given that the viral load was controlled early, subsequent IFNα/β signalling and antiviral immune responses were never fully set in motion. The results suggest that regulating expression levels of the IFNα/β receptor could be key to fine-tuning IFNα/β signalling in the lungs.

But the protective effect of enhanced baseline IFNα/β signalling was reduced by two to four weeks of antibiotic treatment, which decreased IFNα/β signalling mainly in lung stromal cells -- non-immune cells that make up the structural tissue of organs. Conversely, fecal transplant reversed the antibiotic-induced susceptibility to influenza virus infection, suggesting a potential role for gut microbes.

Taken together, the results suggest that microbiota increase IFNα/β signalling in lung stromal cells, thereby enhancing protection against influenza virus infection. The new findings are consistent with those from previous studies showing that mice treated with oral antibiotics are more susceptible to viral infections, including the influenza A virus.

"This and previous studies demonstrate that microbiota-driven signals can act at multiple levels, inducing an antiviral state in non-immune cells to control infection early on, and enhancing the functionality of immune cells later in infection," says Wack.

Moving forward, the researchers plan to further investigate the exact origins and mechanisms underlying microbiota-driven antiviral resistance. "Previous research has suggested that the microbiota-driven signal in lung stromal cells could originate either from the gut or the lung," Wack says. "However, in the work presented here, the results of the fecal transplant experiments strongly suggest a gut involvement in this effect. We would love to understand the exact nature of the signal from the gut to the lung, and we are working on several hypotheses."

Credit: 
Cell Press

New study challenges claim that exogenous RNA is essential for sperm function

Scientists from the University of Bath are challenging the claims of two high-profile papers from 2018, which reported that, in the mouse, RNA has to be added to sperm for them to be fully fertile. The Bath findings undermine a proposed mechanism of epigenetic inheritance in which offspring inherit traits acquired by their parents.

In double-blind experiments, researchers from the Department of Biology & Biochemistry have shown that healthy mouse pups can be born from sperm that have not gained short RNA chains as they migrate through the epididymis - a ductular organ in which sperm acquire forward motility after they emerge from the testis.

This contradicts the results of the 2018 papers, which reported that mouse eggs fertilised with sperm taken from the 'caput' region of the epididymis - where sperm first enter the epididymis on leaving the testis - would not develop into viable embryos.

The results are published in Developmental Cell.

Lead author Professor Tony Perry said: "When I saw these two papers I just thought 'this can't be right' and with some quite straightforward experiments we have shown that it probably isn't.

"We have known for years that sperm taken from mouse testis contribute to full-term embryonic development following fertilisation. The 2018 studies proposed that sperm would unaccountably have lost this ability in the caput region of the epididymis but then reacquired it.

"Here we have shown that sperm taken from the caput region of the epididymis can, in fact, support full term development.

The Bath team took sperm from two regions of the epididymis, the caput and the cauda; the cauda region is where sperm are usually taken from mice for in vitro fertilisation, so those sperm were expected to work. Eggs were fertilised with the sperm, and healthy pups were born from both sperm types (caput and cauda) with no significant difference in the number of pups born, their health, weight or fertility.

Professor Perry added: "Not only does this set the record straight in terms of tallying with well-established developmental biology, but the conclusion of the previous research was that acquired RNA was in some way essential for healthy embryo development - which doesn't seem to be the case.

"The 2018 papers would have provided one possible mechanism for epigenetic inheritance, but it's not supported by our data. It's important to suggest corrections to the record where they come to light, and publish results that fail to replicate so we can build confidence in our view of biology, especially where it has clinical implications, as is the case for epigenetic inheritance."

Credit: 
University of Bath

The world needs a global agenda for sand

What links the building you live in, the glass you drink from and the computer you work on? The answer is smaller than you think and is something we are rapidly running out of: sand.

In a commentary published today in the journal Nature, a group of scientists from the University of Colorado Boulder, the University of Illinois, the University of Hull and Arizona State University highlight the urgent need for a global agenda for sand.

Sand is a key ingredient in the recipe of modern life, and yet it might be our most overlooked natural resource, the authors argue. Sand and gravel are being extracted faster than they can be replaced. Rapid urbanization and global population growth have fueled the demand for sand and gravel, with between 32 and 50 billion tons extracted globally each year.

"From 2000-2100 it is projected there will be a 300 percent increase in sand demand and 400 percent increase in prices," said Mette Bendixen, a researcher at CU Boulder's Institute of Arctic and Alpine Research (INSTAAR). "We urgently require a monitoring program to address the current data and knowledge gap, and thus fully assess the magnitude of sand scarcity. It is up to the scientific community, governments and policy makers to take the steps needed to make this happen."

A lack of oversight and monitoring is leading to unsustainable exploitation, planning and trade. Removal of sand from rivers and beaches has far-reaching impacts on ecology, infrastructure, national economies and the livelihoods of the 3 billion people who live along the world's river corridors. Illegal sand mining has been documented in 70 countries across the globe, and battles over sand have reportedly killed hundreds in recent years, including local citizens, police officers and government officials.

"Politically and socially, we must ask: If we can send probes to the depths of the oceans or the furthest regions of the solar system, is it too much to expect that we possess a reliable understanding of sand mining in the world's great rivers, and on which so much of the world's human population, rely?" said Jim Best, a professor at the University of Illinois Department of Geology. "Now is the time to commit to gaining such knowledge by fully grasping and utilizing the new techniques that are at our disposal."

In order to move towards globally sustainable sand extraction, the authors argue, we must fully understand where sustainable sources occur and reduce current extraction rates and sand needs, by recycling concrete and developing alternatives to sand (such as crushed rocks or plastic waste materials). This will rely on knowledge of the location and extent of sand mining, as well as the natural variations in sand flux in the world's rivers.

"The fact that sand is such a fundamental component of modern society, and yet we have no clear idea of how much sand we remove from our rivers every year, or even how much sand is naturally available, makes ensuring this industry is sustainable very, very difficult" said Chris Hackney, research fellow at the University of Hull's Energy and Environment Institute. "It's time that sand was given the same focus on the world stage as other global commodities such as oil, gas and precious metals."

"The issue of sand scarcity cannot be studied in geographical isolation as it has worldwide implications," said Lars L. Iversen, a research fellow at Arizona State University's Julie Ann Wrigley Global Institute of Sustainability. "The reality and size of the problem must be acknowledged--and action must be taken--on a global stage. In a rapidly changing world, we cannot afford blind spots."

Credit: 
University of Colorado at Boulder

Novel computer model supports cancer therapy

Researchers from the Life Sciences Research Unit (LSRU) of the University of Luxembourg have developed a computer model that simulates the metabolism of cancer cells. They used the programme to investigate how combinations of drugs could be used more effectively to stop tumour growth. The biologists have now published their findings in EBioMedicine, a scientific journal of the prestigious Lancet group.

The metabolism of cancer cells is optimised to enable fast growth of tumours. "Their metabolism is much leaner than that of healthy cells, as they are just focused on growth. However, this makes them more vulnerable to interruptions in the chain of chemical reactions that the cells depend on. Whereas healthy cells can take alternative routes when one metabolic path is disabled, this is more difficult for cancer cells," explains Thomas Sauter, Professor of Systems Biology at the University of Luxembourg and lead author of the paper. "In our study, we investigated how drugs or combinations of drugs could be used to switch off certain proteins in cancer cells and thereby interrupt the cell's metabolism."

To do this, the researchers created digital models of healthy and of cancerous cells and fed them with gene sequencing data from 10,000 patients in the Cancer Genome Atlas (TCGA) of the US National Cancer Institute (NCI). Using these models, the researchers were able to simulate the effects different active substances had on cells' metabolisms, allowing them to identify drugs that inhibited cancer growth while leaving healthy cells unaffected. The models allow filtering out drugs that do not work or are toxic, so that only the promising ones are tested in the lab.
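The article does not spell out the team's computational pipeline, but the general idea -- representing a drug as the knockout of its target gene in a genome-scale metabolic model and checking whether simulated growth collapses -- can be sketched with the open-source cobrapy toolbox. In this sketch the small "textbook" E. coli demo model stands in for the patient-derived cancer and healthy-cell models, and the target gene IDs are placeholders, not drug targets from the study:

```python
# Sketch: screen candidate drugs by simulating each as a gene knockout in a
# genome-scale metabolic model (cobrapy). The "textbook" E. coli demo model
# stands in for the patient-specific models described in the article.
from cobra.io import load_model

model = load_model("textbook")  # small demo model, fetched on first use
baseline = model.optimize().objective_value  # unperturbed growth rate

candidate_targets = ["b0351", "b1241"]  # placeholder gene IDs
hits = []
for gene_id in candidate_targets:
    with model:  # the context manager reverts the knockout on exit
        model.genes.get_by_id(gene_id).knock_out()
        growth = model.optimize().objective_value
        if growth < 0.5 * baseline:  # arbitrary threshold for "inhibits growth"
            hits.append((gene_id, growth))

print(f"baseline growth: {baseline:.3f}")
print("growth-inhibiting knockouts:", hits)
```

In the study's setting, the same screen would be run twice per drug -- once on the cancer model and once on the healthy-cell model -- keeping only drugs that slow the former while sparing the latter.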

With the help of the models, they tested about 800 medications, of which 40 were predicted to inhibit cancer growth. About 50 percent of these drugs were already known as anti-cancer therapeutics, but 17 of them are so far only approved for other treatments. "Our tool can help with so-called 'drug repositioning', which means that new therapeutic purposes are found for existing medication. This could significantly reduce the cost and time of drug development," Prof. Sauter said.

The particular advantage of the approach is the efficiency of its mathematical method. "We managed to create 10,000 patient models within one week, without the use of high-performance computing. This is exceptionally fast," comments Dr. Maria Pacheco, postdoctoral researcher at the University of Luxembourg and first author of the study. In addition, Dr. Elisabeth Letellier, principal investigator of the Molecular Disease Mechanisms group at the University of Luxembourg and collaborator on the present study, emphasizes: "In the future, this could allow us to build models of individual cancer patients and virtually test drugs in order to find the most efficient combination. This could also bring fresh hope to patients for whom known therapies have proven to be ineffective."

So far, the models have been tested only for colorectal cancer, but the algorithm works in principle for all types of cancer, according to Thomas Sauter. He and his team are currently considering developing commercial applications for their method.

Credit: 
University of Luxembourg

Arts and medicine: clarifying history, lessons for today from Peter Neubauer's twins study

Bottom line: This Arts and Medicine feature reviews "Three Identical Strangers" and "The Twinning Reaction," two documentaries telling the story of identical twins and triplets adopted as infants into separate families who were unknowing participants in a two-decade nature vs. nurture study of child development beginning in 1960.

Credit: 
JAMA Network

Combat veterans more likely to experience mental health issues in later life

CORVALLIS, Ore. - Military veterans exposed to combat were more likely to exhibit signs of depression and anxiety in later life than veterans who had not seen combat, a new study from Oregon State University shows.

The findings suggest that military service, and particularly combat experience, is a hidden variable in research on aging, said Carolyn Aldwin, director of the Center for Healthy Aging Research in the College of Public Health and Human Sciences at OSU and one of the study's authors.

"There are a lot factors of aging that can impact mental health in late life, but there is something about having been a combat veteran that is especially important," Aldwin said.

The findings were published this month in the journal Psychology and Aging. The first author is Hyunyup Lee, who conducted the research as a doctoral student at OSU; co-authors are Soyoung Choun of OSU and Avron Spiro III of Boston University and the VA Boston Healthcare System. The research was funded by the National Institute on Aging and the Department of Veterans Affairs.

There is little existing research that examines the effects of combat exposure on aging and in particular on the impacts of combat on mental health in late life, Aldwin said. Many aging studies ask about participants' status as veterans, but don't unpack that further to look at differences between those who were exposed to combat and those who weren't.

Using data from the Veterans Affairs Normative Aging Study, a longitudinal study that began in the 1960s to investigate aging in initially healthy men, the researchers explored the relationship between combat exposure and depressive and anxiety symptoms, as well as self-rated health and stressful life events.

They found that increased rates of mental health symptoms in late life occurred only among combat veterans; the increases were not seen in veterans who had not been exposed to combat.

Generally, mental health symptoms such as depression and anxiety tend to decrease or remain stable during adulthood but can increase in later life. The researchers found that combat exposure has a unique impact on that trajectory, independent of other health issues or stressful life events.

"In late life, it's pretty normal to do a life review," Aldwin said. "For combat veterans, that review of life experiences and losses may have more of an impact on their mental health. They may need help to see meaning in their service and not just dwell on the horrors of war."

Veterans' homecoming experience may also color how they view their service later in life, Aldwin said. Welcoming veterans home and focusing on reintegration could help to reduce the mental toll of their service over time.

Most of the veterans in the study served in World War II or Korea. Additional research is needed to understand more about how veterans' experiences may vary from war to war, Aldwin said.

Aldwin and colleagues are currently working on a pilot study, VALOR, or Veterans Aging: Longitudinal studies in Oregon, to better understand impacts of combat exposure. The pilot study is supported by a grant from the OSU Research Office and includes veterans with service in Vietnam, the Persian Gulf and the post-9/11 conflicts.

The researchers have collected data from 300 veterans and are beginning to analyze it. Based on their initial findings, they are also planning a second, larger study with more veterans. They expect to see differences between veterans from different wars.

"Each war is different. They are going to affect veterans differently," Aldwin said. "Following 9-11, traumatic brain injuries have risen among veterans, while mortality rates have lowered. We have many more survivors with far more injuries. These veterans have had a much higher levels of exposure to combat, as well."

VALOR also offers researchers the opportunity to explore the impact of service on women veterans, whose experiences have not often been captured in previous research. About one-third of the participants in the pilot study were female veterans, Aldwin said.

Credit: 
Oregon State University

Tiny granules can help bring clean and abundant fusion power to Earth

image: PPPL physicists Robert Lunsford, left, and Rajesh Maingi, right

Image: 
Elle Starkman

Beryllium, a hard, silvery metal long used in X-ray machines and spacecraft, is finding a new role in the quest to bring the power that drives the sun and stars to Earth. Beryllium is one of the two main materials used for the wall in ITER, a multinational fusion facility under construction in France to demonstrate the practicality of fusion power. Now, physicists from the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and General Atomics have concluded that injecting tiny beryllium pellets into ITER could help stabilize the plasma that fuels fusion reactions.

Experiments and computer simulations found that the injected granules help create conditions in the plasma that could trigger small eruptions called edge-localized modes (ELMs). If triggered frequently enough, the tiny ELMs prevent giant eruptions that could halt fusion reactions and damage the ITER facility.

Scientists around the world are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity. The process involves plasma, a very hot soup of free-floating electrons and atomic nuclei, or ions. The merging of the nuclei releases a tremendous amount of energy.

In the present experiments, the researchers injected granules of carbon, lithium, and boron carbide -- light materials that share several properties with beryllium -- into the DIII-D National Fusion Facility that General Atomics operates for the DOE in San Diego. "These light metals are materials commonly used inside DIII-D and share several properties with beryllium," said PPPL physicist Robert Lunsford, lead author of the paper that reports the results in Nuclear Materials and Energy. Because the internal structure of the three materials is similar to that of beryllium, the scientists infer that all of them will affect ITER plasma in similar ways. The physicists also used magnetic fields to make the DIII-D plasma resemble the plasma as it is predicted to occur in ITER.

These experiments were the first of their kind. "This is the first attempt to try to figure out how these impurity pellets would penetrate into ITER and whether you would make enough of a change in temperature, density, and pressure to trigger an ELM," said Rajesh Maingi, head of plasma-edge research at PPPL and a co-author of the paper. "And it does look in fact like this granule injection technique with these elements would be helpful."

If so, the injection could lower the risk of large ELMs in ITER. "The amount of energy being driven into the ITER first walls by spontaneously occurring ELMs is enough to cause severe damage to the walls," Lunsford said. "If nothing were done, you would have an unacceptably short component lifetime, possibly requiring the replacement of parts every couple of months."

Lunsford also used a program he wrote himself that showed that injecting beryllium granules measuring 1.5 millimeters in diameter, about the thickness of a toothpick, would penetrate into the edge of the ITER plasma in a way that could trigger small ELMs. At that size, enough of the surface of the granule would evaporate, or ablate, to allow the beryllium to penetrate to locations in the plasma where ELMs can most effectively be triggered.

The next step will be to calculate whether density changes caused by the impurity pellets in ITER would indeed trigger an ELM as the experiments and simulations indicate. This research is currently underway in collaboration with international experts at ITER.

The researchers envision the injection of beryllium granules as just one of many tools, including using external magnets and injecting deuterium pellets, to manage the plasma in doughnut-shaped tokamak facilities like ITER. The scientists hope to conduct similar experiments on the Joint European Torus (JET) in the United Kingdom, currently the world's largest tokamak, to confirm the results of their calculations. Says Lunsford, "We think that it's going to take everyone working together with a bunch of different techniques to really get the ELM problem under control."

Credit: 
DOE/Princeton Plasma Physics Laboratory

Sister, neighbor, friend: Thinking about multiple roles boosts kids' performance

image: A typical child plays many roles, such as friend, neighbor, son or daughter. Simply reminding children of that fact can lead to better problem-solving and more flexible thinking, finds new research from Duke University.

Image: 
Duke University

DURHAM, N.C. -- A typical child plays many roles, such as friend, neighbor, son or daughter. Simply reminding children of that fact can lead to better problem-solving and more flexible thinking, finds new research from Duke University.

"This is some of the first research on reminding kids about their multi-faceted selves," said lead author Sarah Gaither, an assistant professor of psychology and neuroscience at Duke.
"Such reminders boost their problem-solving skills and how flexibly they see their social worlds - all from a simple mindset switch."

Better problem-solving was just one positive finding of the study, Gaither said. After considering their own various identities, children also showed more flexible thinking about race and other social groupings -- a behavior that could be valuable in an increasingly diverse society.

The research appears July 2 in the journal Developmental Science.

In a series of experiments, Gaither and her colleagues looked at 196 children, ages 6 and 7. All were native English speakers.

In one experiment, the first group of children was reminded they have various identities, such as son, daughter, reader or helper. A second group of children was reminded of their multiple physical attributes (such as a mouth, arms and legs).

In another experiment, one group of children was again reminded they have various identities. A second set of children received similar prompts -- but about other children's many roles, not their own.

All the children then tackled a series of tasks. Children who were reminded of their various identities demonstrated stronger problem-solving and creative thinking skills. For instance, when shown pictures of a bear gazing at a honey-filled beehive high up in a tree, these children had more creative ideas for how the bear might get the honey, such as flipping over a bowl so that it becomes a stool. In other words, they saw a new use for the bowl.

Children who were reminded of their multiple roles also showed more flexible thinking about social groupings. When asked to categorize different photos of faces, they suggested many ways to do so. For instance, they identified smiling faces vs. unsmiling ones, and old vs. young faces. The other children, meanwhile, primarily grouped people's faces by race and gender.

Because the results suggest simple ways to promote flexible, inclusive thinking for the young, they could be especially valuable for teachers, Gaither said.

"We have this tendency in our society to only think about ourselves in connection with one important group at a time," Gaither said. "When we remind kids that they have various identities, they think beyond our society's default categories, and remember that there are many other groups in addition to race and gender.

"It opens their horizons to be a little more inclusive."

Credit: 
Duke University

Why do mosquitoes choose us? Lindy McBride is on the case

image: Researchers in Lindy McBride's lab at Princeton University often waft air across guinea pigs Molly (seen here) and Mia (not pictured) to collect their odor for mosquito research. In experiments, mosquitoes are given the choice between the guinea pigs' odor and human odor as part of studies into how mosquitoes distinguish between humans and other mammals; neither the humans nor the guinea pigs are directly exposed to mosquito bites.

Image: 
Danielle Alio, Princeton University

Carolyn "Lindy" McBride is studying a question that haunts every summer gathering: How and why are mosquitoes attracted to humans?

Few animals specialize as thoroughly as the mosquitoes that carry diseases like Zika, malaria and dengue fever.

In fact, of the more than 3,000 mosquito species in the world, most are opportunistic, said McBride, an assistant professor of ecology and evolutionary biology and of the Princeton Neuroscience Institute. They may be mammal biters, or bird biters, with a mild preference for various species within those categories, but most mosquitoes are neither totally indiscriminate nor species-specific. McBride, however, is most interested in the mosquitoes that scientists call "disease vectors" -- carriers of diseases that plague humans -- some of which have evolved to bite humans almost exclusively.

She studies several mosquito species that carry diseases, including Aedes aegypti, which is the primary vector for dengue fever, Zika and yellow fever, and Culex pipiens, which carries West Nile virus. A. aegypti specializes in humans, while C. pipiens is less specialized, allowing it to transmit West Nile from birds to humans.

"It's the specialists that tend to be the best disease vectors, for obvious reasons: They bite a lot of humans," said McBride. She's trying to understand how the brain and genome of these mosquitoes have evolved to make them specialize in humans -- including how they can distinguish us from other mammals so effectively.

To help her understand what draws human-specialized mosquitoes to us, McBride compares the behavior, genetics and brains of the Zika mosquito to an African strain of the same species that does not specialize in humans.

In one line of research, she investigates how animal brains interpret complex aromas. That's a more complicated proposition than it first appears, since human odor is composed of more than 100 different compounds -- and those same compounds, in slightly different ratios, are present in most mammals.

"Not any one of those chemicals is attractive to mosquitoes by itself, so mosquitoes must recognize the ratio, the exact blend of components that defines human odor," said McBride. "So how does their brain figure it out?"

She is also studying what combination of compounds attracts mosquitoes. That could lead to baits that attract mosquitoes to lethal traps, or repellants that interrupt the signal.

Most mosquito studies in recent decades have been behavioral experiments, which are very labor-intensive, said McBride. "You give them an odor and say, 'Do you like this?' and even with five compounds, the number of permutations you have to go through to figure out exactly what the right ratio is -- it's overwhelming." With 15 or 20 compounds, the number of permutations skyrockets, and with the full complement of 100, it's astronomical.
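A back-of-the-envelope count shows why. Assuming, purely for illustration, that each compound is tested at four discrete concentration levels (an assumption of this sketch, not a figure from McBride's work), the number of candidate blends grows exponentially with the number of compounds:

```python
# Illustrative count of candidate odor blends: if each of k compounds can
# take one of m concentration levels, there are m ** k possible blends.
m = 4  # assumed concentration levels per compound (illustrative)
for k in (5, 15, 20, 100):
    print(f"{k:>3} compounds -> {m ** k:.2e} possible blends")
```

Even at this coarse resolution, 20 compounds already yield about a trillion blends, which is why exhaustive behavioral testing is out of reach.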

To test the odor preference of mosquitoes, McBride's lab has primarily used guinea pigs, small mammals with a different blend of many of the same 100 odor compounds found in humans. Researchers gather their odor by blowing air over their bodies, and they then present mosquitoes with a choice between eau de guinea pig and a human arm. Human-specialized "domestic" A. aegypti mosquitoes will go toward the arm 90 to 95 percent of the time, said McBride, but the African "forest" A. aegypti mosquitoes are more likely to fly toward the guinea pig aroma.

In another recent experiment, then-senior Meredith Mihalopoulos of the Class of 2018 recruited seven volunteers and did "preference tests" with both forest and domestic A. aegypti mosquitoes. She let the mosquitoes choose between herself and each of the volunteers, finding that some people are more attractive to the insects than others. Then Alexis Kriete, a research specialist in the McBride lab, analyzed the odor of all the participants. They showed that, while the same compounds were present in everyone, the humans were more similar to one another than to the guinea pigs.

"There's nothing really unique about any animal odor," said McBride. "There's no one compound that characterizes a guinea pig species. To recognize a species, you have to recognize blends."

The McBride lab will be expanding to include other mammals and birds in their research. Graduate student Jessica Zung is working with farms and zoos to collect hair, fur, feather and wool samples from 50 animal species. She hopes to extract odor from them and analyze the odors at a Rutgers University facility that fractionates odors and identifies the ratio of the compounds. By inputting their odor profiles into a computational model, she and McBride hope to understand how exactly mosquitoes may have evolved to distinguish humans from non-human animals.

McBride's graduate student Zhilei Zhao is developing an entirely novel approach: imaging mosquito brains at very high resolutions to figure out how a mosquito identifies its next victim. "What combination of neural signals in the brain cause the mosquito to be attracted or repelled?" McBride asked. "If we can figure that out, then it's trivial to screen for blends that can be attractive or repellant. You put the mosquito up there, open up its head, image the brain, pop one aroma after another and watch: Does it hit the right combination of neurons?"

Key to that study will be the imaging equipment provided by Princeton's Bezos Center for Neural Circuit Dynamics, said McBride. "We can walk over there and say we want to image this, at this resolution, with this orientation, and a few months later, the microscope is built," she said. "We could have bought an off-the-shelf microscope, but it would have been so much slower and so much less powerful. Help from Stephan Thiberge, the director of the Bezos Center, has been critical for us."

McBride began her biology career studying evolution in butterflies, but she was lured to disease vector mosquitoes by how easy they are to rear in the lab. While the butterflies McBride studied need a year to develop, A. aegypti mosquitoes can go through an entire life cycle in three weeks, allowing for rapid-turnaround genetic experiments.

"That's what first drew me to mosquitoes," said McBride. "One of the surprises for me has been how satisfying it is that they have an impact on human health. That's certainly not why I got into biology -- I was studying birds and butterflies in the mountains, as far away from humans as I could get -- but I really appreciate that element of mosquito work now.

"But what is still as exciting is how easily we can manipulate mosquitoes to test hypotheses about how new behaviors evolve," she continued. "We can create transgenic strains, we can knock out genes, we can activate neurons with light. All these things have been done in model systems, like mouse and fly, but never in a non-model organism, never in an organism -- I'm showing my bias here -- with such interesting ecology and evolution."

Credit: 
Princeton University

Dose-dependent effects of esmolol-epinephrine combination therapy in myocardial ischemia

Epinephrine has been included in resuscitation guidelines worldwide since the 1960s. It is believed that epinephrine increases the chance of restoring a person's heartbeat and improves long-term neurological outcome by increasing coronary and cerebral perfusion pressure. However, recent studies have raised doubts about the benefit of epinephrine for neurological outcomes in cardiac arrest. Moreover, epinephrine use in the stabilization of cardiogenic shock in post-myocardial infarction patients has been found to increase the incidence of refractory shock. In fact, beta-adrenergic receptor stimulation has been suggested to have deleterious effects, as stimulation of this pathway increases oxygen consumption and reduces sub-endocardial perfusion. In contrast, esmolol, a cardio-selective β1-blocker, has been shown to provide cardioprotection after myocardial ischemia in animal and human studies. Therefore, esmolol co-administration with epinephrine may help to reduce epinephrine-induced reperfusion injury while maintaining esmolol's cardioprotection and epinephrine-mediated increases in chronotropy and inotropy. Indeed, recent studies in animals have uncovered beneficial effects of epinephrine and esmolol co-administration in a cardiac arrest model. Based on these findings, Dr. Tobias Eckle and his team at the University of Colorado School of Medicine have investigated esmolol-epinephrine combination therapy in a mouse model of myocardial ischemia and reperfusion injury.

Comparing different esmolol doses in combination with epinephrine in a mouse model of myocardial infarction, Eckle's team demonstrated that at a specific esmolol-to-epinephrine ratio (15:1), esmolol cardioprotection and epinephrine's β1-mediated hemodynamic activity can coexist during myocardial ischemia and reperfusion injury. "These findings might have implications for current clinical practice in the treatment of patients with cardiogenic shock or cardiac arrest," says Eckle. "In fact, a cardiogenic shock after myocardial ischemia disallows the use of esmolol due to hemodynamic instability." Interestingly, a definite recommendation for a specific catecholamine regimen in cardiogenic shock is lacking.

According to the research, combination therapy of epinephrine with esmolol seems counterintuitive in cardiogenic shock after myocardial ischemia: higher esmolol doses could compromise epinephrine-mediated increases of cardiac output via β1-adrenergic receptor inotropic and chronotropic effects, while higher epinephrine doses could compromise esmolol-mediated cardioprotection via β1-adrenergic receptor blockade. Surprisingly, by increasing the esmolol dose, the study team was able to restore esmolol cardioprotection while heart rate and some blood pressure measures in the early reperfusion phase were significantly increased compared to esmolol treatment alone. "This finding is novel and highlights that esmolol cardioprotection is not fully understood," says Eckle. Seeing increased heart rates, which are β1-mediated, alongside cardioprotection via esmolol β1 blockade indicates that only partial or short-term blockade of β1 receptors is necessary for the salutary effects of esmolol in myocardial ischemia and reperfusion injury.

While some clinicians occasionally use esmolol to treat epinephrine-induced arrhythmias in patients who are on an epinephrine infusion for cardiogenic shock as they come off cardiac bypass, no study to date has evaluated potential cardioprotective effects of esmolol-epinephrine co-administration during cardiac bypass surgery or cardiogenic shock. As this is the first animal study of epinephrine-esmolol co-administration during myocardial ischemia and reperfusion injury, further studies in larger animals using multiple dosing protocols are suggested.

Credit: 
Bentham Science Publishers

Atmosphere of mid-size planet revealed by Hubble and Spitzer

image: This artist's illustration shows the theoretical internal structure of the exoplanet GJ 3470 b. It is unlike any planet found in the Solar System. Weighing in at 12.6 Earth masses, the planet is more massive than Earth but less massive than Neptune. Unlike Neptune, which is 3 billion miles from the Sun, GJ 3470 b may have formed very close to its red dwarf star as a dry, rocky object. It then gravitationally pulled in hydrogen and helium gas from a circumstellar disk to build up a thick atmosphere. The disk dissipated many billions of years ago, and the planet stopped growing. The bottom illustration shows the disk as the system may have looked long ago. Observations by NASA's Hubble and Spitzer space telescopes have chemically analyzed the composition of GJ 3470 b's very clear and deep atmosphere, yielding clues to the planet's origin. Many planets of this mass exist in our galaxy.

Image: 
NASA, ESA, and L. Hustak (STScI)

Two NASA space telescopes have teamed up to identify, for the first time, the detailed chemical "fingerprint" of a planet between the sizes of Earth and Neptune. No planets like this can be found in our own solar system, but they are common around other stars.

The planet, Gliese 3470 b (also known as GJ 3470 b), may be a cross between Earth and Neptune, with a large rocky core buried under a deep crushing hydrogen and helium atmosphere. Weighing in at 12.6 Earth masses, the planet is more massive than Earth, but less massive than Neptune (which is more than 17 Earth masses).

Many similar worlds have been discovered by NASA's Kepler space observatory, whose mission ended in 2018. In fact, 80% of the planets in our galaxy may fall into this mass range. However, astronomers had never been able to determine the chemical nature of such a planet until now, researchers say.

By inventorying the contents of GJ 3470 b's atmosphere, astronomers are able to uncover clues about the planet's nature and origin.

"This is a big discovery from the planet formation perspective. The planet orbits very close to the star and is far less massive than Jupiter--318 times Earth's mass--but has managed to accrete the primordial hydrogen/helium atmosphere that is largely "unpolluted" by heavier elements," said Björn Benneke of the University of Montreal, Canada. "We don't have anything like this in the solar system, and that's what makes it striking."

Astronomers enlisted the combined multi-wavelength capabilities of NASA's Hubble and Spitzer space telescopes to do a first-of-a-kind study of GJ 3470 b's atmosphere.

This was accomplished by measuring the absorption of starlight as the planet passed in front of its star (a transit) and the loss of reflected light from the planet as it passed behind the star (an eclipse). All told, the space telescopes observed 12 transits and 20 eclipses. The science of analyzing chemical fingerprints based on light is called "spectroscopy."
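The arithmetic behind such a measurement is straightforward. The sketch below uses the 12.6-Earth-mass figure from the article, but the planet radius, stellar radius, temperature and atmospheric composition are illustrative assumptions for a hydrogen/helium-rich sub-Neptune around a red dwarf, not reported values:

```python
# Rough transit-spectroscopy arithmetic for a GJ 3470 b-like planet.
# Only the 12.6 Earth-mass figure comes from the article; the other
# parameters are illustrative assumptions.
G, k_B, m_H = 6.674e-11, 1.381e-23, 1.674e-27        # SI constants
R_earth, M_earth, R_sun = 6.371e6, 5.972e24, 6.957e8

M_p = 12.6 * M_earth        # planet mass (from the article)
R_p = 4.6 * R_earth         # assumed planet radius
R_s = 0.55 * R_sun          # assumed red-dwarf radius
T, mu = 650.0, 2.3          # assumed temperature (K) and mean molecular weight

depth = (R_p / R_s) ** 2                 # fraction of starlight blocked
g = G * M_p / R_p ** 2                   # surface gravity
H = k_B * T / (mu * m_H * g)             # atmospheric scale height
signal = 2 * R_p * H / R_s ** 2          # extra transit depth per scale height

print(f"transit depth: {depth:.2%}")                  # roughly 0.6%
print(f"scale height: {H / 1e3:.0f} km")              # a few hundred km
print(f"atmospheric signal: {signal * 1e6:.0f} ppm per scale height")
```

Comparing how the measured transit depth varies with wavelength against estimates like these is what lets astronomers read off which gases are absorbing.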

"For the first time we have a spectroscopic signature of such a world," said Benneke. But he is at a loss for classification: Should it be called a "super-Earth" or "sub-Neptune?" Or perhaps something else?

Fortuitously, the atmosphere of GJ 3470 b turned out to be mostly clear, with only thin hazes, enabling the scientists to probe deep into the atmosphere.

"We expected an atmosphere strongly enriched in heavier elements like oxygen and carbon which are forming abundant water vapor and methane gas, similar to what we see on Neptune", said Benneke. "Instead, we found an atmosphere that is so poor in heavy elements that its composition resembles the hydrogen/helium rich composition of the Sun."

Other exoplanets called "hot Jupiters" are thought to form far from their stars, and over time migrate much closer. But this planet seems to have formed just where it is today, says Benneke.

The most plausible explanation, according to Benneke, is that GJ 3470 b was born precariously close to its red dwarf star, which is about half the mass of our Sun. He hypothesizes that essentially it started out as a dry rock, and rapidly accreted hydrogen from a primordial disk of gas when its star was very young. The disk is called a "protoplanetary disk."

"We're seeing an object that was able to accrete hydrogen from the protoplanetary disk, but didn't runaway to become a hot Jupiter," said Benneke. "This is an intriguing regime."

One explanation is that the disk dissipated before the planet could bulk up further. "The planet got stuck being a sub-Neptune," said Benneke.

NASA's upcoming James Webb Space Telescope will be able to probe even deeper into GJ 3470 b's atmosphere, thanks to Webb's unprecedented sensitivity in the infrared. The new results have already generated considerable interest among the American and Canadian teams developing the instruments on Webb. They will observe the transits and eclipses of GJ 3470 b at light wavelengths where the atmospheric hazes become increasingly transparent.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.

The Jet Propulsion Laboratory in Pasadena, California, manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate in Washington, D.C. Science operations are conducted at the Spitzer Science Center at Caltech in Pasadena. Space operations are based at Lockheed Martin Space Systems in Littleton, Colorado. Data are archived at the Infrared Science Archive housed at IPAC at Caltech. Caltech manages JPL for NASA.

Credit: 
NASA/Goddard Space Flight Center

Harnessing reliability for neuroscience research

The neuroimaging community has made significant strides towards collecting large-scale neuroimaging datasets, which--until the past decade--had seemed out of reach. Between initiatives focused on the aggregation and open sharing of previously collected datasets and de novo data generation initiatives tasked with the creation of community resources, tens of thousands of datasets are now available online. These span a range of developmental statuses and disorders, and many more will soon be available. Such open data are allowing researchers to increase the scale of their studies, to apply various learning strategies (for example, artificial intelligence) with ambitions of brain-based biomarker discovery and to address questions regarding the reproducibility of findings, all at a pace that is unprecedented in imaging. However, based on the findings of recent works [1-3], few of the datasets generated to date contain enough data per subject to achieve highly reliable measures of brain connectivity. Although our examination of this critical deficiency focuses on the field of neuroimaging, the implications of our argument and the statistical principles discussed are broadly applicable.

Scoping the problem

Our concern is simple: researchers are working hard to amass large-scale datasets, whether through data sharing or coordinated data generation initiatives, but failing to optimize their data collections for relevant reliabilities (for example, test-retest, between raters, etc.) [4]. They may be collecting larger amounts of suboptimal data, rather than smaller amounts of higher-quality data, a trade-off that does not bode well for the field, particularly when it comes to making inferences and predictions at the individual level. We believe that this misstep can be avoided by critical assessments of reliability upfront.

The trade-off we observe occurring in neuroimaging reflects a general tendency in neuroscience. Statistical power is fundamental to studies of individual differences, as it determines our ability to detect effects of interest. While sample size is readily recognized as a key determinant of statistical power, measurement reliabilities are less commonly considered and at best are only indirectly considered when estimating required sample sizes. This is unfortunate, as statistical theory dictates that reliability places an upper limit on the maximum detectable effect size.
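The classical statement of this limit is Spearman's attenuation formula. In the notation of classical test theory:

```latex
% Spearman's attenuation formula: the correlation observable between two
% measured variables is capped by the reliabilities of the measures.
\[
  r_{\mathrm{observed}} \;=\; r_{\mathrm{true}}\,\sqrt{\rho_{xx'}\,\rho_{yy'}}
  \;\le\; \sqrt{\rho_{xx'}\,\rho_{yy'}},
\]
% where $\rho_{xx'}$ and $\rho_{yy'}$ are the reliabilities of the two
% measures. For example, with reliabilities of 0.5 and 0.8, even a perfect
% true correlation can be observed as at most $\sqrt{0.4}\approx 0.63$.
```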

The interplay between reliability, sample size and effect size in determinations of statistical power is commonly underappreciated in the field. To facilitate a more direct discussion of these factors, Fig. 1 depicts the impact of measurement reliability and effect size on the sample sizes required to achieve desirable levels of statistical power (for example, 80%); these relations are not heavily dependent on the specific form of statistical inference employed (for example, two-sample t-test, paired t-test, three-level ANOVA). Estimates were generated using the pwr package in R and are highly congruent with results from Monte Carlo simulations [5]. With respect to neuroscience, where the bulk of findings report effect sizes ranging from modest to moderate [6], the figure makes obvious our point that increasing reliability can dramatically reduce the sample size requirements (and therefore cost) of achieving statistically appropriate designs.
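Fig. 1 is not reproduced here, but the underlying calculation is easy to rerun. The sketch below uses Python's statsmodels in place of the R pwr package named in the text; the true effect size and the reliability values are illustrative:

```python
# Sample size needed for 80% power in a two-sample t-test after a "true"
# effect size is attenuated by measurement reliability (illustrative values;
# statsmodels stands in for the R pwr package mentioned in the text).
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
d_true = 0.5  # a moderate true effect size (Cohen's d)

for reliability in (1.0, 0.8, 0.6, 0.4):
    d_observed = d_true * reliability ** 0.5  # attenuation, one noisy measure
    n = power_calc.solve_power(effect_size=d_observed, power=0.8, alpha=0.05)
    print(f"reliability {reliability:.1f}: n = {n:.0f} per group")
```

In this sketch, letting reliability fall from 1.0 to 0.4 raises the required sample from roughly 64 to roughly 158 per group, which is exactly the cost the main text warns about.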

In neuroimaging, the reliability of the measures employed in experiments can vary substantially [2-4]. In MRI, morphological measures are known to have the highest reliability, with most voxels in the brain exhibiting reliabilities measured as intraclass correlation >0.8 for core measures (for example, volume, cortical thickness and surface area). For functional MRI (fMRI) approaches, reliability tends to be lower and more variable, heavily dependent on the experimental design, the nature of the measure employed and--most importantly--the amount of data obtained (for example, for basic resting-state fMRI measures, the mean intraclass correlation obtained across voxels may increase by two to four times as one increases from 5 min to 30 min of data) [2,3]. Limited interindividual variability may be a significant contributor to findings of low reliability for fMRI, as its magnitude relative to within-subject variation is a primary determinant of reliability. Such a concern has been raised for task fMRI [7], which directly borrows behavioural task designs from the psychological literature [8].
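For readers who want to compute these quantities, the intraclass correlation for a subjects-by-sessions matrix of test-retest measurements can be derived from a two-way ANOVA decomposition. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, in the Shrout and Fleiss taxonomy), run on synthetic data with a known reliability of about 0.8:

```python
# Minimal ICC(2,1) (Shrout & Fleiss 1979): two-way random effects,
# absolute agreement, single measurement.
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    n, k = data.shape  # n subjects, k repeated sessions
    grand = data.mean()
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sessions
    resid = (data - data.mean(axis=1, keepdims=True)
                  - data.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Synthetic test-retest data: a stable subject effect plus session noise,
# giving a true reliability of 1 / (1 + 0.5**2) = 0.8.
rng = np.random.default_rng(0)
subject_effect = rng.normal(size=(100, 1))
scores = subject_effect + 0.5 * rng.normal(size=(100, 2))
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```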

Potential implications

From a statistical perspective, the risks of underpowered samples yielding increased false negatives and artificially inflated effect sizes (i.e., the 'winner's curse' bias) are well known. More recently, the potential for insufficiently powered samples to generate false positives has been established as well [9]. All these phenomena reduce the reproducibility of findings across studies, a challenge that other fields (for example, genetics) have long worked to overcome. In the context of neuroimaging or human brain mapping, an additional concern is that we may be biased to overvalue those brain areas for which measurement reliability is greater. For example, the default and frontoparietal networks receive more attention in clinical and cognitive neuroscience studies of individual and group differences. This could be appropriate, but it could also reflect the higher reliabilities of these networks [3,4].

Solutions

Our goal here is to draw greater attention to the need for assessment and optimization of reliability, which is typically underappreciated in neuroscience research. Whether one is focusing on imaging, electrophysiology, neuroinflammatory markers, microbiomics, cognitive neuroscience paradigms or on-person devices, it is essential that we consider measurement reliability and its determinants.

For MRI-based neuroimaging, a repeated theme across the various modalities (for example, diffusion, functional, morphometry) is that higher quality data require more time to collect, whether due to increased resolution or repetitions. As such, investigators would benefit from assessing the minimum data requirements to achieve adequately reliable measurements before moving forward. An increasing number of resources are available for such assessments of reliability (for example, Consortium for Reliability and Reproducibility, MyConnectome Project, Healthy Brain Network Serial Scanning Initiative, Midnight Scan Club, Yale Test-Retest Dataset, PRIMatE Data Exchange). It is important to note that these resources are primarily focused on test-retest reliability4, leaving other forms of reliability less explored (for example, interstate reliability, inter-scanner reliability; see recent efforts from a Research Topic on reliability and reproducibility in functional connectomics10).

Importantly, reliability will differ depending on how a given imaging dataset is processed and which brain features are selected. A myriad of different processing strategies and brain features have emerged, but they are rarely compared with one another to identify those most suitable for studying individual differences. In this regard, efforts to optimize analytic strategies for reliability are essential, as they make it possible to decrease the minimum data required per individual to achieve a target level of reliability1-4,11. This is critically important for applications in developing, aging and clinical populations, where tolerability of the scanner environment limits our ability to collect time-intensive datasets. An excellent example of quantifying and optimizing for reliability comes from functional connectomics. Following convergent reports that at least 20-30 min of data are needed to obtain test-retest reliability for traditional pairwise measures of connectivity2, recent works have suggested the feasibility of combining different fMRI scans from a session (for example, rest, movie, task) to make up the difference when calculating reliable measures of functional connectivity2,12.

Cognitive and clinical neuroscientists should be aware that many cognitive paradigms used inside and outside of the scanner have never been subject to proper assessments of reliability, and the quality of reliability assessments for questionnaires (even proprietary ones) can vary substantially.

As such, the reliability of data being used on the phenotyping side is often an unknown in the equation and can limit the utility of even the most optimal imaging measures, a reality that also affects other fields (for example, genetics) and inherently compromises such efforts. Although not always appealing, an increased focus on the quantification and publication of minimum data requirements and their reliabilities for phenotypic assessments is a necessity, as is exploration of novel approaches to data capture that may increase reliability (for example, sensor-based acquisition via wearables and longitudinal sampling via smartphone apps).

Finally, and perhaps most critically, there is marked diversity in how the word 'reliability' is used, and a growing number of separate reliability metrics are appearing. This phenomenon is acknowledged in a recent publication13 by an Organization for Human Brain Mapping workgroup tasked with generating standards for improving reproducibility. We suggest it would be best to build directly on the terminology and measures well established in other literatures (for example, statistics, medicine) rather than start anew14. In particular, we want to avoid confusion between 'reliability' and 'validity', two related but distinct concepts that are commonly used interchangeably in the literature. To facilitate an understanding of this latter point, we include a statistical note on the topic below.

A confusion to avoid

It is crucial that researchers acknowledge the gap between reliability and validity, as a highly reliable measure can be driven by artefact rather than meaningful (i.e., valid) signal. As illustrated in Fig. 2, this point becomes obvious when one considers the differing sources of variance associated with the measurement of individual differences15. First, we have the portion of the variance measured across individuals that is the trait of interest (Vt) (for example, between-subject differences in grey matter volume within left inferior frontal gyrus). Second, there is variance related to unwanted contaminants in our measurement that can systematically vary across individuals (Vc) (for example, between-subject differences in head motion). Finally, there is random noise (Vr), which is commonly treated as within-subject variation. Reliability is the proportion of the total variance that can be attributed to systematic variance across individuals (including both Vt and Vc; see equation 1); in contrast, validity is the proportion of the total variance that can be attributed specifically to the trait of interest alone (Vt; see equation 2).

Reliability = (Vt + Vc) / (Vt + Vc + Vr)    (1)

Validity = Vt / (Vt + Vc + Vr)    (2)

As discussed in prior work15, this framework indicates that a measure cannot be more valid than reliable (i.e., reliability provides an upper bound for validity). So, while it is possible to have a measurement that is sufficiently reliable and completely invalid (for example, a reliable artefact), it is impossible to have a measurement with low reliability that has high validity.
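A quick numeric illustration of equations (1) and (2), using arbitrary variance components rather than values estimated from any dataset:

```python
# Toy numbers illustrating equations (1) and (2): reliability caps validity.
Vt, Vc, Vr = 0.5, 0.3, 0.2   # trait, contaminant (e.g., head motion), noise

reliability = (Vt + Vc) / (Vt + Vc + Vr)   # eq. (1) -> 0.8
validity = Vt / (Vt + Vc + Vr)             # eq. (2) -> 0.5
assert validity <= reliability             # holds for any non-negative Vc, Vr
print(reliability, validity)
```

Here a measure that is 80% reliable is only 50% valid, because a substantial share of its systematic between-subject variance is contaminant rather than trait.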

A specific challenge for neuroscientists is that while reliability can be readily quantified, validity cannot, as it is not possible to directly measure Vt. As such, various indirect forms of validity are used, which differ in the strength of the evidence required. At one end is criterion validity, which compares the measure of interest to an independent measure designated as the criterion or 'gold standard' measurement (for example, comparison of individual differences in tracts identified by diffusion imaging to postmortem histological findings, or comparison of differences in fMRI-based connectivity patterns to intracranial measures of neural coupling or magnetoencephalography). At the other extreme is face validity, in which findings are simply consistent with 'common sense' expectations (for example, does my functional connectivity pattern look like the motor system?). Intermediate to these are concepts such as construct validity, which test whether a measure varies as would be expected if it is indexing the desired construct (i.e., convergent validity) and not others (i.e., divergent validity) (for example, do differences in connectivity among individuals vary with developmental status and not head motion or other systematic artefacts?). An increasingly common tool in the imaging community is predictive validity, where researchers test the ability to make predictions regarding a construct of interest (for example, do differences in the network postulated to support intelligence predict differences in IQ?). As can be seen from the examples provided, different experimental paradigms offer differing levels of validity, with the more complex and challenging offering the highest forms. From a practical perspective, what researchers can do is make best efforts to measure and remove artefact signals such as head motion4,16 and work to establish the highest form of validity possible using the methods available.

Closing remarks

As neuroscientists make strides in our efforts to deliver clinically useful tools, it is essential that assessments and optimizations for reliability become common practice. This will require improved research practices among investigators, as well as support from funding agencies in the generation of open community resources upon which these essential properties can be quantified.

Credit: 
The Child Mind Institute

Deal or no deal? How discounts for unhappy subscribers can backfire on businesses

image: Vamsi Kanuri, assistant professor of marketing in Notre Dame's Mendoza College of Business.

Image: 
University of Notre Dame

Subscription-based service providers -- including newspapers, cable and internet providers and utility companies -- often issue price-based incentives such as discounts in response to complaints about service failures. The tactic has been shown to satisfy angry customers -- at least momentarily.

But new research from the University of Notre Dame demonstrates the tactic may not be successful in retaining customers in the long term.

"The Unintended Consequence of Price-Based Service Recovery Incentives," forthcoming in the Journal of Marketing from lead author Vamsi Kanuri, assistant professor of marketing in Notre Dame's Mendoza College of Business, and Michelle Andrews from Emory University, shows that in subscription-based service settings, discounts to make up for service failures could backfire by reducing the likelihood of subscription renewals.

"The economic theory of reference prices (amount a purchaser thinks is appropriate to pay for a good or service) leads us to believe that discounts to make up for service failures will provide a new price point for customers to anchor on," Kanuri said. "In turn, this will lead them to compare the price of the service renewal with their reduced service price following the service failure. A higher discount results in consumers forming a lower reference price, which in turn increases the difference between the full renewal price and the reference price. This difference then translates into a perceived loss, which ultimately results in lower renewal probabilities."

In other words, consumers may end up feeling cheated rather than rewarded by the discount -- the exact opposite of what the provider hoped to accomplish.
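The paper's econometric model is not reproduced here, but the direction of the mechanism Kanuri describes can be sketched with a toy calculation. Everything below -- the prices, the logistic link and the sensitivity parameter -- is a made-up assumption for illustration only, not the study's specification.

```python
# Purely illustrative toy model of the reference-price mechanism described
# above. A recovery discount anchors a lower reference price; at renewal,
# the gap between the full price and that reference price reads as a loss,
# which lowers the renewal probability under a hypothetical logistic link.
from math import exp

full_price = 30.00  # hypothetical monthly renewal price

def renewal_probability(discount: float, sensitivity: float = 0.15) -> float:
    reference_price = full_price - discount      # discount anchors the reference
    perceived_loss = full_price - reference_price
    return 1 / (1 + exp(sensitivity * perceived_loss - 1.0))

for discount in (0.0, 5.0, 10.0, 15.0):
    print(f"${discount:>5.2f} recovery discount -> "
          f"renewal probability ~ {renewal_probability(discount):.2f}")
```

Under these made-up parameters, larger recovery discounts monotonically depress the renewal probability, which is the backfiring pattern the study documents.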

The researchers used econometric techniques to examine 6,919 renewal decisions of subscribers who threatened to cancel their subscriptions following service delivery failures at a large U.S. newspaper firm. The data covered 10 types of delivery failure frequently experienced by customers, including late delivery, delivery of the wrong newspaper, missed delivery, delivery to the wrong location and property damage during delivery.

"Firms do not understand the paradox of service failure," Kanuri said. "It has been shown that if a firm is able to delight a customer at the point of service failure, the customer is likely to be more satisfied than under normal conditions when there is no service failure and is likely to remain a customer longer. Everyone knows that firms are imperfect, just as human beings, and that there will be a service letdown at some point. How the firm chooses to delight its customers can make all the difference."

The study also offers ways to mitigate the negative effect of recovery discounts and can help any subscription-based service provider currently using discounts as a recovery tactic.

"After all, discounts may be necessary to alleviate customer dissatisfaction immediately after a service failure and firms may not have another option," Kanuri said. "In such circumstances, we demonstrate that firms can alleviate the long-term negative consequences by lowering the renewal price at the end of the contract, increasing the time between recovery and contract renewal (offer additional service usage time) and using touchpoints with customers such as emails, bill reminders and follow-up phone calls to remind customers of the initial subscription price."

Credit: 
University of Notre Dame