New study challenges claim that exogenous RNA is essential for sperm function

Scientists from the University of Bath are challenging the claims of two high-profile papers from 2018 which reported that, in the mouse, RNA has to be added to sperm for them to be fully fertile. The Bath findings undermine a proposed mechanism of epigenetic inheritance in which offspring inherit traits acquired by their parents.

In double-blind experiments, researchers from the Department of Biology & Biochemistry have shown that healthy mouse pups can be born from sperm that have not gained short RNA chains during their migration through the epididymis - a ductular organ in which sperm acquire forward motility after they emerge from the testis.

This contradicts the results of the 2018 papers, which reported that mouse eggs fertilised with sperm taken from the 'caput' region of the epididymis - where sperm first enter the epididymis on leaving the testis - would not develop into viable embryos.

The results are published in Developmental Cell.

Lead author Professor Tony Perry said: "When I saw these two papers I just thought 'this can't be right' and with some quite straightforward experiments we have shown that it probably isn't.

"We have known for years that sperm taken from mouse testis contribute to full-term embryonic development following fertilisation. The 2018 studies proposed that sperm would unaccountably have lost this ability in the caput region of the epididymis but then reacquired it.

"Here we have shown that sperm taken from the caput region of the epididymis can, in fact, support full-term development."

The Bath team took sperm from two regions of the epididymis, the caput and the cauda; the cauda is the region from which sperm are usually taken for in vitro fertilisation in mice, so they were known to work. Eggs were fertilised with both sperm types (caput and cauda), and healthy pups were born from each, with no significant difference in the number of pups born or in their health, weight or fertility.

Professor Perry added: "Not only does this set the record straight in terms of tallying with well-established developmental biology, but the conclusion of the previous research was that acquired RNA was in some way essential for healthy embryo development - which doesn't seem to be the case.

"The 2018 papers would have provided one possible mechanism for epigenetic inheritance, but it's not supported by our data. It's important to suggest corrections to the record where errors come to light, and to publish results that fail to replicate, so we can build confidence in our view of biology, especially where it has clinical implications, as is the case for epigenetic inheritance."

Credit: 
University of Bath

The world needs a global agenda for sand

What links the building you live in, the glass you drink from and the computer you work on? The answer is smaller than you think and is something we are rapidly running out of: sand.

In a commentary published today in the journal Nature, a group of scientists from the University of Colorado Boulder, the University of Illinois, the University of Hull and Arizona State University highlight the urgent need for a global agenda for sand.

Sand is a key ingredient in the recipe of modern life, and yet it might be our most overlooked natural resource, the authors argue. Sand and gravel are being extracted faster than they can be replaced. Rapid urbanization and global population growth have fueled the demand for sand and gravel, with between 32 and 50 billion tons extracted globally each year.

"Between 2000 and 2100, sand demand is projected to rise by 300 percent and sand prices by 400 percent," said Mette Bendixen, a researcher at CU Boulder's Institute of Arctic and Alpine Research (INSTAAR). "We urgently require a monitoring program to address the current data and knowledge gap, and thus fully assess the magnitude of sand scarcity. It is up to the scientific community, governments and policy makers to take the steps needed to make this happen."

A lack of oversight and monitoring is leading to unsustainable exploitation, planning and trade. Removal of sand from rivers and beaches has far-reaching impacts on ecology, infrastructure, national economies and the livelihoods of the 3 billion people who live along the world's river corridors. Illegal sand mining has been documented in 70 countries across the globe, and battles over sand have reportedly killed hundreds in recent years, including local citizens, police officers and government officials.

"Politically and socially, we must ask: If we can send probes to the depths of the oceans or the furthest regions of the solar system, is it too much to expect that we possess a reliable understanding of sand mining in the world's great rivers, on which so much of the world's human population relies?" said Jim Best, a professor in the University of Illinois Department of Geology. "Now is the time to commit to gaining such knowledge by fully grasping and utilizing the new techniques that are at our disposal."

To move towards globally sustainable sand extraction, the authors argue, we must fully understand where sustainable sources occur and reduce current extraction rates and sand needs, by recycling concrete and developing alternatives to sand (such as crushed rock or plastic waste materials). This will rely on knowledge of the location and extent of sand mining, as well as of the natural variations in sand flux in the world's rivers.

"The fact that sand is such a fundamental component of modern society, and yet we have no clear idea of how much sand we remove from our rivers every year, or even how much sand is naturally available, makes ensuring this industry is sustainable very, very difficult," said Chris Hackney, a research fellow at the University of Hull's Energy and Environment Institute. "It's time that sand was given the same focus on the world stage as other global commodities such as oil, gas and precious metals."

"The issue of sand scarcity cannot be studied in geographical isolation as it has worldwide implications," said Lars L. Iversen, a research fellow at Arizona State University's Julie Ann Wrigley Global Institute of Sustainability. "The reality and size of the problem must be acknowledged--and action must be taken--on a global stage. In a rapidly changing world, we cannot afford blind spots."

Credit: 
University of Colorado at Boulder

Novel computer model supports cancer therapy

Researchers from the Life Sciences Research Unit (LSRU) of the University of Luxembourg have developed a computer model that simulates the metabolism of cancer cells. They used the programme to investigate how combinations of drugs could be used more effectively to stop tumour growth. The biologists have now published their findings in EBioMedicine, a scientific journal of the prestigious Lancet group.

The metabolism of cancer cells is optimised to enable fast growth of tumours. "Their metabolism is much leaner than that of healthy cells, as they are just focused on growth. However, this makes them more vulnerable to interruptions in the chain of chemical reactions that the cells depend on. Whereas healthy cells can take alternative routes when one metabolic path is disabled, this is more difficult for cancer cells," explains Thomas Sauter, Professor of Systems Biology at the University of Luxembourg and lead author of the paper. "In our study, we investigated how drugs or combinations of drugs could be used to switch off certain proteins in cancer cells and thereby interrupt the cell's metabolism."

To do this, the researchers created digital models of healthy and of cancerous cells and fed them with gene-sequencing data from 10,000 patients in The Cancer Genome Atlas (TCGA) of the US National Cancer Institute (NCI). Using these models, the researchers were able to simulate the effects different active substances had on the cells' metabolism, and so identify drugs that inhibited cancer growth while leaving healthy cells unaffected. The models allow drugs that do not work, or that are toxic, to be filtered out, so that only the promising ones are tested in the lab.
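The selection logic described above can be sketched in a few lines of Python. The drug names, growth values and thresholds below are purely illustrative assumptions, not the authors' actual model outputs or cut-offs:

```python
# Hypothetical simulation output: predicted growth of the cancer-cell model
# and the healthy-cell model under each drug, relative to untreated = 1.0.
predicted_growth = {
    "drug_A": (0.30, 0.95),  # inhibits cancer, spares healthy cells
    "drug_B": (0.25, 0.40),  # inhibits both -> likely toxic
    "drug_C": (0.90, 0.97),  # little effect on cancer -> ineffective
}

# Keep only drugs that strongly slow the cancer model (growth < 0.5)
# while leaving the healthy model essentially untouched (growth > 0.9).
candidates = sorted(
    drug
    for drug, (cancer, healthy) in predicted_growth.items()
    if cancer < 0.5 and healthy > 0.9
)
print(candidates)  # ['drug_A']
```

Only candidates that pass such an in-silico filter would then go on to laboratory testing.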

With the help of the models, they tested about 800 medications, of which 40 were predicted to inhibit cancer growth. About 50 percent of these drugs were already known as anti-cancer therapeutics, but 17 of them are so far only approved for other treatments. "Our tool can help with so-called 'drug repositioning', which means that new therapeutic uses are found for existing medication. This could significantly reduce the cost and time of drug development," Prof. Sauter said.

The particular advantage of the approach is the efficiency of its mathematical method. "We managed to create 10,000 patient models within one week, without the use of high-performance computing. This is exceptionally fast," comments Dr. Maria Pacheco, postdoctoral researcher at the University of Luxembourg and first author of the study. Dr. Elisabeth Letellier, principal investigator of the Molecular Disease Mechanisms group at the University of Luxembourg and collaborator on the present study, further emphasizes: "In the future, this could allow us to build models of individual cancer patients and virtually test drugs in order to find the most efficient combination. This could also bring fresh hope to patients for whom known therapies have proven ineffective."

So far, the models have been tested only on colorectal cancer, but according to Thomas Sauter the algorithm should, in principle, work for all types of cancer. He and his team are currently considering developing commercial applications for their method.

Credit: 
University of Luxembourg

Arts and medicine: clarifying history, lessons for today from Peter Neubauer's twins study

Bottom line: This Arts and Medicine feature reviews "Three Identical Strangers" and "The Twinning Reaction," two documentaries telling the story of identical twins and triplets adopted as infants into separate families who were unknowing participants in a two-decade nature vs. nurture study of child development beginning in 1960.

Credit: 
JAMA Network

Combat veterans more likely to experience mental health issues in later life

CORVALLIS, Ore. - Military veterans exposed to combat were more likely to exhibit signs of depression and anxiety in later life than veterans who had not seen combat, a new study from Oregon State University shows.

The findings suggest that military service, and particularly combat experience, is a hidden variable in research on aging, said Carolyn Aldwin, director of the Center for Healthy Aging Research in the College of Public Health and Human Sciences at OSU and one of the study's authors.

"There are a lot of factors in aging that can impact mental health in late life, but there is something about having been a combat veteran that is especially important," Aldwin said.

The findings were published this month in the journal Psychology and Aging. The first author is Hyunyup Lee, who conducted the research as a doctoral student at OSU; co-authors are Soyoung Choun of OSU and Avron Spiro III of Boston University and the VA Boston Healthcare System. The research was funded by the National Institute on Aging and the Department of Veterans Affairs.

Little existing research examines the effects of combat exposure on aging, and in particular its impact on mental health in late life, Aldwin said. Many aging studies ask about participants' status as veterans, but don't unpack that further to compare those who were exposed to combat with those who weren't.

Using data from the Veterans Affairs Normative Aging Study, a longitudinal study that began in the 1960s to investigate aging in initially healthy men, the researchers explored the relationship between combat exposure and depressive and anxiety symptoms, as well as self-rated health and stressful life events.

Increased rates of mental health symptoms in late life were found only among combat veterans; the increases were not seen in veterans who had not been exposed to combat.

Generally, mental health symptoms such as depression and anxiety tend to decrease or remain stable during adulthood but can increase in later life. The researchers found that combat exposure has a unique impact on that trajectory, independent of other health issues or stressful life events.

"In late life, it's pretty normal to do a life review," Aldwin said. "For combat veterans, that review of life experiences and losses may have more of an impact on their mental health. They may need help to see meaning in their service and not just dwell on the horrors of war."

Veterans' homecoming experience may also color how they view their service later in life, Aldwin said. Welcoming veterans home and focusing on reintegration could help to reduce the mental toll of their service over time.

Most of the veterans in the study served in World War II or Korea. Additional research is needed to understand how veterans' experiences may vary from war to war, Aldwin said.

Aldwin and colleagues are currently working on a pilot study, VALOR, or Veterans Aging: Longitudinal studies in Oregon, to better understand impacts of combat exposure. The pilot study is supported by a grant from the OSU Research Office and includes veterans with service in Vietnam, the Persian Gulf and the post-9/11 conflicts.

The researchers have collected data from 300 veterans and are beginning to analyze it. Based on their initial findings, they are also planning a second, larger study with more veterans. They expect to see differences between veterans from different wars.

"Each war is different. They are going to affect veterans differently," Aldwin said. "Following 9/11, traumatic brain injuries have risen among veterans, while mortality rates have fallen. We have many more survivors with far more injuries. These veterans have had much higher levels of exposure to combat as well."

VALOR also offers researchers the opportunity to explore the impact of service on women veterans, whose experiences have not often been captured in previous research. About one-third of the participants in the pilot study were female veterans, Aldwin said.

Credit: 
Oregon State University

Tiny granules can help bring clean and abundant fusion power to Earth

image: PPPL physicists Robert Lunsford, left, and Rajesh Maingi, right

Image: 
Elle Starkman

Beryllium, a hard, silvery metal long used in X-ray machines and spacecraft, is finding a new role in the quest to bring the power that drives the sun and stars to Earth. Beryllium is one of the two main materials used for the wall in ITER, a multinational fusion facility under construction in France to demonstrate the practicality of fusion power. Now, physicists from the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and General Atomics have concluded that injecting tiny beryllium pellets into ITER could help stabilize the plasma that fuels fusion reactions.

Experiments and computer simulations found that the injected granules help create conditions in the plasma that could trigger small eruptions called edge-localized modes (ELMs). If triggered frequently enough, the tiny ELMs prevent giant eruptions that could halt fusion reactions and damage the ITER facility.

Scientists around the world are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity. The process involves plasma, a very hot soup of free-floating electrons and atomic nuclei, or ions. The merging of the nuclei releases a tremendous amount of energy.

In the present experiments, the researchers injected granules of carbon, lithium, and boron carbide -- lightweight materials that share several properties with beryllium -- into the DIII-D National Fusion Facility that General Atomics operates for the DOE in San Diego. "These light metals are materials commonly used inside DIII-D and share several properties with beryllium," said PPPL physicist Robert Lunsford, lead author of the paper that reports the results in Nuclear Materials and Energy. Because the internal structure of the three materials is similar to that of beryllium, the scientists infer that all of these elements will affect the ITER plasma in similar ways. The physicists also used magnetic fields to make the DIII-D plasma resemble the plasma as it is predicted to occur in ITER.

These experiments were the first of their kind. "This is the first attempt to try to figure out how these impurity pellets would penetrate into ITER and whether you would make enough of a change in temperature, density, and pressure to trigger an ELM," said Rajesh Maingi, head of plasma-edge research at PPPL and a co-author of the paper. "And it does look in fact like this granule injection technique with these elements would be helpful."

If so, the injection could lower the risk of large ELMs in ITER. "The amount of energy being driven into the ITER first walls by spontaneously occurring ELMs is enough to cause severe damage to the walls," Lunsford said. "If nothing were done, you would have an unacceptably short component lifetime, possibly requiring the replacement of parts every couple of months."

Lunsford also used a program he wrote himself that showed that injecting beryllium granules measuring 1.5 millimeters in diameter, about the thickness of a toothpick, would penetrate into the edge of the ITER plasma in a way that could trigger small ELMs. At that size, enough of the surface of the granule would evaporate, or ablate, to allow the beryllium to penetrate to locations in the plasma where ELMs can most effectively be triggered.

The next step will be to calculate whether density changes caused by the impurity pellets in ITER would indeed trigger an ELM as the experiments and simulations indicate. This research is currently underway in collaboration with international experts at ITER.

The researchers envision the injection of beryllium granules as just one of many tools, including using external magnets and injecting deuterium pellets, to manage the plasma in doughnut-shaped tokamak facilities like ITER. The scientists hope to conduct similar experiments on the Joint European Torus (JET) in the United Kingdom, currently the world's largest tokamak, to confirm the results of their calculations. Says Lunsford, "We think that it's going to take everyone working together with a bunch of different techniques to really get the ELM problem under control."

Credit: 
DOE/Princeton Plasma Physics Laboratory

Sister, neighbor, friend: Thinking about multiple roles boosts kids' performance

image: A typical child plays many roles, such as friend, neighbor, son or daughter. Simply reminding children of that fact can lead to better problem-solving and more flexible thinking, finds new research from Duke University.

Image: 
Duke University

DURHAM, N.C. -- A typical child plays many roles, such as friend, neighbor, son or daughter. Simply reminding children of that fact can lead to better problem-solving and more flexible thinking, finds new research from Duke University.

"This is some of the first research on reminding kids about their multi-faceted selves," said lead author Sarah Gaither, an assistant professor of psychology and neuroscience at Duke.
"Such reminders boost their problem-solving skills and how flexibly they see their social worlds - all from a simple mindset switch."

Better problem-solving was just one positive finding of the study, Gaither said. After considering their own various identities, children also showed more flexible thinking about race and other social groupings -- a behavior that could be valuable in an increasingly diverse society.

The research appears July 2 in the journal Developmental Science.

In a series of experiments, Gaither and her colleagues looked at 196 children, ages 6 and 7. All were native English speakers.

In one experiment, the first group of children was reminded they have various identities, such as son, daughter, reader or helper. A second group of children was reminded of their multiple physical attributes (such as a mouth, arms and legs).

In another experiment, one group of children was again reminded they have various identities. A second set of children received similar prompts -- but about other children's many roles, not their own.

All the children then tackled a series of tasks. Children who were reminded of their various identities demonstrated stronger problem-solving and creative thinking skills. For instance, when shown pictures of a bear gazing at a honey-filled beehive high up in a tree, these children had more creative ideas for how the bear might get the honey, such as flipping over a bowl so that it becomes a stool. In other words, they saw a new use for the bowl.

Children who were reminded of their multiple roles also showed more flexible thinking about social groupings. When asked to categorize different photos of faces, they suggested many ways to do so. For instance, they identified smiling faces vs. unsmiling ones, and old vs. young faces. The other children, meanwhile, primarily grouped people's faces by race and gender.

Because the results suggest simple ways to promote flexible, inclusive thinking for the young, they could be especially valuable for teachers, Gaither said.

"We have this tendency in our society to only think about ourselves in connection with one important group at a time," Gaither said. "When we remind kids that they have various identities, they think beyond our society's default categories, and remember that there are many other groups in addition to race and gender.

"It opens their horizons to be a little more inclusive."

Credit: 
Duke University

Why do mosquitoes choose us? Lindy McBride is on the case

image: Researchers in Lindy McBride's lab at Princeton University often waft air across guinea pigs Molly (seen here) and Mia (not pictured) to collect their odor for mosquito research. In experiments, mosquitoes are given the choice between the guinea pigs' odor and human odor as part of studies into how mosquitoes distinguish between humans and other mammals; neither the humans nor the guinea pigs are directly exposed to mosquito bites.

Image: 
Danielle Alio, Princeton University

Carolyn "Lindy" McBride is studying a question that haunts every summer gathering: How and why are mosquitoes attracted to humans?

Few animals specialize as thoroughly as the mosquitoes that carry diseases like Zika, malaria and dengue fever.

In fact, of the more than 3,000 mosquito species in the world, most are opportunistic, said McBride, an assistant professor of ecology and evolutionary biology and the Princeton Neuroscience Institute. They may be mammal biters, or bird biters, with a mild preference for various species within those categories, but most mosquitoes are neither totally indiscriminate nor species-specific. But McBride is most interested in the mosquitoes that scientists call "disease vectors" -- carriers of diseases that plague humans -- some of which have evolved to bite humans almost exclusively.

She studies several mosquitoes that carry diseases, including Aedes aegypti, which is the primary vector for dengue fever, Zika and yellow fever, and Culex pipiens, which carries West Nile virus. A. aegypti specializes in humans, while C. pipiens is less specialized, allowing it to transmit West Nile from birds to humans.

"It's the specialists that tend to be the best disease vectors, for obvious reasons: They bite a lot of humans," said McBride. She's trying to understand how the brain and genome of these mosquitoes have evolved to make them specialize in humans -- including how they can distinguish us from other mammals so effectively.

To help her understand what draws human-specialized mosquitoes to us, McBride compares the behavior, genetics and brains of the Zika mosquito to an African strain of the same species that does not specialize in humans.

In one line of research, she investigates how animal brains interpret complex aromas. That's a more complicated proposition than it first appears, since human odor is composed of more than 100 different compounds -- and those same compounds, in slightly different ratios, are present in most mammals.

"Not any one of those chemicals is attractive to mosquitoes by itself, so mosquitoes must recognize the ratio, the exact blend of components that defines human odor," said McBride. "So how does their brain figure it out?"

She is also studying what combination of compounds attracts mosquitoes. That could lead to baits that attract mosquitoes to lethal traps, or repellants that interrupt the signal.

Most mosquito studies in recent decades have been behavioral experiments, which are very labor intensive, said McBride. "You give them an odor and say, 'Do you like this?' and even with five compounds, the number of permutations you have to go through to figure out exactly what the right ratio is -- it's overwhelming." With 15 or 20 compounds, the number of permutations skyrockets, and with the full complement of 100, it's astronomical.
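A back-of-the-envelope calculation shows why the search space explodes. Assuming, purely for illustration, that each compound's concentration is discretized into just four levels (this discretization is our assumption, not the lab's protocol), the number of candidate blends grows exponentially with the number of compounds:

```python
def n_blends(n_compounds: int, levels: int = 4) -> int:
    """Count candidate blends when each compound can take `levels` concentrations."""
    return levels ** n_compounds

print(n_blends(5))               # 1024 blends for 5 compounds
print(n_blends(20))              # ~1.1 trillion for 20 compounds
print(f"{n_blends(100):.2e}")    # ~1.61e+60 for 100 -- astronomical
```

Even with coarse discretization and only five compounds, a behavioral screen of every blend is already impractical, which is why McBride's lab is pursuing neural-imaging shortcuts instead.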

To test the odor preferences of mosquitoes, McBride's lab has primarily used guinea pigs, small mammals whose odor contains many of the same compounds as human odor, but in a different blend. Researchers gather their odor by blowing air over their bodies, then present mosquitoes with a choice between eau de guinea pig and a human arm. Human-specialized "domestic" A. aegypti mosquitoes will go toward the arm 90 to 95 percent of the time, said McBride, but the African "forest" A. aegypti mosquitoes are more likely to fly toward the guinea pig aroma.

In another recent experiment, then-senior Meredith Mihalopoulos of the Class of 2018 recruited seven volunteers and did "preference tests" with both forest and domestic A. aegypti mosquitoes. She let the mosquitoes choose between herself and each of the volunteers, finding that some people are more attractive to the insects than others. Then Alexis Kriete, a research specialist in the McBride lab, analyzed the odor of all the participants, showing that while the same compounds were present in each, the humans were more similar to one another than to the guinea pigs.

"There's nothing really unique about any animal odor," said McBride. "There's no one compound that characterizes a guinea pig species. To recognize a species, you have to recognize blends."

The McBride lab will be expanding to include other mammals and birds in their research. Graduate student Jessica Zung is working with farms and zoos to collect hair, fur, feather and wool samples from 50 animal species. She hopes to extract odor from them and analyze the odors at a Rutgers University facility that fractionates odors and identifies the ratio of the compounds. By inputting their odor profiles into a computational model, she and McBride hope to understand how exactly mosquitoes may have evolved to distinguish humans from non-human animals.

McBride's graduate student Zhilei Zhao is developing an entirely novel approach: imaging mosquito brains at very high resolutions to figure out how a mosquito identifies its next victim. "What combination of neural signals in the brain cause the mosquito to be attracted or repelled?" McBride asked. "If we can figure that out, then it's trivial to screen for blends that can be attractive or repellant. You put the mosquito up there, open up its head, image the brain, pop one aroma after another and watch: Does it hit the right combination of neurons?"

Key to that study will be the imaging equipment provided by Princeton's Bezos Center for Neural Circuit Dynamics, said McBride. "We can walk over there and say we want to image this, at this resolution, with this orientation, and a few months later, the microscope is built," she said. "We could have bought an off-the-shelf microscope, but it would have been so much slower and so much less powerful. Help from Stephan Thiberge, the director of the Bezos Center, has been critical for us."

McBride began her biology career studying evolution in butterflies, but she was lured to disease vector mosquitoes by how easy they are to rear in the lab. While the butterflies McBride studied need a year to develop, A. aegypti mosquitoes can go through an entire life cycle in three weeks, allowing for rapid-turnaround genetic experiments.

"That's what first drew me to mosquitoes," said McBride. "One of the surprises for me has been how satisfying it is that they have an impact on human health. That's certainly not why I got into biology -- I was studying birds and butterflies in the mountains, as far away from humans as I could get -- but I really appreciate that element of mosquito work now.

"But what is still as exciting is how easily we can manipulate mosquitoes to test hypotheses about how new behaviors evolve," she continued. "We can create transgenic strains, we can knock out genes, we can activate neurons with light. All these things have been done in model systems, like mouse and fly, but never in a non-model organism, never in an organism -- I'm showing my bias here -- with such interesting ecology and evolution."

Credit: 
Princeton University

Dose-dependent effects of esmolol-epinephrine combination therapy in myocardial ischemia

Epinephrine has been included in resuscitation guidelines worldwide since the 1960s. It is believed that epinephrine increases the chance of restoring a person's heartbeat and improves long-term neurological outcome by increasing coronary and cerebral perfusion pressure. However, recent studies have raised doubts about the benefit of epinephrine for neurological outcomes in cardiac arrest. Moreover, epinephrine use in the stabilization of cardiogenic shock in post-myocardial infarction patients has been found to increase the incidence of refractory shock. In fact, beta-adrenergic receptor stimulation has been suggested to have deleterious effects, as stimulation of this pathway increases oxygen consumption and reduces sub-endocardial perfusion. In contrast, esmolol, a cardio-selective β1-blocker, has been shown to provide cardioprotection after myocardial ischemia in animal and human studies. Esmolol co-administration with epinephrine may therefore help to reduce epinephrine-induced reperfusion injury while maintaining esmolol's cardioprotection and epinephrine-mediated increases in chronotropy and inotropy. Indeed, recent studies in animals have uncovered beneficial effects of epinephrine and esmolol co-administration in a cardiac arrest model. Based on these findings, Dr. Tobias Eckle and his team at the University of Colorado School of Medicine have investigated esmolol-epinephrine combination therapy in a mouse model of myocardial ischemia and reperfusion injury.

Comparing different esmolol doses in combination with epinephrine in a mouse model of myocardial infarction, Eckle's team demonstrated that at a specific esmolol-to-epinephrine ratio (15:1), esmolol's cardioprotection and epinephrine's β1-mediated hemodynamic activity can coexist during myocardial ischemia and reperfusion injury. "These findings might have implications for current clinical practice in the treatment of patients with cardiogenic shock or cardiac arrest," says Eckle. "In fact, a cardiogenic shock after myocardial ischemia disallows the use of esmolol due to hemodynamic instability." Notably, a definite recommendation for a specific catecholamine regimen in cardiogenic shock is lacking.

Combination therapy of epinephrine with esmolol seems counterintuitive in cardiogenic shock after myocardial ischemia: higher esmolol doses could compromise epinephrine-mediated increases of cardiac output via β1 adrenergic receptor inotropic and chronotropic effects, while higher epinephrine doses could compromise esmolol-mediated cardioprotection via β1 adrenergic receptor blockade. Surprisingly, by increasing the esmolol dose, the study team was able to restore esmolol's cardioprotection while heart rate and some blood pressures in the early reperfusion phase were significantly increased compared with esmolol treatment alone. "This finding is novel and highlights that esmolol cardioprotection is not fully understood," says Eckle. Seeing increased heart rates, which are β1-mediated, alongside cardioprotection via esmolol's β1 blockade indicates that only partial or short-term blockade of β1 receptors is necessary for the salutary effects of esmolol in myocardial ischemia and reperfusion injury.

While some clinicians occasionally use esmolol to treat epinephrine-induced arrhythmias in patients coming off cardiac bypass who are on an epinephrine infusion for cardiogenic shock, no study to date has evaluated potential cardioprotective effects of esmolol-epinephrine co-administration during cardiac bypass surgery or cardiogenic shock. As this is the first animal study of epinephrine-esmolol co-administration during myocardial ischemia and reperfusion injury, further studies in larger animals using multiple dosing protocols are suggested.

Credit: 
Bentham Science Publishers

Atmosphere of mid-size planet revealed by Hubble and Spitzer

image: This artist's illustration shows the theoretical internal structure of the exoplanet GJ 3470 b. It is unlike any planet found in the Solar System. Weighing in at 12.6 Earth masses, the planet is more massive than Earth but less massive than Neptune. Unlike Neptune, which is 3 billion miles from the Sun, GJ 3470 b may have formed very close to its red dwarf star as a dry, rocky object. It then gravitationally pulled in hydrogen and helium gas from a circumstellar disk to build up a thick atmosphere. The disk dissipated many billions of years ago, and the planet stopped growing. The bottom illustration shows the disk as the system may have looked long ago. Observations by NASA's Hubble and Spitzer space telescopes have chemically analyzed the composition of GJ 3470 b's very clear and deep atmosphere, yielding clues to the planet's origin. Many planets of this mass exist in our galaxy.

Image: 
NASA, ESA, and L. Hustak (STScI)

Two NASA space telescopes have teamed up to identify, for the first time, the detailed chemical "fingerprint" of a planet between the sizes of Earth and Neptune. No planets like this can be found in our own solar system, but they are common around other stars.

The planet, Gliese 3470 b (also known as GJ 3470 b), may be a cross between Earth and Neptune, with a large rocky core buried under a deep, crushing hydrogen and helium atmosphere. Weighing in at 12.6 Earth masses, the planet is more massive than Earth, but less massive than Neptune (which is more than 17 Earth masses).

Many similar worlds have been discovered by NASA's Kepler space observatory, whose mission ended in 2018. In fact, 80% of the planets in our galaxy may fall into this mass range. However, astronomers have never been able to understand the chemical nature of such a planet until now, researchers say.

By inventorying the contents of GJ 3470 b's atmosphere, astronomers are able to uncover clues about the planet's nature and origin.

"This is a big discovery from the planet formation perspective. The planet orbits very close to the star and is far less massive than Jupiter--318 times Earth's mass--but has managed to accrete the primordial hydrogen/helium atmosphere that is largely "unpolluted" by heavier elements," said Björn Benneke of the University of Montreal, Canada. "We don't have anything like this in the solar system, and that's what makes it striking."

Astronomers enlisted the combined multi-wavelength capabilities of NASA's Hubble and Spitzer space telescopes to conduct a first-of-its-kind study of GJ 3470 b's atmosphere.

This was accomplished by measuring the absorption of starlight as the planet passed in front of its star (transit) and the loss of reflected light from the planet as it passed behind the star (eclipse). All told, the space telescopes observed 12 transits and 20 eclipses. The science of analyzing chemical fingerprints based on light is called "spectroscopy."
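The transit half of that measurement rests on simple geometry: the fraction of starlight blocked equals the square of the planet-to-star radius ratio. A back-of-the-envelope sketch (the radii below are rough illustrative assumptions, not values reported in this article):

```python
# Generic transit-depth geometry; both radii are illustrative assumptions.
R_SUN_KM = 696_340.0
R_EARTH_KM = 6_371.0

r_star = 0.48 * R_SUN_KM     # assumed: a roughly half-solar-radius red dwarf
r_planet = 4.0 * R_EARTH_KM  # assumed: a typical sub-Neptune radius

depth = (r_planet / r_star) ** 2  # fraction of starlight blocked mid-transit
print(f"transit depth = {depth:.2%}")  # about half a percent
```

A dip of this size, repeated every orbit, is what the telescopes stacked over 12 transits to build the spectroscopic signal.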

"For the first time we have a spectroscopic signature of such a world," said Benneke. But he is at a loss for classification: Should it be called a "super-Earth" or "sub-Neptune?" Or perhaps something else?

Fortuitously, the atmosphere of GJ 3470 b turned out to be mostly clear, with only thin hazes, enabling the scientists to probe deep into the atmosphere.

"We expected an atmosphere strongly enriched in heavier elements like oxygen and carbon which are forming abundant water vapor and methane gas, similar to what we see on Neptune", said Benneke. "Instead, we found an atmosphere that is so poor in heavy elements that its composition resembles the hydrogen/helium rich composition of the Sun."

Other exoplanets called "hot Jupiters" are thought to form far from their stars, and over time migrate much closer. But this planet seems to have formed just where it is today, says Benneke.

The most plausible explanation, according to Benneke, is that GJ 3470 b was born precariously close to its red dwarf star, which is about half the mass of our Sun. He hypothesizes that essentially it started out as a dry rock, and rapidly accreted hydrogen from a primordial disk of gas when its star was very young. The disk is called a "protoplanetary disk."

"We're seeing an object that was able to accrete hydrogen from the protoplanetary disk, but didn't runaway to become a hot Jupiter," said Benneke. "This is an intriguing regime."

One explanation is that the disk dissipated before the planet could bulk up further. "The planet got stuck being a sub-Neptune," said Benneke.

NASA's upcoming James Webb Space Telescope will be able to probe even deeper into GJ 3470 b's atmosphere thanks to the Webb's unprecedented sensitivity in the infrared. The new results have already attracted strong interest from American and Canadian teams developing the instruments on Webb. They will observe the transits and eclipses of GJ 3470 b at light wavelengths where the atmospheric hazes become increasingly transparent.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.

The Jet Propulsion Laboratory in Pasadena, California, manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate in Washington, D.C. Science operations are conducted at the Spitzer Science Center at Caltech in Pasadena. Space operations are based at Lockheed Martin Space Systems in Littleton, Colorado. Data are archived at the Infrared Science Archive housed at IPAC at Caltech. Caltech manages JPL for NASA.

Credit: 
NASA/Goddard Space Flight Center

Harnessing reliability for neuroscience research

The neuroimaging community has made significant strides towards collecting large-scale neuroimaging datasets, which--until the past decade--had seemed out of reach. Between initiatives focused on the aggregation and open sharing of previously collected datasets and de novo data generation initiatives tasked with the creation of community resources, tens of thousands of datasets are now available online. These span a range of developmental statuses and disorders, and many more will soon be available. Such open data are allowing researchers to increase the scale of their studies, to apply various learning strategies (for example, artificial intelligence) with ambitions of brain-based biomarker discovery and to address questions regarding the reproducibility of findings, all at a pace that is unprecedented in imaging. However, based on the findings of recent works1-3, few of the datasets generated to date contain enough data per subject to achieve highly reliable measures of brain connectivity. Although our examination of this critical deficiency focuses on the field of neuroimaging, the implications of our argument and the statistical principles discussed are broadly applicable.

Scoping the problem

Our concern is simple: researchers are working hard to amass large-scale datasets, whether through data sharing or coordinated data generation initiatives, but failing to optimize their data collections for relevant reliabilities (for example, test-retest, between raters, etc.)4. They may be collecting larger amounts of suboptimal data, rather than smaller amounts of higher-quality data, a trade-off that does not bode well for the field, particularly when it comes to making inferences and predictions at the individual level. We believe that this misstep can be avoided by critical assessments of reliability upfront.

The trade-off we observe occurring in neuroimaging reflects a general tendency in neuroscience. Statistical power is fundamental to studies of individual differences, as it determines our ability to detect effects of interest. While sample size is readily recognized as a key determinant of statistical power, measurement reliabilities are less commonly considered and at best are only indirectly considered when estimating required sample sizes. This is unfortunate, as statistical theory dictates that reliability places an upper limit on the maximum detectable effect size.

The interplay between reliability, sample size and effect size in determinations of statistical power is commonly underappreciated in the field. To facilitate a more direct discussion of these factors, Fig. 1 depicts the impact of measurement reliability and effect size on the sample sizes required to achieve desirable levels of statistical power (for example, 80%); these relations are not heavily dependent on the specific form of statistical inference employed (for example, two-sample t-test, paired t-tests, three-level ANOVA). Estimates were generated using the pwr package in R and are highly congruent with results from Monte Carlo simulations5. With respect to neuroscience, where the bulk of findings report effect sizes ranging from modest to moderate6, the figure makes obvious our point that increasing reliability can dramatically reduce the sample size requirements (and therefore cost) for achieving statistically appropriate designs.
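The interplay can be made concrete with a quick calculation. A classical psychometric result says an observed standardized effect shrinks by roughly the square root of the measure's reliability, so required sample sizes grow as reliability falls. Below is a minimal sketch using a normal-approximation power formula; the true effect size of 0.5 is an arbitrary illustration, and this approximation is only a stand-in for the pwr-package estimates behind the article's Fig. 1:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample t-test."""
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2

true_d = 0.5  # hypothetical true effect size
for reliability in (1.0, 0.8, 0.5):
    observed_d = true_d * reliability ** 0.5  # classical attenuation
    print(f"reliability {reliability}: n = {round(n_per_group(observed_d))} per group")
# reliability 1.0 needs ~63 per group; 0.8 needs ~78; 0.5 needs ~126
```

Halving reliability thus roughly doubles the sample (and cost) needed for the same 80% power, which is the article's core point.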

In neuroimaging, the reliability of the measures employed in experiments can vary substantially2-4. In MRI, morphological measures are known to have the highest reliability, with most voxels in the brain exhibiting reliabilities measured as intraclass correlation >0.8 for core measures (for example, volume, cortical thickness and surface area). For functional MRI (fMRI) approaches, reliability tends to be lower and more variable, heavily dependent on the experimental design, the nature of the measure employed and--most importantly--the amount of data obtained (for example, for basic resting-state fMRI measures, the mean intraclass correlation obtained across voxels may increase by two to four times as one increases from 5 min to 30 min of data)2,3. Limited interindividual variability may be a significant contributor to findings of low reliability for fMRI, as its magnitude relative to within-subject variation is a primary determinant of reliability. Such a concern has been raised for task fMRI7, which directly borrows behavioural task designs from the psychological literature8.

Potential implications

From a statistical perspective, the risks of underpowered samples yielding increased false negatives and artificially inflated effect sizes (i.e., the 'winner's curse' bias) are well known. More recently, the potential for insufficiently powered samples to generate false positives has been established as well9. All these phenomena reduce the reproducibility of findings across studies, a challenge that other fields (for example, genetics) have long worked to overcome. In the context of neuroimaging or human brain mapping, an additional concern is that we may be biased to overvalue those brain areas for which measurement reliability is greater. For example, the default and frontoparietal networks receive more attention in clinical and cognitive neuroscience studies of individual and group differences. This could be appropriate, but it could also reflect the higher reliabilities of these networks3,4.

Solutions

Our goal here is to draw greater attention to the need for assessment and optimization of reliability, which is typically underappreciated in neuroscience research. Whether one is focusing on imaging, electrophysiology, neuroinflammatory markers, microbiomics, cognitive neuroscience paradigms or on-person devices, it is essential that we consider measurement reliability and its determinants.

For MRI-based neuroimaging, a repeated theme across the various modalities (for example, diffusion, functional, morphometry) is that higher quality data require more time to collect, whether due to increased resolution or repetitions. As such, investigators would benefit from assessing the minimum data requirements to achieve adequately reliable measurements before moving forward. An increasing number of resources are available for such assessments of reliability (for example, Consortium for Reliability and Reproducibility, MyConnectome Project, Healthy Brain Network Serial Scanning Initiative, Midnight Scan Club, Yale Test-Retest Dataset, PRIMatE Data Exchange). It is important to note that these resources are primarily focused on test-retest reliability4, leaving other forms of reliability less explored (for example, interstate reliability, inter-scanner reliability; see recent efforts from a Research Topic on reliability and reproducibility in functional connectomics10).

Importantly, reliability will differ depending on how a given imaging dataset is processed and which brain features are selected. A myriad of different processing strategies and brain features have emerged, but they are rarely compared with one another to identify those most suitable for studying individual differences. In this regard, efforts to optimize analytic strategies for reliability are essential, as they make it possible to decrease the minimum data required per individual to achieve a target level of reliability1-4,11. This is critically important for applications in developing, aging and clinical populations, where scanner environment tolerability limits our ability to collect time-intensive datasets. An excellent example of quantifying and optimizing for reliability comes from functional connectomics. Following convergent reports that at least 20-30 min of data are needed to obtain test-retest reliability for traditional pairwise measures of connectivity2, recent works have suggested the feasibility of combining different fMRI scans in a session (for example, rest, movie, task) to make up the differential in calculating reliable measures of functional connectivity2,12.

Cognitive and clinical neuroscientists should be aware that many cognitive paradigms used inside and outside of the scanner have never been subject to proper assessments of reliability, and the quality of reliability assessments for questionnaires (even proprietary) can vary substantially.

As such, the reliability of data being used on the phenotyping side is often an unknown in the equation and can limit the utility of even the most optimal imaging measures, a reality that also affects other fields (for example, genetics) and inherently compromises such efforts. Although not always appealing, an increased focus on the quantification and publication of minimum data requirements and their reliabilities for phenotypic assessments is a necessity, as is exploration of novel approaches to data capture that may increase reliability (for example, sensor-based acquisition via wearables and longitudinal sampling via smartphone apps).

Finally, and perhaps most critically, there is marked diversity in how the word 'reliability' is used, and a growing number of separate reliability metrics are appearing. This phenomenon is acknowledged in a recent publication13 by an Organization for Human Brain Mapping workgroup tasked with generating standards for improving reproducibility. We suggest it would be best to build directly on the terminology and measures well-established in other literatures (for example, statistics, medicine) rather than start anew14. We particularly want to avoid confusion in terminology between 'reliability' and 'validity', two related but distinct concepts that are commonly used interchangeably in the literature. To facilitate an understanding of this latter point, we include a statistical note on the topic below.

A confusion to avoid

It is crucial that researchers acknowledge the gap between reliability and validity, as a highly reliable measure can be driven by artefact rather than meaningful (i.e., valid) signal. As illustrated in Fig. 2, this point becomes obvious when one considers the differing sources of variance associated with the measurement of individual differences15. First, we have the portion of the variance measured across individuals that is the trait of interest (Vt) (for example, between-subject differences in grey matter volume within left inferior frontal gyrus). Second, there is variance related to unwanted contaminants in our measurement that can systematically vary across individuals (Vc) (for example, between-subject differences in head motion). Finally, there is random noise (Vr), which is commonly treated as within-subject variation. Reliability is the proportion of the total variance that can be attributed to systematic variance across individuals (including both Vt and Vc; see equation 1); in contrast, validity is the proportion of the total variance that can be attributed specifically to the trait of interest alone (Vt; see equation 2).

Reliability = (Vt + Vc) / (Vt + Vc + Vr)   (1)
Validity = Vt / (Vt + Vc + Vr)   (2)

As discussed in prior work15, this framework indicates that a measure cannot be more valid than reliable (i.e., reliability provides an upper bound for validity). So, while it is possible to have a measurement that is sufficiently reliable and completely invalid (for example, a reliable artefact), it is impossible to have a measurement with low reliability that has high validity.
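Equations (1) and (2) are easy to sanity-check numerically. The variance values below are invented purely for illustration:

```python
def reliability(vt, vc, vr):
    """Proportion of total variance that is systematic across individuals."""
    return (vt + vc) / (vt + vc + vr)

def validity(vt, vc, vr):
    """Proportion of total variance attributable to the trait of interest."""
    return vt / (vt + vc + vr)

# Ordinary case: some trait variance, some artefact, some noise.
print(reliability(4, 1, 5), validity(4, 1, 5))   # 0.5 0.4

# A 'reliable artefact': high reliability, zero validity.
print(reliability(0, 8, 2), validity(0, 8, 2))   # 0.8 0.0
```

Because Vc appears in the numerator of (1) but not of (2), validity can never exceed reliability, which is exactly the upper-bound argument above.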

A specific challenge for neuroscientists is that while reliability can be readily quantified, validity cannot, as it is not possible to directly measure Vt. As such, various indirect forms of validity are used, which differ in the strength of the evidence required. At one end is criterion validity, which compares the measure of interest to an independent measure designated as the criterion or 'gold standard' measurement (for example, comparison of individual differences in tracts identified by diffusion imaging to postmortem histological findings, or comparison of differences in fMRI-based connectivity patterns to intracranial measures of neural coupling or magnetoencephalography). At the other extreme is face validity, in which findings are simply consistent with 'common sense' expectations (for example, does my functional connectivity pattern look like the motor system?). Intermediate to these are concepts such as construct validity, which test whether a measure varies as would be expected if it is indexing the desired construct (i.e., convergent validity) and not others (i.e., divergent validity) (for example, do differences in connectivity among individuals vary with developmental status and not head motion or other systematic artefacts?). An increasingly common tool in the imaging community is predictive validity, where researchers test the ability to make predictions regarding a construct of interest (for example, do differences in the network postulated to support intelligence predict differences in IQ?). As can be seen from the examples provided, different experimental paradigms offer differing levels of validity, with the more complex and challenging offering the highest forms. From a practical perspective, what researchers can do is make best efforts to measure and remove artefact signals such as head motion4,16 and work to establish the highest form of validity possible using the methods available.

Closing remarks

As neuroscientists make strides in our efforts to deliver clinically useful tools, it is essential that assessments and optimizations for reliability become common practice. This will require improved research practices among investigators, as well as support from funding agencies in the generation of open community resources upon which these essential properties can be quantified.

Credit: 
The Child Mind Institute

Deal or no deal? How discounts for unhappy subscribers can backfire on businesses

image: Vamsi Kanuri, assistant professor of marketing in Notre Dame's Mendoza College of Business.

Image: 
University of Notre Dame

Subscription-based service providers including newspapers, cable and internet providers and utility companies often issue price-based incentives including discounts in response to complaints about service failures. It's been shown to satisfy angry customers -- at least momentarily.

But new research from the University of Notre Dame demonstrates the tactic may not be successful in retaining customers in the long term.

"The Unintended Consequence of Price-Based Service Recovery Incentives," forthcoming in the Journal of Marketing from lead author Vamsi Kanuri, assistant professor of marketing in Notre Dame's Mendoza College of Business, and Michelle Andrews from Emory University, shows that in subscription-based service settings, discounts to make up for service failures could backfire by reducing the likelihood of subscription renewals.

"The economic theory of reference prices (amount a purchaser thinks is appropriate to pay for a good or service) leads us to believe that discounts to make up for service failures will provide a new price point for customers to anchor on," Kanuri said. "In turn, this will lead them to compare the price of the service renewal with their reduced service price following the service failure. A higher discount results in consumers forming a lower reference price, which in turn increases the difference between the full renewal price and the reference price. This difference then translates into a perceived loss, which ultimately results in lower renewal probabilities."

In other words, consumers may end up feeling cheated rather than rewarded by the discount -- the exact opposite of what the provider hoped to accomplish.
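The anchoring logic Kanuri describes reduces to simple arithmetic. A toy sketch with hypothetical prices (none of these figures come from the study):

```python
FULL_PRICE = 20.0  # hypothetical monthly renewal price

def perceived_loss(recovery_discount):
    """Reference-price sketch: the discounted price becomes the customer's
    new anchor, so renewing at full price registers as a loss."""
    reference_price = FULL_PRICE * (1 - recovery_discount)
    return FULL_PRICE - reference_price

print(perceived_loss(0.10))  # shallow discount, small perceived loss: 2.0
print(perceived_loss(0.50))  # deep discount, larger perceived loss: 10.0
```

The deeper the recovery discount, the larger the gap the customer feels at renewal, which is the mechanism behind the lower renewal probabilities.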

The researchers used econometric techniques to examine 6,919 renewal decisions of subscribers who threatened to cancel their subscriptions following service delivery failures at a large U.S. newspaper firm. The data covered 10 delivery failures frequently experienced by customers, including late delivery, wrong newspaper delivered, missed delivery, newspaper delivered to the wrong location and property damage during delivery.

"Firms do not understand the paradox of service failure," Kanuri said. "It has been shown that if a firm is able to delight a customer at the point of service failure, the customer is likely to be more satisfied than under normal conditions when there is no service failure and is likely to remain a customer longer. Everyone knows that firms are imperfect, just as human beings, and that there will be a service letdown at some point. How the firm chooses to delight its customers can make all the difference."

The study also offers ways to mitigate the negative effect of recovery discounts and can help any subscription-based service provider currently using discounts as a recovery tactic.

"After all, discounts may be necessary to alleviate customer dissatisfaction immediately after a service failure and firms may not have another option," Kanuri said. "In such circumstances, we demonstrate that firms can alleviate the long-term negative consequences by lowering the renewal price at the end of the contract, increasing the time between recovery and contract renewal (offer additional service usage time) and using touchpoints with customers such as emails, bill reminders and follow-up phone calls to remind customers of the initial subscription price."

Credit: 
University of Notre Dame

Gender bias alive and well in health care roles, study shows

New Orleans, LA -- Results of a multi-center study of patients' assumptions about health care professionals' roles based on gender show significant stereotypical bias towards males as physicians and females as nurses. The research team, led in New Orleans by Lisa Moreno-Walton, MD, LSU Health New Orleans Emergency Medicine at University Medical Center (UMC), found patients recognized males as physicians nearly 76% of the time. Female attending physicians were recognized as physicians only 58% of the time. The research paper, published in the Journal of Women's Health, is available online.

"Despite the fact that about 52% of all medical students are women, unconscious/ implicit bias is so strong that even when women introduce themselves to patients as the doctor, patients fail to recognize the female as a physician," notes Dr. Moreno-Walton.

The researchers analyzed the responses of 150 patients to an anonymous survey conducted in the Emergency Department (ED) of a teaching hospital in Miami Beach, Florida. Each patient was seen by a nurse, a resident physician and an attending physician, all of whom introduced themselves, explained their exact roles and wrote their names on the exam-room whiteboard next to the nurse, resident or attending physician designation. Volunteers then administered the survey, in which patients were asked to recall the gender of their nurse, resident physician and attending physician.

The patients' genders did not affect their recognition of physicians and nurses: both male and female patients correctly recognized female physicians less often than male physicians, and male nurses less often than female nurses. Patients' ages, however, did have an effect. Younger patients correctly identified male nurses as nurses and female attending physicians as physicians more often than older patients did.

"At UMC, the vast majority of female ED physicians report that on our monthly patient satisfaction reports, only about 50% of our patients state that they were seen by a physician," adds Moreno-Walton. "All of us know we are seeing all of our patients, but the patients do not know that we are doctors."

This experience is a common one.

"When I do my career development seminars all over the country, I ask, 'If there is any woman in the room who has NEVER been mistaken for a nurse, please raise your hand,'" says Moreno-Walton. "In seven years of doing these seminars, I have yet to have even one woman physician raise her hand. Implicit bias, microaggressions and the sometimes present explicit bias can do significant damage to the morale, self-esteem and confidence of health care professionals who are females and/or underrepresented minorities. This is not trivial."

The research team advises further research be conducted to understand the implications of implicit gender biases on patient satisfaction, patient compliance, physician burnout, compassion fatigue and job satisfaction, among other issues.

Credit: 
Louisiana State University Health Sciences Center

How you and your friends can play a video game together using only your minds

image: University of Washington researchers created a method for two people to help a third person solve a task using only their minds. Heather Wessel, a recent UW graduate with a bachelor's degree in psychology (left), and Savannah Cassis, a UW undergraduate in psychology (right), sent information about a Tetris-like game from their brains over the internet to the brain of UW psychology graduate student Theodros Haile (middle). Haile could then manipulate the game with his mind.

Image: 
Mark Stone/University of Washington

Telepathic communication might be one step closer to reality thanks to new research from the University of Washington. A team created a method that allows three people to work together to solve a problem using only their minds.

In BrainNet, three people play a Tetris-like game using a brain-to-brain interface. This is the first demonstration of two things: a brain-to-brain network of more than two people, and a person being able to both receive and send information to others using only their brain. The team published its results April 16 in the Nature journal Scientific Reports, though this research previously attracted media attention after the researchers posted it September to the preprint site arXiv.

"Humans are social beings who communicate with each other to cooperate and solve problems that none of us can solve on our own," said corresponding author Rajesh Rao, the CJ and Elizabeth Hwang professor in the UW's Paul G. Allen School of Computer Science & Engineering and a co-director of the Center for Neurotechnology. "We wanted to know if a group of people could collaborate using only their brains. That's how we came up with the idea of BrainNet: where two people help a third person solve a task."

As in Tetris, the game shows a block at the top of the screen and a line that needs to be completed at the bottom. Two people, the Senders, can see both the block and the line but can't control the game. The third person, the Receiver, can see only the block but can tell the game whether to rotate the block to successfully complete the line. Each Sender decides whether the block needs to be rotated and then passes that information from their brain, through the internet and to the brain of the Receiver. Then the Receiver processes that information and sends a command -- to rotate or not rotate the block -- to the game directly from their brain, hopefully completing and clearing the line.

The team asked five groups of participants to play 16 rounds of the game. For each group, all three participants were in different rooms and couldn't see, hear or speak to one another.

The Senders each could see the game displayed on a computer screen. The screen also showed the word "Yes" on one side and the word "No" on the other side. Beneath the "Yes" option, an LED flashed 17 times per second. Beneath the "No" option, an LED flashed 15 times a second.

"Once the Sender makes a decision about whether to rotate the block, they send 'Yes' or 'No' to the Receiver's brain by concentrating on the corresponding light," said first author Linxing Preston Jiang, a student in the Allen School's combined bachelor's/master's degree program.

The Senders wore electroencephalography caps that picked up electrical activity in their brains. The lights' different flashing patterns trigger unique types of activity in the brain, which the caps can pick up. So, as the Senders stared at the light for their corresponding selection, the cap picked up those signals, and the computer provided real-time feedback by displaying a cursor on the screen that moved toward their desired choice. The selections were then translated into a "Yes" or "No" answer that could be sent over the internet to the Receiver.

"To deliver the message to the Receiver, we used a cable that ends with a wand that looks like a tiny racket behind the Receiver's head. This coil stimulates the part of the brain that translates signals from the eyes," said co-author Andrea Stocco, a UW assistant professor in the Department of Psychology and the Institute for Learning & Brain Sciences, or I-LABS. "We essentially 'trick' the neurons in the back of the brain to spread around the message that they have received signals from the eyes. Then participants have the sensation that bright arcs or objects suddenly appear in front of their eyes."

If the answer was, "Yes, rotate the block," then the Receiver would see the bright flash. If the answer was "No," then the Receiver wouldn't see anything. The Receiver received input from both Senders before making a decision about whether to rotate the block. Because the Receiver also wore an electroencephalography cap, they used the same method as the Senders to select yes or no.

The Senders got a chance to review the Receiver's decision and send corrections if they disagreed. Then, once the Receiver sent a second decision, everyone in the group found out if they cleared the line. On average, each group successfully cleared the line 81% of the time, or for 13 out of 16 trials.

The researchers wanted to know if the Receiver would learn over time to trust one Sender over the other based on their reliability. The team purposely picked one of the Senders to be a "bad Sender" and flipped their responses in 10 out of the 16 trials -- so that a "Yes, rotate the block" suggestion would be given to the Receiver as "No, don't rotate the block," and vice versa. Over time, the Receiver switched from being relatively neutral about both Senders to strongly preferring the information from the "good Sender."
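One simple way to model the Receiver's learned preference is to track each Sender's historical accuracy, as in this sketch. This is an illustration of the idea, not the study's actual analysis; the function names and the neutral 0.5 prior are assumptions:

```python
# Illustrative trust model: the Receiver tracks how often each Sender's
# advice turned out to be correct, and weights future advice accordingly.
# The "bad Sender" had 10 of 16 answers flipped, so their accuracy
# settles near 6/16; the good Sender stays near 16/16.

def update_trust(history, sender, was_correct):
    correct, total = history.get(sender, (0, 0))
    history[sender] = (correct + int(was_correct), total + 1)

def trust(history, sender):
    correct, total = history.get(sender, (0, 0))
    return correct / total if total else 0.5  # neutral prior before any trials

history = {}
for trial in range(16):
    update_trust(history, "good", True)        # always truthful
    update_trust(history, "bad", trial >= 10)  # flipped on 10 of 16 trials

print(trust(history, "good"))  # 1.0
print(trust(history, "bad"))   # 0.375 (6 correct out of 16)
```

Under this toy model the Receiver starts neutral (0.5 for both) and, as trials accumulate, comes to weight the good Sender's advice far more heavily, mirroring the preference shift the team observed.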

The team hopes that these results pave the way for future brain-to-brain interfaces that allow people to collaborate to solve tough problems that one brain alone couldn't solve. The researchers also believe this is an appropriate time to start to have a larger conversation about the ethics of this kind of brain augmentation research and developing protocols to ensure that people's privacy is respected as the technology improves. The group is working with the Neuroethics team at the Center for Neurotechnology to address these types of issues.

"But for now, this is just a baby step. Our equipment is still expensive and very bulky and the task is a game," Rao said. "We're in the 'Kitty Hawk' days of brain interface technologies: We're just getting off the ground."

Credit: 
University of Washington

'Planting green' cover-crop strategy may help farmers deal with wet springs

image: Planting green involves planting main crops into living cover crops. An example shown here: Cereal rye is rolled and soybeans planted green in the same pass at Penn State's Russell E. Larson Agricultural Research Center.

Image: 
Heidi Reed/Penn State

Allowing cover crops to grow two weeks longer in the spring and planting corn and soybean crops into them before termination is a strategy that may help no-till farmers deal with wet springs, according to Penn State researchers.

The approach -- known as planting green -- could help no-till farmers counter a range of problems that arise during wet springs like those that have occurred this year and last. These problems include soil erosion, nutrient losses, soils holding so much moisture that planting of the main crops is delayed, and main-crop damage from slugs.

"With climate change bringing the Northeast more extreme precipitation events and an increase in total precipitation, no-till farmers especially need a way of dealing with wet springs," said Heather Karsten, associate professor of crop production ecology, whose research group in the College of Agricultural Sciences conducted a three-year study of planting green. "We wanted to see if farmers could get more out of their cover crops by letting them grow longer in the spring."

As cover crops continue to grow, they draw moisture from the soil, creating desired drier conditions in wet springs for planting corn and soybeans. With planting green, after those main crops are planted into the cover crops, the cover crops are typically terminated by farmers with an herbicide. The decomposing cover crop residues then preserve soil moisture for the corn and soybean crops through the growing season.

The study took place at five sites over three years -- on three cooperating Pennsylvania farms that plant no-till in Centre, Clinton and Lancaster counties; at Penn State's Russell E. Larson Agricultural Research Center in Centre County; and at the University's Southeast Agricultural Research and Extension Center in Lancaster County.

At each location, researchers compared the results of planting green to the traditional practice of terminating cover crops 10 days to two weeks before planting the main crops of corn and soybeans.

Cover crops included in the study were primarily rye and triticale, as well as a mixture of triticale, Austrian winter pea, hairy vetch and radish in one location.

Findings of the research, published today in Agronomy Journal, were mixed, according to study leader Heidi Reed, a doctoral student in agronomy when the research was conducted who is now an educator with Penn State Extension, specializing in field and forage crops.

Reed noted that planting green appeared to benefit soybean crops more than corn.

Planting green increased cover crop biomass by 94 percent in corn and by 94 to 181 percent in soybean.

However, because planting green results in more cover crop residues acting as mulch on the surface, it also cooled soils by 1.3 to 4.3 degrees Fahrenheit at planting.

At several of the sites during the study years, main-crop plant populations were reduced when planted green, possibly due to the cooler temperatures slowing crop emergence and nutrient cycling, and/or from cover crop residue interference with the planter. In corn, in a few cases, crop damage by slugs was also increased when corn was planted green.

No-till farmers struggle with slugs damaging corn and soybean seeds and seedlings because no-till doesn't disturb the soil and kill slugs or bury their eggs the way tillage does.

"No-till with cover crop residues also provides habitat for some crop pests and keeps the soil moist -- so no-till cover crop systems tend to be great slug habitat," Karsten said.

"We had hoped that letting cover crops grow longer in the spring would supply alternative forage for the slugs, as well as habitat for slug predators such as beetles -- and these factors would reduce slug damage of the main crop seedlings. But we did not see a consistent reduction in slug damage on main crops as we expected."

When researchers compared crop-yield stability between the two cover crop termination times across the multiple locations and years, corn yield was less stable and reduced by planting green in high-yielding environments; however, soybean yield was not influenced by planting green.

"We concluded that corn was more vulnerable to yield losses from conditions created by planting green than soybeans," Reed said. "Since soybean yield was stable across study locations and was not affected by cover crop termination date, we suggest that growers who want to extend cover crop benefits while avoiding the risk of crop-yield reduction from planting green consider trying it first with soybean."

Credit: 
Penn State