New health insurance insights

A new analysis of a randomized health insurance program in Oregon sheds light on the value the program has for enrollees and providers alike.

The study, by MIT economist Amy Finkelstein and two co-authors, suggests that adults with low incomes value Medicaid at only about 20 cents to 50 cents per dollar of medical spending paid on their behalf.

"The value of Medicaid for most low-income adults is much lower than the medical expenditures paid by the insurance," says Finkelstein, the John and Jennie S. MacDonald Professor at MIT and a leading health care economist.

That finding reinforces the results of a separate study that Finkelstein and several co-authors conducted in Massachusetts. In that case, 70 percent of enrollees in the Massachusetts state health insurance program for low-income adults valued the program at less than 50 percent of their expected insurance costs.

While it might seem puzzling that recipients value health insurance at less than the covered medical expenditures, the study also offers an explanation for this: Low-income individuals who do not have insurance still only pay a fraction of their medical costs. In the Oregon data, this figure was roughly 20 percent of medical costs; prior studies have found similar results nationwide. The remainder of the spending on the low-income uninsured comes from a variety of sources, including charity care from nonprofit hospitals, publicly funded health clinics that offer free care, state funding to hospitals for uncompensated care, and unpaid medical debt.

"The nominally uninsured have a fair amount of implicit insurance," Finkelstein says. "Once you put it in that light, it becomes a lot less surprising that Medicaid spending is valued by them at a lot less than dollar for dollar."

One further implication of the findings is that a significant portion of public spending on health insurance for low-income individuals effectively acts as a subsidy for health care providers and state programs that cover the costs of uninsured patients.

The new paper, "The Value of Medicaid: Interpreting Results from the Oregon Health Insurance Experiment," appears in the December issue of the Journal of Political Economy. Its co-authors are Finkelstein; Nathan Hendren PhD '12, a professor of economics at Harvard University; and Erzo F.P. Luttmer, a professor of economics at Dartmouth College.

The previous paper, "Subsidizing Health Insurance for Low-Income Adults: Evidence from Massachusetts," was published last spring in the American Economic Review. Its co-authors are Finkelstein; Hendren; and Mark Shepard, an assistant professor at the Harvard Kennedy School of Government.

A random walk in Oregon

The latest paper examines a distinctive Medicaid policy that Oregon implemented in 2008. With funding to cover only about 10,000 eligible adults, Oregon conducted a lottery to decide who would be eligible to apply for Medicaid.

That random assignment of slots via lottery allowed the researchers to compare two otherwise similar groups of Oregon residents: those who obtained Medicaid coverage through the lottery and those who entered the lottery but did not gain coverage. In effect, Oregon had created a randomized controlled trial, which the scholars used for their research.

Medicaid eligibility regulations and administrative practices vary by state. In Oregon, adults and children generally qualify for Medicaid when they live in a household with income no greater than 133 percent of the poverty level defined by the U.S. federal government; in 2016, in the 48 contiguous states, that threshold was $11,880 for a single person and $24,300 for a family of four.

Previous studies of the Oregon experiment that Finkelstein has led have shown that, among other things, emergency room use increases among Medicaid recipients, contrary to the expectations of many experts.

Being covered by Medicaid also increases patient visits to doctors, prescription drug use, and hospital admissions, while reducing out-of-pocket medical expenses and lowering unpaid medical debt for recipients. Medicaid coverage also appears to lower the incidence of depression, although it does not seem to change the available measures of physical health.

The current study uses data from the prior Oregon studies, state Medicaid records, and survey data from individuals who applied for Oregon's lottery. The survey data show how much people used health care, including prescription drugs, outpatient visits, emergency-room visits, and hospital visits.

In line with previous studies, the current paper shows that having Medicaid increases total spending on health care -- about $3,600 reimbursed to providers annually on behalf of each Medicaid enrollee, compared to $2,721 annually for each low-income uninsured individual. Of that $2,721, the low-income uninsured paid about $569 in annual out-of-pocket costs -- the source of the paper's estimate that uninsured individuals pay about 20 percent of charged costs.

Using this data, the researchers also estimated an annual net cost of Medicaid in Oregon of $1,448 per recipient. This is the average annual increase in health care spending by Medicaid recipients, plus their average annual decrease in out-of-pocket spending. Thus moving a low-income uninsured individual in Oregon onto Medicaid results in a $1,448 increase in insured health care spending on behalf of that person.

Because the Oregon Medicaid program's reimbursements to health care providers are an average of $3,600 annually per recipient, the researchers estimate that about 40 percent of Medicaid spending underwrites costs incurred by enrollees. The other 60 percent is, as they write in the paper, "best conceived of as ... a monetary transfer to external parties who would otherwise subsidize the medical care for the low-income uninsured."

Simultaneously, the researchers refined their "willingness to pay" metric by using multiple methods to estimate how much having health insurance affects consumer spending generally. These methods yielded three estimates ranging from $793 to $1,675 in annual health care spending for low-income individuals. This is the source of the paper's conclusion that people value Medicaid at 20 percent to 50 percent of charged costs.

Two approaches, similar results

Significantly, the two studies use different methodological approaches to study different programs in different states, and arrive at similar conclusions. In Massachusetts, the scholars used data from the state's health insurance program -- a forerunner of the federal Affordable Care Act -- to see how the share of eligible individuals who signed up for insurance changed as their subsidy level changed.

"Despite a different design and different setting, even though it's Massachusetts and not Oregon, and different method, we got pretty much the same result," Finkelstein observes.

Overall, Finkelstein says, it will be valuable to keep learning about the care obtained by uninsured people, as well as the ultimate destination of Medicaid funding, including the 60 percent that is routed to other parties that subsidize care for the low-income uninsured. Understanding who ultimately gets those transfers, she notes, could help illuminate how redistributive Medicaid actually is, as a program intended to benefit lower-income Americans.

Moreover, Finkelstein says, more research will be needed to study how best to provide health care for lower-income Americans.

"Right now we have an implicit, informal insurance system that likely reduces demand for formal insurance but provides a sort of patchwork of care that may not be very good," Finkelstein says.

Credit: 
Massachusetts Institute of Technology

Leafcutter ants accelerate the cutting and transport of leaves during stormy weather

Leafcutter ants such as Atta sexdens or Acromyrmex lobicornis face two major challenges when they leave the safety of the nest to forage: choosing the best plants from which to collect leaves and avoiding being surprised by strong winds or heavy rain, which would prevent them from carrying out their task.

A study by researchers at the University of São Paulo's Luiz de Queiroz College of Agriculture (ESALQ-USP) in Brazil shows that leafcutter ants are capable of predicting adverse weather by sensing changes in atmospheric pressure.

When the ants detect a sharp drop in atmospheric pressure, which in most cases signals that heavy rain and strong winds are imminent, they greatly accelerate the speed at which they cut and transport leaves so that they can collect and store the largest possible amount of food for the nest.

The results of the study are published in the journal Ethology. The study was conducted under the aegis of the National Institute of Science and Technology for Semiochemicals in Agriculture, one of the National Institutes of Science and Technology (NISTs) funded by the São Paulo Research Foundation (FAPESP) and the National Council for Scientific and Technological Development (CNPq) in São Paulo State.

"We found that the leafcutter ant can sense changes in atmospheric pressure to anticipate adverse weather and change its foraging strategy," José Maurício Simões Bento, a professor at ESALQ-USP and one of the authors of the study, told Agência FAPESP.

According to Bento, the search for food is essential for ant colonies, since relatively few individuals leave the nest.

"Many ant castes, such as queens and gardeners, as well as immature stages, stay inside the nest," he said. "The only castes that go outside are foragers, to cut and transport leaves, and soldiers, to defend the colony entrance."

The first foragers to exit the nest are scouts, whose job is to search for leafy plants in the surrounding area. Once they locate plants with leaves available for cutting, they return home, marking the trail with a pheromone so that other workers can find the plants, cut leaves and carry them back to the nest.

Most of this vegetative material is used by these ants to grow a fungus, Leucoagaricus gongylophorus, with which they exhibit a mutualistic symbiotic relationship.

The role played by the ants in this mutualism is to go outside and bring back plant material to serve as a substrate for the growth of the fungus. The fungus donates nutrients through its hyphae (cell filaments) that the ants can eat.

"These leafcutter ants cultivate the fungus to have plenty of food available, especially as a reserve for periods of scarcity," Bento said.

Faster foraging

To determine whether the ants are able to sense changes in atmospheric pressure and change their foraging strategy accordingly, researchers decided to analyze worker recruitment and leaf-cutting patterns under low and high atmospheric pressure compared with stable conditions.

They placed three nests of A. sexdens in a barometric chamber and tested the effect of different pressure levels on the ants' foraging activity. The pressure was first set to 950 millibars (mbar) and maintained for 1 hour to allow the colonies to acclimatize. It was then either held steady at 950 mbar, raised to 958 mbar, or lowered to 942 mbar, with each condition maintained for 3 hours.

"We chose 8 mbar as the interval between low, stable and high pressure because this is the average recorded for Brazilian cities that produce eucalyptus or roses, and where A. sexdens occurs naturally and is a pest for these crops," Bento explained.

Once each pressure level was reached, the colonies were filmed for 1 hour; in nature, rain and wind typically arrive several hours after the pressure drops.

At this point, the entrance to each colony was opened to allow the ants to exit to a rosebush via a platform. The number of leaves cut and carried into each nest was counted, as were the time taken by the first scout to leave and the total number of workers recruited to forage. The results were subjected to statistical analysis.

The analysis showed that scouts left to forage much faster when the atmospheric pressure fell. At low pressure, they left 2.8 times faster than at steady pressure and 3.7 times faster than at high pressure.

"Increasing their foraging speed enables the ants to find a larger number of leaves on plants. Rainstorms blow many leaves away, reducing the amount of material available for ants to take back to the colony," Bento said.

The researchers did not observe a difference in the number of workers recruited for foraging. However, between 1.5 and 2.0 times as many leaves were cut and taken to the nests under low pressure as under steady or high pressure.

"Individual ants perceive the advent of low pressure, and this change triggers an increase in foraging efficiency," Bento said.

"They individually start cutting and carrying more leaves, and this results in higher productivity for the nest as a whole."

In Bento's opinion, the efforts of all of a colony's members to harvest and bring in a larger amount of food when stressed by adverse conditions show a high capacity for decision-making in favor of group maintenance, with no central or unitary control. "This is additional evidence of how evolved these insects are," he said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

NOAA-NASA's Suomi NPP satellite views New South Wales fires raging on

image: NOAA-NASA's Suomi NPP satellite flew over the New South Wales fires in Australia on December 16, 2019 and found devastation from the ongoing fires. The New South Wales Rural Fire Service is reporting 96 fires are burning.

Image: 
NOAA-NASA

NOAA-NASA's Suomi NPP satellite flew over the New South Wales fires in Australia on December 16, 2019 and found devastation from the ongoing fires. The New South Wales Rural Fire Service is reporting that 96 fires are burning, and to date the area burned is about 1.5 times the size of the state of Connecticut (approximately 5.3 million acres of land). These fires are largely the result of an atypical drought for the area, rising temperatures, and low humidity. Compounding the danger of the fires is the resulting shroud of smog currently covering Sydney.

The smoke released by any type of fire (forest, brush, crop, structure, tires, waste or wood burning) is a mixture of particles and chemicals produced by incomplete burning of carbon-containing materials. All smoke contains carbon monoxide, carbon dioxide and particulate matter (PM or soot). Smoke can contain many different chemicals, including aldehydes, acid gases, sulfur dioxide, nitrogen oxides, polycyclic aromatic hydrocarbons (PAHs), benzene, toluene, styrene, metals and dioxins. The type and amount of particles and chemicals in smoke varies depending on what is burning, how much oxygen is available, and the burn temperature.

Exposure to high levels of smoke should be avoided. Individuals are advised to limit their physical exertion if exposure to high levels of smoke cannot be avoided. Individuals with cardiovascular or respiratory conditions (e.g., asthma), infants, young children, and the elderly may be more vulnerable to the health effects of smoke exposure.

NASA's Ozone Mapping and Profiler Suite (OMPS) is an instrument that flies aboard NASA-NOAA's Suomi NPP satellite and provides data on ozone. The OMPS Aerosol Index layer (in the image above) indicates the presence of ultraviolet (UV)-absorbing particles in the air (aerosols), such as desert dust and, as in this case, soot particles in the atmosphere; it is related to both the thickness of the aerosol layer and the height of that layer in the atmosphere. The Aerosol Index is a unitless scale on which values around 5.0 indicate concentrations of aerosols heavy enough to reduce visibility or impact human health. The Aerosol Index layer is useful for identifying and tracking the long-range transport of volcanic ash from volcanic eruptions, smoke from wildfires or biomass burning events, and dust from desert dust storms, even over clouds and areas of snow and ice. The aerosols, shown in yellow, can also be seen in the South Pacific Ocean, where winds have carried them away from the fires, past New Zealand and beyond.

Aerosols absorb and scatter incoming sunlight, which reduces visibility and increases the optical depth. Aerosols have an effect on human health, weather and the climate. Sources of aerosols include pollution from factories, smoke from fires, dust from dust storms, sea salt, volcanic ash and smog. Aerosols compromise human health when inhaled by people with asthma or other respiratory illnesses. Aerosols also have an effect on the weather and climate by cooling or warming the earth, and by helping or preventing clouds from forming.

NASA's satellite instruments are often the first to detect wildfires burning in remote regions, and the locations of new fires are sent directly to land managers worldwide within hours of the satellite overpass. Together, NASA instruments detect actively burning fires, track the transport of smoke from fires, provide information for fire management, and map the extent of changes to ecosystems, based on the extent and severity of burn scars. NASA has a fleet of Earth-observing instruments, many of which contribute to our understanding of fire in the Earth system. Satellites in orbit around the poles provide observations of the entire planet several times per day, whereas satellites in a geostationary orbit provide coarse-resolution imagery of fires, smoke and clouds every five to 15 minutes. For more information visit: https://www.nasa.gov/mission_pages/fires/main/missions/index.html

NASA's Earth Observing System Data and Information System (EOSDIS) Worldview application provides the capability to interactively browse over 700 global, full-resolution satellite imagery layers and then download the underlying data. Many of the available imagery layers are updated within three hours of observation, essentially showing the entire Earth as it looks "right now." Actively burning fires, detected by thermal bands, are shown as red points. Image Courtesy: NASA Worldview, Earth Observing System Data and Information System (EOSDIS). Caption: Lynn Jenner with information from New South Wales Rural Fire Service

Credit: 
NASA/Goddard Space Flight Center

Collaboration yields insights into mosquito reproduction

ITHACA, N.Y. - As carriers for diseases like dengue and Zika, mosquitoes kill more than 1 million people each year and sicken hundreds of millions more. But a better understanding of mosquito reproduction can help humans combat outbreaks of these diseases, which are worsening as the climate warms.

Four Cornell researchers - two entomologists and two engineers - took a deeper look at this process. In a paper published in Scientific Reports on Dec. 6, they documented nanoscale changes in the sperm of Aedes aegypti mosquitoes and watched how the females' bodies responded during the insemination-to-fertilization period.

"We want to understand reproduction because if you can figure out ways to stop reproduction in the field, you could have mass mosquito birth control," said Ethan Degner, Ph.D. '19, co-lead author of the paper along with Jade Noble, Ph.D. '18.

Degner completed his Ph.D. under Laura Harrington, professor of entomology. Noble's Ph.D. was under Lena Kourkoutis, associate professor of applied and engineering physics. The four wrote the paper.

Current efforts at controlling mosquito populations take two general approaches: modifying mosquito habitat, by spraying pesticides or reducing areas of standing water that mosquitoes use for breeding; and making biological changes, like releasing sterile males into the environment or genetically modifying males to produce inviable offspring.

The Cornell researchers hope their fundamental research will lead to more effective methods of managing mosquito reproduction.

While scientists have known the pathway sperm take within female mosquitoes, Degner said, how sperm actually behave during the crucial period between insemination and fertilization was unknown.

To learn more, the entomologists needed better tools. Their standard light microscopes had a resolution limit of about 400 nanometers, roughly the wavelength of visible light, but they needed resolution at least 100 times finer. By employing high-powered cryo-electron microscopy - the development of which won a trio of international scientists the 2017 Nobel Prize in chemistry, and which Kourkoutis uses regularly in her research - they could see features up to 100,000 times smaller.

"With electron microscopy, you can theoretically see features on the order of picometers (trillionths of a meter)," Noble said. "This was exactly what we needed to see the change in features in the sperm. Some of the features we were looking at were only 8-9 nanometers."

Using a powerful electron microscope, they discovered that the mosquitoes' sperm shed their entire outer coat, called the glycocalyx, within 24 hours of inseminating the female. They also observed that the shedding process appeared to wake the sperm from a dormant state and trigger rapid motility. Intriguingly, this increased motility somehow encouraged the female mosquito to become more fertile.

The second key component of this research came from the ability to cryo-freeze mosquito sperm and fix them in a near-native state. Traditional entomological research involves infusing insect specimens with a chemical fixative, dehydrating them, then adding a dye to help see different features.

However, at such a small scale, that process introduced the possibility that changes observed in sperm might be due to the preparation, rather than natural changes in the sperm. Through this unique partnership, the cryo-freezing process enabled the researchers to "be a little more certain that the features we saw in our images were due to the sperm changing and not due to the sample preparation," Noble said.

"That's another reason why this collaboration was so helpful," she said, "because it's opening the door to the idea that biologists can really benefit from these high-powered tools."

Noble and Degner knew each other through several on-campus organizations, and while discussing their respective research, realized the benefits of collaborating. Harrington said she hopes this radical collaboration will be the first of many.

"The potential for this tool to increase our understanding of the biology of sperm and physiology of mosquito pathogen interactions is tremendous," she said. "I hope we can employ this approach to explore these other areas in the future."

Credit: 
Cornell University

Celebrated ancient Egyptian woman physician likely never existed, says researcher

AURORA, Colo. (Dec. 16, 2019) - For decades, an ancient Egyptian known as Merit Ptah has been celebrated as the first female physician and a role model for women entering medicine. Yet a researcher from the University of Colorado Anschutz Medical Campus now says she never existed and is an example of how misconceptions can spread.

"Almost like a detective, I had to trace back her story, following every lead, to discover how it all began and who invented Merit Ptah," said Jakub Kwiecinski, PhD, an instructor in the Dept. of Immunology and Microbiology at the CU School of Medicine and a medical historian.

His study was published last week in the Journal of the History of Medicine and Allied Sciences.

Kwiecinski's interest in Merit Ptah ('beloved of god Ptah') was sparked after seeing her name in so many places.

"Merit Ptah was everywhere. In online posts about women in STEM, in computer games, in popular history books, there's even a crater on Venus named after her," he said. "And yet, with all these mentions, there was no proof that she really existed. It soon became clear that there had been no ancient Egyptian woman physician called Merit Ptah."

Digging deep into the historical record, Kwiecinski discovered a case of mistaken identity that took on a life of its own, fueled by those eager for an inspirational story.

According to Kwiecinski, Merit Ptah the physician had her origins in the 1930s when Kate Campbell Hurd-Mead, a medical historian, doctor and activist, set out to write a complete history of medical women around the world. Her book was published in 1938.

She talked about the excavation of a tomb in the Valley of the Kings where there was a "picture of a woman doctor named Merit Ptah, the mother of a high priest, who is calling her 'the Chief Physician.'"

Kwiecinski said there was no record of such a person being a physician.

"Merit Ptah as a name existed in the Old Kingdom, but it does not appear in any of the collated lists of ancient Egyptian healers - not even as one of the 'legendary' or 'controversial' cases," he said. "She is also absent from the list of Old Kingdom women administrators. No Old Kingdom tombs are present in the Valley of the Kings, where the story places Merit Ptah's son, and only a handful of such tombs exist in the larger area, the Theban Necropolis."

The Old Kingdom of Egypt lasted from 2575 to 2150 BC.

But there was another woman who bears a striking resemblance to Merit Ptah. In 1929-30, an excavation in Giza uncovered the tomb of Akhethetep, an Old Kingdom courtier. Inside, a false door depicted a woman called Peseshet, presumably the tomb owner's mother, described as the 'Overseer of Healer Women.' Peseshet and Merit Ptah were placed in the same time period, and both were mentioned in the tombs of sons who were high priestly officials.

This discovery was described in several books, and one of them found its way into Hurd-Mead's private library. Kwiecinski believes Hurd-Mead confused Merit Ptah with Peseshet.

"Unfortunately, Hurd-Mead in her own book accidentally mixed up the name of the ancient healer, as well as the date when she lived, and the location of the tomb," he said. "And so, from a misunderstood case of an authentic Egyptian woman healer, Peseshet, a seemingly earlier Merit Ptah, 'the first woman physician,' was born."

The Merit Ptah story spread far and wide, driven by a variety of forces. Kwiecinski said one factor was the popular perception of ancient Egypt as an almost fairytale land "outside time and space" perfectly suited for the creation of legendary stories.

The story spread through amateur historian circles, creating a kind of echo chamber not unlike how fake news stories circulate today.

"Finally, it was associated with an extremely emotional, partisan - but also deeply personal - issue of equal rights," he said. "Altogether this created a perfect storm that propelled the story of Merit Ptah into being told over and over again."

Yet Kwiecinski said the most striking part of the story is not the mistake but the determination of generations of women historians to recover the forgotten history of female healers, proving that science and medicine have never been exclusively male.

"So even though Merit Ptah is not an authentic ancient Egyptian woman healer," he said, "she is a very real symbol of the 20th-century feministic struggle to write women back into the history books, and to open medicine and STEM to women."

Credit: 
University of Colorado Anschutz Medical Campus

Radiation breaks connections in the brain

One of the potentially life-altering side effects that patients experience after cranial radiotherapy for brain cancer is cognitive impairment. Researchers now believe that they have pinpointed why this occurs and these findings could point the way for new therapies to protect the brain from the damage caused by radiation.

The new study - which appears in the journal Scientific Reports - shows that radiation exposure triggers an immune response in the brain that severs connections between nerve cells. While the immune system's role in remodeling the complex network of links between neurons is normal in the healthy brain, radiation appears to send the process into overdrive, resulting in damage that could be responsible for the cognitive and memory problems that patients often face after radiotherapy.

"The brain undergoes a constant process of rewiring itself and cells in the immune system act like gardeners, carefully pruning the synapses that connect neurons," said Kerry O'Banion, M.D., Ph.D., a professor in the University of Rochester Del Monte Institute for Neuroscience and senior author of the study which was conducted in mice. "When exposed to radiation, these cells become overactive and destroy the nodes on nerve cells that allow them to form connections with their neighbors."

The culprit is a cell in the immune system called microglia. These cells serve as the brain's sentinels, seeking out and destroying infections, and cleaning up damaged tissue after an injury. In recent years, scientists have begun to understand and appreciate microglia's role in the ongoing process by which the networks and connections between neurons are constantly wired and rewired during development and to support learning, memory, cognition, and sensory function.

Microglia interact with neurons at the synapse, the juncture where the axon of one neuron connects and communicates with another. Synapses are clustered on arms that extend out from the receiving neuron's main body called dendrites. When a connection is no longer required, signals are sent out in the form of proteins that tell microglia to destroy the synapse and remove the link with its neighbor.

In the new study, researchers exposed the mice to radiation equivalent to the doses that patients experience during cranial radiotherapy. They observed that microglia in the brain were activated and removed nodes that form one end of the synaptic juncture - called spines - which prevented the cells from making new connections with other neurons. The microglia appeared to target less mature spines, which the researchers speculate could be important for encoding new memories - a finding that may explain the cognitive difficulties that many patients experience. The researchers also observed that the damage found in the brain after radiation was more pronounced in male mice.

While advances have been made in recent years in cranial radiotherapy protocols and technology that allow clinicians to better target tumors and limit the area of the brain exposed to radiation, the results of the study show that the brain remains at significant risk of damage during therapy.

The research points to two possible approaches to preventing damage to nerve cells. One is blocking a receptor called CR3 that is responsible for synapse removal by microglia; when the CR3 receptor was suppressed in mice, the animals did not experience synaptic loss when exposed to radiation. Another is to tamp down the brain's immune response during radiotherapy to prevent microglia from becoming overactive.

Credit: 
University of Rochester Medical Center

Researchers explore factors affecting money management skills in multiple sclerosis

image: Managing money may be difficult for individuals with MS who have depressive symptomatology and deficits in executive function.

Image: 
Kessler Foundation/Nicky Miller

East Hanover, NJ. December 16, 2019. A team of rehabilitation researchers identified factors associated with the money management problems experienced by some individuals with multiple sclerosis. Few studies have addressed this issue, which can have a substantial impact on quality of life. The open access article, "Money management in multiple sclerosis: The role of cognitive, motor, and affective factors" (doi: 10.3389/fneur.2019.01128), was published online on October 23, 2019 in Frontiers in Neurology.

The authors are Yael Goverover, PhD, OT, of New York University and Kessler Foundation, and Nancy Chiaravalloti, PhD, and John DeLuca, PhD, of Kessler Foundation.

Open access link: https://www.frontiersin.org/articles/10.3389/fneur.2019.01128/full

Researchers enrolled 72 participants with multiple sclerosis, aged 18 to 65 years, and 26 healthy controls. To examine the association between money management difficulties and cognitive, motor, and emotional factors, researchers tested all participants for cognitive skills, depression and anxiety, and upper and lower limb motor function.

Money management skills were assessed with two methods: 1) KF-Actual Reality™, a performance-based assessment developed at Kessler Foundation that tests five behaviors essential to money management by having the participant complete an actual task, making an online purchase, and 2) a money management questionnaire developed for use in individuals with brain injury. Based on their performance, the participants with MS were grouped as efficient (MS Efficient-MM) or inefficient (MS Inefficient-MM) money managers.

Overall, the healthy control group performed better than both MS groups. Of the three groups, the MS Inefficient-MM group scored lowest on measures of cognitive and motor skills, and highest on affective symptomatology. Researchers identified two factors associated with efficient money management: good executive functioning and low depressive symptomatology. "It is important to note that these factors characterized the healthy controls and the MS Efficient-MM group," said Dr. Goverover, the lead author, "indicating that money management difficulties affect a subset, and not the MS population as a whole."

The association of money management difficulties with depressive symptomatology is a new finding, according to Dr. Goverover, and further research is warranted into what may be a key predictor for these difficulties in the MS population. "Difficulties with managing money can have serious financial, legal, and psychological consequences for individuals and their caregivers," she emphasized. "Owing money, paying bills late, making impulse purchases, running out of money for essentials - these behaviors adversely affect the ability to function independently in everyday life. Knowing the factors that underlie money management problems will enable providers to identify those at risk and counsel caregivers to intervene effectively to minimize negative behaviors."

Credit: 
Kessler Foundation

Developing next-generation biologic pacemakers

image: University of Houston associate professor of pharmacology Bradley McConnell is helping usher in a new age of cardiac pacemakers by using stem cells found in fat, converting them to heart cells, and reprogramming those to act as biologic pacemaker cells.

Image: 
University of Houston

University of Houston associate professor of pharmacology Bradley McConnell is helping usher in a new age of cardiac pacemakers by using stem cells found in fat, converting them to heart cells, and reprogramming those to act as biologic pacemaker cells. He is reporting his work in the Journal of Molecular and Cellular Cardiology. The new biologic pacemaker-like cell will be useful as an alternative treatment for conduction system disorders, for cardiac repair after a heart attack, and for bridging the limitations of the electronic pacemaker.

"We are reprogramming the cardiac progenitor cell and guiding it to become a conducting cell of the heart to conduct electrical current," said McConnell.

McConnell's collaborator, Robert J. Schwartz, Hugh Roy and Lillian Cranz Cullen Distinguished Professor of biology and biochemistry, previously reported work on turning adipogenic mesenchymal stem cells, which reside in fat tissue, into cardiac progenitor cells. Now those same cardiac progenitor cells are being programmed to keep hearts beating as a sinoatrial node (SAN), part of the electrical cardiac conduction system (CCS).

The SAN is the primary pacemaker of the heart, responsible for generating the electric impulse or beat. Native cardiac pacemaker cells are confined within the SAN, a small structure composed of just a few thousand specialized pacemaker cells. Failure of the SAN or a block at any point in the CCS results in arrhythmias.

More than 600,000 electronic pacemakers are implanted in patients annually to help control abnormal heart rhythms. The small mechanical device is placed in the chest or abdomen and uses electrical pulses to prompt the heart to beat normally. The device must be examined regularly by a physician, and over time it can stop working properly.

"Batteries will die. Just look at your smartphone," said McConnell. "This biologic pacemaker is better able to adapt to the body and would not have to be maintained by a physician. It is not a foreign object. It would be able to grow with the body and become much more responsive to what the body is doing."

To convert the cardiac progenitor cells, McConnell infused the cells with a unique cocktail of three transcription factors and a plasma membrane channel protein to reprogram the heart cells in vitro.

"In our study, we observed that the SHOX2, HCN2, and TBX5 (SHT5) cocktail of transcription factors and channel protein reprogrammed the cells into pacemaker-like cells. The combination will facilitate the development of cell-based therapies for various cardiac conduction diseases," he reported.

Credit: 
University of Houston

Study examines causes of death in US breast cancer survivors

Survival rates for patients with breast cancer have improved significantly in the last four decades, and many patients will eventually die from non-cancer-related causes. Researchers recently conducted the largest population-based long-term retrospective analysis of non-cancer causes of death among patients with breast cancer. The findings are published early online in CANCER, a peer-reviewed journal of the American Cancer Society.

Of 754,270 U.S. women diagnosed with breast cancer from 2000 to 2015, 24.3 percent died by the end of 2015. The highest number of deaths (46.2 percent) occurred within one to five years following diagnosis, and most were caused by breast cancer or other cancers. As years passed, however, breast cancer-related deaths decreased and were eventually surpassed by non-breast cancer causes of death. Among deaths occurring five to 10 years after diagnosis, about half were from non-breast cancer causes, and among patients who survived beyond 10 years, the majority of deaths were from non-breast cancer causes.
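For a rough sense of scale, the percentages quoted above can be converted into approximate counts. This is a back-of-the-envelope sketch using only the figures in this article; the paper itself reports the exact numbers:

```python
# Approximate counts derived from the article's percentages (rounded).
cohort = 754_270                 # U.S. women diagnosed with breast cancer, 2000-2015
deaths = cohort * 0.243          # 24.3% had died by the end of 2015
deaths_1_to_5y = deaths * 0.462  # 46.2% of deaths occurred 1-5 years post-diagnosis

print(f"Total deaths: ~{deaths:,.0f}")
print(f"Deaths within 1-5 years of diagnosis: ~{deaths_1_to_5y:,.0f}")
```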

The most common non-cancer causes of death within 10 years of diagnosis were heart diseases, followed by cerebrovascular diseases. More than 10 years after diagnosis, the most common non-cancer causes of death were heart diseases, followed by Alzheimer's disease.

Compared with the general population, patients had a higher risk of dying from chronic liver diseases within 5-10 years following diagnosis, and from Alzheimer's disease and heart diseases after more than 10 years following diagnosis.

"Non-cancer diseases, such as heart diseases, contribute to a significant number of deaths in patients with breast cancer, even higher than in the general population," said senior author Mohamad Bassam Sonbol, MD, of Mayo Clinic in Phoenix, Arizona. "Cancers other than breast cancer are also an important cause of death in patients with a history of breast cancer."

The results will be informative for survivors in discussions with physicians about their future health. "Our findings emphasize the importance of counseling patients about their survivorship and risk of developing other cancers, with a focus on proper screening or preventive measures for other cancers and diseases," added Dr. Sonbol.

Credit: 
Wiley

E-cigarettes significantly raise risk of chronic lung disease, first long-term study finds

E-cigarette use significantly increases a person's risk of developing chronic lung diseases like asthma, bronchitis, emphysema or chronic obstructive pulmonary disease, according to new UC San Francisco research, the first longitudinal study linking e-cigarettes to respiratory illness in a sample representative of the entire U.S. adult population.

The study also found that people who used e-cigarettes and also smoked tobacco -- by far the most common pattern among adult e-cigarette users -- were at an even higher risk of developing chronic lung disease than those who used either product alone.

The findings were published Dec. 16, 2019 in the American Journal of Preventive Medicine and are based on an analysis of publicly available data from the Population Assessment of Tobacco and Health (PATH), which tracked e-cigarette and tobacco habits as well as new lung disease diagnoses in over 32,000 American adults from 2013 to 2016.

Though several earlier population studies had found an association between e-cigarette use and lung disease at a single point in time, these so-called cross-sectional studies provided a snapshot that made it impossible for researchers to say whether lung disease was being caused by e-cigarettes or if people with lung disease were more likely to use e-cigarettes.

By starting with people who did not have any reported lung disease, taking account of their e-cigarette use and smoking from the start, and then following them for three years, the new longitudinal study offers stronger evidence of a causal link between adult e-cigarette use and lung diseases than prior studies.

"What we found is that for e-cigarette users, the odds of developing lung disease increased by about a third, even after controlling for their tobacco use and their clinical and demographic information," said senior author Stanton Glantz, PhD, a UCSF professor of medicine and director of the UCSF Center for Tobacco Control Research and Education.

"We concluded that e-cigarettes are harmful on their own, and the effects are independent of smoking conventional tobacco," Glantz said.

Though current and former e-cigarette users were 1.3 times more likely to develop chronic lung disease, tobacco smokers increased their risk by a factor of 2.6. For dual users -- people who smoke and use e-cigarettes at the same time -- these two risks multiply, more than tripling the risk of developing lung disease.
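The multiplicative combination described above can be sketched numerically. The odds ratios are the ones quoted in this article; treating them as independent multiplicative risks is the model the researchers describe, not a new analysis:

```python
# Odds ratios for chronic lung disease, as quoted in the article.
ecig_or = 1.3     # current/former e-cigarette users vs. non-users
smoking_or = 2.6  # tobacco smokers vs. non-smokers

# Under the multiplicative model, dual users face the product of both risks.
dual_use_or = ecig_or * smoking_or
print(f"Approximate odds ratio for dual users: {dual_use_or:.2f}")
```

The product comes to about 3.4, consistent with the article's statement that dual use "more than triples" the risk of developing lung disease.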

"Dual users -- the most common use pattern among people who use e-cigarettes -- get the combined risk of e-cigarettes and conventional cigarettes, so they're actually worse off than tobacco smokers," said Glantz.

This finding is particularly relevant as the debate continues to rage over whether e-cigarettes should be promoted as a harm-reduction tool for smokers. While the authors found that switching from smoked tobacco to e-cigarettes lowered the risk of developing lung disease, fewer than 1 percent of the smokers had completely switched to e-cigarettes.

"Switching from conventional cigarettes to e-cigarettes exclusively could reduce the risk of lung disease, but very few people do it," said Glantz. "For most smokers, they simply add e-cigarettes and become dual users, significantly increasing their risk of developing lung disease above just smoking."

Importantly, the results reported in this study are unrelated to EVALI (E-cigarette or Vaping Product Use-Associated Lung Injury), the acute lung disease first reported last summer, severe cases of which sent several e-cigarette users to the hospital and others to an early grave. Though scientists are still working to determine the cause of EVALI, prior physiological studies in both animals and humans found that e-cigarettes suppress the immune system and increase the levels of stress-related proteins in the lungs. And chemical analyses showed that e-cigarettes contain higher levels of certain toxic chemicals than conventional cigarettes. But the new study shows that these are not the only health threats posed by e-cigarettes.

"This study contributes to the growing case that e-cigarettes have long-term adverse effects on health and are making the tobacco epidemic worse," said Glantz.

Credit: 
University of California - San Francisco

Heart-healthy diets are naturally low in dietary cholesterol and can help to reduce the risk of heart disease and stroke

DALLAS, Dec. 16, 2019 -- Reducing dietary cholesterol by focusing on an overall heart-healthy dietary pattern that replaces saturated fats with polyunsaturated fats remains good advice for keeping artery-clogging LDL cholesterol levels healthy. Such dietary patterns are naturally low in dietary cholesterol. Current research does not support a specific numerical limit on cholesterol from food, according to a Scientific Advisory (Advisory) from the American Heart Association, published today in the Association's premier journal Circulation.

Much of the cholesterol in blood is manufactured in the liver and used for building cells. However, foods such as full-fat dairy products and fatty cuts of red and processed meats contain relatively high amounts of cholesterol and are also usually high in saturated fat, which may cause an accumulation of cholesterol in blood. Too much cholesterol in the blood contributes to the formation of thick, hard deposits on the inside of the arteries, a process that underlies most heart diseases and strokes.

Scientific research about the role of dietary cholesterol has not conclusively found a link between dietary cholesterol and higher LDL cholesterol at levels currently consumed. The differences in findings may be based on the way studies about diet are designed and the absolute amount of cholesterol fed, according to the Advisory. For example, evidence from observational studies conducted in several countries generally does indicate a significant association between dietary cholesterol and cardiovascular disease (CVD) risk. Observational studies, however, are not designed to prove cause and effect - they identify trends, often based on study participants filling out questionnaires about what they eat. Study findings from observational studies could be impacted by factors such as the difficulty of teasing out the specific effect of dietary cholesterol versus saturated fat because most foods that are high in saturated fats are also high in dietary cholesterol.

A meta-analysis included in the Advisory, covering randomized, controlled dietary intervention trials, which are designed to prove cause and effect, found a dose-dependent relationship between dietary cholesterol and higher levels of artery-clogging LDL cholesterol when the range of dietary cholesterol tested exceeded the amount normally eaten. This relationship persisted after adjustment for dietary fat type. The feeding studies included in the meta-analysis provided food to participants, so the researchers knew exactly what people ate; however, such studies are costly to conduct, and the meta-analysis was therefore limited by the small number of participants in each randomized trial. Because of the trials' small size, the researchers were also unable to adequately compare the roles of artery-clogging LDL cholesterol, HDL "good" cholesterol and total cholesterol in the blood among the participants -- and HDL and total cholesterol could influence the results.

"Consideration of the relationship between dietary cholesterol and CVD risk cannot ignore two aspects of diet. First, most foods contributing cholesterol to the U.S. diet are usually high in saturated fat, which is strongly linked to an increased risk of too much LDL cholesterol. Second, we know from an enormous body of scientific studies that heart-healthy dietary patterns, such as Mediterranean-style and DASH style diets (Dietary Approaches to Stop Hypertension) are inherently low in cholesterol," said Jo Ann S. Carson, Ph.D., R.D.N., L.D., immediate-past chair and current member of the American Heart Association's nutrition committee and professor of clinical nutrition at UT Southwestern Medical Center in Dallas, Texas when the advisory was written.

"Eat a nutrient-rich diet that emphasizes fruits, vegetables, whole grains, low-fat or fat-free dairy products, lean cuts of meat, poultry, fish or plant-based protein, nuts and seeds. Saturated fats -- mostly found in animal products such as meat and full-fat dairy, as well as tropical oils -- should be replaced with polyunsaturated fats such as corn, canola or soybean oils. Foods high in added sugars and sodium (salt) should be limited," said Carson.

According to the Advisory, egg intake was generally not significantly associated with the risk of cardiovascular disease in the studies that were examined. It is reasonable for healthy individuals to eat one whole egg (or its equivalent, such as 3 ounces of shrimp) daily as part of a heart-healthy diet.

The Advisory continues to support the recommendation in the 2019 American College of Cardiology/American Heart Association Guideline on the Primary Prevention of Cardiovascular Disease to reduce intake of dietary cholesterol for overall heart health.

Credit: 
American Heart Association

Fossil shells reveal both global mercury contamination and warming when dinosaurs perished

ANN ARBOR--The impact of an asteroid or comet is acknowledged as the principal cause of the mass extinction that killed off most dinosaurs and about three-quarters of the planet's plant and animal species 66 million years ago.

But massive volcanic eruptions in India may also have contributed to the extinctions. Scientists have long debated the significance of the Deccan Traps eruptions, which began before the impact and lasted, on and off, for nearly a million years, punctuated by the impact event.

Now, a University of Michigan-led geochemical analysis of fossil marine mollusk shells from around the globe is providing new insights into both the climate response and environmental mercury contamination at the time of the Deccan Traps volcanism.

From the same shell specimens, the researchers found what appears to be a global signal of both abrupt ocean warming and distinctly elevated mercury concentrations. Volcanoes are the largest natural source of mercury entering the atmosphere.

The dual chemical fingerprints begin before the impact event and align with the onset of the Deccan Traps eruptions.

When the researchers compared the mercury levels from the ancient shells to concentrations in freshwater clam shells collected at a present-day site of industrial mercury pollution in Virginia's Shenandoah Valley, the levels were roughly equivalent.

Evidence from the study, which is scheduled for publication Dec. 16 in the journal Nature Communications, supports the idea that Deccan Traps volcanism had climatic and ecological impacts that were profound, long-lasting and global, the researchers conclude.

"For the first time, we can provide insights into the distinct climatic and environmental impacts of Deccan Traps volcanism by analyzing a single material," said Kyle Meyer, lead author of the new study. "It was incredibly surprising to see that the exact same samples where marine temperatures showed an abrupt warming signal also exhibited the highest mercury concentrations, and that these concentrations were of similar magnitude to a site of significant modern industrial mercury contamination."

Meyer conducted the study as part of his doctoral dissertation in the U-M Department of Earth and Environmental Sciences. He is now a postdoctoral researcher at Portland State University in Oregon.

Mercury is a toxic trace metal that poses a health threat to humans, fish and wildlife. Human-generated sources of mercury include coal-fired power plants and artisanal gold mines. At Virginia's South River industrially contaminated site, where the researchers collected freshwater clam shells, signs warn residents not to eat fish from the river.

"The modern site has a fishing ban for humans because of high mercury levels. So, imagine the environmental impact of having this level of mercury contamination globally for tens to hundreds of thousands of years," said U-M geochemist and study co-author Sierra Petersen, who was Meyer's co-adviser.

The researchers hypothesized that the fossilized shells of mollusks, principally bivalves such as oysters and clams, could simultaneously record both coastal marine temperature responses and varying mercury signals associated with the release of massive amounts of heat-trapping carbon dioxide and mercury from the Deccan Traps.

The long-lived Deccan Traps eruptions formed much of western India and were centered on the time of the Cretaceous-Paleogene (K-Pg) mass extinction, 66 million years ago.

The study used fossil shells collected in Antarctica, the United States (Alabama, Alaska, California and Washington), Argentina, India, Egypt, Libya and Sweden. The researchers analyzed the isotopic composition of the shell carbonate to determine marine temperatures, using a recently developed technique called carbonate clumped isotope paleothermometry.

They also measured the amount of mercury in the remarkably well-preserved fossil shells and assembled the first-ever deep-time record of mercury preserved in fossilized biomineral remains.

In previous studies, records of environmental mercury have been reconstructed from marine sediments, providing insights into the timing and scale of the Deccan Traps event. But those records lacked such a direct linkage to the climate response. In the new study, both signals are present in the same specimens--an important first, according to the authors.

"Mercury anomalies had been documented in sediments but never before in shells. Having the ability to reconstruct both climate and a volcanism indicator in the exact same materials helps us circumvent lots of problems related to relative dating," said Petersen, an assistant professor in the U-M Department of Earth and Environmental Sciences. "So, one of the big firsts in this study is the technical proof of concept."

The new technique is expected to have broad applications for the study of mass extinctions and climate perturbations in the geological record, according to the researchers.

Credit: 
University of Michigan

New CRISPR-based system targets amplified antibiotic-resistant genes

image: Genes conferring antibiotic resistance (AR) in bacteria (blue arrow) are often carried on circular mini-chromosome elements referred to as plasmids. Site-specific cutting of these plasmids using the CRISPR system, which results in destruction of the plasmid, has been used to reduce the incidence of AR approximately 100-fold. Pro-Active Genetics (Pro-AG) employs a highly efficient cut-and-paste mechanism that inserts a gene cassette (red box) into the gene conferring AR, thereby disrupting its function. The Pro-AG donor cassette is flanked with sequences corresponding to its AR target (blue boxes) to initiate the process. Once inserted into an AR target gene, the Pro-AG element copies itself through a self-amplifying mechanism, leading to an approximately 100,000-fold reduction in AR bacteria.

Image: 
Bier Lab, UC San Diego

Taking advantage of powerful advances in CRISPR gene editing, scientists at the University of California San Diego have set their sights on one of society's most formidable threats to human health.

A research team led by Andrés Valderrama at UC San Diego School of Medicine and Surashree Kulkarni of the Division of Biological Sciences has developed a new CRISPR-based gene-drive system that dramatically increases the efficiency of inactivating a gene rendering bacteria antibiotic-resistant. The new system leverages technology developed by UC San Diego biologists in insects and mammals that biases genetic inheritance of preferred traits called "active genetics." The new "pro-active" genetic system, or Pro-AG, is detailed in a paper published December 16 in Nature Communications.

Widespread prescriptions of antibiotics and use in animal food production have led to a rising prevalence of antimicrobial resistance in the environment. Evidence indicates that these environmental sources of antibiotic resistance are transmitted to humans and contribute to the current health crisis associated with the dramatic rise in drug-resistant microbes. Health experts predict that threats from antibiotic resistance could drastically increase in the coming decades, leading to some 10 million drug-resistant disease deaths per year by 2050 if left unchecked.

The core of Pro-AG features a modification of the standard CRISPR-Cas9 gene editing technology in DNA. Working with Escherichia coli bacteria, the researchers developed the Pro-AG method to disrupt the function of a bacterial gene conferring antibiotic resistance. In particular, the Pro-AG system addresses a thorny issue in antibiotic resistance presented in the form of plasmids, circular forms of DNA that can replicate independently of the bacterial genome. Multiple copies of, or "amplified," plasmids carrying antibiotic-resistance genes can exist in each cell and can transfer antibiotic resistance between bacteria, posing a daunting challenge to successful treatment. Pro-AG works by a cut-and-insert repair mechanism to disrupt the activity of the antibiotic-resistance gene with at least two orders of magnitude greater efficiency than current cut-and-destroy methods.

Valderrama and Kulkarni, working in the UC San Diego labs of study coauthors Professors Victor Nizet and Ethan Bier, respectively, demonstrated the effectiveness of the new technique in experimental cultures containing a high number of plasmids carrying genes known to confer resistance to the antibiotic ampicillin. The system relies on a self-amplifying "editing" mechanism that increases its efficiency through a positive feedback loop. The result of Pro-AG editing is the insertion of tailored genetic payloads into target sites with high precision.

Eventual human applications include potential treatments for patients suffering from chronic bacterial infections.

While Pro-AG is not yet ready for treating patients, "a human delivery system carrying Pro-AG could be deployed to address conditions such as cystic fibrosis, chronic urinary infections, tuberculosis and infections associated with resistant biofilms that pose difficult challenges in hospital settings," said Nizet, distinguished professor of Pediatrics and Pharmacy and the faculty lead of the UC San Diego Collaborative to Halt Antibiotic-Resistant Microbes (CHARM).

When combined with a variety of existing delivery mechanisms for spreading the Pro-AG system through populations of bacteria, the scientists say the technology also could be widely effective in removing, or "scrubbing," antibiotic-resistant strains from the environment in areas such as sewers, fish ponds and feedlots. Because Pro-AG "edits" its targets rather than destroys them, this system also enables engineering or manipulating bacteria for a broad range of future biotechnological and biomedical applications rendering them harmless or even recruiting them to perform beneficial functions.

"The highly efficient and precise nature of Pro-AG should permit a variety of practical applications, including dissemination of this system throughout populations of bacteria using one of several existing delivery systems to greatly reduce the prevalence of antibiotic resistance in the environment," said Bier, a distinguished professor in the Section of Cell and Developmental Biology and science director of the UC San Diego unit of the Tata Institute for Genetics and Society (TIGS).

Credit: 
University of California - San Diego

Fish consumption and mercury exposure in pregnant women in coastal Florida

image: Adam M. Schaefer, MPH, lead author and an epidemiologist at FAU's Harbor Branch, and collaborators, wanted to test this vulnerable coastal Florida population because the sensitivity of the developing brain to the effects of mercury deposition has been shown in studies of pregnant women exposed through the consumption of seafood, even at relatively low levels of prenatal mercury.

Image: 
Florida Atlantic University's Harbor Branch

Mercury contamination of the marine environment is a global public health concern. Human exposure occurs primarily by eating seafood, especially large predatory fish such as swordfish and albacore tuna. Pregnant women are among the most vulnerable: mercury exposure during pregnancy has been associated with cognitive impairment, including deficits in memory, attention and fine motor skills, and other markers of delayed neurodevelopment, although results are conflicting.

Researchers from Florida Atlantic University's Harbor Branch Oceanographic Institute and collaborators conducted a study to assess mercury concentrations in the hair of pregnant women living in coastal Florida and to determine the relationships between hair total mercury concentrations, fish consumption, sources of seafood, knowledge of the risks of mercury exposure, and seafood consumption during pregnancy.

This latest study follows their previous research showing that bottlenose dolphins in the Indian River Lagoon have some of the highest concentrations of mercury in this species worldwide. The lagoon is a highly impacted estuary that extends more than 250 kilometers and traverses 40 percent of Florida's eastern coastline. To "close the loop" between this wildlife sentinel and human health, the researchers also conducted a prior study in recreational anglers and coastal residents, finding mercury concentrations in the hair of 135 participants that were higher than those previously reported for similar populations in the United States.

"In Florida the average adult consumes almost 10 times as many grams of seafood per day compared to the general U.S. population, potentially increasing the risk of mercury exposure above safe limits, especially for pregnant women," said Adam M. Schaefer, MPH, lead author, and an epidemiologist at FAU's Harbor Branch. "Because the sensitivity of the developing brain to the effects of mercury deposition has been shown in studies of pregnant women exposed through the consumption of seafood, even at relatively low levels of prenatal mercury, we wanted to test this vulnerable coastal Florida population."

The researchers also described the complex relationship between mercury and neurobehavioral outcomes, which is complicated by the well-described benefits of seafood consumption and omega-3 fatty acids during pregnancy.

Results of the study, published in the International Journal of Environmental Research and Public Health, show that despite the fact that southern Florida is an area of selective deposition of atmospheric mercury, and that mercury is bioaccumulated in local fish species and apex predators, the mean total hair mercury concentration of the 229 participants was lower or similar to U.S. data for women of child-bearing age. Hair mercury concentration was associated with consumption of locally caught seafood and all seafood, a higher level of education, and first pregnancy.

Those who reported eating seafood three times a week had the highest concentration of mercury in their hair - almost four times as high as those who did not consume any seafood. The highest concentrations were in women over the age of 33 with the highest levels observed among Asian women. Mercury concentrations in hair among those pregnant women who consumed seafood from the Indian River Lagoon were significantly higher than among women who reported never consuming locally caught items. Level of education and the number of children also were related to hair mercury concentration.

Knowledge and education were important components of the study. The majority of participants (85.5 percent) reported being aware that high levels of mercury may be harmful to the unborn fetus. Similarly, 89 percent of women were aware that some fish can contain high levels of mercury. When asked how often one should consume tuna steaks and swordfish, 76.8 percent of women answered that the consumption of these items should be avoided during pregnancy. However, only 53.7 percent of women knew that store-bought swordfish can contain high concentrations of mercury.

"In view of the serious consequences of prenatal exposure to high concentrations of mercury, continued education on safe sources and species of seafood is warranted," said Schaefer. "Educational efforts must provide a balanced approach to include information regarding the benefits of fish consumption while minimizing risk by avoiding locally caught seafood or fish species known to contain high levels of mercury."

Credit: 
Florida Atlantic University

How minds make meaning

image: A special issue of the Philosophical Transactions of the Royal Society B, edited by Andrea E. Martin from the Max Planck Institute of Psycholinguistics and Giosuè Baggio from the Norwegian University of Science and Technology.

Image: 
Royal Society B

When we hear the phrase 'a pink banana', we can understand what it means and form the intended thought - even though bananas are typically yellow. This is because we compose the meanings of separate words into a new whole. "Meaning composition is the lynchpin of cognition, necessary for explaining the creativity of human thought and communication", says co-editor Andrea Martin, Group Leader at the Max Planck Institute and Principal Investigator at the Donders Centre for Cognitive Neuroimaging. "It is a capacity that sets us apart from other species and computational devices."

So how does the mind 'make meaning'? This question is not only a hot topic in linguistics, it has also long vexed philosophers. Do we mechanically combine language parts such as "Ann" (a part that linguists may call the "argument") and "laughed" (the "predicate") to arrive at an understanding of "Ann laughed"? Or does "Ann laughed" only make sense once we interpret this sequence of words in its context, integrated with our knowledge of the world?

One focus of the special issue is the neurobiology of meaning composition. Peter Hagoort, director of the Neurobiology of Language Department at the MPI and of the Donders Centre for Cognitive Neuroimaging, emphasises that people interpret language by using multi-modal cues from rich conversational settings, rather than just syntax to combine words. For instance, the combination 'The finger fell in the soup' triggers negative emotions, even though the words themselves are neutral. In Hagoort's model, based on neuroimaging methods such as MEG and fMRI, meaning is composed in a dynamic interaction between brain regions, such as the temporo-parietal and inferior frontal cortex.

A second focus is computational models. MPI's Andrea Martin collaborated with Leonidas Doumas from the University of Edinburgh to model the neurophysiological mechanisms of meaning composition. Martin and Doumas show that previous models are not able to accurately capture human judgments. For instance, when people hear 'fuzzy cactus' and 'fuzzy penguin', they treat 'cactus' and 'penguin' as similar - belonging to the set of fuzzy things - even when the separate words are judged dissimilar. This shows that humans and artificial intelligence systems still have vastly different ways of representing meaning.

The final seven contributions are experimental studies. For instance, Jonathan Brennan from the University of Michigan and Andrea Martin use existing EEG data, recorded as adults listened to an audiobook of 'Alice in Wonderland'. The authors show that brain waves differ depending on the number of phrases that are processed, revealing how the brain is actively computing meaning across linguistic units (words and phrases).

A large-scale ERP experiment by MPI's Mante Nieuwland and his colleagues addresses the well-known N400 response that occurs in the brain as we process unexpected meanings. When we encounter 'You never forget how to ride an elephant', the brain shows a 'surprise' N400 signal at 'elephant', which is absent when the sentence has a more predictable ending ('You never forget how to ride a bicycle'). The authors argue that 'bicycle' is not only more predictable than 'elephant' (speeding up activation of the word's meaning), it also makes the sentence more plausible (speeding up the integration of the word into the sentence) - given our knowledge of the world. The effects of predictability on the N400 occur before those of plausibility, again showing that the brain actively composes the meaning of both words and sentences in context.

The editors are hopeful that a definitive model of how the brain makes meaning is within reach. "It would be a significant leap forward in the search for cognitive science's Holy Grail", concludes Martin. "A mechanistic model of meaning composition would offer solutions to a number of open problems in various fields of science, including philosophy, linguistics, neuroscience, psychology, computer science, and artificial intelligence".

Credit: 
Max Planck Institute for Psycholinguistics