Culture

New review study shows that egg-industry-funded research downplays danger of cholesterol

image: The graph tracks the rise of egg-industry-funded cholesterol studies over time.

Image: 
Physicians Committee for Responsible Medicine

WASHINGTON--Controversial headlines claiming that eggs don't raise cholesterol levels could be the product of faulty industry-funded research, according to a new review published in the American Journal of Lifestyle Medicine.

Researchers with the Physicians Committee for Responsible Medicine examined all research studies published from 1950 to March of 2019 that evaluated the effect of eggs on blood cholesterol levels. The researchers examined funding sources and whether those sources influenced study findings.

The results show that prior to 1970, industry played no role in cholesterol research. The percentage of industry-funded studies increased over time, from 0 percent in the 1950s to 60 percent in 2010-2019.

"In decades past, the egg industry played little or no role in cholesterol research, and the studies' conclusions clearly showed that eggs raise cholesterol," says study author Neal Barnard, MD, president of the Physicians Committee for Responsible Medicine. "In recent years, the egg industry has sought to neutralize eggs' unhealthy image as a cholesterol-raising product by funding more studies and skewing the interpretation of the results."

Overall, more than 85 percent of the studies--whether funded by industry or not--showed that eggs have unfavorable effects on blood cholesterol. Industry-funded studies, however, were more likely to downplay these findings. That is, although the study data showed cholesterol increases, study conclusions often reported that eggs had no effect at all. Approximately half (49 percent) of industry-funded intervention studies reported conclusions that were discordant with actual study results, compared with 13 percent of non-industry-funded trials.

For example, in one 2014 study in college freshmen, the addition of two eggs at breakfast, five days a week over 14 weeks, was associated with a mean LDL cholesterol increase of 15 mg/dL. Despite this rise in cholesterol, investigators concluded that the "additional 400 mg/day of dietary cholesterol did not negatively impact blood lipids." The cholesterol change did not reach statistical significance, meaning that there was at least a 5 percent chance that the cholesterol rise could have been due to chance alone.
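To illustrate the significance reasoning described above, here is a minimal sketch that runs a one-sample t-test on hypothetical LDL-change data with a mean near the reported 15 mg/dL. The sample size and variability are invented for illustration and are not taken from the 2014 study; the point is only that a sizable average rise can fail to reach the conventional 0.05 threshold when the sample is small and variable.

```python
# Illustrative only: hypothetical LDL-change data, not the 2014 study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 20                                             # hypothetical sample size
ldl_change = rng.normal(loc=15, scale=40, size=n)  # hypothetical per-person changes (mg/dL)

t_stat, p_value = stats.ttest_1samp(ldl_change, popmean=0.0)
print(f"mean change = {ldl_change.mean():.1f} mg/dL, p = {p_value:.3f}")
if p_value < 0.05:
    print("statistically significant at the conventional 0.05 threshold")
else:
    print("not significant: the observed rise could plausibly be due to chance")
```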

"It would have been appropriate for the investigators to report that the cholesterol increases associated with eggs could have been due to chance. Instead, they wrote that the increases did not happen at all. Similar conclusions were reported in more than half of industry-funded studies," adds Dr. Barnard.

These studies have even influenced policymakers. In 2015, the U.S. Dietary Guidelines Advisory Committee reported that "available evidence shows no appreciable relationship between consumption of dietary cholesterol and serum cholesterol...." After reviewing the evidence, however, the government did not carry that statement forward in the final Guidelines, which called for eating "as little dietary cholesterol as possible."

"The egg industry has mounted an intense effort to try to show that eggs do not adversely affect blood cholesterol levels," adds Dr. Barnard. "For years, faulty studies on the effects of eggs on cholesterol have duped the press, public, and policymakers to serve industry interests."

Several meta-analyses have concluded that egg consumption does raise cholesterol levels. According to a 2019 meta-analysis, eating an egg each day raises low density lipoprotein (LDL, or "bad") cholesterol by about nine points. The study, published in the American Journal of Clinical Nutrition, combined the findings of 55 prior studies, finding that every 100 milligrams of added dietary cholesterol (approximately half an egg) raised LDL ("bad") cholesterol levels by about 4.5 mg/dL. A 2019 JAMA study of nearly 30,000 participants found that eating even small amounts of eggs daily significantly raised the risk for both cardiovascular disease and premature death from all causes.
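To make the quoted dose-response concrete, the sketch below works through the arithmetic: about 4.5 mg/dL of LDL per 100 mg of dietary cholesterol, with one egg assumed here to contain roughly 200 mg of cholesterol (the article equates 100 mg with approximately half an egg). The per-egg figure is an assumption for illustration only.

```python
# Dose-response arithmetic as quoted in the article; the per-egg cholesterol
# content (~200 mg) is an assumption implied by "100 mg is about half an egg".
LDL_RISE_PER_100MG = 4.5        # mg/dL of LDL per 100 mg dietary cholesterol
CHOLESTEROL_PER_EGG_MG = 200    # assumed value for one large egg

def ldl_rise_mg_dl(eggs_per_day: float) -> float:
    """Estimated LDL increase (mg/dL) for a given daily egg intake."""
    added_cholesterol = eggs_per_day * CHOLESTEROL_PER_EGG_MG
    return added_cholesterol / 100.0 * LDL_RISE_PER_100MG

print(ldl_rise_mg_dl(1.0))   # ~9 mg/dL, matching the "about nine points" figure
print(ldl_rise_mg_dl(2.0))   # ~18 mg/dL for two eggs a day
```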

Of 153 studies analyzed in the American Journal of Lifestyle Medicine report, 139 showed that eggs raise blood cholesterol (68 of these reached statistical significance, meaning the results were very unlikely to be due to chance). No studies reported significant net decreases in cholesterol concentrations. Non-significant net cholesterol decreases were reported by six non-industry-funded and eight industry-funded studies.

Credit: 
Physicians Committee for Responsible Medicine

Having a psychotic disorder may increase decline of some areas of cognition over adulthood

A new study has shown that relative to participants without a psychotic disorder, those diagnosed with a disorder were consistently impaired across all areas of cognitive (memory and thinking) ability measured. The comparison also suggested that declines in some cognitive areas might worsen with age.

This was part of a cross-sectional comparison conducted 20 years after the diagnosis of their first psychotic episode.

Crucially, the study found that cognitive impairment of participants with a psychotic disorder was linked to their symptoms, particularly loss of interest in everyday activities, and also negative changes in their employment.

Academics from City, University of London, Icahn School of Medicine at Mount Sinai, New York, Stony Brook University, New York and others, conducted the study as part of the Suffolk County Mental Health Project in the United States. The project began in 1989 in order to find out what challenges people diagnosed with psychotic disorders may face throughout their lives.

Previous research has shown that cognitive impairment is a core feature of schizophrenia and is associated with poor social and vocational outcomes for those affected. However, little was previously known about how cognitive impairment may progress over the longer term in schizophrenia and other psychotic disorders, as studies extending beyond 10 years after first diagnosis are rare.

The study involved 445 participants who had been admitted to psychiatric inpatient units within Suffolk County. Participants returned to complete cognitive testing at the two-year and 20-year follow-ups after their first episode of psychosis. Participants undertook a range of tests which measured different aspects of their cognitive functioning, including their vocabulary knowledge, their ability to recount words from memory, memory of factual information and previous experiences, and ability to conceptualise across ideas and decision-making. They also took part in clinical interviews that assessed their symptom level and how well they were doing socially, as well as functionally in terms of vocation and employment.

Twenty years after their diagnosis, cognitive functioning of those with a psychotic disorder was compared with a group of non-psychotic participants from Suffolk County who were matched to them by gender and age.

Co-first author of the study, Dr Anne-Kathrin Fett, Senior Lecturer in Psychology at City, University of London, said:

"Our study provides the first comprehensive picture of long-term cognitive changes and associated clinical and functional outcomes in psychotic disorders, and is an important step toward providing clarity on what challenges people with these disorders face in the community.

"However, it is important to note that while there was a general downward trend, participants varied in terms of cognitive changes and some also achieved improvement over the follow-up period. We need to find out what can influence cognitive functioning positively. We do not yet have medication, but lifestyle changes may be able to improve cognition long-term to some extent.

"Importantly replication and further studies will be necessary to offer directions for the development of strategies to help prevent the progressive deterioration of cognitive functioning in later stages of psychotic illness."

The study also found that schizophrenia spectrum disorders and other psychotic conditions, including psychotic bipolar disorder, major depression with psychosis and substance-induced psychosis, showed similar trajectories of cognitive decline over the 18-year period between the two-year and 20-year follow-up assessments.

Credit: 
City St George’s, University of London

The uncertain role of natural gas in the transition to clean energy

A new MIT study examines the opposing roles of natural gas in the battle against climate change -- as a bridge toward a lower-emissions future, but also a contributor to greenhouse gas emissions.

Natural gas, which is mostly methane, is viewed as a significant "bridge fuel" to help the world move away from the greenhouse gas emissions of fossil fuels, since burning natural gas for electricity produces about half as much carbon dioxide as burning coal. But methane is itself a potent greenhouse gas, and it currently leaks from production wells, storage tanks, pipelines, and urban distribution pipes for natural gas. Increasing its usage, as a strategy for decarbonizing the electricity supply, will also increase the potential for such "fugitive" methane emissions, although there is great uncertainty about how much to expect. Recent studies have documented the difficulty in even measuring today's emissions levels.

This uncertainty adds to the difficulty of assessing natural gas' role as a bridge to a net-zero-carbon energy system, and in knowing when to transition away from it. But strategic choices must be made now about whether to invest in natural gas infrastructure. This inspired MIT researchers to quantify timelines for cleaning up natural gas infrastructure in the United States or accelerating a shift away from it, while recognizing the uncertainty about fugitive methane emissions.

The study shows that in order for natural gas to be a major component of the nation's effort to meet greenhouse gas reduction targets over the coming decade, present methods of controlling methane leakage would have to improve by anywhere from 30 to 90 percent. Given current difficulties in monitoring methane, achieving those levels of reduction may be a challenge. Methane is a valuable commodity, so companies that produce, store, and distribute it already have some incentive to minimize losses. Even so, intentional venting and flaring of natural gas (which emits carbon dioxide) continue.

The study also finds that policies favoring a direct move to carbon-free power sources, such as wind, solar, and nuclear, could meet the emissions targets without requiring such improvements in leakage mitigation, even though natural gas use would still be a significant part of the energy mix.

The researchers compared several different scenarios for curbing methane from the electric generation system in order to meet a target for 2030 of a 32 percent cut in carbon dioxide-equivalent emissions relative to 2005 levels, which is consistent with past U.S. commitments to mitigate climate change. The findings appear today in the journal Environmental Research Letters, in a paper by MIT postdoc Magdalena Klemun and Associate Professor Jessika Trancik.

Methane is a much stronger greenhouse gas than carbon dioxide, although how much stronger depends on the timeframe considered. Methane traps far more heat, but it doesn't last as long once it's in the atmosphere -- decades rather than centuries. When averaged over a 100-year timeline, the comparison most widely used, methane is approximately 25 times more powerful than carbon dioxide. Averaged over a 20-year period, it is 86 times stronger.

The actual leakage rates associated with the use of methane are widely distributed, highly variable, and very hard to pin down. Using figures from a variety of sources, the researchers found the overall range to be somewhere between 1.5 percent and 4.9 percent of the amount of gas produced and distributed. Some of this happens right at the wells, some occurs during processing and from storage tanks, and some is from the distribution system. Thus, a variety of different kinds of monitoring systems and mitigation measures may be needed to address the different conditions.
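The sketch below works through the numbers in the two preceding paragraphs, converting leaked methane into CO2-equivalent terms under the 20-year and 100-year warming factors, across the 1.5 to 4.9 percent leakage range the researchers cite. The combustion figure (about 2.75 kg of CO2 per kg of methane burned) comes from basic stoichiometry rather than the article; the calculation is purely illustrative and is not the study's model.

```python
# CO2-equivalent accounting for delivered natural gas, using the warming
# factors and leakage range quoted in the article. Combustion CO2 per kg of
# methane follows from CH4 + 2 O2 -> CO2 + 2 H2O (44 g CO2 per 16 g CH4).
GWP = {"100-year": 25, "20-year": 86}     # methane vs. CO2, per the article
CO2_PER_KG_CH4_BURNED = 44.0 / 16.0       # ~2.75 kg CO2 per kg CH4 burned

def co2e_per_kg_delivered(leak_rate: float, horizon: str) -> float:
    """Total kg CO2-equivalent per kg of methane delivered and burned.

    leak_rate is the fraction of produced gas that escapes, so delivering
    1 kg requires producing 1 / (1 - leak_rate) kg in total.
    """
    produced = 1.0 / (1.0 - leak_rate)
    leaked = produced - 1.0
    return CO2_PER_KG_CH4_BURNED + leaked * GWP[horizon]

for rate in (0.015, 0.049):               # 1.5% and 4.9% leakage
    for horizon in GWP:
        total = co2e_per_kg_delivered(rate, horizon)
        print(f"leakage {rate:.1%}, {horizon}: {total:.2f} kg CO2e per kg delivered")
```

Under these assumptions, fugitive emissions at the high end of the leakage range, weighted by the 20-year factor, add more warming than the combustion itself, which is why the choice of timeframe matters so much in the analysis.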

"Fugitive emissions can be escaping all the way from where natural gas is being extracted and produced, all the way along to the end user," Trancik says. "It's difficult and expensive to monitor it along the way."

That in itself poses a challenge. "An important thing to keep in mind when thinking about greenhouse gases," she says, "is that the difficulty in tracking and measuring methane is itself a risk." If researchers are unsure how much there is and where it is, it's hard for policymakers to formulate effective strategies to mitigate it. This study's approach is to embrace the uncertainty instead of being hamstrung by it, Trancik says: The uncertainty itself should inform current strategies, the authors say, by motivating investments in leak detection to reduce uncertainty, or a faster transition away from natural gas.

"Emissions rates for the same type of equipment, in the same year, can vary significantly," adds Klemun. "It can vary depending on which time of day you measure it, or which time of year. There are a lot of factors."

Much attention has focused on so-called "super-emitters," but even these can be difficult to track down. "In many data sets, a small fraction of point sources contributes disproportionately to overall emissions," Klemun says. "If it were easy to predict where these occur, and if we better understood why, detection and repair programs could become more targeted." But achieving this will require additional data with high spatial resolution, covering wide areas and many segments of the supply chain, she says.

The researchers looked at the whole range of uncertainties, from how much methane is escaping to how to characterize its climate impacts, under a variety of different scenarios. One approach places strong emphasis on replacing coal-fired plants with natural gas, for example; others increase investment in zero-carbon sources while still maintaining a role for natural gas.

In the first approach, methane emissions from the U.S. power sector would need to be reduced by 30 to 90 percent from today's levels by 2030, along with a 20 percent reduction in carbon dioxide. Alternatively, that target could be met through even greater carbon dioxide reductions, such as through faster expansion of low-carbon electricity, without requiring any reductions in natural gas leakage rates. The higher end of the published ranges reflects greater emphasis on methane's short-term warming contribution.

One question raised by the study is how much to invest in developing technologies and infrastructure for safely expanding natural gas use, given the difficulties in measuring and mitigating methane emissions, and given that virtually all scenarios for meeting greenhouse gas reduction targets call for ultimately phasing out natural gas that doesn't include carbon capture and storage by mid-century. "A certain amount of investment probably makes sense to improve and make use of current infrastructure, but if you're interested in really deep reduction targets, our results make it harder to make a case for that expansion right now," Trancik says.

The detailed analysis in this study should provide guidance for local and regional regulators as well as policymakers all the way to federal agencies, they say. The insights also apply to other economies relying on natural gas. The best choices and exact timelines are likely to vary depending on local circumstances, but the study frames the issue by examining a variety of possibilities that include the extremes in both directions -- that is, toward investing mostly in improving the natural gas infrastructure while expanding its use, or accelerating a move away from it.

Credit: 
Massachusetts Institute of Technology

New health insurance insights

A new analysis of a randomized health insurance program in Oregon sheds light on the value the program has for enrollees and providers alike.

The study, by MIT economist Amy Finkelstein and two co-authors, suggests that adults with low incomes value Medicaid at only about 20 cents to 50 cents per dollar of medical spending paid on their behalf.

"The value of Medicaid for most low-income adults is much lower than the medical expenditures paid by the insurance," says Finkelstein, the John and Jennie S. MacDonald Professor at MIT and a leading health care economist.

That finding reinforces the results of another, separate study that Finkelstein and multiple co-authors conducted in Massachusetts. In that case, for 70 percent of people in the Massachusetts state health insurance program for low-income adults, the value they placed on the program was less than 50 percent of their expected insurance costs.

While it might seem puzzling that recipients value health insurance at less than the covered medical expenditures, the study also offers an explanation for this: Low-income individuals who do not have insurance still only pay a fraction of their medical costs. In the Oregon data, this figure was roughly 20 percent of medical costs; prior studies have found similar results nationwide. The remainder of the spending on the low-income uninsured comes from a variety of sources, including charity care from nonprofit hospitals, publicly funded health clinics that offer free care, state funding to hospitals for uncompensated care, and unpaid medical debt.

"The nominally uninsured have a fair amount of implicit insurance," Finkelstein says. "Once you put it in that light, it becomes a lot less surprising that Medicaid spending is valued by them at a lot less than dollar for dollar."

One further implication of the findings is that a significant portion of public spending on health insurance for low-income individuals effectively acts as a subsidy for health care providers and state programs that cover the costs of uninsured patients.

The new paper, "The Value of Medicaid: Interpreting Results from the Oregon Health Experiment," appears in the December issue of the Journal of Political Economy. Its co-authors are Finkelstein; Nathan Hendren PhD '12, a professor of economics at Harvard University; and Erzo F.P. Luttmer, a professor of economics at Dartmouth College.

The previous paper, "Subsidizing Health Insurance for Low-Income Adults: Evidence from Massachusetts," was published last spring in the American Economic Review. Its co-authors are Finkelstein; Hendren; and Mark Shepard, an assistant professor at the Harvard Kennedy School of Government.

A random walk in Oregon

The latest paper examines a distinctive Medicaid policy that Oregon implemented in 2008. With funding to cover only about 10,000 eligible adults, Oregon conducted a lottery to decide who would be eligible to apply for Medicaid.

That random assignment of slots using a lottery allowed the researchers to develop a study comparing two otherwise similar groups of Oregon residents: those who had obtained Medicaid coverage via the lottery and those who entered the lottery but did not gain coverage. In effect, Oregon had developed a randomized controlled trial, which the scholars used for their research.

Medicaid eligibility regulations and administrative practices can vary by state. In Oregon, adults and children generally qualify for Medicaid when they live in a household with income no greater than 133 percent of the poverty level defined by the U.S. federal government; in 2016, in the 48 contiguous states, that was $11,800 for a single person and $24,300 for a family of four.

Previous studies of the Oregon experiment that Finkelstein has led have shown that, among other things, emergency room use increases among Medicaid recipients, contrary to expectations of many experts.

Being covered by Medicaid also increases patient visits to doctors, prescription drug use, and hospital admissions, while reducing out-of-pocket medical expenses and lowering unpaid medical debt for recipients. Medicaid coverage also appears to lower the incidence of depression, although it does not seem to change the available measures of physical health.

The current study uses data from the prior Oregon studies, as well as state Medicaid records, and survey data from individuals who applied for Oregon's lottery. The survey data show how much people used health care, including prescription drugs, outpatient visits, emergency-room visits, and hospital visits.

In line with previous studies, the current paper shows that having Medicaid increases total spending on health care -- about $3,600 reimbursed to providers annually on behalf of each Medicaid enrollee, compared to $2,721 annually for each low-income uninsured individual. Of that $2,721, the low-income uninsured paid about $569 in annual out-of-pocket costs -- the source of the paper's estimate that uninsured individuals pay about 20 percent of charged costs.

Using this data, the researchers also estimated an annual net cost of Medicaid in Oregon of $1,448 per recipient. This is the average annual increase in health care spending by Medicaid recipients, plus their average annual decrease in out-of-pocket spending. Thus moving a low-income uninsured individual in Oregon onto Medicaid results in a $1,448 increase in insured health care spending on behalf of that person.

Because the Oregon Medicaid program's reimbursements to health care providers are an average of $3,600 annually per recipient, the researchers estimate that about 40 percent of Medicaid spending underwrites costs incurred by enrollees. The other 60 percent is, as they write in the paper, "best conceived of as ... a monetary transfer to external parties who would otherwise subsidize the medical care for the low-income uninsured."

Simultaneously, the researchers refined their "willingness to pay" metric by using multiple methods to estimate how much having health insurance affects consumer spending generally. These methods yielded three estimates ranging from $793 to $1,675 in annual health care spending for low-income individuals. This is the source of the paper's conclusion that people value Medicaid at 20 percent to 50 percent of charged costs.

Two approaches, similar results

Significantly, the two studies use different methodological approaches to study different programs in different states, and arrive at similar conclusions. In Massachusetts, the scholars used data from the state's health insurance program -- a forerunner of the federal Affordable Care Act -- to see how the share of eligible individuals who signed up for insurance changed as their subsidy level changed.

"Despite a different design and different setting, even though it's Massachusetts and not Oregon, and different method, we got pretty much the same result," Finkelstein observes.

Overall, Finkelstein says, it will be valuable to keep learning about the care obtained by uninsured people, as well as the ultimate destination of Medicaid funding, including the 60 percent that is routed to other parties that subsidize care for the low-income uninsured. Understanding who ultimately gets those transfers, she notes, could help illuminate how redistributive Medicaid actually is, as a program intended to benefit lower-income Americans.

Moreover, Finkelstein says, more research will be needed to study how best to provide health care for lower-income Americans.

"Right now we have an implicit, informal insurance system that likely reduces demand for formal insurance but provides a sort of patchwork of care that may not be very good," Finkelstein says.

Credit: 
Massachusetts Institute of Technology

Leafcutter ants accelerate the cutting and transport of leaves during stormy weather

Leafcutter ants such as Atta sexdens or Acromyrmex lobicornis face two major challenges when they leave the safety of the nest to forage: choosing the best plants from which to collect leaves and avoiding being surprised by strong winds or heavy rain, which would prevent them from carrying out their task.

A study by researchers at the University of São Paulo's Luiz de Queiroz College of Agriculture (ESALQ-USP) in Brazil shows that leafcutter ants are capable of predicting adverse weather by sensing changes in atmospheric pressure.

When the ants detect a sharp drop in atmospheric pressure, which in most cases is a sign that heavy rain and strong winds are imminent, they greatly accelerate the speed at which they cut and transport leaves, so that the colony can collect and store the largest possible amount of food before the storm arrives.

The results of the study are published in the journal Ethology. The study was conducted under the aegis of the National Institute of Science and Technology for Semiochemicals in Agriculture, one of the NISTs funded by São Paulo Research Foundation - FAPESP and the National Council for Scientific and Technological Development (CNPq) in São Paulo State.

"We found that the leafcutter ant can sense changes in atmospheric pressure to anticipate adverse weather and change its foraging strategy" told José Maurício Simões Bento, a professor at ESALQ-USP and one of the authors of the study, to Agência FAPESP.

According to Bento, the search for food is essential for ant colonies, since relatively few individuals leave the nest.

"Many ant castes, such as queens and gardeners, as well as immature stages, stay inside the nest," he said. "The only castes that go outside are foragers, to cut and transport leaves, and soldiers, to defend the colony entrance."

The first foragers to exit the nest are scouts, whose job is to search for leafy plants in the surrounding area. Once they locate plants with leaves available for cutting, they return home, marking the trail with a pheromone so that other workers can find the plants, cut leaves and carry them back to the nest.

Most of this vegetative material is used by these ants to grow a fungus, Leucoagaricus gongylophorus, with which they exhibit a mutualistic symbiotic relationship.

The role played by the ants in this mutualism is to go outside and bring back plant material to serve as a substrate for the growth of the fungus. The fungus donates nutrients through its hyphae (cell filaments) that the ants can eat.

"These leafcutter ants cultivate the fungus to have plenty of food available, especially as a reserve for periods of scarcity," Bento said.

Faster foraging

To determine whether the ants are able to sense changes in atmospheric pressure and change their foraging strategy accordingly, researchers decided to analyze worker recruitment and leaf-cutting patterns under low and high atmospheric pressure compared with stable conditions.

They placed three nests of A. sexdens in a barometric chamber and tested different pressure levels for their impact on the ants' foraging activity. The pressure was first raised to 950 millibars (mbar) and maintained for 1 hour to allow the colony to acclimatize. It was then either held steady at 950 mbar, increased to 958 mbar, or decreased to 942 mbar, with each condition maintained for 3 hours.

"We chose 8 mbar as the interval between low, stable and high pressure because this is the average recorded for Brazilian cities that produce eucalyptus or roses, and where A. sexdens occurs naturally and is a pest for these crops," Bento explained.

After these different levels of atmospheric pressure were reached, the colonies were filmed for 1 hour, since rain and wind occur several hours after the pressure drops.

At this point, the entrance to each colony was opened to allow the ants to exit to a rosebush via a platform. The number of leaves cut and carried into each nest was counted, as were the time taken by the first scout to leave and the total number of workers recruited to forage. The results were subjected to statistical analysis.

The analysis showed that scouts left to forage much faster when the atmospheric pressure fell. At low pressure, they left 2.8 times faster than at steady pressure and 3.7 times faster than at high pressure.

"Increasing their foraging speed enables the ants to find a larger number of leaves on plants. Rainstorms blow many leaves away, reducing the amount of material available for ants to take back to the colony," Bento said.

The researchers did not observe a difference in the number of workers recruited for foraging. However, between 1.5 and 2.0 times as many leaves were cut and taken to the nests under low pressure as under steady or high pressure.

"Individual ants perceive the advent of low pressure, and this change triggers an increase in foraging efficiency," Bento said.

"They individually start cutting and carrying more leaves, and this results in higher productivity for the nest as a whole."

In Bento's opinion, the efforts of all a colony's individual members to harvest and bring in a larger amount of food when they are stressed by adverse conditions show a high capacity for decision making in favor of group maintenance with no central or unitary control. "This is additional evidence of how evolved these insects are," he said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

NOAA-NASA's Suomi NPP satellite views New South Wales fires raging on

image: NOAA-NASA's Suomi NPP satellite flew over the New South Wales fires in Australia on December 16, 2019 and found devastation from the ongoing fires. The New South Wales Rural Fire Service is reporting 96 fires are burning.

Image: 
NOAA-NASA

NOAA-NASA's Suomi NPP satellite flew over the New South Wales fires in Australia on December 16, 2019 and found devastation from the ongoing fires. The New South Wales Rural Fire Service is reporting 96 fires are burning and to date the size of the area burned is 1.5 times the size of the state of Connecticut (approximately 5.3 million acres of land). These fires are largely the result of an atypical drought for the area, increasing temperatures, and low humidity. Coupled with the danger of the fires is the resulting shroud of smog that is currently covering Sydney.

The smoke released by any type of fire (forest, brush, crop, structure, tires, waste or wood burning) is a mixture of particles and chemicals produced by incomplete burning of carbon-containing materials. All smoke contains carbon monoxide, carbon dioxide and particulate matter (PM or soot). Smoke can contain many different chemicals, including aldehydes, acid gases, sulfur dioxide, nitrogen oxides, polycyclic aromatic hydrocarbons (PAHs), benzene, toluene, styrene, metals and dioxins. The type and amount of particles and chemicals in smoke varies depending on what is burning, how much oxygen is available, and the burn temperature.

Exposure to high levels of smoke should be avoided. Individuals are advised to limit their physical exertion if exposure to high levels of smoke cannot be avoided. Individuals with cardiovascular or respiratory conditions (e.g., asthma), infants, young children, and the elderly may be more vulnerable to the health effects of smoke exposure.

NASA's Ozone Mapping and Profiler Suite (OMPS) is an instrument that flies aboard NASA-NOAA's Suomi NPP satellite and provides data on ozone. The OMPS Aerosol Index layer (in the image above) indicates the presence of ultraviolet (UV)-absorbing particles in the air (aerosols) such as desert dust and, as in this case, soot particles in the atmosphere; it is related to both the thickness of the aerosol layer located in the atmosphere and to the height of the layer. The Aerosol Index is a unitless scale that ranges from 0.00 to 5.00 and above, where 5.0 indicates heavy concentrations of aerosols that could reduce visibility or impact human health. The Aerosol Index layer is useful for identifying and tracking the long-range transport of volcanic ash from volcanic eruptions, smoke from wildfires or biomass burning events and dust from desert dust storms, even tracking over clouds and areas of snow and ice. The yellow aerosols can also be seen in the South Pacific Ocean, where winds have carried them away from the fires, past New Zealand and beyond.

Aerosols absorb and scatter incoming sunlight, which reduces visibility and increases the optical depth. Aerosols affect human health, weather and the climate. Sources of aerosols include pollution from factories, smoke from fires, dust from dust storms, sea salts, volcanic ash and smog. Aerosols compromise human health when inhaled by people with asthma or other respiratory illnesses. Aerosols also affect the weather and climate by cooling or warming the earth and by helping or preventing clouds from forming.

NASA's satellite instruments are often the first to detect wildfires burning in remote regions, and the locations of new fires are sent directly to land managers worldwide within hours of the satellite overpass. Together, NASA instruments detect actively burning fires, track the transport of smoke from fires, provide information for fire management, and map the extent of changes to ecosystems, based on the extent and severity of burn scars. NASA has a fleet of Earth-observing instruments, many of which contribute to our understanding of fire in the Earth system. Satellites in orbit around the poles provide observations of the entire planet several times per day, whereas satellites in a geostationary orbit provide coarse-resolution imagery of fires, smoke and clouds every five to 15 minutes. For more information visit: https://www.nasa.gov/mission_pages/fires/main/missions/index.html

NASA's Earth Observing System Data and Information System (EOSDIS) Worldview application provides the capability to interactively browse over 700 global, full-resolution satellite imagery layers and then download the underlying data. Many of the available imagery layers are updated within three hours of observation, essentially showing the entire Earth as it looks "right now." Actively burning fires, detected by thermal bands, are shown as red points. Image Courtesy: NASA Worldview, Earth Observing System Data and Information System (EOSDIS). Caption: Lynn Jenner with information from New South Wales Rural Fire Service

Credit: 
NASA/Goddard Space Flight Center

Collaboration yields insights into mosquito reproduction

ITHACA, N.Y. - As carriers for diseases like dengue and Zika, mosquitoes kill more than 1 million people each year and sicken hundreds of millions more. But a better understanding of mosquito reproduction can help humans combat outbreaks of these diseases, which are worsening as the climate warms.

Four Cornell researchers - two entomologists and two engineers - took a deeper look at this process. In a paper published in Scientific Reports on Dec. 6, they documented nanoscale changes in the sperm of Aedes aegypti mosquitoes and watched how the females' bodies responded during the insemination-to-fertilization period.

"We want to understand reproduction because if you can figure out ways to stop reproduction in the field, you could have mass mosquito birth control," said Ethan Degner, Ph.D. '19, co-lead author of the paper along with Jade Noble, Ph.D. '18.

Degner completed his Ph.D. under Laura Harrington, professor of entomology. Noble's Ph.D. was under Lena Kourkoutis, associate professor of applied and engineering physics. The four wrote the paper.

Current efforts at controlling mosquito populations take two general approaches: modifying mosquito habitat, by spraying pesticides or reducing areas of standing water that mosquitoes use for breeding; and making biological changes, like releasing sterile males into the environment or genetically modifying males to produce inviable offspring.

The Cornell researchers hope their fundamental research will lead to more effective methods of managing mosquito reproduction.

While scientists have known the pathway sperm take within female mosquitoes, Degner said, how sperm actually behave during the crucial period between insemination and fertilization was unknown.

To learn more, the entomologists needed better tools. Their standard light microscopes had a resolution limit of about 400 nanometers, roughly the wavelength of visible light, but they needed resolution at least 100 times finer. By employing high-powered cryo-electron microscopy - the development of which won a trio of international scientists the 2017 Nobel Prize in chemistry, and which Kourkoutis uses regularly in her research - they could then see features up to 100,000 times smaller.

"With electron microscopy, you can theoretically see features on the order of picometers (trillionths of a meter)," Noble said. "This was exactly what we needed to see the change in features in the sperm. Some of the features we were looking at were only 8-9 nanometers."

Using a powerful electron microscope, they discovered that the mosquitoes' sperm shed their entire outer coat, called the glycocalyx, within 24 hours of inseminating the female. They also observed that the shedding process appeared to wake the sperm from a dormant state and trigger a state of rapid motility. Intriguingly, as the sperm became more motile, this somehow encouraged the female mosquito to become more fertile.

The second key component of this research came from the ability to cryo-freeze mosquito sperm and fix them in a near-native state. Traditional entomological research involves infusing insect specimens with a chemical fixative, dehydrating them, then adding a dye to help see different features.

However, at such a small scale, that process introduced the possibility that changes observed in sperm might be due to the preparation, rather than natural changes in the sperm. Through this unique partnership, the cryo-freezing process enabled the researchers to "be a little more certain that the features we saw in our images were due to the sperm changing and not due to the sample preparation," Noble said.

"That's another reason why this collaboration was so helpful," she said, "because it's opening the door to the idea that biologists can really benefit from these high-powered tools."

Noble and Degner knew each other through several on-campus organizations, and while discussing their respective research, realized the benefits of collaborating. Harrington said she hopes this radical collaboration will be the first of many.

"The potential for this tool to increase our understanding of the biology of sperm and physiology of mosquito pathogen interactions is tremendous," she said. "I hope we can employ this approach to explore these other areas in the future."

Credit: 
Cornell University

Celebrated ancient Egyptian woman physician likely never existed, says researcher

AURORA, Colo. (Dec. 16, 2019) - For decades, an ancient Egyptian known as Merit Ptah has been celebrated as the first female physician and a role model for women entering medicine. Yet a researcher from the University of Colorado Anschutz Medical Campus now says she never existed and is an example of how misconceptions can spread.

"Almost like a detective, I had to trace back her story, following every lead, to discover how it all began and who invented Merit Ptah," said Jakub Kwiecinski, PhD, an instructor in the Dept. of Immunology and Microbiology at the CU School of Medicine and a medical historian.

His study was published last week in the Journal of the History of Medicine and Allied Sciences.

Kwiecinski's interest in Merit Ptah (`beloved of god Ptah') was sparked after seeing her name in so many places.

"Merit Ptah was everywhere. In online posts about women in STEM, in computer games, in popular history books, there's even a crater on Venus named after her," he said. "And yet, with all these mentions, there was no proof that she really existed. It soon became clear that there had been no ancient Egyptian woman physician called Merit Ptah."

Digging deep into the historical record, Kwiecinski discovered a case of mistaken identity that took on a life of its own, fueled by those eager for an inspirational story.

According to Kwiecinski, Merit Ptah the physician had her origins in the 1930s when Kate Campbell Hurd-Mead, a medical historian, doctor and activist, set out to write a complete history of medical women around the world. Her book was published in 1938.

She talked about the excavation of a tomb in the Valley of the Kings where there was a "picture of a woman doctor named Merit Ptah, the mother of a high priest, who is calling her `the Chief Physician.'"

Kwiecinski said there was no record of such a person being a physician.

"Merit Ptah as a name existed in the Old Kingdom, but does not appear in any of the collated lists of ancient Egyptian healers - not even as one of the `legendary'; or `controversial cases," he said. "She is also absent from the list of Old Kingdom women administrators. No Old Kingdom tombs are present in the Valley of the Kings, where the story places Merit Ptah's son, and only a handful of such tombs exist in the larger area, the Theban Necropolis."

The Old Kingdom of Egypt lasted from 2575 to 2150 BC.

But there was another woman who bears a striking resemblance to Merit Ptah. In 1929-30, an excavation in Giza uncovered a tomb of Akhethetep, an Old Kingdom courtier. Inside, a false door depicted a woman called Peseshet, presumably the tomb owner's mother, described as the `Overseer of Healer Women.' Peseshet and Merit Ptah came from the same time periods and were both mentioned in the tombs of their sons who were high priestly officials.

This discovery was described in several books, and one of them found its way into Hurd-Mead's private library. Kwiecinski believes Hurd-Mead confused Merit Ptah with Peseshet.

"Unfortunately, Hurd-Mead in her own book accidentally mixed up the name of the ancient healer, as well as the date when she lived, and the location of the tomb," he said. "And so, from a misunderstood case of an authentic Egyptian woman healer, Peseshet, a seemingly earlier Merit Ptah, `the first woman physician' was born."

The Merit Ptah story spread far and wide, driven by a variety of forces. Kwiecinski said one factor was the popular perception of ancient Egypt as an almost fairytale land "outside time and space" perfectly suited for the creation of legendary stories.

The story spread through amateur historian circles, creating a kind of echo chamber not unlike how fake news stories circulate today.

"Finally, it was associated with an extremely emotional, partisan - but also deeply personal - issue of equal rights," he said. "Altogether this created a perfect storm that propelled the story of Merit Ptah into being told over and over again."

Yet Kwiecinski said the most striking part of the story is not the mistake but the determination of generations of women historians to recover the forgotten history of female healers, proving that science and medicine have never been exclusively male.

"So even though Merit Ptah is not an authentic ancient Egyptian woman healer," he said. "She is a very real symbol of the 20th century feministic struggle to write women back into the history books, and to open medicine and STEM to women."

Credit: 
University of Colorado Anschutz Medical Campus

Radiation breaks connections in the brain

One of the potentially life-altering side effects that patients experience after cranial radiotherapy for brain cancer is cognitive impairment. Researchers now believe that they have pinpointed why this occurs and these findings could point the way for new therapies to protect the brain from the damage caused by radiation.

The new study - which appears in the journal Scientific Reports - shows that radiation exposure triggers an immune response in the brain that severs connections between nerve cells. While the immune system's role in remodeling the complex network of links between neurons is normal in the healthy brain, radiation appears to send the process into overdrive, resulting in damage that could be responsible for the cognitive and memory problems that patients often face after radiotherapy.

"The brain undergoes a constant process of rewiring itself and cells in the immune system act like gardeners, carefully pruning the synapses that connect neurons," said Kerry O'Banion, M.D., Ph.D., a professor in the University of Rochester Del Monte Institute for Neuroscience and senior author of the study which was conducted in mice. "When exposed to radiation, these cells become overactive and destroy the nodes on nerve cells that allow them to form connections with their neighbors."

The culprit is a cell in the immune system called microglia. These cells serve as the brain's sentinels, seeking out and destroying infections, and cleaning up damaged tissue after an injury. In recent years, scientists have begun to understand and appreciate microglia's role in the ongoing process by which the networks and connections between neurons are constantly wired and rewired during development and to support learning, memory, cognition, and sensory function.

Microglia interact with neurons at the synapse, the juncture where the axon of one neuron connects and communicates with another. Synapses are clustered on arms that extend out from the receiving neuron's main body called dendrites. When a connection is no longer required, signals are sent out in the form of proteins that tell microglia to destroy the synapse and remove the link with its neighbor.

In the new study, researchers exposed the mice to radiation equivalent to the doses that patients experience during cranial radiotherapy. They observed that microglia in the brain were activated and removed nodes that form one end of the synaptic juncture - called spines - which prevented the cells from making new connections with other neurons. The microglia appeared to target less mature spines, which the researchers speculate could be important for encoding new memories - a finding that may explain the cognitive difficulties that many patients experience. The researchers also observed that the damage found in the brain after radiation was more pronounced in male mice.

While advances have been made in recent years in cranial radiotherapy protocols and technology that allow clinicians to better target tumors and limit the area of the brain exposed to radiation, the results of the study show that the brain remains at significant risk of damage during therapy.

The research points to two possible approaches that could help prevent damage to nerve cells. One is blocking a receptor called CR3 that is responsible for synapse removal by microglia; when the CR3 receptor was suppressed in mice, the animals did not experience synaptic loss when exposed to radiation. Another approach could be to tamp down the brain's immune response while the person undergoes radiotherapy to prevent microglia from becoming overactive.

Credit: 
University of Rochester Medical Center

Researchers explore factors affecting money management skills in multiple sclerosis

image: Managing money may be difficult for individuals with MS who have depressive symptomatology and deficits in executive function.

Image: 
Kessler Foundation/Nicky Miller

East Hanover, NJ. December 16, 2019. A team of rehabilitation researchers identified factors associated with the money management problems experienced by some individuals with multiple sclerosis. Few studies have addressed this issue, which can have a substantial impact on quality of life. The open access article, "Money management in multiple sclerosis: The role of cognitive, motor, and affective factors" (doi: 10.3389/fneur.2019.01128), was epublished on October 23, 2019 by Frontiers in Neurology.

The authors are Yael Goverover, PhD, OTR/L, of New York University and Kessler Foundation, and Nancy Chiaravalloti, PhD, and John DeLuca, PhD, of Kessler Foundation.

Open access link: https://www.frontiersin.org/articles/10.3389/fneur.2019.01128/full

Researchers enrolled 72 participants with multiple sclerosis, aged 18 to 65 years, and 26 healthy controls. To examine the association between money management difficulties and cognitive, motor, and emotional factors, researchers tested all participants for cognitive skills, depression and anxiety, and upper and lower limb motor function.

Money management skills were assessed with two methods: 1) KF-Actual Reality™, a performance-based assessment developed at Kessler Foundation that tests five behaviors essential to money management by tasking the participant with an actual task - making an online purchase, and 2) a money management questionnaire developed for use in individuals with brain injury. Based on their performance, the participants with MS were grouped as efficient (MS Efficient-MM) or inefficient money managers (MS Inefficient-MM).

Overall, the healthy control group performed better than both MS groups. Of the three groups, the MS Inefficient-MM group scored lowest on measures of cognitive and motor skills, and highest on affective symptomatology. Researchers identified two factors associated with efficient money management: good executive functioning and low depressive symptomatology. "It is important to note that these factors characterized the healthy controls and the MS Efficient-MM group," said Dr. Goverover, the lead author, "indicating that money management difficulties affect a subset, and not the MS population as a whole."

The association of money management difficulties with depressive symptomatology is a new finding, according to Dr. Goverover, and further research is warranted into what may be a key predictor for these difficulties in the MS population. "Difficulties with managing money can have serious financial, legal, and psychological consequences for individuals and their caregivers," she emphasized. "Owing money, paying bills late, making impulse purchases, running out of money for essentials - these behaviors adversely affect the ability to function independently in everyday life. Knowing the factors that underlie money management problems will enable providers to identify those at risk and counsel caregivers to intervene effectively to minimize negative behaviors."

Credit: 
Kessler Foundation

Developing next-generation biologic pacemakers

image: University of Houston associate professor of pharmacology Bradley McConnell is helping usher in a new age of cardiac pacemakers by using stem cells found in fat, converting them to heart cells, and reprogramming those to act as biologic pacemaker cells.

Image: 
University of Houston

University of Houston associate professor of pharmacology Bradley McConnell is helping usher in a new age of cardiac pacemakers by using stem cells found in fat, converting them to heart cells, and reprogramming those to act as biologic pacemaker cells. He is reporting his work in the Journal of Molecular and Cellular Cardiology. The new biologic pacemaker-like cell will be useful as an alternative treatment for conduction system disorders, cardiac repair after a heart attack and to bridge the limitations of the electronic pacemaker.

"We are reprogramming the cardiac progenitor cell and guiding it to become a conducting cell of the heart to conduct electrical current," said McConnell.

McConnell's collaborator, Robert J. Schwartz, Hugh Roy and Lillian Cranz Cullen Distinguished Professor of biology and biochemistry, previously reported work on turning adipogenic mesenchymal stem cells, which reside in fat, into cardiac progenitor cells. Now those same cardiac progenitor cells are being programmed to keep hearts beating as a sinoatrial node (SAN), part of the electrical cardiac conduction system (CCS).

The SAN is the primary pacemaker of the heart, responsible for generating the electric impulse or beat. Native cardiac pacemaker cells are confined within the SAN, a small structure comprised of just a few thousand specialized pacemaker cells. Failure of the SAN or a block at any point in the CCS results in arrhythmias.

More than 600,000 electronic pacemakers are implanted in patients annually to help control abnormal heart rhythms. The small mechanical device is placed in the chest or abdomen and uses electrical pulses to prompt the heart to beat normally. The device must be examined regularly by a physician, and over time it can stop working properly.

"Batteries will die. Just look at your smartphone," said McConnell. "This biologic pacemaker is better able to adapt to the body and would not have to be maintained by a physician. It is not a foreign object. It would be able to grow with the body and become much more responsive to what the body is doing."

To convert the cardiac progenitor cells, McConnell infused the cells with a unique cocktail of three transcription factors and a plasma membrane channel protein to reprogram the heart cells in vitro.

"In our study, we observed that the SHOX2, HCN2, and TBX5 (SHT5) cocktail of transcription factors and channel protein reprogrammed the cells into pacemaker-like cells. The combination will facilitate the development of cell-based therapies for various cardiac conduction diseases," he reported.

Credit: 
University of Houston

Study examines causes of death in US breast cancer survivors

Survival rates for patients with breast cancer have improved significantly in the last four decades, and many patients will eventually die from non-cancer-related causes. Researchers recently conducted the largest population-based long-term retrospective analysis of non-cancer causes of death among patients with breast cancer. The findings are published early online in CANCER, a peer-reviewed journal of the American Cancer Society.

Of 754,270 U.S. women diagnosed with breast cancer from 2000 to 2015, 24.3 percent had died by the end of 2015. The highest number of deaths (46.2 percent) occurred within one to five years following diagnosis, and most were caused by breast cancer or other cancers. Breast cancer-related deaths decreased as years passed, however, and were eventually overtaken by non-breast cancer causes of death. Among patients who died five to 10 years after diagnosis, about half died of non-breast cancer causes, and the majority of those who survived beyond 10 years died of non-breast cancer causes.

The most common non-cancer causes of death within 10 years of diagnosis were heart diseases, followed by cerebrovascular diseases. After more than 10 years following diagnosis, the most common non-cancer causes of death were heart diseases, followed by Alzheimer's disease.

Compared with the general population, patients had a higher risk of dying from chronic liver diseases within 5-10 years following diagnosis, and from Alzheimer's disease and heart diseases after more than 10 years following diagnosis.

"Non-cancer diseases, such as heart diseases, contribute to a significant number of deaths in patients with breast cancer, even higher than in the general population," said senior author Mohamad Bassam Sonbol, MD, of Mayo Clinic in Phoenix, Arizona. "Cancers other than breast cancer are also an important cause of death in patients with a history of breast cancer."

The results will be informative for survivors in discussions with physicians about their future health. "Our findings emphasize the importance of counseling patients about their survivorship and risk of developing other cancers, with a focus on proper screening or preventive measures for other cancers and diseases," added Dr. Sonbol.

Credit: 
Wiley

E-cigarettes significantly raise risk of chronic lung disease, first long-term study finds

E-cigarette use significantly increases a person's risk of developing chronic lung diseases like asthma, bronchitis, emphysema or chronic obstructive pulmonary disease, according to new UC San Francisco research, the first longitudinal study linking e-cigarettes to respiratory illness in a sample representative of the entire U.S. adult population.

The study also found that people who used e-cigarettes and also smoked tobacco -- by far the most common pattern among adult e-cigarette users -- were at an even higher risk of developing chronic lung disease than those who used either product alone.

The findings were published Dec. 16, 2019 in the American Journal of Preventive Medicine and are based on an analysis of publicly available data from the Population Assessment of Tobacco and Health (PATH), which tracked e-cigarette and tobacco habits as well as new lung disease diagnoses in over 32,000 American adults from 2013 to 2016.

Though several earlier population studies had found an association between e-cigarette use and lung disease at a single point in time, these so-called cross-sectional studies provided a snapshot that made it impossible for researchers to say whether lung disease was being caused by e-cigarettes or if people with lung disease were more likely to use e-cigarettes.

By starting with people who did not have any reported lung disease, taking account of their e-cigarette use and smoking from the start, and then following them for three years, the new longitudinal study offers stronger evidence of a causal link between adult e-cigarette use and lung diseases than prior studies.
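
The analytic approach described above, starting with adults who reported no lung disease, recording their baseline e-cigarette and tobacco use, and then modeling new diagnoses over follow-up while adjusting for other characteristics, can be sketched roughly as follows. This is a minimal illustration on simulated data, not the authors' PATH analysis; the variable names are assumptions, and the simulated effect sizes are simply chosen to echo the odds ratios quoted below.

```python
# Minimal sketch of an adjusted odds-ratio analysis on simulated data.
# NOT the authors' PATH analysis; variables and coefficients are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "ecig": rng.integers(0, 2, n),   # e-cigarette use at baseline (0/1)
    "smoke": rng.integers(0, 2, n),  # conventional smoking at baseline (0/1)
    "age": rng.integers(18, 80, n),  # an example demographic covariate
})

# Simulate incident lung disease, with effect sizes chosen to roughly echo the
# odds ratios reported in the article (about 1.3 for e-cigarettes, 2.6 for smoking).
logit_p = -3.0 + np.log(1.3) * df["ecig"] + np.log(2.6) * df["smoke"] + 0.01 * df["age"]
df["lung_disease"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression of new disease on baseline exposures, adjusting for age;
# exponentiated coefficients are adjusted odds ratios.
model = smf.logit("lung_disease ~ ecig + smoke + age", data=df).fit(disp=0)
print(np.exp(model.params))
```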

"What we found is that for e-cigarette users, the odds of developing lung disease increased by about a third, even after controlling for their tobacco use and their clinical and demographic information," said senior author Stanton Glantz, PhD, a UCSF professor of medicine and director of the UCSF Center for Tobacco Control Research and Education.

"We concluded that e-cigarettes are harmful on their own, and the effects are independent of smoking conventional tobacco," Glantz said.

Though current and former e-cigarette users were 1.3 times more likely to develop chronic lung disease, tobacco smokers increased their risk by a factor of 2.6. For dual users -- people who smoke and use e-cigarettes at the same time -- these two risks multiply, more than tripling the risk of developing lung disease.
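
As a quick arithmetic check of the "risks multiply" statement, multiplying the two odds ratios quoted above reproduces the "more than tripling" figure; a minimal sketch:

```python
# The two odds ratios come from the article; assuming they combine
# multiplicatively for dual users, as the authors describe.
OR_ECIG = 1.3    # current/former e-cigarette use
OR_SMOKE = 2.6   # conventional tobacco smoking

OR_DUAL = OR_ECIG * OR_SMOKE
print(f"Approximate odds ratio for dual users: {OR_DUAL:.2f}")  # ~3.4, i.e. more than triple
```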

"Dual users -- the most common use pattern among people who use e-cigarettes -- get the combined risk of e-cigarettes and conventional cigarettes, so they're actually worse off than tobacco smokers," said Glantz.

This finding is particularly relevant as the debate continues to rage over whether e-cigarettes should be promoted as a harm-reduction tool for smokers. While the authors found that switching from smoked tobacco to e-cigarettes lowered the risk of developing lung disease, fewer than 1 percent of the smokers had completely switched to e-cigarettes.

"Switching from conventional cigarettes to e-cigarettes exclusively could reduce the risk of lung disease, but very few people do it," said Glantz. "For most smokers, they simply add e-cigarettes and become dual users, significantly increasing their risk of developing lung disease above just smoking."

Importantly, the results reported in this study are unrelated to EVALI (E-cigarette or Vaping Product Use-Associated Lung Injury), the acute lung disease first reported last summer, severe cases of which sent several e-cigarette users to the hospital and others to an early grave. Though scientists are still working to determine the cause of EVALI, prior physiological studies in both animals and humans found that e-cigarettes suppress the immune system and increase the levels of stress-related proteins in the lungs. And chemical analyses showed that e-cigarettes contain higher levels of certain toxic chemicals than conventional cigarettes. But the new study shows that these are not the only health threats posed by e-cigarettes.

"This study contributes to the growing case that e-cigarettes have long-term adverse effects on health and are making the tobacco epidemic worse," said Glantz.

Credit: 
University of California - San Francisco

Heart-healthy diets are naturally low in dietary cholesterol and can help to reduce the risk of heart disease and stroke

DALLAS, Dec. 16, 2019 -- Reducing dietary cholesterol by focusing on an overall heart-healthy dietary pattern that replaces saturated fats with polyunsaturated fats remains good advice for keeping artery-clogging LDL cholesterol at healthy levels. Such dietary patterns are naturally low in dietary cholesterol. Current research does not support a specific numerical limit on cholesterol from food, according to a Scientific Advisory (Advisory) from the American Heart Association, published today in the Association's premier journal Circulation.

Much of the cholesterol in blood is manufactured in the liver and used for building cells. However, foods such as full-fat dairy products and fatty cuts of red and processed meats contain relatively high amounts of cholesterol and are also usually high in saturated fat, which may cause an accumulation of cholesterol in blood. Too much cholesterol in the blood contributes to the formation of thick, hard deposits on the inside of the arteries, a process that underlies most heart diseases and strokes.

Scientific research on the role of dietary cholesterol has not conclusively established a link between dietary cholesterol and higher LDL cholesterol at the levels currently consumed. According to the Advisory, the differences in findings may stem from how diet studies are designed and from the absolute amount of cholesterol fed. For example, evidence from observational studies conducted in several countries generally does indicate a significant association between dietary cholesterol and cardiovascular disease (CVD) risk. Observational studies, however, are not designed to prove cause and effect - they identify trends, often based on participants filling out questionnaires about what they eat. Their findings can also be affected by confounding, such as the difficulty of teasing out the specific effect of dietary cholesterol versus saturated fat, because most foods that are high in saturated fat are also high in dietary cholesterol.

A meta-analysis included in the Advisory, which pooled randomized, controlled dietary intervention trials, the kind of studies designed to prove cause and effect, found a dose-dependent relationship between dietary cholesterol and higher levels of artery-clogging LDL cholesterol when the range of dietary cholesterol tested exceeded amounts normally eaten. This relationship persisted after adjustment for the type of dietary fat. The feeding studies included in the meta-analysis provided food to participants, so the researchers knew exactly what participants were eating; however, such studies are costly to conduct, and the meta-analysis was therefore limited by the small number of participants in each randomized trial. Because the trials were small, the researchers were also unable to adequately compare the roles of artery-clogging LDL cholesterol, HDL "good" cholesterol and total cholesterol in the blood among the participants -- and HDL and total cholesterol could influence the results.
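
To make the phrase "dose-dependent relationship" concrete, the sketch below fits a straight line to the change in LDL cholesterol as a function of added dietary cholesterol, which is essentially how such a dose-response slope is summarized. The numbers are invented for illustration and are not the Advisory's data.

```python
# Hypothetical dose-response sketch: change in LDL cholesterol versus added
# dietary cholesterol across trial arms. Data below are invented, not the
# Advisory's meta-analysis results.
import numpy as np

dose_mg = np.array([100, 300, 600, 900, 1200])        # added dietary cholesterol (mg/day)
ldl_change = np.array([2.0, 7.5, 14.0, 22.0, 30.0])   # change in LDL-C (mg/dL), hypothetical

slope, intercept = np.polyfit(dose_mg, ldl_change, 1)
print(f"~{slope * 100:.1f} mg/dL higher LDL-C per additional 100 mg/day of dietary cholesterol")
```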

"Consideration of the relationship between dietary cholesterol and CVD risk cannot ignore two aspects of diet. First, most foods contributing cholesterol to the U.S. diet are usually high in saturated fat, which is strongly linked to an increased risk of too much LDL cholesterol. Second, we know from an enormous body of scientific studies that heart-healthy dietary patterns, such as Mediterranean-style and DASH style diets (Dietary Approaches to Stop Hypertension) are inherently low in cholesterol," said Jo Ann S. Carson, Ph.D., R.D.N., L.D., immediate-past chair and current member of the American Heart Association's nutrition committee and professor of clinical nutrition at UT Southwestern Medical Center in Dallas, Texas when the advisory was written.

"Eating a nutrient-rich diet that emphasizes fruits, vegetables, whole grains, low-fat or fat-free dairy products, lean cuts of meat, poultry, fish or plant-based protein, nuts and seeds. Saturated fats - mostly found in animal products such as meat and full fat dairy, as well as tropical oils - should be replaced with polyunsaturated fats such as corn, canola or soybean oils. Foods high in added sugars and sodium (salt) should be limited," said Carson.

According to the Advisory, egg intake in general was not significantly associated with the risk of cardiovascular disease in the studies that were examined. For healthy individuals, it is reasonable to eat one whole egg (or its equivalent, such as 3 ounces of shrimp) daily as part of a heart-healthy diet.

The Advisory continues to support the recommendation in the 2019 American College of Cardiology/American Heart Association Guideline on the Primary Prevention of Cardiovascular Disease to reduce intake of dietary cholesterol for overall heart health.

Credit: 
American Heart Association

Fossil shells reveal both global mercury contamination and warming when dinosaurs perished

ANN ARBOR--The impact of an asteroid or comet is acknowledged as the principal cause of the mass extinction that killed off most dinosaurs and about three-quarters of the planet's plant and animal species 66 million years ago.

But massive volcanic eruptions in India may also have contributed to the extinctions. Scientists have long debated the significance of the Deccan Traps eruptions, which began before the impact and lasted, on and off, for nearly a million years, punctuated by the impact event.

Now, a University of Michigan-led geochemical analysis of fossil marine mollusk shells from around the globe is providing new insights into both the climate response and environmental mercury contamination at the time of the Deccan Traps volcanism.

From the same shell specimens, the researchers found what appears to be a global signal of both abrupt ocean warming and distinctly elevated mercury concentrations. Volcanoes are the largest natural source of mercury entering the atmosphere.

The dual chemical fingerprints begin before the impact event and align with the onset of the Deccan Traps eruptions.

When the researchers compared the mercury levels from the ancient shells to concentrations in freshwater clam shells collected at a present-day site of industrial mercury pollution in Virginia's Shenandoah Valley, the levels were roughly equivalent.

Evidence from the study, which is scheduled for publication Dec. 16 in the journal Nature Communications, supports the idea that Deccan Traps volcanism had climatic and ecological impacts that were profound, long-lasting and global, the researchers conclude.

"For the first time, we can provide insights into the distinct climatic and environmental impacts of Deccan Traps volcanism by analyzing a single material," said Kyle Meyer, lead author of the new study. "It was incredibly surprising to see that the exact same samples where marine temperatures showed an abrupt warming signal also exhibited the highest mercury concentrations, and that these concentrations were of similar magnitude to a site of significant modern industrial mercury contamination."

Meyer conducted the study as part of his doctoral dissertation in the U-M Department of Earth and Environmental Sciences. He is now a postdoctoral researcher at Portland State University in Oregon.

Mercury is a toxic trace metal that poses a health threat to humans, fish and wildlife. Human-generated sources of mercury include coal-fired power plants and artisanal gold mines. At Virginia's South River industrially contaminated site, where the researchers collected freshwater clam shells, signs warn residents not to eat fish from the river.

"The modern site has a fishing ban for humans because of high mercury levels. So, imagine the environmental impact of having this level of mercury contamination globally for tens to hundreds of thousands of years," said U-M geochemist and study co-author Sierra Petersen, who was Meyer's co-adviser.

The researchers hypothesized that the fossilized shells of mollusks, principally bivalves such as oysters and clams, could simultaneously record both coastal marine temperature responses and varying mercury signals associated with the release of massive amounts of heat-trapping carbon dioxide and mercury from the Deccan Traps.

The long-lived Deccan Traps eruptions formed much of western India and were centered on the time of the Cretaceous-Paleogene (K-Pg) mass extinction, 66 million years ago.

The study used fossil shells collected in Antarctica, the United States (Alabama, Alaska, California and Washington), Argentina, India, Egypt, Libya and Sweden. The researchers analyzed the isotopic composition of the shell carbonate to determine marine temperatures, using a recently developed technique called carbonate clumped isotope paleothermometry.
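
The temperature step rests on an empirical calibration in which the clumped isotope signal in the carbonate (often written Δ47) decreases as growth temperature rises, roughly following Δ47 = A·10⁶/T² + B with T in kelvin. The sketch below inverts a calibration of that form; the coefficients and the example measurement are placeholders, not the values used in this study.

```python
# Illustrative inversion of a clumped isotope calibration of the form
# Delta47 = A * 1e6 / T**2 + B (T in kelvin). Coefficients are placeholders,
# NOT the calibration used by the study's authors.
import math

A = 0.0383   # assumed calibration slope, for illustration only
B = 0.258    # assumed calibration intercept, for illustration only

def temperature_from_delta47(delta47: float) -> float:
    """Back-calculate growth temperature (degrees C) from a Delta-47 value."""
    t_kelvin = math.sqrt(A * 1e6 / (delta47 - B))
    return t_kelvin - 273.15

print(f"{temperature_from_delta47(0.70):.1f} degrees C")  # example Delta-47 value
```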

They also measured the amount of mercury in the remarkably well-preserved fossil shells and assembled the first-ever deep-time record of mercury preserved in fossilized biomineral remains.

In previous studies, records of environmental mercury have been reconstructed from marine sediments, providing insights into the timing and scale of the Deccan Traps event. But those records lacked such a direct linkage to the climate response. In the new study, both signals are present in the same specimens--an important first, according to the authors.

"Mercury anomalies had been documented in sediments but never before in shells. Having the ability to reconstruct both climate and a volcanism indicator in the exact same materials helps us circumvent lots of problems related to relative dating," said Petersen, an assistant professor in the U-M Department of Earth and Environmental Sciences. "So, one of the big firsts in this study is the technical proof of concept."

The new technique is expected to have broad applications for the study of mass extinctions and climate perturbations in the geological record, according to the researchers.

Credit: 
University of Michigan