Stem cells show promise as drug delivery tool for childhood brain cancer

CHAPEL HILL -- The latest in a series of laboratory breakthroughs could lead to a more effective way to treat the most common brain cancer in children. Scientists from the University of North Carolina Lineberger Comprehensive Cancer Center and UNC Eshelman School of Pharmacy reported results from early studies that demonstrate how cancer-hunting stem cells, developed from skin cells, can track down and deliver a drug to destroy medulloblastoma cells hiding after surgery.

Previously, UNC Lineberger's Shawn Hingtgen, PhD, and his collaborators showed in preclinical studies they could flip skin cells into stem cells that hunt and deliver cancer-killing drugs to glioblastoma, the deadliest malignant brain tumor in adults. In their new study, published in PLOS ONE, the researchers reported they could shrink tumors in laboratory models of medulloblastoma, and extend life. The study is a necessary step toward developing clinical trials that would see if the approach works for children.

Hingtgen said this approach holds promise for reducing side effects and helping more children with medulloblastoma. More than 70 percent of patients with average-risk disease live five years on standard treatment, but not all patients respond, and treatment can cause lasting neurologic and developmental side effects.

"Children with medulloblastoma receive chemotherapy and radiation, which can be very toxic to the developing brain," said Hingtgen, who is an associate professor in the UNC Eshelman School of Pharmacy, an assistant professor in the UNC School of Medicine Department of Neurosurgery, and a member of UNC LIneberger. "If we could use this strategy to eliminate or reduce the amount of chemotherapy or radiation that patients receive, there could be quality-of-life benefits."

Hingtgen and his team demonstrated the stem cells' natural ability to home to tumors, and began studying them as a way to deliver drugs directly to tumors while limiting toxicity to the rest of the body. Their technology builds on a discovery that earned researchers a Nobel Prize in 2012 by showing that skin cells could be transformed into stem cells.

"The cells are like a FedEx truck that will get you to a particular location, and (deliver) potent cytotoxic agents directly into the tumor," Hingtgen said. "We essentially turn your skin into something that will crawl to find invasive and infiltrative tumors."

For the study, researchers reprogrammed skin cells into stem cells, and then genetically engineered them to manufacture a substance that becomes toxic to other cells when exposed to another drug, called a "pro-drug." Inserting the drug-carrying stem cells into the brains of laboratory models after surgery shrank tumors 15-fold and extended median survival in mice by 133 percent. Using human stem cells, they prolonged the mice's survival by 123 percent.

They also developed a laboratory model of medulloblastoma to allow them to simulate the way standard care is currently delivered - surgery followed by drug therapy. Using this model, they discovered that after surgically removing a tumor, the cancer cells that remained grew faster.

"After you resect the tumor, we found it becomes really aggressive," Hingtgen said. "The cancer that remained grew faster after the tumor was resected."

Scott Elton, MD, FAANS, FAAP, chief of the UNC School of Medicine Division of Pediatric Neurosurgery and co-author on the study, said there is a need for new treatments for medulloblastomas that have come back, or recurred, as well as for treatments that are less toxic overall. The ability to use a patient's own cells to directly target the tumor would be "the holy grail" of therapy, according to Elton, and he believes it could hold promise for other rare, and sometimes fatal, brain cancer types that occur in children as well.

"Medulloblastoma is cancer that happens mostly in kids, and while current therapy has changed survival pretty dramatically, it can still be pretty toxic," Elton said. "This is a great avenue of exploration, particularly for the 30 percent of children who struggle or don't make it with standard therapy. We want to nudge the needle even further."

Credit: 
UNC Lineberger Comprehensive Cancer Center

What's that smell? Scientists find a new way to understand odors

image: A mathematical model reveals a map for odors from the natural environment. From left: Yuansheng Zhou and Tatyana Sharpee.

Image: 
Salk Institute

LA JOLLA--(August 29, 2018) Every smell, from a rose to a smoky fire to a pungent fish, is composed of a mixture of odorant molecules that bind to protein receptors inside your nose. But scientists have struggled to understand exactly what makes each combination of odorant molecules smell the way it does, or to predict from its structure whether a molecule will be pleasant, noxious or have no smell at all.

Now, scientists from the Salk Institute and Arizona State University have discovered a new way to organize odor molecules based on how often they occur together in nature, which is the setting in which our sense of smell evolved. They were then able to map this data to discover regions of odor combinations humans find most pleasurable. The findings, published on August 29 in the journal Science Advances, open new avenues for engineering smells and tastes.

"We can arrange sound by high frequency and low frequency; vision by a spectrum of wavelengths and colors," says Tatyana Sharpee, an associate professor in Salk's Computational Neurobiology Laboratory and lead author of the new work. "But when it comes to olfaction, it's been an unsolved problem whether there is a way to organize odors."

Previously, scientists had tried to classify odorant molecules strictly based on their chemical structures. "But it turns out molecules with structures that look very similar can smell very different," says Sharpee.

Drawing on existing data about the odorant molecules found in different samples of strawberries, tomatoes, blueberries and mouse urine, the researchers used statistical methods to place odorant molecules on a map based on how frequently they occurred together in the four sets of samples.

Those molecules that occurred together more frequently were placed closer to each other.

"It's akin to me telling you that Chicago is x number of miles from New York, Los Angeles and Melbourne. And if you mapped the cities, you'd find out is that the Earth's surface is curved, not flat--otherwise the distance from, say, Melbourne to Los Angeles wouldn't add up. We did the same thing for odor," explains Sharpee.

Using this strategy, the scientists found that odor molecules could similarly be mapped onto a curved surface in three dimensions. But instead of a sphere, like Earth, it turned out to be the shape of a Pringles potato chip--a shape mathematicians call a hyperboloid.
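
The general recipe can be sketched in a few lines. The toy example below only illustrates the idea described above and is not the authors' actual pipeline: the presence/absence data, the co-occurrence normalization and the embedding choice (classical multidimensional scaling standing in for the study's statistical method) are all assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical presence/absence data: counts[i, j] = 1 if molecule j was
# detected in sample i. Real inputs would come from the published datasets.
rng = np.random.default_rng(0)
counts = rng.integers(0, 2, size=(40, 12))           # 40 samples, 12 molecules

cooccur = counts.T @ counts                           # how often pairs of molecules appear together
norm = np.sqrt(np.outer(counts.sum(0), counts.sum(0)))
similarity = cooccur / np.maximum(norm, 1)            # normalized co-occurrence
distance = 1.0 - similarity / similarity.max()        # frequent co-occurrence -> small distance
np.fill_diagonal(distance, 0.0)

# Embed the molecules in three dimensions so the distances are roughly
# preserved; the study found the resulting point cloud lies near a curved
# (hyperbolic) surface rather than a flat plane or a sphere.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(distance)
print(coords.shape)  # (12, 3)
```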

When the team looked at how the molecules clustered on this surface, they found there were pleasant and unpleasant directions, as well as directions that correlated with acidity or how easily odors evaporate from surfaces. These observations now make it easier to construct pleasant odor mixtures to use, for example, in artificial environments (such as a space station).

"By revealing more about how odorant molecules and the brain interact, this work may also have implications for understanding why people with some diseases--like Parkinson's--lose their sense of smell," adds behavioral neuroscientist Brian Smith of Arizona State University, a coauthor of the paper.

Credit: 
Salk Institute

Study Demonstrates a New Recurrence-Based Method that Mimics Kolmogorov-Smirnov Test

WASHINGTON, D.C., August 29, 2018 -- The recurrence plot (RP) is a vital tool for analyzing nonlinear dynamic systems, especially systems involving empirically observed time series data. RPs reveal patterns in a system's phase space, indicating where the data revisit the same coordinates. RPs can also mimic some types of inferential statistics and linear analyses, such as spectral analysis. A new paper in the journal Chaos, from AIP Publishing, provides a proof of concept for using RPs to mimic the Kolmogorov-Smirnov test, which scientists use to determine if two data sets significantly differ.

The authors, however, caution that not all types of data can be used with this new method. "Continuous data at an interval or ratio-scale level would be best suited for this technique," said Giuseppe Leonardi, one of the study's authors. "However, discretely distributed data at the same level of measurement such as dice throws would also be suitable."

The researchers analyzed recurrence points in the RPs by dividing the RP into four quadrants and counting the number of recurrence points in each quadrant. Then, they calculated the within-sample and between-sample recurrence rates and used those values, along with expected frequencies, to determine a p-value related to the difference between the samples. This p-value indicated whether the two samples were likely drawn from the same distribution or from different ones.
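
A minimal sketch of this kind of test is shown below. It assumes the RP is built from two pooled one-dimensional samples, that two points count as "recurrent" when they lie within a fixed radius of each other, and that the quadrant counts are compared with a chi-square statistic; the recurrence criterion and test statistic in the published method may differ.

```python
import numpy as np
from scipy.stats import chi2_contingency

def recurrence_two_sample_test(x, y, radius=0.1):
    """Sketch of a recurrence-based two-sample comparison (assumed details)."""
    z = np.concatenate([x, y])
    n = len(x)
    # Recurrence plot of the pooled data: 1 where two values are within `radius`.
    rp = (np.abs(z[:, None] - z[None, :]) <= radius).astype(int)
    np.fill_diagonal(rp, 0)                     # ignore trivial self-recurrences
    # Four quadrants: within-x, within-y, and the two between-sample blocks.
    table = np.array([[rp[:n, :n].sum(), rp[:n, n:].sum()],
                      [rp[n:, :n].sum(), rp[n:, n:].sum()]])
    # Compare observed quadrant counts with the frequencies expected if both
    # samples came from the same distribution.
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Example: two samples with different means should yield a small p-value.
rng = np.random.default_rng(1)
print(recurrence_two_sample_test(rng.normal(0, 1, 200), rng.normal(0.5, 1, 200)))
```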

To verify their proof of concept, the researchers conducted a series of simulations to see how their recurrence-based test performed compared to the Kolmogorov-Smirnov test. These simulations involved two groups of normal, skewed normal, or log-normal distributions with various combinations of means and standard deviations. The researchers found that the recurrence-based method performed roughly the same as the Kolmogorov-Smirnov test with a few differences in sensitivity with different distribution types.

The recurrence-based test appeared to be more sensitive at the tails of the distribution than the Kolmogorov-Smirnov test. This could be because the test considers deviations along the whole range of values, unlike the Kolmogorov-Smirnov test which only accounts for the largest deviation between two distributions. Leonardi explained that this enhanced sensitivity would make the recurrence-based test especially useful for nonlinear data like human reaction times.

He also cautioned that their method might suggest statistically reliable differences that are too small to be meaningful. "This might be a downside of the test for practical users," Leonardi said. "However, we have not investigated such effects in depth."

This proof of concept demonstrates that the RP can serve as the basis for statistical analysis tools. Going forward, the team plans to investigate the effects of sample size on their method. Leonardi said they would also like to further develop the test to model other types of inferential statistics, including analysis of variance.

Credit: 
American Institute of Physics

Looking in the depths of the Great Red Spot to find water on Jupiter

image: The Great Red Spot is the dark patch in the middle of this infrared image. It is dark due to the thick clouds that block thermal radiation. The yellow strip denotes the portion of the Great Red Spot used in astrophysicist Gordon L. Bjoraker's analysis.

Image: 
NASA's Goddard Space Flight Center/Gordon Bjoraker

For centuries, scientists have worked to understand the makeup of Jupiter. It's no wonder: this mysterious planet is the biggest one in our solar system by far, and chemically, the closest relative to the Sun. Understanding Jupiter is a key to learning more about how our solar system formed, and even about how other solar systems develop.

But one critical question has bedeviled astronomers for generations: Is there water deep in Jupiter's atmosphere, and if so, how much?

Gordon L. Bjoraker, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, reported in a recent paper in the Astronomical Journal that he and his team have brought the Jovian research community closer to the answer.

By looking from ground-based telescopes at wavelengths sensitive to thermal radiation leaking from the depths of Jupiter's persistent storm, the Great Red Spot, they detected the chemical signatures of water above the planet's deepest clouds. The pressure of the water, the researchers concluded, combined with their measurements of another oxygen-bearing gas, carbon monoxide, implies that Jupiter has 2 to 9 times more oxygen than the Sun. This finding supports theoretical and computer-simulation models that have predicted abundant water (H2O) on Jupiter made of oxygen (O) tied up with molecular hydrogen (H2).

The revelation was stirring given that the team's experiment could have easily failed. The Great Red Spot is full of dense clouds, which makes it hard for electromagnetic energy to escape and teach astronomers anything about the chemistry within.

"It turns out they're not so thick that they block our ability to see deeply," said Bjoraker. "That's been a pleasant surprise."

New spectroscopic technology and sheer curiosity gave the team a boost in peering deep inside Jupiter, which has an atmosphere thousands of miles deep, Bjoraker said: "We thought, well, let's just see what's out there."

The data Bjoraker and his team collected will supplement the information NASA's Juno spacecraft is gathering as it circles the planet from north to south once every 53 days.

Among other things, Juno is looking for water with its own infrared spectrometer and with a microwave radiometer that can probe deeper than anyone has seen -- to 100 bars, or 100 times the atmospheric pressure at Earth's surface. (Altitude on Jupiter is measured in bars, which represent atmospheric pressure, since the planet does not have a surface like Earth's from which to measure elevation.)

If Juno returns similar water findings, thereby backing Bjoraker's ground-based technique, it could open a new window into solving the water problem, said Goddard's Amy Simon, a planetary atmospheres expert.

"If it works, then maybe we can apply it elsewhere, like Saturn, Uranus or Neptune, where we don't have a Juno," she said.

Juno is the latest spacecraft tasked with finding water, likely in gas form, on this giant gaseous planet.

Water is a significant and abundant molecule in our solar system. It spawned life on Earth and now lubricates many of its most essential processes, including weather. It's a critical factor in Jupiter's turbulent weather, too, and in determining whether the planet has a core made of rock and ice.

Jupiter is thought to be the first planet to have formed by siphoning the elements left over from the formation of the Sun as our star coalesced from an amorphous nebula into the fiery ball of gases we see today. A widely accepted theory until several decades ago was that Jupiter was identical in composition to the Sun: a ball of hydrogen with a hint of helium -- all gas, no core.

But evidence is mounting that Jupiter has a core, possibly 10 times Earth's mass. Spacecraft that previously visited the planet found chemical evidence that it formed a core of rock and water ice before it mixed with gases from the solar nebula to make its atmosphere. The way Jupiter's gravity tugs on Juno also supports this theory. There's even lightning and thunder on the planet, phenomena fueled by moisture.

"The moons that orbit Jupiter are mostly water ice, so the whole neighborhood has plenty of water," said Bjoraker. "Why wouldn't the planet -- which is this huge gravity well, where everything falls into it -- be water rich, too?"

The water question has stumped planetary scientists; virtually every time evidence of H2O materializes, something happens to put them off the scent. A favorite example among Jupiter experts is NASA's Galileo spacecraft, which dropped a probe into the atmosphere in 1995 that wound up in an unusually dry region. "It's like sending a probe to Earth, landing in the Mojave Desert, and concluding the Earth is dry," pointed out Bjoraker.

In their search for water, Bjoraker and his team used radiation data collected from the summit of Maunakea in Hawaii in 2017. They relied on the most sensitive infrared telescope on Earth at the W.M. Keck Observatory, and also on a new instrument that can detect a wider range of gases at the NASA Infrared Telescope Facility.

The idea was to analyze the light energy emitted through Jupiter's clouds in order to identify the altitudes of its cloud layers. This would help the scientists determine temperature and other conditions that influence the types of gases that can survive in those regions.

Planetary atmosphere experts expect that there are three cloud layers on Jupiter: a lower layer made of water ice and liquid water, a middle one made of ammonia and sulfur, and an upper layer made of ammonia.

To confirm this through ground-based observations, Bjoraker's team looked at wavelengths in the infrared range of light where most gases don't absorb heat, allowing chemical signatures to leak out. Specifically, they analyzed the absorption patterns of a form of methane gas. Because Jupiter is too warm for methane to freeze, its abundance should not change from one place to another on the planet.

"If you see that the strength of methane lines vary from inside to outside of the Great Red Spot, it's not because there's more methane here than there," said Bjoraker, "it's because there are thicker, deep clouds that are blocking the radiation in the Great Red Spot."

Bjoraker's team found evidence for the three cloud layers in the Great Red Spot, supporting earlier models. The deepest cloud layer is at 5 bars, the team concluded, right where the temperature reaches the freezing point for water, said Bjoraker, "so I say that we very likely found a water cloud." The location of the water cloud, plus the amount of carbon monoxide that the researchers identified on Jupiter, confirms that Jupiter is rich in oxygen and, thus, water.

Bjoraker's technique now needs to be tested on other parts of Jupiter to get a full picture of global water abundance, and his data squared with Juno's findings.

"Jupiter's water abundance will tell us a lot about how the giant planet formed, but only if we can figure out how much water there is in the entire planet," said Steven M. Levin, a Juno project scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif.

Credit: 
NASA/Goddard Space Flight Center

Gum disease treatment may improve symptoms in cirrhosis patients

Rockville, Md. (August 29, 2018)--Routine oral care to treat gum disease (periodontitis) may play a role in reducing inflammation and toxins in the blood (endotoxemia) and improving cognitive function in people with liver cirrhosis. The study is published ahead of print in the American Journal of Physiology--Gastrointestinal and Liver Physiology.

Cirrhosis, which is a growing epidemic in the U.S., is the presence of scar tissue on the liver. When severe, it can lead to liver failure. Complications of cirrhosis can include infections throughout the body and hepatic encephalopathy, a buildup of toxins in the brain caused by advanced liver disease. Symptoms of hepatic encephalopathy include confusion, mood changes and impaired cognitive function.

Previous research shows that people with cirrhosis have changes in gut and salivary microbiota -- bacteria that populate the gastrointestinal tract and mouth -- which can lead to gum disease and a higher risk of cirrhosis-related complications. In addition, studies have found that people with cirrhosis have increased levels of inflammation throughout the body, which is associated with hepatic encephalopathy.

Researchers studied two groups of volunteers who had cirrhosis and mild-to-moderate periodontitis. One group received periodontal care ("treated"), including teeth cleaning and removal of bacterial toxins from the teeth and gums. The other group was not treated for gum disease ("untreated"). The research team collected blood, saliva and stool samples before and 30 days after treatment. Each volunteer took standardized tests to measure cognitive function before and after treatment.

The treated group, especially those with hepatic encephalopathy, had increased levels of beneficial gut bacteria that could reduce inflammation, as well as lower levels of endotoxin-producing bacteria in the saliva when compared to the untreated group. The untreated group, on the other hand, demonstrated an increase in endotoxin levels in the blood over the same time period. The improvement in the treated group "could be related to a reduction in oral inflammation leading to lower systemic inflammation, or due to [less harmful bacteria] being swallowed and affecting the gut microbiota," the research team wrote.

Cognitive function also improved in the treated group, suggesting that the reduced inflammation levels in the body may minimize some of the symptoms of hepatic encephalopathy in people who are already receiving standard-of-care therapies for the condition. This finding is relevant because there are no further therapies approved by the U.S. Food and Drug Administration to alleviate cognition problems in this population, the researchers said. "The oral cavity could represent a treatment target to reduce inflammation and endotoxemia in patients with cirrhosis to improve clinical outcomes."

Credit: 
American Physiological Society

Removable balloon is as good as permanent stent implant for opening small blocked arteries

Munich, Germany - 28 Aug 2018: A removable balloon is as good as a permanent stent implant for opening small blocked arteries, according to late breaking results from the BASKET-SMALL 2 trial presented in a Hot Line Session today at ESC Congress 2018 and simultaneously published in The Lancet.

Principal investigator Professor Raban Jeger, of the University Hospital Basel, Switzerland, said: "The results of this trial move us a step closer towards treating small blocked arteries without having to insert a permanent implant."

One of the standard treatments for opening blocked arteries is to insert an expandable metal tube (stent) covered with drugs via a catheter. The stent remains in the body permanently. In smaller arteries there is a risk that tissue will grow inside the stent and narrow it, causing the artery to become blocked a second time (in-stent restenosis), or that a blood clot will develop on the stent (stent thrombosis) and cause a heart attack or stroke.

Balloons covered with drugs, also inserted using a catheter, are approved in Europe to reopen stented arteries that have become blocked a second time. The balloon is removed after the procedure.

BASKET-SMALL 2 is the largest randomised trial to examine whether drug coated balloons are as good as drug-eluting stents for opening small arteries that have become blocked for the first time. The effectiveness of the two treatments was evaluated by comparing the rate of major adverse cardiac events (MACE) at 12 months.

Between 2012 and 2017 the trial enrolled 758 patients with a first-time lesion in an artery smaller than 3 mm in diameter. The average age of study participants was 68 years, 72% had stable coronary artery disease and 28% had an acute coronary syndrome (heart attack or unstable angina).

Patients were randomised to receive drug coated balloon angioplasty (382 patients) or second-generation drug-eluting stent implantation (376 patients). The balloon was coated with iopromide and paclitaxel, and the stents were covered with everolimus or paclitaxel.

After the procedure, patients were followed up for 12 months for the occurrence of MACE, which included death from cardiac causes, non-fatal heart attack, and the need to reopen the artery due to it becoming blocked again (called target vessel revascularisation). Secondary endpoints included the single components of MACE at 12 months, and major bleeding at 12 months.

At 12 months, there was no difference in the rates of MACE between patients who received a stent (7.5%) and patients who underwent the balloon procedure (7.6%) (p=0.918). Professor Jeger said: "The BASKET-SMALL 2 trial met its primary endpoint of non-inferiority for major adverse cardiac events at 12 months. This is a long-awaited milestone in clinical evidence for the drug coated balloon technique, which so far has primarily been used for the treatment of in-stent restenosis."

There were no statistical differences between groups in the rates of the individual components of the primary endpoint at 12 months: rates of cardiac death were 3.1% versus 1.3% (p=0.113), rates of nonfatal heart attack were 1.6% versus 3.5% (p=0.112), and rates of target vessel revascularisation were 3.4% versus 4.5% (p=0.438) in the balloon versus stent groups, respectively. The rate of major bleeding at 12 months was similar in the balloon (1.1%) and stent (2.4%) groups (p=0.183).

"The potential benefits of a stent-free option to treat small blocked arteries are numerous," said Professor Jeger. "With no permanent implant left after the procedure, the problem of tissue growth and clot formation within the stent is eliminated. In addition, there may be no need for prolonged treatment with anticlotting medicines, which has been controversial since it increases the risk of bleeding."

He concluded: "Drug coated balloon angioplasty has the possibility to become the standard treatment for small blocked arteries. We will continue to monitor patients in the trial for a further two years for major adverse cardiac events, stent thrombosis, and bleeding."

Credit: 
European Society of Cardiology

126 patient and provider groups to CMS: Proposed E/M service cuts will hurt sickest patients

WASHINGTON, DC - A broad coalition of 126 patient and provider groups - led by leading national organizations including the American College of Rheumatology - today sent a letter to the Centers for Medicare and Medicaid Services (CMS) urging the agency not to move forward with a proposal that would significantly reduce Medicare reimbursements for evaluation and management (E/M) services provided by specialists. The groups cite concerns that these time-intensive services - which include examinations, disease diagnosis and risk assessments, and care coordination - are already grossly under-compensated and that additional payment cuts would worsen workforce shortages in already strained specialties like rheumatology.

The proposal, which was included in the 2019 Physician Fee Schedule proposed rule, would consolidate billing codes for E/M office visits, resulting in a flat payment for all E/M visits regardless of the complexity of the visit. Though the proposal was intended to reduce Medicare provider documentation and reporting burdens, it would also result in significant payment cuts for specialty care involving face-to-face visits with patients who have complex care needs, penalizing doctors who treat sicker patients or patients with multiple, chronic conditions.

"We applaud CMS for recognizing the problems with the current evaluation and management documentation guidelines and codes and for including a significant proposal to address them in the CY 2019 physician fee schedule proposed rule," the letter reads. "However, we urge CMS to reconsider this proposal to cut and consolidate evaluation and management services, which would severely reduce Medicare patients' access to care by cutting payments for office visits, adversely affecting the care and treatment of patients with complex conditions, and potentially exacerbate physician workforce shortages."

The groups warn that payment cuts of this magnitude will not only compromise patient access to care by forcing physicians to spend less time with their patients but could create a disastrous ripple effect throughout the U.S. health care system, discouraging medical students from pursuing specialties that provide complex care and disincentivizing doctors from taking new Medicare patients altogether.

"Not only will this will result in an additional burden on patients with more copayments and costs associated with time and travel, it will also reduce the quality of care, particularly for patients with complex medical conditions," the letter continues.

The proposed cuts go against the recommendations of the Medicare Payment Advisory Commission (MedPAC), an independent advisory commission to the Medicare program, which earlier this year proposed increasing reimbursement for E/M services, noting the time and intensity they require and that E/M services are already undervalued relative to other physician services.

"We therefore urge CMS not to move forward with the proposal as it currently stands, and instead convene stakeholders to identify other strategies to reduce paperwork and administrative burden that do not threaten patient access to care," the letter concludes.

Credit: 
American College of Rheumatology

CU researchers identify potential target for treating pain during surgery

AURORA, Colo. (Aug. 28, 2018) - A research team led by faculty of the University of Colorado School of Medicine has published a study that improves the understanding of the pain-sensing neurons that respond to tissue injury during surgery.

The team, led by Slobodan Todorovic, MD, PhD, Professor of Anesthesiology at the School of Medicine and the Neuroscience Graduate Program on the CU Anschutz Medical Campus, reports its findings today in the journal Science Signaling.

"We investigated the potential role and molecular mechanisms of nociceptive ion channel dysregulation in acute pain conditions such as those resulting from skin and soft tissue incision," Todorovic said.

Nociceptors are a type of receptor that senses pain when the body is harmed. When activated, nociceptors notify the brain about the injury. In their study, the CU-led team looked at a specific channel for transmitting that information, with the aim of developing a better understanding of potential ways to address pain after surgery.

By gaining a better understanding of how these nociceptors work, the researchers aim to identify potential new therapies for pain during surgery and to decrease the need for narcotics.

"Although opioids are very effective in treating the acute pain associated with surgical procedures, their use is associated with serious side effects, which include constipation, urinary retention, impaired cognitive function, respiratory depression, tolerance, and addiction," Todorovic and his co-authors write. "More than 12 million people in the United States abused prescription opioids in 2010 alone, resulting in more overdose deaths than heroin and cocaine combined. The necessity to treat this acute type of pain is of paramount importance since its duration and intensity influence the recovery process after surgery, as well as the onset of chronic post-surgical pain."

Credit: 
University of Colorado Anschutz Medical Campus

Getting to the roots of our ancient cousin's diet

image: Paranthropus robustus fossil from South Africa SK 46 (discovered 1936, estimated age 1.9-1.5 million years) and the virtually reconstructed first upper molar used in the analyses.

Image: 
Kornelius Kupczik, Max Planck Institute for Evolutionary Anthropology

Food needs to be broken down in the mouth before it can be swallowed and digested further. How this is done depends on many factors, such as the mechanical properties of the foods and the morphology of the masticatory apparatus. Palaeoanthropologists spend a great deal of their time reconstructing the diets of our ancestors, as diet holds the key to understanding our evolutionary history. For example, a high-quality diet (and meat-eating) likely facilitated the evolution of our large brains, whilst the lack of a nutrient-rich diet probably underlies the extinction of some other species (e.g., P. boisei). The diet of South African hominins, however, has remained particularly controversial.

Using non-invasive high-resolution computed tomography technology and shape analysis, the authors deduced the main direction of loading during mastication (chewing) from the way the tooth roots are oriented within the jaw. By comparing the virtual reconstructions of almost 30 hominin first molars from South and East Africa, they found that Australopithecus africanus had much more widely splayed roots than both Paranthropus robustus and the East African Paranthropus boisei. "This is indicative of increased laterally-directed chewing loads in Australopithecus africanus, while the two Paranthropus species experienced rather vertical loads", says Kornelius Kupczik of the Max Planck Institute for Evolutionary Anthropology.

Paranthropus robustus, unlike any of the other species analysed in this study, exhibits an unusual orientation, i.e. "twist", of the tooth roots, which suggests a slight rotational and back-and-forth movement of the mandible during chewing. Other morphological traits of the P. robustus skull support this interpretation. For example, the structure of the enamel also points towards a complex, multidirectional loading, whilst their unusual microwear pattern can conceivably also be reconciled with a different jaw movement rather than by mastication of novel food sources. Evidently, it is not only what hominins ate and how hard they bit that determines their skull morphology, but also the way in which the jaws are brought together during chewing.

The new study demonstrates that the orientation of tooth roots within the jaw has much to offer for an understanding of the dietary ecology of our ancestors and extinct cousins. "Perhaps palaeoanthropologists have not always been asking the right questions of the fossil record: rather than focusing on what our extinct cousins ate, we should equally pay attention to how they masticated their foods", concludes Gabriele Macho of the University of Oxford.

Molar root variation in hominins is therefore telling us more than previously thought. "For me as an anatomist and a dentist, understanding how the jaws of our fossil ancestors worked is very revealing as we can eventually apply such findings to the modern human dentition to better understand pathologies such as malocclusions", adds Viviana Toro-Ibacache from the University of Chile and one of the co-authors of the study.

Credit: 
Max Planck Institute for Evolutionary Anthropology

Take a vacation -- it could prolong your life

image: Figure of the intervention and control groups.

Image: 
European Society of Cardiology

Munich, Germany - 28 Aug 2018: Taking vacations could prolong life. That's the finding of a 40-year study presented today at ESC Congress and accepted for publication in The Journal of Nutrition, Health & Aging.

"Don't think having an otherwise healthy lifestyle will compensate for working too hard and not taking holidays," said Professor Timo Strandberg, of the University of Helsinki, Finland. "Vacations can be a good way to relieve stress."

The study included 1,222 middle-aged male executives born between 1919 and 1934 and recruited into the Helsinki Businessmen Study in 1974 and 1975. Participants had at least one risk factor for cardiovascular disease (smoking, high blood pressure, high cholesterol, elevated triglycerides, glucose intolerance, overweight).

Participants were randomised into a control group (610 men) or an intervention group (612 men) for five years. The intervention group received oral and written advice every four months to do aerobic physical activity, eat a healthy diet, achieve a healthy weight, and stop smoking. When health advice alone was not effective, men in the intervention group also received drugs recommended at that time to lower blood pressure (beta-blockers and diuretics) and lipids (clofibrate and probucol). Men in the control group received usual healthcare and were not seen by the investigators.

As previously reported, the risk of cardiovascular disease was reduced by 46% in the intervention group compared to the control group by the end of the trial. However, at the 15-year follow-up in 1989 there had been more deaths in the intervention group than in the control group.

The analysis presented today extended the mortality follow-up to 40 years (2014) using national death registers and examined previously unreported baseline data on amounts of work, sleep, and vacation. The researchers found that the death rate was consistently higher in the intervention group compared to the control group until 2004. Death rates were the same in both groups between 2004 and 2014.

Shorter vacations were associated with excess deaths in the intervention group. In the intervention group, men who took three weeks or less of annual vacation had a 37% greater chance of dying between 1974 and 2004 than those who took more than three weeks. Vacation time had no impact on risk of death in the control group (see figures).

Professor Strandberg said: "The harm caused by the intensive lifestyle regime was concentrated in a subgroup of men with shorter yearly vacation time. In our study, men with shorter vacations worked more and slept less than those who took longer vacations. This stressful lifestyle may have overruled any benefit of the intervention. We think the intervention itself may also have had an adverse psychological effect on these men by adding stress to their lives."

Professor Strandberg noted that stress management was not part of preventive medicine in the 1970s but is now recommended for individuals with, or at risk of, cardiovascular disease. In addition, more effective drugs are now available to lower lipids (statins) and blood pressure (angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, calcium channel blockers).

He concluded: "Our results do not indicate that health education is harmful. Rather, they suggest that stress reduction is an essential part of programmes aimed at reducing the risk of cardiovascular disease. Lifestyle advice should be wisely combined with modern drug treatment to prevent cardiovascular events in high-risk individuals."

Credit: 
European Society of Cardiology

Anxiety, depression, other mental distress may increase heart attack, stroke risk in adults over 45

DALLAS, Aug. 28, 2018 - Adults ages 45 or older who experience psychological distress such as depression and anxiety may have an increased risk of developing cardiovascular disease, according to new research in Circulation: Cardiovascular Quality and Outcomes, an American Heart Association journal.

In a study of 221,677 participants from Australia, researchers found that:

among women, high/very high psychological distress was associated with a 44 percent increased risk of stroke; and

in men ages 45 to 79, high/very high versus low psychological distress was associated with a 30 percent increased risk of heart attack, with weaker estimates in those 80 years old or older.

The association between psychological distress and increased cardiovascular disease risk was present even after accounting for lifestyle behaviors (smoking, alcohol intake, dietary habits, etc.) and disease history.

"While these factors might explain some of the observed increased risk, they do not appear to account for all of it, indicating that other mechanisms are likely to be important," said Caroline Jackson, Ph.D., the study's senior author and a Chancellor's Fellow at the University of Edinburgh in Edinburgh, Scotland.

The research involved participants who had not experienced a heart attack or stroke at the start of the study and who were part of the New South Wales 45 and Up Study that recruited adults ages 45 or older between 2006 and 2009.

Researchers categorized psychological distress as low, medium or high/very high using a standard psychological distress scale on which people self-assess their level of distress.

The 10-question survey asks questions such as: "How often do you feel tired out for no good reason?" "How often do you feel so sad that nothing could cheer you up?" "How often do you feel restless or fidgety?"

Of the participants - 102,039 men (average age 62) and 119,638 women (average age 60) - 16.2 percent reported having moderate psychological distress and 7.3 percent had high/very high psychological distress.

During follow-up of more than four years, 4,573 heart attacks and 2,421 strokes occurred. The absolute risk - overall risk of developing a disease in a certain time period - of heart attack and stroke rose with each level of psychological distress.

The findings add to the existing evidence that there may be an association between psychological distress and increased risk of heart attack and stroke, she said. But they also support the need for future studies that focus on the underlying mechanisms connecting psychological distress with cardiovascular disease and stroke risk, and that seek to replicate the observed differences between men and women.

Mental disorders and their symptoms are thought to be associated with increased risk of heart disease and stroke, but previous studies have produced inconsistent findings and the interplay between mental and physical health is poorly understood.

People with symptoms of psychological distress should be encouraged to seek medical help because, aside from the impact on their mental health, symptoms of psychological distress appear to also impact physical health, Jackson said. "We encourage more proactive screening for symptoms of psychological distress. Clinicians should actively screen for cardiovascular risk factors in people with these mental health symptoms."

All factors analyzed in this research, apart from the outcomes of heart attack and stroke, were identified at the same point in time, which made it difficult for researchers to understand the relationship between psychological distress and variables such as unhealthy behaviors like smoking and poor diet. With that analysis approach, they may have underestimated the effect psychological distress has on the risk of heart attack and stroke.

Credit: 
American Heart Association

Current advice to limit dairy intake should be reconsidered

Munich, Germany - 28 Aug 2018: The consumption of dairy products has long been thought to increase the risk of death, particularly from coronary heart disease (CHD), cerebrovascular disease, and cancer, because of dairy's relatively high levels of saturated fat. Yet evidence for any such link, especially among US adults, is inconsistent. With the exception of milk, which appears to increase the risk of CHD, dairy products have been found to protect against both total mortality and mortality from cerebrovascular causes, according to research presented today at ESC Congress 2018, the annual congress of the European Society of Cardiology. Therefore, current guidelines to limit consumption of dairy products, especially cheese and yogurt, should be relaxed; at the same time, the drinking of non-fat or low-fat milk should be recommended, especially for those who consume large quantities of milk.

"A meta-analysis of 29 cohort studies published in 2017 found no association between the consumption of dairy products and either cardiovascular disease (CVD) or all-cause mortality," said Professor Maciej Banach, from the Department of Hypertension at Medical University of Lodz, Poland. "Yet a large 20-year prospective study of Swedish adults, also published in 2017, found that higher consumption of milk was associated with a doubling of mortality risk, including from CVD, in the cohort of women."

Professor Banach and his co-researchers examined data on 24,474 adults from the 1999-2010 National Health and Nutrition Examination Surveys (NHANES); the participants had a mean age of 47.6 years, and 51.4% were female. (NHANES is conducted by the U.S. Centers for Disease Control and Prevention.) During the follow-up period of 76.4 months, 3,520 total deaths were recorded, including 827 cancer deaths, 709 cardiac deaths, and 228 cerebrovascular disease deaths. The researchers found consumption of all dairy products to be associated with a 2% lower total mortality risk and consumption of cheese to be associated with an 8% lower total mortality risk (hazard ratio [HR]: 0.98, 95% confidence interval [CI]: 0.95-0.99; HR: 0.92, 95% CI: 0.87-0.97, respectively). For cerebrovascular mortality, they found a 4% lower risk with total dairy consumption and a 7% lower risk with milk consumption (HR: 0.96, 95% CI: 0.94-0.98; HR: 0.93, 95% CI: 0.91-0.96, respectively).

A meta-analysis by Professor Banach and his co-researchers of 12 prospective cohort studies with 636,726 participants who were followed for approximately 15 years confirmed these results. But milk consumption was also associated with a 4% higher CHD mortality, while consumption of fermented dairy products such as yogurt was associated with a 3% lower rate of total mortality. The yogurt finding, however, was determined to be not significant after further adjustment (Q4: HR: 0.98, p=0.125).

The researchers concluded that among US adults, higher total dairy consumption protected against both total mortality and mortality from cerebrovascular causes. At the same time, higher milk consumption was associated with an increased risk of CHD, an association that needs further study. Causality, however, could be difficult to determine, as most people who consume milk also consume other dairy products.

"In light of the protective effects of dairy products," said Professor Banach, "public health officials should revise the guidelines on dairy consumption. And given the evidence that milk increases the risk of CHD, it is advisable to drink fat-free or low-fat milk."

Credit: 
European Society of Cardiology

Physicians deserve answers as public service loan forgiveness program hangs in the balance

PHILADELPHIA --With medical school loan debt averaging $200,000, many physicians pursue the Public Service Loan Forgiveness Program that eliminates federal student loans after 10 years of service in the public sector. But the fate of the program hangs in the balance, as government officials signal a desire to end it, leaving physicians in a lingering uncertainty that's unnecessary and unfair, health policy experts from the Perelman School of Medicine at the University of Pennsylvania and three other medical institutions argue in a new commentary published in the Annals of Internal Medicine.

"There are justifiable reasons to support the program and also justifiable reasons to change it," senior author David A. Asch, MD, a professor of Medicine and Medical Ethics and Health Policy and executive director of the Penn Center for Health Care Innovation, and his colleagues wrote. "But there are no justifiable reasons to keep recent graduates in suspense."

The program, which began in 2007 under President George W. Bush, forgives remaining federal debt for borrowers who have made 120 payments, or 10 years' worth of loan repayments, while employed at nonprofit or public institutions. Those conditions are particularly favorable for physicians, as most begin their careers in residency programs - which count towards years of repayment - at nonprofit medical centers or hospitals, and often continue in that setting afterwards. According to the authors, a third of 2017 graduates who borrowed money for medical school report planning to use the program.

Recently, the program has been threatened with elimination by Congress and the Trump administration, with little guidance about the fate of current borrowers.

A U.S. House of Representatives proposal would make borrowers ineligible for the program after July 1, 2019 - and leaves unclear whether borrowers who took out federal loans prior to that time would be grandfathered in. Further muddying the waters, the Department of Education retroactively reversed certifications for some lawyers working in nonprofit institutions, and indicated that certifications are now temporary and subject to final approval by the department.

It's what the authors call an "unnecessary uncertainty" that deserves clarity and swift decisions. "Physicians are trained to handle uncertainty but that does not excuse leaving new physicians facing uncertainties that can be easily resolved," they wrote. "Even as we consider new approaches toward financing training for public service, we should insist on clarity for those who have already pursued it."

The program is not without its problems, including issues such as insufficient cost estimates.

While it could help the mounting problem of educational debt in America, the designers of the program may have been thinking more about lower-paid public school teachers and may not have anticipated how many higher-paid physicians would be eligible, or how long years of training would increase the proportion of physician debt the program relieves. "It's structured to encourage borrowers to minimize current loan repayment to maximize eventual loan forgiveness," says Asch. "That's a strategy that could work out well for physicians and other borrowers," he added, "but it's also a risky one--one that could lead young physicians to accumulate even more debt as they delay paying off loans."
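
A toy calculation illustrates the incentive. The numbers below are hypothetical (an assumed 6 percent interest rate and arbitrary monthly payments), not program rules or figures from the commentary; the point is simply that the less a borrower pays during the 120 qualifying months, the larger the balance left to be forgiven.

```python
# Hypothetical sketch: balance remaining (and thus forgiven) after 120
# qualifying monthly payments, for different payment sizes.
def balance_after_qualifying_payments(principal, annual_rate, monthly_payment, months=120):
    balance = principal
    for _ in range(months):
        balance = balance * (1 + annual_rate / 12) - monthly_payment
    return max(balance, 0.0)

debt = 200_000                         # average medical school debt cited above
rate = 0.06                            # assumed interest rate
for payment in (400, 1_200, 2_200):    # hypothetical monthly payments
    forgiven = balance_after_qualifying_payments(debt, rate, payment)
    print(f"paying ${payment:,}/month leaves about ${forgiven:,.0f} to be forgiven")
```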

Physicians pursuing higher earning specialties with typically longer training, such as neurosurgery, also end up responsible for less of their debt repayment than physicians pursuing lower earning specialties with typically shorter training, such as family medicine. "That probably was not an intended consequence of the program's structure, but it's nevertheless a perverse outcome, given recognized shortages and relatively low pay in primary care fields compared to subspecialties," says Justin Grischkan, MD, the lead author of the study and a resident physician in Internal Medicine at the Massachusetts General Hospital in Boston.

Still, expunging the program, rather than fixing it, would not only worsen educational debt, it may also prevent people from entering the medical workforce. "Physicians who are planning to use the program are more likely to graduate with higher debt, receive less scholarship support and come from backgrounds with lower parental income," Grischkan says. As more medical students come from high-income backgrounds that can support their education without debt (nearly one third of graduates), overall debt is further concentrated among those with the most need, the authors said.

"A medical degree is increasingly out of reach of many who might contribute to a workforce more responsive to diverse national needs," the authors wrote. Ending that program "may remove a financial support critical to national interests."

The authors estimate annualized costs for the program at around $1 billion for physician medical education. Ending it could redirect that $1 billion more efficiently toward health care workforce goals, but it's more plausible that those funds would go elsewhere, they said. "A flawed program may be better than none at all."

Credit: 
University of Pennsylvania School of Medicine

Many Arctic pollutants decrease after market removal and regulation

image: Persistent Organic Pollutants, also known as POPs, can have lasting impacts on both people and wild animals in the Arctic. Research shows some POPs are decreasing in the region after being pulled from market or regulated around the globe.

Image: 
Arturo de Frias Marques (https://commons.wikimedia.org/wiki/File:Polar_Bear_AdF.jpg)

Levels of some persistent organic pollutants (POPs) regulated by the Stockholm Convention are decreasing in the Arctic, according to an international team of researchers who have been actively monitoring the northern regions of the globe.

POPs are a diverse group of long-lived chemicals that can travel long distances from their source of manufacture or use. Many POPs were used extensively in industry, consumer products or as pesticides in agriculture. Well-known POPs include chemicals such as DDT and PCBs (polychlorinated biphenyls), and some of the products they were used in included flame retardants and fabric coatings.

Because POPs were found to cause health problems for people and wildlife, they were largely banned or phased out of production in many countries. Many have been linked to reproductive, developmental, neurological and immunological problems in mammals. The accumulation of DDT, a well-known and heavily used POP, was also linked to eggshell-thinning in fish-eating birds, such as eagles and pelicans, in the late 20th century, and caused catastrophic population declines for those animals.

In 2001, 152 countries signed a United Nations treaty in Stockholm, Sweden, intended to eliminate, restrict or minimize unintentional production of 12 of the most widely used POPs. Later amendments added more chemicals to the initial list. Today, more than 33 POP chemicals or groups are covered by what is commonly called the "Stockholm Convention," which has been recognized by 182 countries.

"This paper shows that following the treaty and earlier phase-outs have largely resulted in a decline of these contaminants in the Arctic," says John Kucklick, a biologist from the National Institute of Standards and Technology (NIST) and the senior U.S. author on the paper, published August 23 in Science of the Total Environment. "When POP use was curtailed, the change was reflected by declining concentrations in the environment."

"In general, the contaminants that are being regulated are decreasing," says Frank Rigét from the Department of Bioscience, Aarhus University, Denmark, and lead author.

POPs are particularly problematic in the Arctic because the ecosystem there is especially fragile, and pollution can come from both local sources and from thousands of miles away due to air and water currents. POPs also bioaccumulate. This means that they build up faster in animals and humans than they can be excreted, and that exposure can increase up the food chain. Plankton exposed to POPs in water are eaten by schools of fish, which are in turn eaten by seals or whales, and with each jump up the food chain the amount of POPs increases. The same is true for terrestrial animals. A large mammal's exposure, therefore, can be substantial and long-lasting.
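
A back-of-the-envelope sketch shows why this matters for animals at the top of the food chain. The numbers below are purely hypothetical illustrations of biomagnification, not measurements from the study: each trophic step is assumed to concentrate a POP by a fixed factor.

```python
# Hypothetical biomagnification sketch: concentration multiplies at each
# trophic step, so a top predator ends up far more contaminated than plankton.
concentration_ng_per_g = 0.01       # assumed starting concentration in plankton
magnification_per_step = 5          # assumed factor per jump up the food chain

for organism in ["small fish", "large fish", "seal"]:
    concentration_ng_per_g *= magnification_per_step
    print(f"{organism}: ~{concentration_ng_per_g:.2f} ng/g")
```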

Indigenous people living in northern coastal areas such as Alaska often consume more fish and other animals that come from higher on the food chain than the average American. Such communities, therefore, are potentially exposed to larger amounts of these pollutants.

For almost two decades beginning in 2000, Kucklick and Rigét worked in conjunction with scientists from Denmark, Sweden, Canada, Iceland and Norway to track POPs in the fat of several marine mammals and in the tissue of shellfish and seabirds. They also monitored air in the Arctic Circle for pollution.

To gain a fuller picture of how the deposition of POPs might have changed over time, the study included specimens archived since the 1980s and '90s in special storage facilities around the globe. The U.S. specimens were provided by the NIST Biorepository, located in Charleston, South Carolina. Samples archived in that facility are part of the Alaska Marine Mammal Tissue Archival Project (AMMTAP) or the Seabird Tissue Archival and Monitoring Project (STAMP). Both collections are conducted in collaboration with other federal agencies.

The study pooled more than 1,000 samples taken over the course of several decades from many different locations throughout the Arctic Circle. In general, the so-called legacy POPs--those that have been eliminated or restricted from production--were shown to be decreasing over the past two to three decades, although some had decreased more than others.

The biggest decreases were in a byproduct of the pesticide lindane, α-HCH, with a mean annual decline of 9 percent in Arctic wildlife.

The research team found PCBs had decreased as well. Most industrial countries banned PCBs in the 1970s and '80s, and their production was reduced under the Stockholm Convention in 2004. Previously, the compounds had been widely used in electrical systems. In this study, it was found that their presence had decreased by almost 4 percent per year across the Arctic region since being pulled from the market.

Two of the legacy POPs listed under Stockholm, β-HCH and HCB, showed only small declines of less than 3 percent per year. β-HCH was part of a heavily-used pesticide mixture with the active ingredient lindane, and HCB was used both in agriculture and industry.

A small number of the legacy POPs had increased in a few locations, although some of those were at sites suspected to be influenced by strong, still-existing local pollution sources.

Notably, the flame retardant hexabromocyclododecane (HBCDD) showed an annual increase of 7.6 percent. HBCDD was one of 16 additional POPs added to the Stockholm Convention as of 2017 and is recommended for elimination from use, with certain exemptions.

Most of the research conducted for this paper was a direct result of the 2001 treaty stipulations, which included a requirement that sponsors participate in ongoing, long-term biological monitoring. Although the U.S. participated in the research, it has not ratified the treaty. It is expected that work on the treaty will continue as new POPs are identified.

This recent research work highlights the usefulness of long-term data and international scientific collaboration, says Rigét. "You really need to gather more than 10 years of data before you can see the trend because in the short term there can be some small fluctuations," he notes. "Looking at this data also showed us how to be more economical and avoid over-sampling in the future."

Credit: 
National Institute of Standards and Technology (NIST)

Environmentally friendly farming practices used by nearly 1/3 of world's farms

image: Washington State University soil scientist John Reganold is part of an international team that found nearly one-third of the world's farms have adopted more environmentally friendly practices while continuing to be productive.

Image: 
Washington State University

PULLMAN, Wash. - Nearly one-third of the world's farms have adopted more environmentally friendly practices while continuing to be productive, according to a global assessment by 17 scientists in five countries.

The researchers analyzed farms that use some form of "sustainable intensification," a term for various practices, including organic farming, that use land, water, biodiversity, labor, knowledge and technology to both grow crops and reduce environmental impacts like pesticide pollution, soil erosion, and greenhouse gas emissions.

Writing in the journal Nature Sustainability, the researchers estimate that nearly one-tenth of the world's farmland is under some form of sustainable intensification, often with dramatic results. They have seen that the new practices can improve productivity, biodiversity and ecosystem services while lowering farmer costs. For example, they document how West African farmers have increased yields of maize and cassava, and how some 100,000 farmers in Cuba increased their productivity 150 percent while cutting their pesticide use by 85 percent.

Sustainable intensification "can result in beneficial outcomes for both agricultural output and natural capital," the researchers write.

"Although we have a long way to go, I'm impressed by how far farmers across the world and especially in less developed countries have come in moving our food-production systems in a healthy direction," said John Reganold, Washington State University Regents Professor of Soil Science and Agroecology and a co-author of the paper. Reganold helped identify farming systems that meet sustainable intensification guidelines and analyze the data.

Less developed countries tend to see the largest improvements in productivity, while industrialized countries "have tended to see increases in efficiency (lower costs), minimizing harm to ecosystem services, and often some reductions in crop and livestock yields," the authors write.

Jules Pretty, the study's lead author and a professor of environment and society at the University of Essex in England, first used the term "sustainable intensification" in a 1997 study of African agriculture. While the word "intensification" typically applies to environmentally harmful agriculture, Pretty used the term "to indicate that desirable outcomes, such as more food and better ecosystem services, need not be mutually exclusive."

The term now appears in more than 100 scholarly papers a year and is central to the United Nations Sustainable Development Goals.

For the Nature Sustainability paper, the researchers used scientific publications and datasets to screen some 400 sustainable intensification projects, programs and initiatives around the world. They chose only those that were implemented on more than 10,000 farms or 10,000 hectares, or nearly 25,000 acres. They estimate that 163 million farms covering more than a billion acres are affected.

The researchers focused on seven different farming changes in which "increases in overall system performance incur no net environmental cost." The changes include an advanced form of Integrated Pest Management that involves Farmer Field Schools teaching farmers agroecological practices, such as building the soil, in more than 90 countries. Other changes include pasture and forage redesign, trees in agricultural systems, irrigation water management, and conservation agriculture, including the soil-saving no-till technique used in eastern Washington.

Sustainable intensification "has been shown to increase productivity, raise system diversity, reduce farmer costs, reduce negative externalities and improve ecosystem services," the researchers write. They say it has now reached a "tipping point" in which it can be more widely adopted through governmental incentives and policies.

"Stronger government policies across the globe are now needed to support the greater adoption of sustainable intensification farming systems so that the United Nations Sustainable Development Goals endorsed by all members of the UN are met by 2030," said Reganold. "This will help provide sufficient and nutritious food for all, while minimizing environmental impact and enabling producers to earn a decent living."

Credit: 
Washington State University