Culture

Canada-US Free Trade Agreement (CUSFTA) increased caloric intake in Canada

image: These are trends in calorie availability in Canada and synthetic controls, 1978-2006. Data from the Food and Agriculture Organization of the United Nations (2016). 'Synthetic controls' are constructed from a weighted combination of OECD countries, where weights correspond to the similarity of each country with Canada before CUSFTA.

Image: 
American Journal of Preventive Medicine

Ann Arbor, March 26, 2018 - A new study published in the American Journal of Preventive Medicine shows that the 1989 Canada-US Free Trade Agreement (CUSFTA) was associated with an increase in caloric availability of approximately 170 kilocalories per person per day in Canada. These findings suggest that the rise in caloric intake and obesity in Canada since the early 1990s can be partially attributed to its close trade and investment arrangements with the US.

The escalating global prevalence of overweight and obesity, or "globesity," is often described as a pandemic. Globalization via free trade agreements (FTAs) is often implicated in this pandemic because of its role in spreading high-calorie diets rich in salt, sugar, and fat through the reduction of trade barriers like tariffs in the food and beverage sector.

"Concerns center on how free trade and investment agreements increase population exposure to unhealthy, high-calorie diets, but existing studies preclude causal conclusions," explained lead investigator Pepita Barlow, MSc, Department of Sociology, University of Oxford. "Few studies of free trade and investment agreements and diets isolated their impact from other factors, and none examined any effect on caloric intake, despite its critical role in causing obesity. This study addresses these limitations by analyzing a unique natural experiment arising from the exceptional circumstances surrounding the implementation of CUSFTA."

Investigators used a "natural experiment" design (one that mimics a randomized controlled trial as closely as possible) and data from the Food and Agriculture Organization of the United Nations to evaluate the impact of CUSFTA on caloric availability in Canada. They found that CUSFTA was associated with an increase in caloric availability, and likely intake, of approximately 170 kilocalories per person per day. Since 1994, the rise in calorie availability in Canada has far exceeded that of other countries.
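
The synthetic-control construction can be illustrated with a short sketch: choose non-negative weights summing to one over donor OECD countries so that the weighted combination tracks Canada's pre-CUSFTA calorie series, then read the post-1989 gap as the estimated effect. The data and donor pool below are synthetic placeholders, not the study's inputs.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Placeholder pre-CUSFTA (1978-1988) calorie-availability series:
# rows are years, columns are five donor OECD countries.
donors_pre = 2800 + rng.normal(0, 50, size=(11, 5))
canada_pre = donors_pre @ np.array([0.4, 0.3, 0.1, 0.1, 0.1]) + rng.normal(0, 5, 11)

def pre_treatment_gap(w):
    """Mean squared gap between Canada and the weighted donor combination."""
    return np.mean((canada_pre - donors_pre @ w) ** 2)

n = donors_pre.shape[1]
result = minimize(
    pre_treatment_gap,
    x0=np.full(n, 1.0 / n),                  # start from equal weights
    bounds=[(0.0, 1.0)] * n,                 # each weight between 0 and 1
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
weights = result.x  # applying these to post-1989 donor data gives the counterfactual
```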

Using sophisticated models, researchers also showed that this rise in caloric intake could contribute to weight gain of between 1.8 and 9.3 kg for men and between 2.0 and 12.2 kg for women aged 40, depending on their physical activity levels and the extent to which availability affects caloric intake. The rise also coincided with a US$1.82 billion increase in US investment in the Canadian food and beverage industry and a US$5.26 billion rise in food and beverage imports from the US.
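
As a rough cross-check on these magnitudes (not the authors' model), a widely cited rule of thumb from body-weight dynamics research holds that a sustained change of about 10 kcal/day eventually shifts steady-state body weight by roughly 0.45 kg (1 lb):

```python
# Rule-of-thumb steady-state estimate (Hall et al.); the study's models
# additionally account for sex, age and physical activity.
KCAL_PER_DAY_PER_KG = 10 / 0.4536  # ~22 kcal/day sustained per kg of eventual gain

def steady_state_gain_kg(extra_kcal_per_day: float) -> float:
    return extra_kcal_per_day / KCAL_PER_DAY_PER_KG

print(f"{steady_state_gain_kg(170):.1f} kg")  # ~7.7 kg, within the reported ranges
```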

US FTAs are especially likely to encourage elevated caloric intake because of the highly competitive US processed food and caloric beverage industry and the 'obesogenic' environment associated with it. Processed food and caloric beverages play an important role in increasing caloric intake: they are often calorie dense, leading people to unknowingly consume too many calories, and highly palatable, encouraging further consumption.

"Our analysis is particularly relevant now when many governments are seeking to implement free trade agreements with the US," noted Ms. Barlow. "These include the British government, which is currently seeking a UK-US trade deal as part of its post-Brexit growth strategy. Our study suggests that these trade deals may have a deleterious impact on population diets and obesity when they are implemented in countries where food supplies are already adequate to meet food demands."

This study has important implications for policy. It strengthens the quality of evidence on the potentially detrimental impacts of US FTAs on diets and shows empirically how trade policy is a structural driver of dietary behaviors. By doing so, it also highlights the need for greater coherence between nutrition and trade policy-making if governments wish to minimize the potentially deleterious impacts of FTAs on population health and obesity prevention - and to maximize their health benefits.

Credit: 
Elsevier

Blowin' in the wind -- A source of energy?

image: This is Magnus Jonsson and Mina Shiran Chaharsoughi at the Laboratory of Organic Electronics, Linköping University.

Image: 
Thor Balkhed

It may in the future be possible to harvest energy with the aid of leaves fluttering in the wind. Researchers at the Laboratory of Organic Electronics at Linköping University have developed a method and a material that generate an electrical impulse when the light fluctuates from sunshine to shade and vice versa.

"Plants and their photosynthesis systems are continuously subjected to fluctuations between sunshine and shade. We have drawn inspiration from this and developed a combination of materials in which changes in heating between sunshine and shade generate electricity," says Magnus Jonsson, docent and principal investigator for the research group in organic photonics and nano-optics at the Laboratory of Organic Electronics, Linköping University.

The results, which have been verified in both experiments and computer simulations, have been published in Advanced Optical Materials.

Together with researchers from the University of Gothenburg, Magnus Jonsson and his team have previously developed small nanoantennas that absorb sunlight and generate heat. In an article published together in Nano Letters in 2017, they described how the antennas, when incorporated into window glass, could reduce cold downdraughts and save energy. The antennas, with dimensions on the order of tens of nanometers, react to near-infrared light and generate heat.

Mina Shiran Chaharsoughi, PhD student in Magnus Jonsson's group, has now developed the technology further and created a tiny optical generator by combining the small antennas with a pyroelectric film.

"Pyroelectric" means that an electrical voltage develops across the material when it is heated or cooled. The change of temperature causes charges to move and the generation of an electric current in the circuit.

The antennas consist of small metal discs, in this case gold nanodiscs, with a diameter of 160 nm (0.16 micrometres). They are placed on a substrate and coated with a polymeric film to create the pyroelectric properties.

"The nanoantennas can be manufactured across large areas, with billions of the small discs uniformly distributed over the surface. The spacing between discs in our case is approximately 0.3 micrometres. We have used gold and silver, but they can also be manufactured from aluminium or copper," says Magnus Jonsson.

The antennas generate heat that is then converted to electricity with the aid of the polymer. It is first necessary to polarise the polymer film in order to create a dipole across it, with a clear difference between positive and negative charge. The degree of polarisation affects the magnitude of the generated power, while the thickness of the polymer film seems not to have any effect at all.

"We force the polarisation into the material, and it remains polarised for a long time," says Mina Shiran Chaharsoughi.

Mina Shiran Chaharsoughi carried out an experiment in order to demonstrate the effect clearly, holding a twig with leaves in the air flow from a fan. The motion of the leaves created sunshine and shade on the optical generator, which in turn produced small electrical pulses and powered an external circuit.

"The research is at an early stage, but we may in the future be able to use the natural fluctuations between sunshine and shade in trees to harvest energy," says Magnus Jonsson.

Applications that are closer to hand can be found in optics research, such as the detection of light at the nanometre scale. Other applications may be found in optical computing.

Credit: 
Linköping University

Treating koalas for chlamydia alters gut microbes

Koalas are one of Australia's iconic animals, but they have been hard hit by an epidemic of Chlamydia infections that has contributed to a steep decline in numbers. Sick koalas brought to wildlife hospitals may be treated with antibiotics to clear up the chlamydia, but the antibiotics themselves can have severe side effects in the animals.

A new study led by Katherine Dahlhausen, a graduate student at the UC Davis Genome Center, published in the journal PeerJ, shows that those antibiotics may be changing the balance of gut microbes thought to allow koalas to digest eucalyptus leaves.

Koalas rely on specialized gut microbes to break down tannins and other toxic compounds that would otherwise make eucalyptus leaves indigestible. Infant koalas pick up these microbes from their mothers by eating a specialized type of feces called "pap."

Dahlhausen and colleagues studied the diversity of microbes in koalas treated or not treated with antibiotics at the Australia Zoo Wildlife Hospital, Queensland and the Port Macquarie Koala Hospital, New South Wales. They did not find a difference in gut microbes between treated and untreated animals, but did find that koalas that were treated with antibiotics and survived had a more diverse microbe population than animals that died during treatment.
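
Comparisons like this typically compute a per-sample diversity index and test the outcome groups against each other. The sketch below is illustrative only (the study worked from 16S rRNA gene sequence data, not these toy counts), using the Shannon index and a rank-based test:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over taxa observed in one sample."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Hypothetical per-sample taxon counts for koalas that survived vs. died.
survived = [shannon(c) for c in ([40, 30, 20, 10], [25, 25, 25, 25], [35, 30, 25, 10])]
died = [shannon(c) for c in ([90, 5, 3, 2], [80, 15, 5, 0], [70, 20, 8, 2])]

stat, pval = mannwhitneyu(survived, died, alternative="greater")
print(stat, pval)
```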

Health status was closely correlated with presence of bacteria related to Lonepinella koalarum, a microbe known to digest tannins.

There have been other studies showing that antibiotic treatment can disturb gut microbes in other species, Dahlhausen and colleagues noted in the paper. But this might be especially important in animals like koalas where gut microbes are essential to their survival.

The project highlights the possible need to restore a healthy balance of microbes in antibiotic-treated koalas and for development of antibiotic-free treatments for koala chlamydia infections, such as the koala chlamydia vaccine under development by Peter Timms' lab at the University of the Sunshine Coast, Dahlhausen said.

Credit: 
University of California - Davis

Study: More people rely on government catastrophic drug plans

TORONTO, March 26, 2018 -- Government spending for the catastrophic drug program in Ontario rose 700 per cent between 2000 and 2016, a period during which there was a three-fold increase in the use of the plan, a new study has found.

The study, published today in CMAJ Open, said the spending increase also appears to be due to the rise in the use of high-cost medications, including a class of drugs called biologics.

"Our study illuminates the pressure facing Canadians due to rising drug costs," said Dr. Mina Tadrous, a research associate with the Ontario Drug Policy Research Network, based at St. Michael's Hospital, and a fellow with the Institute for Clinical Evaluative Sciences . "Catastrophic drug programs are the last line of defense for helping protect citizens from drug expenses that threaten their family's financial security."

"With a larger number of expensive drugs currently under development, continued pressure on private insurers to control costs, and changing insurance coverage for workers, we anticipate that use of catastrophic drug programs, will continue to grow," he said.

Using databases housed at ICES, Dr. Tadrous looked at changing patterns of use, government spending and characteristics of people making claims to Ontario's catastrophic drug program, the Trillium Drug Program, from Jan. 1, 2000, to Dec. 31, 2016. The program is for people under the age of 65 who spend about three to four per cent of their after-tax household income on prescription drugs.
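
As a rough illustration of how such an income-based threshold works (the program's actual rules, including quarterly billing, are more detailed), a deductible of about four per cent of after-tax household income can be sketched as:

```python
def annual_deductible(after_tax_income: float, rate: float = 0.04) -> float:
    """Illustrative deductible: ~4% of after-tax household income (assumed rate)."""
    return after_tax_income * rate

print(annual_deductible(50_000))  # $2,000 out of pocket before coverage kicks in
```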

The researchers found:

* Use of the program increased three-fold from 3.6 users per 1,000 Ontarians to 10.9 users

* Total government spending rose by 735 per cent, reaching $487 million in 2016

* Between 2000 and 2015, the last year for which demographic data on people making claims was available, more users were under the age of 35 (19.6 rising to 25.3 per cent) and had high deductibles (2.3 per cent to 8 per cent), suggesting they are younger, healthier adults, and have higher incomes than in the past. Dr. Tadrous noted that in today's labour market, many young people work on contracts or at jobs without drug insurance plans.

* There was a significant increase in the proportion of users with one or more drug claims greater than $1,000 (3.4 per cent to 10.4 per cent) and those dispensed a high-cost biologic (1.6 per cent to 5.5 per cent). Biologics are drugs made from complex molecules manufactured using living microorganisms, plants or animal cells.

Dr. Tadrous said it was no longer unusual for newly approved treatments to cost more than $1,000 a month, and "even more apparent" was the number of newly approved drugs costing more than $10,000 a year, which rose from 20 drugs in 2005 to 124 in 2015.

The most frequently reimbursed drugs in the plan were relatively consistent over time and were commonly prescribed medications such as painkillers, antibiotics and cholesterol medications. However, the medications with the highest total spending did change over time, shifting from chronic oral medications to newer biologic medications. The highest drug cost in 2000 was $2.6 million for atorvastatin (brand name Lipitor), used to lower cholesterol; in 2015 it was $55.4 million for infliximab (brand name Remicade), used to treat diseases such as Crohn's disease and rheumatoid arthritis. HIV and hepatitis C treatments have remained among the costliest medications for the program throughout the study period.

"Recent attention on the need for a national pharmacare strategy has largely focused on the coverage of essential medications, which doesn't address the rising cost of expensive drugs and the burden they place on Canadians," Dr. Tadrous said. "With more high-cost drugs coming our way, with even bigger price tags, we need to be able to make sure that we are balancing access and value."

Credit: 
St. Michael's Hospital

Study examines blood lead levels of Flint children before and after water crisis

Flint children's blood lead levels were nearly three times higher almost a decade before the water crisis than during the crisis year itself, new research shows.

Childhood blood lead levels in the city have been on a steady decline since 2006, with the exception of two spikes -- including between 2014 and 2015 when lead contaminated the city's drinking water -- according to the study led by Michigan Medicine and Rutgers New Jersey Medical School.

Researchers analyzed lead concentrations in 15,817 blood samples from Flint children five years and younger over an 11-year period, including before, during and after the city's water source switch. Between 2006 and 2016, the percentage of children with blood lead levels over 5 micrograms per deciliter (the level at which the Centers for Disease Control and Prevention recommends public health actions) dropped from 11.8 to 3.2 percent.

The study, which appears in the Journal of Pediatrics, found a decrease in Flint childhood blood lead levels, from 2.33 micrograms per deciliter in 2006 to 1.15 micrograms per deciliter in 2016 -- a historic low for the city. The mean blood lead level in 2015 during the height of the water crisis was 1.3 micrograms per deciliter, up from 1.19 in 2014 before the water source switch.

"The Flint River water exposure particularly raised concerns about the potential health impact on children," says lead author Hernan Gomez, M.D., a medical toxicologist and pediatrician at Michigan Medicine who is focused on pediatric care at Hurley Medical Center's Emergency Department in Flint. Michigan Medicine has run Hurley's Emergency Medicine since 1996, and Hurley is a major teaching site for U-M's Emergency Medicine Residency.

"It's unacceptable that any child was exposed to drinking water with elevated lead concentrations. There is no known safe blood level of lead, and the ultimate public health goal is for children to have zero amounts of lead in their system.

"We wanted to provide a complete picture of blood lead concentrations of Flint children before, during and after their exposure to contaminated drinking water," Gomez adds. "We found that compared to a decade ago, children's blood lead levels in Flint are historically low."

Gomez and his two Michigan co-authors Dominic Borgialli, D.O., M.P.H., pediatric intensivist, and hospitalist Mahesh Sharman, M.D., FAAP, have spent a combined 62 years treating Flint children at Hurley.

Gomez, who has spent much of his career in industrial cities facing economic hardship, sought out Rutgers faculty members James Oleske, M.D., M.P.H., a pediatrician, and John Bogden, Sc.B., Ph.D., M.S., an environmental health researcher.

Oleske and Bogden have been collaborating on lead research since the early 1970s, when lead poisoning and high blood lead concentrations were much more widespread than they are now.

"In the 1970s, almost all Newark children and most adults had blood lead levels greater than 5 micrograms per deciliter, but as in Flint, now only a small percentage do," Bogden says.

The research team used data from Hurley, which is the major source of pediatric blood lead levels in Flint, with 2006 being the earliest year available for analysis.

Researchers also identified an unexplained increase in blood lead levels over the time period analyzed, up from 1.75 micrograms per deciliter in 2010 to 1.87 micrograms per deciliter in 2011. Authors note that the spike, which predates the Flint River water exposure by four years, is similar in magnitude to the increased blood lead levels noted during the water crisis, and may need to be explored further.

"Our study provides a historical perspective of childhood lead exposure in the community," Gomez says. "We found that the increased blood lead levels of Flint children during the water crisis -- while very concerning -- was not higher than that found in years prior to 2013."

"These findings suggest that, even when taking into account exposure to corrosive Flint water, long term public health efforts to reduce lead exposure in the community have been largely effective."

Authors also point to other communities where children have elevated blood lead levels. During the same period of Flint's water source change, 5.1 percent of Jackson, Mich. children age five and under, 8 percent of Grand Rapids, Mich. children and 7.5 percent of Detroit children had blood lead levels higher than the CDC reference point (compared with 3.7 percent of Flint children). During the same period, an average 3.4 percent of Michigan children and 3.3 percent of U.S. children had lead levels above the CDC reference point.

"Childhood lead exposure is a problem that existed long before the Flint water contamination," Gomez says. "Other communities continue to need resources to help prevent lead exposure for their youth."

Historically, chipping or peeling lead-based paint in old homes has been the biggest culprit for childhood lead exposure. Deteriorating paint in homes, old toys and furniture can create lead-containing dust in windowsills, door frames and in yards that children may ingest. Older homes may also have lead pipes and fixtures that could contaminate water.

A series of laws over the past half-century has helped reduce lead exposure in the U.S., including banning lead-based paint and phasing out lead in consumer products, gasoline and plumbing. Lead abatement measures have also helped reduce exposure to lead in old homes.

Blood lead levels have sharply declined among U.S. children ages 1 to 5, from nearly 90 percent having blood lead levels above 10 micrograms per deciliter in 1976 to 8 percent in 2010, according to the 2013 Morbidity and Mortality Weekly Report.

In the rust-belt community of Flint, children were already at a higher risk for lead exposure from multiple sources before the highly publicized water crisis, Gomez says. In 2014, the city switched its water source from Detroit to the Flint River, which led to tainted drinking water that contained lead and other toxins. Tests found a concerning increase in the number of children with elevated lead levels in their blood after the water switch.

The new study supports previously reported data showing that corrective measures, including switching back the city's water source and instructing residents to use filtered water for drinking and cooking, significantly reduced children's blood lead levels.

Lead exposure of young children continues to be widespread in the U.S., especially in post-industrial, underserved communities such as Flint and Detroit, Gomez says. Lead is a potent neurotoxin and elevated blood lead levels are associated with increased risk of lower intelligence quotient scores, academic failure and aggressive behavior in children. Toxic effects for levels far higher than those reported during the crisis may also include anemia and kidney damage.

For Flint children, risks of the most severe consequences of lead exposure -- which are most concerning when exposure is prolonged over years -- are low compared to children growing up in the city a decade earlier, Gomez notes.

Authors of the study note several limitations. The data likely account for about half of Flint children during the timeframe, and no child young enough to be exposed to lead through water in formula was tested. The study also was not designed to determine what source of lead was responsible for children's blood lead levels.

No external funding was used for the study.

"The Flint story has raised national awareness of the important public health issue of lead exposure of young children. We are seeing health professionals across the nation evaluate potential lead exposure in their own communities," Gomez says.

"Public health officials, legislators and clinicians should continue efforts and allocate resources to further decrease environmental lead exposure to children in all communities at risk."

Credit: 
Michigan Medicine - University of Michigan

Machine learning model provides rapid prediction of C. difficile infection risk

Every year nearly 30,000 Americans die from an aggressive, gut-infecting bacterium called Clostridium difficile (C. difficile), which is resistant to many common antibiotics and can flourish when antibiotic treatment kills off beneficial bacteria that normally keep it at bay. Investigators from Massachusetts General Hospital (MGH), the University of Michigan (U-M) and the Massachusetts Institute of Technology (MIT) have now developed investigational "machine learning" models, specifically tailored to individual institutions, that can predict a patient's risk of developing C. difficile much earlier than it would be diagnosed with current methods. Preliminary data from their study, published today in Infection Control and Hospital Epidemiology (https://doi.org/10.1017/ice.2018.16), were presented last October at the ID Week 2017 conference.

"Despite substantial efforts to prevent C. difficile infection and to institute early treatment upon diagnosis, rates of infection continue to increase," says Erica Shenoy, MD, PhD, of the MGH Division of Infectious Diseases, co-senior author of the study and assistant professor of Medicine at Harvard Medical School. "We need better tools to identify the highest risk patients so that we can target both prevention and treatment interventions to reduce further transmission and improve patient outcomes."

The authors note that most previous models of C. difficile infection risk were designed as "one size fits all" approaches and included only a few risk factors, which limited their usefulness. Co-lead authors Jeeheh Oh, a U-M graduate student in Computer Science and Engineering, and Maggie Makar, MS, of MIT's Computer Science and Artificial Intelligence Laboratory and their colleagues took a "big data" approach that analyzed the whole electronic health record (EHR) to predict a patient's C. difficile risk throughout the course of hospitalization. Their method allows the development of institution-specific models that could accommodate different patient populations, different EHR systems and factors specific to each institution.

"When data are simply pooled into a one-size-fits-all model, institutional differences in patient populations, hospital layouts, testing and treatment protocols, or even in the way staff interact with the EHR can lead to differences in the underlying data distributions and ultimately to poor performance of such a model," says Jenna Wiens, PhD, assistant professor of Computer Science and Engineering at U-M and co-senior author of the study. "To mitigate these issues, we take a hospital-specific approach, training a model tailored to each institution."

Using their machine-learning-based model, the investigators analyzed de-identified data - including individual patient demographics and medical history, details of their admission and daily hospitalization, and the likelihood of exposure to C. difficile - from the EHRs of almost 257,000 patients admitted to either MGH or to Michigan Medicine - U-M's academic medical center - over periods of two years and six years, respectively. The models generated daily risk scores for each individual patient and classified a patient as high risk whenever a set threshold was exceeded.
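
In outline, such a model turns each patient-day's EHR-derived features into a probability and flags the patient once the score crosses a chosen threshold. The sketch below is a minimal illustration with a regularized logistic classifier and synthetic data; the published models, feature sets and threshold values are institution-specific and not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical training data: one row per patient-day of EHR-derived features
# (demographics, medications, ward location, recent lab results, ...).
X_train = rng.normal(size=(5000, 20))
y_train = rng.binomial(1, 0.05, size=5000)  # 1 = later diagnosed with C. difficile

model = LogisticRegression(penalty="l2", max_iter=1000).fit(X_train, y_train)

THRESHOLD = 0.15  # would be tuned on held-out data to balance sensitivity and workload

def daily_risk(patient_days):
    """Score each hospital day; flag the patient once any day exceeds the threshold."""
    scores = model.predict_proba(patient_days)[:, 1]
    return scores, bool((scores > THRESHOLD).any())

scores, high_risk = daily_risk(rng.normal(size=(7, 20)))  # one week of stay
```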

Overall, the models were highly successful at predicting which patients would ultimately be diagnosed with C. difficile. In half of those who were infected, accurate predictions could have been made at least five days before diagnostic samples were collected, which would allow highest-risk patients to be the focus of targeted antimicrobial interventions. If validated in prospective studies, the risk prediction score could guide early screening for C. difficile. For patients diagnosed earlier in the course of disease, initiation of treatment could limit the severity of the illness, and patients with confirmed C. difficile could be isolated and contact precautions instituted to prevent transmission to other patients.

The research team has made the algorithm code freely available for others to review and adapt for their individual institutions. Shenoy notes that facilities that explore applying similar algorithms to their own institutions will need to assemble the appropriate local subject-matter experts and validate the performance of the models in their institutions.

Study co-author Vincent Young, MD, PhD, the William Henry Fitzbutler Professor in the Department of Internal Medicine at U-M, adds, "This represents a potentially significant advance in our ability to identify and ultimately act to prevent infection with C. difficile. The ability to identify patients at greatest risk could allow us to focus expensive and potentially limited prevention methods on those who would gain the greatest potential benefit. I think that this project is a great example of a 'team science' approach to addressing complex biomedical questions to improve healthcare, which I expect to see more of as we enter the era of precision health."

Credit: 
Massachusetts General Hospital

Cancer patients' pain eased by simple bedside chart, study shows

Patients with cancer could benefit from a simple bedside system to manage their pain, a study suggests.

Research with patients shows that the new approach reduces pain levels compared with conventional care.

Pain affects half of all people with cancer and an estimated 80 per cent of those with advanced cancer, taking both a physical and an emotional toll on patients.

Researchers at the University of Edinburgh worked with doctors to develop the Edinburgh Pain Assessment and management Tool (EPAT) - a pen-and-paper chart that medical staff use to regularly record pain levels in a simple traffic light system.

Amber or red pain levels - indicating moderate or severe pain - prompt doctors to review medications and side effects and monitor pain more closely.
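
In software terms, the chart amounts to a simple mapping from a reported pain score to a traffic-light category that triggers a review. A minimal sketch follows; the cutoffs are illustrative assumptions, since EPAT itself is a pen-and-paper tool and its exact thresholds are not given here.

```python
def epat_category(pain_score: int) -> str:
    """Map a 0-10 pain rating to an illustrative traffic-light category.

    Cutoffs are assumptions for illustration, not EPAT's published thresholds.
    """
    if pain_score >= 7:
        return "red"    # severe: prompt review of medications and side effects
    if pain_score >= 4:
        return "amber"  # moderate: review and monitor pain more closely
    return "green"      # mild or no pain: continue routine monitoring

assert epat_category(8) == "red" and epat_category(5) == "amber"
```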

The trial looked at pain levels in almost 2000 cancer patients over five days, following admission to regional cancer centres.

Patients whose care included use of the chart reported less pain during this time than patients receiving standard care, who did not show an improvement.

Importantly, use of the chart was not linked to higher medicine doses. Authors suggest that it works by encouraging doctors to ask the right questions and reflect on pain medications and side effects more frequently, before patients reach a crisis point.

Researchers say the system is a simple way to put pain management at the forefront of routine care, but caution that more studies are needed to understand how it could work longer term.

The study was published in the Journal of Clinical Oncology and was funded by Cancer Research UK.

Professor Marie Fallon, of the Palliative and Supportive Care Group at the University of Edinburgh, said: "These exciting findings show the important benefits of influencing doctors' behaviours, rather than looking for more complex and expensive interventions. These findings are a positive step towards reducing the burden of pain for patients and making them as comfortable as possible at all stages of cancer."

Martin Ledwick, Head Information Nurse at Cancer Research UK, said: "In most cases it should be possible for cancer pain to be controlled if it is assessed and managed effectively. Any work that encourages medical teams to assess and monitor pain more carefully to help this happen has to be a good thing for patients."

Credit: 
University of Edinburgh

What three feet of seawater could mean for the world's turtles

image: The habitat of red-bellied short-necked turtles is expected to be affected by rising sea levels.

Image: 
Todd Stailey/Tennessee Aquarium

Ninety percent of the world's coastal freshwater turtle species are expected to be affected by sea level rise by 2100, according to a study from the University of California, Davis.

The study, published online today as an Early View article in the journal Biological Reviews, is the first comprehensive global assessment of freshwater turtles that frequent brackish, or slightly salty, waters. The study may help guide conservation strategies for turtles.

"About 30 percent of coastal freshwater species have been found or reported in a slightly saltwater environment," said lead author Mickey Agha, a UC Davis graduate student in associate professor Brian Todd's lab in the Department of Wildlife, Fish and Conservation Biology. "But they tend to live within a low-level range of salinity. If sea level rise increases salinity, we don't yet know if they'll be able to adapt or shift their range."

FROM SUISUN MARSH TO THE WORLD

Of the world's 356 turtle species, only about seven are sea turtles, 60 are terrestrial tortoises, and the rest live in freshwater environments, such as lakes, ponds and streams. Of those, about 70 percent live near coastlines, which are expected to experience rising sea levels. Some freshwater turtles lose body mass and can die when exposed to high levels of salty water, while others can tolerate a broader range of salinity.

UC Davis wildlife biologists have been studying western pond turtles in the semi-salty waters of Suisun Marsh in Northern California. While abundant in the marsh, this species is in decline in many other parts of the state. The researchers observed that the turtles could face a triple threat of drought, water diversions and increased sea level rise, all of which can result in saltier habitats for them. It prompted the researchers to wonder not only how the western pond turtle would cope with such changes, but also how freshwater turtles around the world are expected to fare under the projected sea level rise of three feet by 2100.

WHEN SEAWATER MEETS FRESHWATER

In their assessment, the study's authors used a warming scenario projected for 2100 to overlay estimates of sea level rise on georeferenced maps of coastal turtle species worldwide.
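
An overlay analysis of this kind can be sketched in a few lines with geopandas. The sketch below is illustrative only: the file names and the inundation layer are placeholders, and range areas must be computed in an equal-area projection.

```python
import geopandas as gpd

# Placeholder inputs: species range polygons and a polygon layer of coastal land
# projected to be inundated under ~3 feet (0.9 m) of sea level rise by 2100.
ranges = gpd.read_file("turtle_ranges.gpkg").to_crs("EPSG:6933")   # equal-area CRS
flooded = gpd.read_file("slr_0p9m_2100.gpkg").to_crs("EPSG:6933")

# Intersect each species' range with the flooded zone and tally the loss.
lost = gpd.overlay(ranges, flooded, how="intersection")
lost["lost_km2"] = lost.area / 1e6

total_km2 = ranges.set_index("species").area / 1e6
pct_lost = 100 * lost.groupby("species")["lost_km2"].sum() / total_km2
print(pct_lost.sort_values(ascending=False).head())
```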

The results indicated that turtles most at risk from sea level rise live in Oceania -- Southeast Asia, Australia, and New Guinea -- and southeastern North America. In those regions, about 15 species may lose more than 10 percent of their present range.

Of the species most affected, half live on the island of New Guinea, where many species are predicted to see an average of 21 percent of their range flooded by a 3-foot rise in sea level.

Additionally, the study estimates that sea level rise will affect:

* 65 percent of the range of Australia's snake-necked turtle

* About 20 percent of the range of the pig-nosed turtle, native to northern Australia and southern New Guinea

* 30 percent of the range of the diamondback terrapin, which lives in the eastern and southern U.S., as well as the New Guinea giant softshell turtle and Brazilian slider turtle

HARMLESS, INOFFENSIVE, LONG-LIVED

Turtles are arguably one of the more beloved reptiles on the planet.

"They're harmless, they're friendly, they live a long time," said senior author Todd. "They're about as inoffensive as you can get for an animal, and they've existed for so long."

Western pond turtles, for example, can survive more than 50 years in the wild. And turtle species have been on Earth for tens of millions of years.

More research is needed to see how freshwater turtles can adapt and adjust to these changing environments, but what researchers see now raises concerns.

"If we've underestimated the impact of sea level rise along coastlines, we don't yet know whether these turtles can adapt or shift fast enough to move with the changing salinity, or whether that part of its range will be gone forever," Agha said.

Todd added: "This is a species that is slow to evolve. If we rely on natural selection to sustain them, they will likely disappear."

Credit: 
University of California - Davis

Tetrahedrality is key to the uniqueness of water

image: An image of the clathrate structure (Si34) of a water-type liquid formed at negative pressure (left) and the phase diagram as a function of the strength of tetrahedrality λ and pressure P.

Image: 
2018 Hajime Tanaka, Institute of Industrial Science, The University of Tokyo

Tokyo - Water holds a special place among liquids for its unusual properties, and it remains poorly understood. For example, it expands upon freezing to ice, and it becomes less viscous under compression near atmospheric pressure. Rationalizing these oddities is a major challenge for physics and chemistry. Recent research led by The University of Tokyo's Institute of Industrial Science (IIS) suggests they result from the degree of structural ordering in the fluid.

Water belongs to a class of liquids whose particles form local tetrahedral structures. The tetrahedrality of water is a consequence of hydrogen bonds between molecules, which are constrained to fixed directions. In a study in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), the researchers investigated why the physical properties of water -- as expressed by its phase diagram -- are so remarkable, even compared with other tetrahedral liquids, such as silicon and carbon.

Tetrahedral liquids are often simulated with an energy potential known as the Stillinger-Weber (SW) model. The liquid is assumed to contain two states -- a disordered state that has high rotational symmetry, and a tetrahedrally ordered state that does not -- in thermodynamic equilibrium. Despite its simplicity, the model accurately predicts anomalous liquid behaviors. The two-state property is controlled by the parameter lambda (λ), which describes the relative strength of pairwise and three-body intermolecular interactions. The higher λ is, the greater the degree of tetrahedral order.
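
For reference, the Stillinger-Weber form combines a pairwise term with a three-body term that penalizes deviations of bond angles from the tetrahedral angle (cos θ = -1/3); λ sets the weight of the three-body term, so larger λ means stronger tetrahedrality. A schematic of the functional form:

```latex
E = \sum_{i<j} \phi_2(r_{ij})
  + \lambda \sum_{i} \sum_{j<k} \phi_3(r_{ij}, r_{ik}, \theta_{jik}),
\qquad
\phi_3 \propto \left( \cos\theta_{jik} + \tfrac{1}{3} \right)^{2}
```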

"We realized that λ, which is rather large for water, was key to the uniqueness of these liquids," study co-lead author John Russo says. "Effectively, λ controls the degree of tetrahedrality: as λ increases, tetrahedral shells forming around each molecule become energetically more stable. Hence, these shells overcome the unfavorable loss of entropy that accompanies the creation of order." The local tetrahedra resemble solid-state structures, which is why high-λ liquids crystallize more easily.

By continuously adjusting λ, they simulated a set of phase diagrams to model what happens when a "simple" liquid becomes progressively more water-like. With increasing λ, the various thermodynamic and dynamic anomalies of tetrahedral liquids -- such as expansion at low temperature and the breaking of the standard Arrhenius law for diffusion -- became more pronounced.

However, it was not as simple as "more tetrahedra equals weirder behavior." The influence of tetrahedrality was maximized for water, which has λ = 23.15. Above this value, the behavior of density as a function of temperature approached normal again, because the difference in volume between ordered and disordered states began to drop. Thus, water has an exquisitely fine-tuned or "Goldilocks" value of λ that lets it shift easily between order and randomness. This gives it high structural flexibility in response to changing temperature or pressure, which is the origin of its unique behavior.

"Linking observable properties, such as viscosity to microscopic structures, is what physical chemistry is all about," co-lead author Hajime Tanaka says. "Water, the most abundant and yet most unusual substance on earth, has long been the final frontier in this respect. We were delighted that a simple, well-known model can fully explain the strangeness of water, which arises from the delicate balance between order and disorder in the liquid."

Credit: 
Institute of Industrial Science, The University of Tokyo

Breakthrough in battle against rice blast

Scientists have found a way to stop the spread of rice blast, a fungus that destroys up to 30% of the world's rice crop each year.

An international team led by the University of Exeter showed that chemical genetic inhibition of a single protein in the fungus stops it spreading inside a rice leaf - leaving it trapped within a single plant cell.

The finding is a breakthrough in terms of understanding rice blast, a disease that is hugely important in terms of global food security.

However, the scientists caution that this is a "fundamental" discovery - not a cure that can yet be applied outside the laboratory.

The research revealed how the fungus can manipulate and then squeeze through natural channels (called plasmodesmata) that exist between plant cells.

"This is an exciting breakthrough because we have discovered how the fungus is able to move stealthily between rice cells, evading recognition by the plant immune system," said senior author Professor Nick Talbot FRS, of the University of Exeter.

"It is clearly able to suppress immune responses at pit fields (groups of plasmodesmata), and also regulate its own severe constriction to squeeze itself through such a narrow space.

"And all this is achieved by a single regulatory protein. It's a remarkable feat."

Rice blast threatens global food security, destroying enough rice each year to feed 60 million people.

It spreads within rice plants by invasive hyphae (branching filaments) which break through from cell to cell.

In their bid to understand this process, the researchers used chemical genetics to mutate a signalling protein to make it susceptible to a specific drug.

The protein, PMK1, is responsible for suppressing the rice's immunity and allowing the fungus to squeeze through pit fields - so, by inhibiting it, the researchers were able to trap the fungus within a cell.

This level of precision led the team to discover that just one enzyme, called a MAP kinase, was responsible for regulating the invasive growth of rice blast.

The research team hope this discovery will enable them to identify targets of this enzyme and thereby determine the molecular basis of this devastating disease.

The research was led by Dr Wasin Sakulkoo, who recently received his PhD from Exeter.

Dr Sakulkoo is a Halpin Scholar, a programme initiated by the generosity of Exeter alumni Les and Claire Halpin, which funds students from rice-growing regions of the world to study with Professor Talbot's research group.

Dr Sakulkoo is from Thailand, and has returned home to a new position in industry following graduation.

Credit: 
University of Exeter

Captured on film for the first time: Microglia nibbling on brain synapses

image: Multiple synapse heads send out filopodia (green) converging on one microglia (red), as seen by focused ion beam scanning electron microscopy (FIBSEM).

Image: 
L. Weinhard, EMBL Rome

For the first time, EMBL researchers have captured microglia nibbling on brain synapses. Their findings show that these specialized glial cells help synapses grow and rearrange, demonstrating the essential role of microglia in brain development. Nature Communications will publish the results on March 26.

Around one in ten cells in your brain are microglia. Cousins of macrophages, they act as the first and main contact in the central nervous system's active immune defense. They also guide healthy brain development. Researchers have proposed that microglia pluck off and eat synapses - connections between brain cells - as an essential step in the pruning of connections during early circuit refinement. But, until now, no one had seen them do it.

Microglia make synapses stronger

That is why Laetitia Weinhard, from the Gross group at EMBL Rome, set out on a massive imaging study to actually see this process in action in the mouse brain, in collaboration with the Schwab team at EMBL Heidelberg. "Our findings suggest that microglia are nibbling synapses as a way to make them stronger, rather than weaker," says Cornelius Gross, who led the work.

Warm welcome

The team saw that around half of the time that microglia contact a synapse, the synapse head sends out thin projections or 'filopodia' to greet them. In one particularly dramatic case - as seen in the accompanying image - fifteen synapse heads extended filopodia toward a single microglia as it picked on a synapse. "As we were trying to see how microglia eliminate synapses, we realised that microglia actually induce their growth most of the time," Laetitia Weinhard explains.

It turns out that microglia might underlie the formation of double synapses, where the terminal end of a neuron releases neurotransmitters onto two neighboring partners instead of one. This process can support effective connectivity between neurons. Weinhard: "This shows that microglia are broadly involved in structural plasticity and might induce the rearrangement of synapses, a mechanism underlying learning and memory."

Perseverance

Since this was the first attempt to visualise this process in the brain, the current paper represents five years of technological development. The team tried three different state-of-the-art imaging systems before succeeding. Finally, by combining correlative light and electron microscopy (CLEM) and light sheet fluorescence microscopy - a technique developed at EMBL - they were able to make the first movie of microglia eating synapses.

"This is what neuroscientists fantasised about for years, but nobody had ever seen before," says Cornelius Gross. "These findings allow us to propose a mechanism for the role of microglia in the remodeling and evolution of brain circuits during development." In the future, he plans to investigate the role of microglia in brain development during adolescence and the possible link to the onset of schizophrenia and depression.

Credit: 
European Molecular Biology Laboratory

New family of promising, selective silver-based anti-cancer drugs discovered

video: A family of economical silver-based complexes show very promising results against a number of human cancers in laboratory tests, with very low toxicity in rat studies and minimal effects on healthy cells. One of these, UJ3, is as effective as the industry-standard drug Cisplatin in killing cancer cells in laboratory tests done on human esophageal cancer, breast cancer and melanoma.
Dr Zelinda Engelbrecht, from the University of Johannesburg, shows how UJ3 targets mitochondria inside cancer cells.

Image: 
Dr Zelinda Engelbrecht, Ms Therese van Wyk at the University of Johannesburg.

A new family of very promising silver-based anti-cancer drugs has been discovered by researchers in South Africa. The most promising silver thiocyanate phosphine complex among these, called UJ3 for short, has been successfully tested in rats and in human cancer cells in the laboratory.

In research published in BioMetals, UJ3 is shown to be as effective against human esophageal cancer cells as a widely used chemotherapy drug. Esophageal cancer cells are known to become resistant to current forms of chemotherapy.

"The UJ3 complex is as effective as the industry-standard drug Cisplatin in killing cancer cells in laboratory tests done on human breast cancer and melanoma, a very dangerous form of skin cancer, as well," says Professor Marianne Cronjé, Head of the Department of Biochemistry at the University of Johannesburg.

"However, UJ3 requires a 10 times lower dose to kill cancer cells. It also focuses more narrowly on cancer cells, so that far fewer healthy cells are killed," she says.

Fewer side effects

Apart from needing a much lower dose than an industry standard, UJ3 is also much less toxic.

"In rat studies, we see that up to 3 grams of UJ3 can be tolerated per 1 kilogram of bodyweight. This makes UJ3 and other silver phosphine complexes we have tested about as toxic as Vitamin C," says Professor Reinout Meijboom, Head of the Department of Chemistry at the University of Johannesburg.

If UJ3 becomes a chemotherapy drug in future, the lower dose required, lower toxicity and greater focus on cancer cells will mean fewer side effects from cancer treatment.

Powerhouse pathway to neat cancer cell death

UJ3 appears to target the mitochondria, triggering programmed cell death in cancer cells - a process called apoptosis. When a cancer cell dies by apoptosis, the result is a neat and tidy process in which the dead cell's remains are "recycled", not contaminating healthy cells around them and not inducing inflammation.

Certain existing chemotherapy drugs are designed to induce apoptosis, rather than "septic" cell death which is called necrosis, for this reason.

Cancer cells grow much bigger and faster, and make copies of themselves much faster, than healthy cells do. In this way they create cancerous tumors. To do this, they need far more energy than healthy cells do.

UJ3 targets this need for energy, by shutting down the "powerhouses" of a cancer cell, the mitochondria. The complex then causes the release of the "executioner" protein, an enzyme called caspase-3, which goes to work to dismantle the cell's command centre and structural supports, cutting it up for recycling in the last stages of apoptosis.

Unusual compounds

The UJ3 complex and the others in the family are based on silver. This makes the starting materials for synthesizing the complex far more economical than those of a number of industry-standard chemotherapy drugs based on platinum.

"These complexes can be synthesized with standard laboratory equipment, which shows good potential for large scale manufacture. The family of silver thiocyanate phosphine compounds is very large. We were very fortunate to test UJ3, with is unusually 'flat' chemical structure, early on in our exploration of this chemical family for cancer treatment," says Prof Meijboom.

Research on UJ3 and other silver thiocyanate phosphine complexes at the University is ongoing.

Credit: 
University of Johannesburg

Treatment rates for dangerously high cholesterol remain low

DALLAS, March 26, 2018 -- Less than 40 percent of people with severe elevations in cholesterol are being prescribed appropriate drug treatment, according to a nationally representative study reported in the American Heart Association's journal Circulation.

Data from the 1999-2014 National Health and Nutrition Examination Survey were used to estimate prevalence rates of self-reported screening, awareness and statin therapy among U.S. adults age 20 and older with severely elevated LDL or "bad cholesterol" levels of 190 mg/dL or higher. In addition, the researchers considered a subgroup of patients with familial hypercholesterolemia, a genetic disorder that causes extreme elevations in cholesterol, leading to an increased risk of early cardiovascular disease.

Cholesterol screening and awareness rates were high (more than 80 percent) among adults with definite/probable familial hypercholesterolemia and severely elevated cholesterol; however, use of cholesterol-lowering statins was low (38 percent). Of those receiving statins, only 30 percent had been prescribed a high-intensity statin.

The discrepancy between cholesterol screening and medical treatment was most pronounced in younger patients, uninsured patients and patients without a regular source of healthcare - such as a doctor's office or an outpatient clinic.

"Young adults may be less likely to think that they are at risk of cardiovascular disease, and clinicians may be less likely to initiate statin therapy in this population," wrote lead author Emily Bucholz, M.D., Ph.D., MPH, Department of Medicine at Boston Children's Hospital in Massachusetts. "It is possible that lifestyle modifications continue to be prescribed as an initial treatment prior to initiating statin therapy."

However, both the original 2002 National Cholesterol Education Program's Adult Treatment Panel III (ATP-III guidelines) and the current American College of Cardiology and American Heart Association cholesterol guidelines recommend initiation of statin therapy in patients with LDL cholesterol at or above 190 mg/dL.

"Markedly elevated levels of 'bad' cholesterol put you at increased risk of developing heart disease and developing it earlier in life," said Circulation Editor-in-Chief, Joseph A. Hill. "If your 'bad' cholesterol is over 190 you should work with your physician regarding optimal drug treatment, in addition to lifestyle changes and management of other risk factors."

Study authors said additional studies are needed to better understand how to close these gaps in screening and treatment.

Credit: 
American Heart Association

Top sports leagues heavily promote unhealthy food and beverages, new study finds

video: Watch a video of Marie Bragg, PhD, assistant professor of Population Health at NYU School of Medicine, and others as they explain the findings of a study linking the top sports leagues with promotion of unhealthy food and beverage products.

Image: 
NYU Langone Health

The majority of food and beverages marketed through multi-million-dollar television and online sports sponsorships are unhealthy -- and may be contributing to the escalating obesity epidemic among children and adolescents in the U.S., warn social scientists from NYU School of Medicine and other national academic health institutions.

The descriptive study was published online March 26 in the peer-reviewed journal Pediatrics.

Researchers analyzed Nielsen statistics on televised sports programs watched by children 2-17 years of age. The study found that, among the 10 most watched sports organizations (e.g. the NFL), most of the food products advertised were rated "unhealthy" under the guidelines of the Nutrient Profile Model (NPM), a nutrient profiling system used in the United Kingdom and Australia. (The U.S. does not have a comparable measurement system.)

Specifically, the NPM system assigns a score to all foods, and scores can be converted to a 0-100 point scale called the Nutrient Profile Index (NPI). An NPI of 64 or higher indicates that a food product is "nutritious." When NPI scoring was applied to the foods most widely promoted through sports sponsorships, the researchers found a deeply troubling result: more than three-quarters failed to meet minimal standards for nutrition, with average NPI scores of around 38-39 for promoted foods such as potato chips and sugary cereals.
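
The scoring step reduces to a threshold test once NPM scores are rescaled to the 0-100 NPI. A minimal sketch follows; the linear conversion shown (NPI = 70 - 2 × NPM) is the rescaling reported in this group's earlier work and should be treated as an assumption here.

```python
def npm_to_npi(npm_score: float) -> float:
    """Rescale an NPM score to the 0-100 NPI (linear form assumed from prior work)."""
    return 70 - 2 * npm_score

def is_nutritious(npi_score: float, threshold: float = 64.0) -> bool:
    return npi_score >= threshold

npi_scores = [38, 39, 70, 55, 22]  # illustrative scores for promoted products
pct_failing = 100 * sum(not is_nutritious(s) for s in npi_scores) / len(npi_scores)
print(f"{pct_failing:.0f}% fail the nutrition threshold")  # 80% in this toy example
```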

The researchers examined sports sponsorship agreements covering 2006-2016 between food and beverage manufacturers and the 10 sports organizations with the most youth viewers: the National Football League (NFL), Major League Baseball (MLB), the National Hockey League (NHL), the National Basketball Association (NBA), the Fédération Internationale de Football Association (FIFA), the National Collegiate Athletic Association (NCAA), and even Little League Baseball and the Ultimate Fighting Championship (UFC).

The NFL led all organizations with 10 food and beverage sponsors, followed by the NHL with seven. Surprisingly, Little League Baseball landed in third, with six sponsorships -- which particularly concerned the researchers given its child-targeted nature. Five organizations had four sponsors each: MLB, the NBA, the NCAA, the Professional Golfers Association (PGA) and the National Association for Stock Car Auto Racing (NASCAR).

According to the study results, the NFL also led -- by a substantial margin -- the number of television impressions from its ads among viewers aged 2-17 (more than 224 million) and total YouTube views (more than 93 million).

"The U.S. is in the throes of a child and adolescent obesity epidemic, and these findings suggest that sports organizations and many of their sponsors are contributing, directly and indirectly, to it," says Marie Bragg, PhD, assistant professor of Population Health at NYU School of Medicine and the study's lead investigator. "Sports organizations need to develop more health-conscious marketing strategies that are aligned with recommendations from national medical associations."

This latest descriptive study follows an earlier study, published in 2016 and also led by Dr. Bragg and her research team, which found similar results from an analysis of food and beverage sponsorships with top music celebrities.

How the Study Was Conducted

The researchers used Nielsen television ratings for sports programs aired during 2015 to identify the 10 sports organizations whose events were most frequently watched by youth. For each of the top 10, a list was compiled of all sponsors, and sorted into 11 categories, such as "retail", "automotive" and "food/beverage."

The "food/beverage" ads were then identified through YouTube and an ad database called AdScope. Researchers used specific search criteria, including name of the product and/or whether a product logo was utilized in the ad.

Nielsen audience viewership data indicated that more than 412 million youth ages 2-17 viewed sports programs associated with the 10 sports organizations in 2015, and 234 sponsors were associated with the 500 most-watched programs. Food/nonalcoholic beverage was the second-most common advertising category (almost 19%), behind only automotive-related ads (almost 20%). Of the 173 instances where food and non-alcoholic beverages were shown, more than 76% promoted products with NPM-derived NPI scores below 64.

Bragg points out the study's limitations, such as exclusion of in-stadium advertising and sponsorship appearances within games, and an inability to distinguish between unique views versus repeated views of YouTube ads. Still, Bragg asserts, the message is clear.

"Unhealthy food and beverage promotion through organized sports is pervasive," she says. "These organizations must put forth a better effort to protect their youngest and most impressionable fans."

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Genetic factors for most common disease in the first year of life are identified

A scientific study conducted at the State University of Campinas (UNICAMP) in São Paulo State, Brazil, has identified genetic factors associated with the severity of acute viral bronchiolitis. The study was supported by the Sao Paulo Research Foundation - FAPESP. Its results were published in the journal Gene.

José Dirceu Ribeiro, principal investigator for the study and a professor at UNICAMP's Medical School (FCM-UNICAMP), notes that bronchiolitis is the most common disease of the first year of life and the leading cause of hospitalization during this period worldwide.

Bronchiolitis, which is basically an infection of the respiratory tract that causes acute inflammatory damage to the bronchioles, is mostly a disease with minor consequences. However, 1%-3% of patients require hospitalization, with some of them needing supplemental oxygen. A smaller proportion requires ICU treatment, including mechanical ventilation.

"Detecting genetic associations in cases of acute viral bronchiolitis is the first step toward the development of tests to predict the possible clinical outcome for each patient diagnosed with the disease soon after arrival at the emergency room," said Fernando Augusto de Lima Marson, a researcher at FCM-UNICAMP and one of the authors of the article.

The new study set out to find correlations between genetic factors and the severest forms of acute viral bronchiolitis in patients who did not present any of the known risk factors, such as prematurity, a history of lung disease, and passive smoking. "A very significant proportion of patients present with no risk factors, and in these cases the question arises of how to explain progression of the disease to its most severe form," Ribeiro said.

To investigate the existence of possible genetic factors that may influence the severity of the disease, the researchers studied 181 children admitted over a period of two years to three hospitals in the Campinas area. All were diagnosed with acute viral bronchiolitis and given oxygen therapy. Screening was conducted at UNICAMP's teaching hospital (Hospital das Clínicas), the Sumaré State Hospital, and Vera Cruz Hospital.

The researchers first took samples of the patients' nasal secretions to determine the type of virus that had caused bronchiolitis in each case. As expected, in most cases, it was respiratory syncytial virus (RSV). More specifically, infection by RSV accounted for 69.9% of the cases, while rhinovirus accounted for 26.5%.

The researchers also evaluated all 181 children to find out if they fell within one or more risk groups for acute viral bronchiolitis. The result of this analysis was revealing in that 131, or 72%, were not part of any risk group.

Molecular biology and statistical techniques were used to study and compare the patients' DNA. Specific genetic markers were sought, especially single-nucleotide polymorphisms (SNPs), a type of DNA sequence variation that accounts for over 90% of genetic variation in the human genome.

In the statistical treatment of the data, patient outcomes were compared, and polymorphism frequency was compared between patients and a control group comprising 536 healthy individuals aged 19-25, randomly invited and with no personal or family history of lung disease.
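
At their core, these frequency comparisons are contingency-table tests of genotype or allele counts in patients versus controls. A minimal illustrative sketch with hypothetical counts (not the study's data):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table for one SNP: minor-allele carriers vs. non-carriers,
# in severe-bronchiolitis patients vs. the 536-person healthy control group.
table = [[60, 121],   # patients: carriers, non-carriers
         [110, 426]]  # controls: carriers, non-carriers

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3g}")
```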

Polymorphism frequencies were also analyzed for each type of virus, including RSV subtypes A and B, as well as rhinovirus, and possible cases of virus co-detection were identified.

"Our study focused on the genetic factors that might be associated with the severity of acute viral bronchiolitis," Marson said. "It provides evidence of a link between the patient's genetic predisposition and the severity of the disease. As far as we're aware, it's the first study worldwide to show this in such detail, including a large number of genetic variants."

Some genes are indeed associated with the presence of specific viruses that can cause the disease. The researchers at UNICAMP found a link between the SNP rs2107538*CCL5 and bronchiolitis caused by RSV and by RSV subtype A, and a link between the SNP rs1060826*NOS2 and bronchiolitis caused by rhinovirus.

"The SNPs rs4986790*TLR4, rs1898830*TLR2 and rs2228570*VDR were associated with very severe cases of the disease, which progressed to a fatal outcome. The SNP rs7656411*TLR2 was associated with the need for oxygen supplementation, while rs352162*TLR9, rs187084*TLR9 and rs2280788*CCL5 were associated with cases in which ICU admission was required. Finally, rs1927911*TLR4, rs352162*TLR9 and rs2107538*CCL5 were associated with the need for mechanical ventilation," Marson said.

The authors of the study stress the importance of replication using other datasets. Nevertheless, they consider the results highly promising.

"Medicine is advancing toward the development of therapies tailored to the needs of each patient," Marson said. "In this context, the identification of SNPs associated with the disease in question could provide a target for genetic therapy, so that treatments and management strategies can be developed for precision medicine and preventive medicine, respectively."

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo