Doctors more likely to prescribe preventive therapy if prompted by EMR

Purely educating doctors about the importance of prescribing certain therapies may not be enough to make a meaningful impact, according to a new Penn Medicine study. Using acid suppression therapy--an effective method of reducing the risk of gastrointestinal bleeding in vulnerable cardiac patients--Penn researchers tested interventions that combined education with an electronic "dashboard" linked to patients' electronic medical records (EMRs), which gave doctors up-to-date information on which patients would likely benefit from the therapy. Researchers found that education on acid suppression therapy alone did not have a noticeable effect on prescribing rates, but adding the dashboard resulted in an 18 percent increase in needed medication orders. The study was published this month in the Joint Commission Journal on Quality and Patient Safety.

"This study shows that education alone is typically not a sufficient method for changing the behavior of providers and care teams," said the study's senior author, Shivan Mehta, MD, MBA, associate chief innovation officer and an assistant professor of Medicine. "We demonstrated that although clinical leaders should collaborate to identify best practices, care redesign, technology, and behavior change strategies are also needed."

Acid suppression therapy involves prescribing medications that reduce the level of acid in the stomach, which helps relieve heartburn symptoms and treat ulcers. For some patients, it can also reduce the risk of developing ulcers in the first place, such as cardiac patients taking medications that increase their chance of bleeding.

"The main reason the patients are at risk is because they're placed on medications--or combinations of medications--such as anti-platelet agents or anticoagulation," said the study's lead author, Carolyn Newberry, MD, a Penn Medicine Gastroenterology fellow at the time the research was conducted who is currently an assistant professor of Medicine in the division of Gastroenterology and Hepatology at Weill Cornell Medicine in New York. "These medications are important for treating or preventing cardiovascular disease but they also have side effects such as increased bleeding in the G.I. tract."

Before the study's EMR-linked dashboard was developed and implemented with help from Penn Medicine's Center for Health Care Innovation, the prescription rate for cardiac patients who could benefit from acid suppression therapy was just shy of 73 percent, according to the health system's data on inpatients in the Cardiac Intensive Care Unit (CICU) from September 2016 to January 2017. Afterward, from January to September 2017, while the dashboard was in use, the rate for patients in the CICU jumped to 86 percent--a 13 percentage-point rise, or a relative increase of roughly 18 percent.

Great gains were made using this type of technology-assisted nudge, which the study team notes could improve desired outcomes in other clinical areas. Software developers at the Center for Health Care Innovation are working on similar dashboards or alerts in many other clinical areas where there is an opportunity to increase adoption of evidence-based practices. However, the researchers emphasized that this "nudge" approach is not one size fits all.

"No one dashboard or technology will work in every area, so it is important to partner with clinicians and identify workflows and processes where it can complement care," Newberry said. "Our experience highlights this individualized nature and the importance of continued collaboration, along with process redesigns, to achieve sustainable success."

Credit: 
University of Pennsylvania School of Medicine

Fluorescence discovered in tiny Brazilian frogs

video: A pumpkin toadlet (Brachycephalus ephippium) crawls.

Image: 
NYU Abu Dhabi Postdoctoral Associate Sandra Goutte

"The fluorescent patterns are only visible to the human eye under a UV lamp. In nature, if they were visible to other animals, they could be used as intra-specific communication signals or as reinforcement of their aposematic coloration, warning potential predators of their toxicity," says Sandra Goutte

Pumpkin toadlets (Brachycephalus ephippium) are tiny, brightly colored, poisonous frogs found in the Brazilian Atlantic forest. During the mating season, they can be seen by day walking around the forest and producing soft buzzing calls in search of a mate.

An international team of researchers led by NYU Abu Dhabi Postdoctoral Associate Sandra Goutte was studying the acoustic communication of these miniature frogs. When they discovered that Brachycephalus ephippium could not hear its own mating calls, they searched for alternative visual signals the frogs could use to communicate instead. Unexpectedly, when they shone an ultraviolet (UV) lamp on the frogs, their backs and heads glowed intensely.

In a new paper published in the journal Scientific Reports, the researchers report that the fluorescent patterns are created by bony plates lying directly beneath a very thin skin. In fact, the toadlet's entire skeleton is highly fluorescent, but the fluorescence is only externally visible where the layer of skin tissue over the bones is very thin (about seven micrometers thick). The lack of dark skin pigment cells (which block the passage of light) and the thinness of the skin allow the ultraviolet light to pass through and excite the fluorescence of the bony plates of the skull. The fluorescent light is then reflected back from the frog's bone and appears as bluish-white markings to an observer with a UV lamp.

"The fluorescent patterns are only visible to the human eye under a UV lamp. In nature, if they were visible to other animals, they could be used as intra-specific communication signals or as reinforcement of their aposematic coloration, warning potential predators of their toxicity," said Goutte. "However, more research on the behavior of these frogs and their predators is needed to pinpoint the potential function of this unique luminescence."

The researchers compared the skeletons of two species of pumpkin toadlets to those of closely related, non-fluorescent species. The pumpkin toadlets' bones proved to be much more fluorescent. Pumpkin toadlets are diurnal, and in their natural habitat, the UV or near-UV components of daylight might be able to excite fluorescence at a level detectable by certain species.

Credit: 
New York University

Air quality to remain a problem in India despite pollution control policies

According to an independent study released today by the International Institute for Applied Systems Analysis (IIASA) and the Council on Energy, Environment, and Water (CEEW), more than 674 million Indian citizens are likely to breathe air with high concentrations of PM2.5 in 2030, even if India were to comply with its existing pollution control policies and regulations.

The study shows that only about 833 million citizens would be living in areas that meet India's National Ambient Air Quality Standards (NAAQS) in 2030 and that implementation failure could increase these numbers significantly. However, aligning sustainable development policies to the implementation of advanced emission control technologies could provide NAAQS-compliant air quality to about 85% of the Indian population. The study was released at a CEEW dialogue, On Air: Pathways to Achieving India's Ambient Air Quality Standards, held in New Delhi today (Friday, 29 March).

In 2015, more than half the Indian population - about 670 million citizens - were exposed to PM2.5 concentrations that did not comply with India's NAAQS for PM2.5 (40 μg/m³). Further, less than 1% enjoyed air quality that met the World Health Organisation (WHO) benchmark limit of 10 μg/m³.

"A significant share of emissions still originates from sources associated with poverty and underdevelopment such as solid fuel use in households and waste management practices," explains Markus Amann, Air Quality and Greenhouse Gases Program director at IIASA.

In January 2019, the Indian government launched the National Clean Air Program (NCAP), a five-year action plan to curb air pollution, build a pan-India air quality monitoring network, and improve citizen awareness. The program focuses on 102 polluted Indian cities and aims to reduce PM2.5 levels by 20-30% over the next five years. The analysis conducted by researchers from IIASA and CEEW, however, suggests that the NCAP needs to be backed by a legal mandate to ensure successful ground-level implementation of emission control measures. In the long term, the NCAP also needs to be scaled up significantly to ensure that rapid economic growth and meeting the NAAQS are aligned.

Pallav Purohit, an IIASA researcher and lead author of the study, said, "While current ambient PM2.5 monitoring in India reveals high levels in urban areas, remote sensing, comprehensive air quality modeling, and emission inventories suggest large-scale exceedances of the NAAQS in rural areas as well. Pollution from rural areas is transported into the cities (and vice versa), where it constitutes a significant share of pollution, making the coordination of urban-rural and inter-state responses critical."

Hem Dholakia, a senior research associate at CEEW, and one of the authors of the study added, "The health burden of air pollution is significant in India. Limited control of air pollution will aggravate this burden in the future. The IIASA-CEEW study clearly shows that the policy choices of today will impact future air quality and its aftermaths. The central and state governments must do more to align air quality, climate change, and sustainable development goals in a resource efficient manner."

The study also found that the Indo-Gangetic plain, covering parts of states such as Punjab, Haryana, Uttar Pradesh, Bihar, and West Bengal, has the highest population exposure to significant PM2.5 concentrations. This is mainly due to the high density of polluting sources and reduced ventilation caused by the obstructing presence of the Himalayas. Citizens living in parts of Bihar, West Bengal, Chhattisgarh, and Odisha are also exposed to high levels of PM2.5. The governments in these regions must design state-specific policies to comply with the NAAQS and embrace a low-carbon growth model to ensure better air quality for their citizens.

Further, the study highlighted a stark variance in factors contributing to air pollution across the states. Solid fuel, including biomass combustion for residential cooking, is the largest contributor in the major states of the Indo-Gangetic Plain. However, in Delhi and Goa, it contributes only a small amount due to enhanced access to clean fuels in these states. Instead, NOx emissions from transportation are major contributors to air pollution in these two states. Similarly, SO2 emissions from power plants are dominant contributors to air pollution in Haryana and Maharashtra. In coming years, every state government must commission detailed scientific studies to better understand the sources contributing to air pollution in their cities.

Another challenge for many states is that emission sources that are outside their immediate jurisdiction contribute significantly to ambient pollution levels of PM2.5. For example, transboundary transport or crop burning are sources of secondary pollution in some states. Such states could achieve significant improvements in air quality only with a region-wide coordinated approach to reduce air pollution and strict on-ground enforcement to ensure compliance with emissions control measures.

The IIASA-CEEW study also recommends focusing on energy efficiency, enhanced public transport, increased use of cleaner fuels, improved agricultural production practices, and replacement of coal with natural gas and renewables in the power and industrial sector to achieve better air quality and meet multiple Sustainable Development Goals (SDGs).

This is a joint press release from the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria and the Council on Energy, Environment, and Water (CEEW) in Delhi, India.

Credit: 
International Institute for Applied Systems Analysis

Binding affinities of perfluoroalkyl substances to Baikal seal PPARα

image: The Baikal seal (Pusa sibirica) is a top predator in Lake Baikal, Russia, and is contaminated with a variety of environmental pollutants.

Image: 
Center for Marine Environmental Studies (CMES), Ehime University

A team of researchers at Ehime University revealed the binding affinities of perfluoroalkyl substances (PFASs) to Baikal seal peroxisome proliferator-activated receptor α (PPARα) using in vitro and in silico approaches. The finding was published on January 16 in the highly reputed environmental science journal, Environmental Science and Technology.

PFASs, such as perfluoroalkyl carboxylates (PFCAs) and perfluoroalkyl sulfonates (PFSAs), are man-made organic chemicals that have been detected globally in the environment, humans, and wildlife. Owing to its environmental persistence, bioaccumulation potency, and toxic properties, one PFAS, perfluorooctane sulfonic acid (PFOS), has been internationally regulated under the Stockholm Convention on Persistent Organic Pollutants (POPs). On the other hand, no regulations of other PFSAs have been implemented worldwide.

The Baikal seal (Pusa sibirica), a freshwater mammalian species, is a top predator found in Lake Baikal, Russia. It is exposed to various POPs such as dioxins, polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and organochlorine pesticides. In addition, our research group has previously determined the accumulation levels of various PFASs in the tissues of wild Baikal seals, which were particularly high for PFOS, perfluorononanoic acid (PFNA), and perfluorodecanoic acid (PFDA). However, the toxic effects and risks of PFASs in animals, particularly non-model wildlife, are not fully understood.

In this paper, we evaluated the binding affinities of PFASs with various carbon chain lengths (C4-C11) to in vitro-synthesized Baikal seal PPARα. Similar experiments were performed for human PPARα, and the results were compared with those for Baikal seal PPARα to investigate interspecies differences in the role of PPARα in the toxicity of PFASs. PPARα is a member of the ligand-activated nuclear receptor superfamily. This receptor protein participates in the regulation of lipid metabolism in the liver and has been implicated in the development of liver tumors. Previous studies have investigated the potencies of PFASs to activate mouse, rat, and human PPARα in in vitro reporter gene assays, suggesting disruption of the PPARα signaling pathway by PFASs. However, it has not been investigated whether PFASs can interact with the PPARα of seals that are actually contaminated with PFASs.

An in vitro competitive binding assay showed that six PFCAs and two PFSAs bound to in vitro-synthesized Baikal seal PPARα in a dose-dependent manner. PFOS, PFDA, PFNA, and perfluoroundecanoic acid (PFUnDA) showed higher binding affinities to Baikal seal PPARα than the other PFASs. Moreover, in silico PPARα homology modeling predicted two ligand-binding pockets (LBPs) in the ligand-binding domains of both Baikal seal PPARα and human PPARα. Structure-activity relationship analyses suggested that the binding potencies of PFASs to PPARα might depend on the LBP binding cavity volume, hydrogen bond interactions, the number of perfluorinated carbons, and the hydrophobicity of the PFASs.

Interspecies comparison of the in vitro binding affinities revealed that Baikal seal PPARα had a higher preference for PFASs with long carbon chains than human PPARα. The in silico docking simulations suggested that the first LBP of Baikal seal PPARα had higher affinities than that of human PPARα, whereas the second LBP of Baikal seal PPARα had lower affinities than that of human PPARα. The interaction energies of PFASs with Baikal seal PPARα (first and second LBPs) determined using in silico docking simulations showed a significant negative correlation with the binding affinities determined using the in vitro PPARα binding assays. These results suggest that in silico docking simulation may be a useful tool for screening potential ligands of the seal PPARα.

To our knowledge, this is the first evidence of interspecies differences in the binding of PFASs to PPARαs and their structure-activity relationships. These findings urge us to incorporate these in vitro and in silico approaches into assessing the risk of PFASs in seal species.

Credit: 
Ehime University

Galápagos islands have nearly 10 times more alien marine species than once thought

image: The bryozoan Amathia verticillata. Known in other parts of the world for fouling pipes and fishing gear and killing seagrasses, its discovery in the Galápagos is especially concerning for scientists.

Image: 
Dan Minchin/Marine Organism Investigations

Over 50 non-native species have found their way to the Galápagos Islands, almost 10 times more than scientists previously thought, reports a new study in Aquatic Invasions published Thursday, March 28.

The study, a joint effort of the Smithsonian Environmental Research Center, Williams College, and the Charles Darwin Foundation, documents 53 species of introduced marine animals in this UNESCO World Heritage Site, one of the largest marine protected areas on Earth. Before this study came out, scientists knew about only five.

"This increase in alien species is a stunning discovery, especially since only a small fraction of the Galápagos Islands was examined in this initial study," said Greg Ruiz, a co-author and marine biologist with the Smithsonian Environmental Research Center.

"This is the greatest reported increase in the recognition of alien species for any tropical marine region in the world," said lead author James Carlton, an emeritus professor of the Maritime Studies Program of Williams College-Mystic Seaport.

The Galápagos lie in the equatorial Pacific, roughly 600 miles west of Ecuador. Made famous by Charles Darwin's visit in 1835, the islands have long been recognized for their remarkable biodiversity. But with their fame, traffic has spiked. In 1938, just over 700 people lived on the Galápagos. Today, more than 25,000 people live on the islands, and nearly a quarter-million tourists visit each year.

Carlton and Ruiz began their study in 2015, with Inti Keith of the Charles Darwin Foundation. They conducted field surveys on two of the larger Galápagos Islands: Santa Cruz and Baltra, where they hung settlement plates from docks one meter underwater to see what species would grow on them. They also collected samples from mangrove roots, floating docks and other debris and scoured the literature for previous records of marine species on the islands.

The team documented 48 additional non-native species in the Galápagos. Most of them (30) were new discoveries that could have survived on the islands for decades under the radar. Another 17 were species scientists already knew lived on the Galápagos but previously thought were native. One final species, the bryozoan Watersipora subtorquata, was collected in 1987 but not identified until now.

Sea squirts, marine worms and moss animals (bryozoans) made up the majority of the non-native species. Almost all of the non-natives likely arrived inadvertently in ships from tropical seas around the world. Some of the most concerning discoveries include the bryozoan Amathia verticillata--known for fouling pipes and fishing gear and killing seagrasses--and the date mussel Leiosolenus aristatus, which researchers have already seen boring into Galápagos corals.

"This discovery resets how we think about what's natural in the ocean around the Galápagos, and what the impacts may be on these high-value conservation areas," Carlton said.

To reduce future invasions, the Galápagos already have one of the most stringent biosecurity programs in the world. International vessels entering the Galápagos Marine Reserve may anchor in only one of the main ports, where divers inspect the vessel. If the divers find any non-native species, the vessel is requested to leave and have its hull cleaned before returning for a second inspection.

Still, the authors say, the risks remain high. The expansion of the Panama Canal in 2015 may bring the Indo-Pacific lionfish--a major predator in the Caribbean--to the Pacific coast of Central America. Once there, it could make its way to the Galápagos, where the likelihood of its success would be very high. Another possible arrival is the Indo-Pacific snowflake coral, which has already caused widespread death of native corals on the South American mainland.

Credit: 
Smithsonian

Poor oral health may increase the risk of pancreatic cancer among African American women

(Boston)-- African American women with poor oral health may be more likely to get pancreatic cancer (PC).

In the U.S., studies show that African Americans are more likely to get pancreatic cancer than Caucasians. Poor oral health, specifically adult tooth loss and periodontal disease prevalence, follows a similar pattern. Using data from the Black Women's Health Study, researchers from the Slone Epidemiology Center at Boston University found that, compared to African American women who showed no signs of poor oral health, those who reported adult tooth loss had a substantially increased risk of PC. This association became even stronger for those who had lost at least five teeth.

According to the researchers, these observations may be related to oral bacteria and the inflammation caused by certain bacteria. In previous studies among different populations, the presence of circulating antibodies to selected oral periodontal pathogens was associated with an increased risk of PC.

"Oral health is a modifiable factor. Apart from avoiding cigarette smoking, there is little an individual can do to reduce risk of PC. Improving access to low cost, high quality dental care for all Americans may decrease racial disparities in this cancer," said Julie Palmer, ScD, associate director of BU's Slone Epidemiology Center and a professor of epidemiology at BUSPH.

Credit: 
Boston University School of Medicine

Virtual reality could be used to treat autism

Playing games in virtual reality (VR) could be a key tool in treating people with neurological disorders such as autism, schizophrenia and Parkinson's disease.

The technology, according to a recent study from the University of Waterloo, could help individuals with these neurological conditions shift their perception of time, which their conditions cause them to experience differently.

"The ability to estimate the passage of time with precision is fundamental to our ability to interact with the world," says co-author Séamas Weech, post-doctoral fellow in Kinesiology. "For some individuals, however, the internal clock is maladjusted, causing timing deficiencies that affect perception and action.

"Studies like ours help us to understand how these deficiencies might be acquired, and how to recalibrate time perception in the brain."

The UWaterloo study involved 18 females and 13 males with normal vision and no sensory, musculoskeletal or neurological disorders. The researchers used a virtual reality game, Robo Recall, to create a natural setting in which to encourage re-calibration of time perception. The key manipulation of the study was that the researchers coupled the speed and duration of visual events to the participants' body movements.

The researchers measured participants' time perception abilities before and after they were exposed to the dynamic VR task. Some participants also completed non-VR time-perception tasks, such as throwing a ball, to use as a control comparison.

The researchers measured the actual and perceived durations of a moving probe in the time perception tasks. They discovered that the virtual reality manipulation was associated with significant reductions in the participants' estimates of time, by around 15 percent.

"This study adds valuable proof that the perception of time is flexible, and that VR offers a potentially valuable tool for recalibrating time in the brain," says Weech. "It offers a compelling application for rehabilitation initiatives that focus on how time perception breaks down in certain populations."

Weech adds, however, that while the effects were strong during the current study, more research is needed to find out how long the effects last, and whether these signals are observable in the brain. "For developing clinical applications, we need to know whether these effects are stable for minutes, days, or weeks afterward. A longitudinal study would provide the answer to this question."

"Virtual reality technology has matured dramatically," says Michael Barnett-Cowan, neuroscience professor in the Department of Kinesiology and senior author of the paper. "VR now convincingly changes our experience of space and time, enabling basic research in perception to inform our understanding of how the brains of normal, injured, aged and diseased populations work and how they can be treated to perform optimally."

Credit: 
University of Waterloo

Screening for colorectal cancer at 45 would avert deaths, but testing older adults would do more

Starting routine colorectal cancer screening at age 45 rather than 50 would decrease U.S. cancer deaths by as much as 11,100 over five years, according to a new study led by researchers at the Stanford University School of Medicine.

The move would also decrease the number of cancer cases nationwide by up to 29,400 over that time period. However, screening a greater number of older and high-risk adults would avert nearly three times as many diagnoses and deaths at a lower cost, the study found.

The study models potential effects of a 2018 change to the American Cancer Society's screening guidelines. Following increases in the incidence of colon and rectal cancer among people in their 40s, the society lowered the recommended age for a person at average risk of colorectal cancer to begin screening from 50 to 45. Other groups, including the U.S. Preventive Services Task Force, are studying whether their screening recommendations should also change.

The shift has concerned some physicians who worry that screening resources may be drawn away from higher-risk populations. Overall, colorectal cancer incidence remains two to 13 times higher among people over the age of 50 than in younger people.

"This is one of the most important changes to guidelines that has occurred in the colorectal cancer screening world recently, and it was very controversial," said Uri Ladabaum, MD, professor of medicine at Stanford. "Our aim was to do a traditional cost-effectiveness analysis, but then also look at the potential tradeoffs and national impact. We wanted to crystalize the qualitative issues into tangible numbers, so people could then have a productive debate about these very issues."

The study found that over the next five years, initiating testing at age 45 could reduce the number of cancer cases by as many as 29,400 and deaths by up to 11,100, at an added societal cost of $10.4 billion. An additional 10.6 million colonoscopies would be required.

By comparison, increasing screening participation to 80 percent of 50- to 75-year-olds would reduce cases by 77,500 and deaths by 31,800 at an added cost of only $3.4 billion, according to the model. The number of additional colonoscopies needed would be 12 million.

A paper describing the work will be published online March 28 in Gastroenterology. Ladabaum is the lead author. Robert Schoen, MD, professor of medicine and epidemiology at the University of Pittsburgh, is the senior author.

Cost versus benefits

The incidence of colorectal cancer among people 50 and older decreased by 32 percent between 2000 and 2013, largely due to a broad embrace of screening. But rates for people in their 40s rose by 22 percent, according to the American Cancer Society.

Physicians haven't definitively identified what has driven the increase, but obesity and diet likely are factors, said Ladabaum, who directs the gastrointestinal cancer prevention program at Stanford Health Care.

"With obesity being such a big problem and hard to tackle, and other potently influential factors not well-defined, people turn to what we know can help in terms of colon cancer risk mitigation, and that's screening," he said. "That's what brings us to this question."

Aiming to stem the rise in colon cancer cases among younger people, the American Cancer Society's new guidelines recommend screening for an estimated 21 million additional people.

The new study compares the potential costs and benefits of this approach by modeling five screening strategies, including a colonoscopy every 10 years; annual fecal immunochemical testing; and a sigmoidoscopy at age 45 followed by other tests in subsequent years.

To assess cost-effectiveness, Ladabaum and his colleagues calculated the cost of the additional screening in relation to the years of life gained, adjusted for quality of life with or without cancer, a measure known as quality-adjusted life-years (QALYs). An intervention is generally accepted as cost-effective if it costs less than $100,000 per quality-adjusted life-year gained.
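
As a back-of-the-envelope illustration of how such a ratio works (with hypothetical numbers, not figures from the Stanford model), the metric is simply a strategy's added cost divided by the QALYs it gains, compared against that threshold:

```python
# Illustrative arithmetic only; the study's microsimulation model is far
# more detailed. All numbers below are hypothetical.
added_cost = 50_000_000   # added societal cost of a screening strategy, dollars
qalys_gained = 1_250      # quality-adjusted life-years gained by that strategy

cost_per_qaly = added_cost / qalys_gained
print(f"${cost_per_qaly:,.0f} per QALY gained")   # -> $40,000 per QALY gained

# Compare against the conventional willingness-to-pay threshold cited above.
threshold = 100_000
print("cost-effective" if cost_per_qaly < threshold else "not cost-effective")
```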

The study found that all five strategies offered benefits at acceptable costs when started at age 45 versus age 50, with the cost per additional quality-adjusted life-year ranging from $2,500 to $55,900.

"Is screening starting at 45 cost-effective by traditional standards? The answer is yes," Ladabaum said. "But the bottom line for me is that this is nuanced. The crucial question is: Can we screen younger people and at the same time do a better job of screening older and higher-risk people?"

Navigating tradeoffs

Physicians who were hesitant to endorse the American Cancer Society's new recommendations point to the work that remains in getting higher-risk people screened. Although the vast majority of colorectal cancer cases occur in people older than 50, only about 62 percent of them participate in screening, despite the goal of the health care community to bring that number closer to 80 percent.

In their study, Ladabaum and his colleagues explored the potential results of allocating resources in different ways. According to the model, initiating colonoscopy screening at age 45 would require 758 additional colonoscopies per 1,000 people, and would lead to a reduction of four cancer cases and two deaths per 1,000 people. By comparison, those procedures instead could be used to screen 231 previously unscreened 55-year-olds or 342 previously unscreened 65-year-olds through age 75. Those options would avert 13 to 14 cases and six to seven deaths per 1,000 people. They also would save $163,700 to $445,800 on balance, due to averted cancer treatment costs.

"If we actually do face tradeoffs on the societal level, either in terms of the effort we can put into this or the supply of colonoscopies and the distribution of colonoscopies by geography, then one can debate whether the efforts should go toward now bringing in younger people or whether we should focus on older people," Ladabaum said. "If we can bring in everybody, great. But if not, screening older and higher-risk people is higher yield in terms of public health benefit. It can get emotional and passionate because death from cancer at a young age is particularly devastating."

Credit: 
Stanford Medicine

New Yorkers brace for self-cloning Asian longhorned tick

image: The new species, identified last summer in Westchester and on Staten Island, is increasing and spreading quickly.

Image: 
Centers for Disease Control and Prevention

Staten Island residents have another reason to apply insect repellent and obsessively check for ticks this spring and summer: the population of a new, potentially dangerous invasive pest known as the Asian longhorned tick has grown dramatically across the borough, according to Columbia University researchers. And the tick--which unlike other local species can clone itself in large numbers--is likely to continue its conquest in the months ahead.

"The concern with this tick is that it could transmit human pathogens and make people sick," explains researcher Maria Diuk-Wasser, an associate professor in the Columbia University Department of Ecology, Evolution and Environmental Biology, who studies ticks and human disease risk.

In a new study appearing in the April issue of the journal Emerging Infectious Diseases, Diuk-Wasser and colleagues provide the most exhaustive local census of the new species to date--and suggest the Staten Island infestation is far more advanced than previously known.

The researchers found the species Haemaphysalis longicornis in 7 of 13 parks surveyed in 2017 and in 16 of 32 in 2018. In one park, the density of ticks per 1,000 square meters rose nearly 1,700 percent between 2017 and 2018, with the number of ticks picked up in the sample area climbing from 85 to 1,529, roughly an 18-fold increase. They also found the ticks on anesthetized deer from the area.

The news comes less than a year after the New York City Department of Health announced the discovery of the first member of the species in the city--a single tick--found on southern Staten Island last August.

The tick, native to Asia and Australia, had been identified in the months prior to the Staten Island sighting in New Jersey, West Virginia, North Carolina and Arkansas and just a few weeks earlier in Westchester County. The Westchester sighting prompted a number of state senators to send a letter urging state health officials to act aggressively to stop the spread of the new species.

Public health officials are particularly concerned because the longhorned tick is notorious for its ability to quickly replicate itself. Unlike deer ticks, the common local variety known for carrying Lyme disease, the female Asian longhorned can copy itself through asexual reproduction in certain environmental conditions, or reproduce sexually, laying 1,000-2,000 eggs at a time. They are typically found in grass in addition to the forested habitats that deer ticks prefer, adding a new complication to public health messaging. The Columbia analysis suggests that the public warnings may have come too late.

"The fact that longhorned tick populations are so high in southern Staten Island will make control of this species extremely difficult," says Meredith VanAcker, a member of Diuk-Wasser's lab who collected the data as part of her Ph.D. thesis. "And because females don't need to find male mates for reproduction, it is easier for the population to spread."

The threat these new arrivals pose to human health is still unknown. In Asia, there have been reports of the ticks passing on a virus that can cause hemorrhagic fever, as well as ehrlichiosis, a bacterial illness that can cause flu-like symptoms and lead to serious complications if not treated.

The arrival of the species on Staten Island adds another unwelcome dimension to the region's tick woes, which have grown dramatically in recent years. Thanks to an expanding deer population, Lyme disease spread through deer ticks has reached epidemic proportions in some areas of the Northeast. Deer ticks (also called black-legged ticks) are capable of disseminating six other human pathogens.

The first Asian longhorned tick in the U.S. was identified in New Jersey in 2013. A large population was later found on sheep in Mercer County, New Jersey. Diuk-Wasser became aware of the potential danger when a doctor at a Westchester clinic removed a tick from a patient and sent it in for identification. The discovery of the first human bite prompted widespread alarm.

By then, the Columbia team was already in the midst of an extensive "tick census" on Staten Island to determine how the landscape connectivity between urban parks influenced the spread of disease.

The Asian longhorned tick is easy to miss because it resembles a rare native species of rabbit tick. VanAcker spent months combing areas of Staten Island for ticks, dragging a square-meter corduroy cloth over leaf litter and examining it every 10 to 20 meters. Diuk-Wasser, postdoctoral researcher Danielle Tufts and other members of the Diuk-Wasser lab found huge numbers of the ticks on the bodies of unconscious deer that had been captured and anesthetized by wildlife authorities.

VanAcker found her collections were overflowing with the new species, leading to publication of the current study in Emerging Infectious Diseases. Her work on landscape connectivity, slated to appear in the June issue of the same journal, drives home the difficult decisions facing policymakers as they attempt to arrest the spread of the new species and others like it.

"The easier it is for deer to maneuver through urban landscapes between parks, the more likely the ticks are to spread to new areas," Diuk-Wasser says. "This suggests that the emphasis on urban wildlife corridors has a previously unappreciated downside for human health."

Credit: 
Columbia University

Dissecting dengue: Innovative model sheds light on confounding immune response

image: This graphic illustrates that the B cell response containing antibodies that target dengue serotype 2 appear two weeks after a dengue serotype 2 infection in humans and are still present 6 months after infection. Such B cells are important players in long-term protection from future dengue 2 infections.

Image: 
Sean Diehl, Ph.D., UVM Larner College of Medicine

About 40 percent of the global population is at risk for contracting dengue - the most important mosquito-borne viral infection and a close "cousin" of the Zika virus - and yet, no effective treatment or safe licensed vaccine exists. But a new study, reported recently in the Lancet's open-access journal EBioMedicine, has uncovered details about the human immune response to infection with dengue that could provide much-needed help to the evaluation of dengue vaccine formulations and assist with advancing safe and effective candidate vaccines.

Like Zika, yellow fever and West Nile viruses, dengue belongs to a group of mosquito-borne viruses that circulate in many tropical countries. However, without effective treatment and a safe licensed vaccine, dengue infection can lead to debilitating illnesses, including severe pain and hemorrhagic fever. One of the challenges of dengue infection is that it can be caused by one of four versions - or serotypes - of the virus, which are numbered dengue 1 to 4. Infection by one serotype typically results in long-term protection specific to that serotype. However, a later exposure to a different serotype can result in more severe disease. Experts believe this phenomenon occurs due to a part of the immune response, the antibodies, which may recognize and promote the second infection rather than defeat it.

"Trying to tease out the protective immune response in naturally infected patients is a challenge, since people living in high-risk areas likely have been exposed to multiple serotypes of the virus, which confound the observation," said senior study author Sean Diehl, Ph.D., an assistant professor of microbiology and molecular genetics at the University of Vermont's (UVM) Vaccine Testing Center and Center for Translational Global Infectious Disease Research. "In our model, we controlled the infection for safety reasons and the participants were monitored for six months in order to understand the biological changes that occur following the infection."

Over a six-month period, Diehl and his colleagues tracked the immune response and measured its different aspects, from the levels of certain immune blood cells to the levels of antibodies they produce and how these antibodies can recognize different dengue serotypes. In this study, co-led by Huy Tu, a fifth-year Ph.D. candidate in UVM's Cellular and Molecular Biomedical Sciences program, the group defined the evolution of the antibody response in dengue infection in a controlled human model where subjects were treated with a weakened version of the virus.

The research showed that the study participants developed an antibody response against the virus as early as two weeks after the infection. This immune response was highly focused against the infecting serotype, neutralized the virus, and persisted for months afterwards. With a comprehensive approach, the study dissected the antibody response at the single-cell level resolution, mapped the interaction between human antibodies to structural components on the virus's surface, and connected the functional features of the response during acute infection to time points past recovery.

Credit: 
Larner College of Medicine at the University of Vermont

New way of designing systems against correlated disruptions uses negative probability

image: Yanfeng Ouyang, Professor of Civil and Environmental Engineering at the University of Illinois.

Image: 
University of Illinois at Urbana-Champaign Department of Civil and Environmental Engineering.

In March of 2011, a powerful earthquake off the coast of Japan triggered the automatic shutdown of reactors at the Fukushima Daiichi Nuclear Power Plant and simultaneously disrupted electricity lines that supported their cooling. Had the earthquake been the only disaster that hit that day, emergency backup generators would have prevented a meltdown. Instead, a tsunami immediately followed the earthquake, flooding the generators and leading to the most serious nuclear accident in recent history. For systems expert Yanfeng Ouyang, a professor of civil and environmental engineering (CEE) at the University of Illinois, it was a perfect example of the problem of designing systems against correlated disruptions.

Until now, systems engineers have struggled with the problem of planning for disaster impacts that are linked by correlation - like those of earthquakes and tsunamis - because of the cumbersome calculations necessary to precisely quantify the probabilities of all possible combinations of disruption occurrences. When correlation exists, the probability of a joint disruption is not simply the product of the individual disruption probabilities: if two components each fail with probability 0.1 but always fail together, the chance of losing both is 0.1, not 0.1 × 0.1 = 0.01. This leaves gaps in our understanding of how to design infrastructure systems with the greatest disaster resistance and resilience.

Now Ouyang and fellow CEE researchers have developed a new method for designing and optimizing systems subject to correlated disruptions. The method eliminates the need to directly enumerate the many combinations of disruptions that have made such problems difficult to model in the past. They described it in a paper published this month in Transportation Research Part B: Methodological, the latest in a series of related papers from recent years. One of the keys to their method was incorporating negative probability, a concept seemingly never before utilized for system design purposes.

"With this concept, we developed a new methodology to help design systems with which we had difficulty before, such that they can be more resistant to disasters and more resilient than before," said Ouyang, the George Krambles Endowed Professor in Rail and Public Transit, who led the series of work with former doctoral students including Siyang Xie (Ph.D. 18), now a research scientist at Facebook, and former postdoctoral researcher Kun
An, now a faculty member at Monash University in Australia.

The team's new computational method is widely applicable because it can be used to model and optimize any networked system - for example supply chains, transportation systems, communication networks, electrical grids and more. The method incorporates a virtual system of "supporting stations" to represent the correlated vulnerabilities of infrastructure components in the real world. This allows systems engineers to translate complex impacts of disasters on the components into simple and independent impacts on the supporting stations. For example, in the case of two warehouses whose operations may both be disrupted by a snowstorm, one imagines that their functionalities rely on some virtual power supply sources, each of which serves as a supporting station to the warehouses. By setting proper dependency between the two warehouses and these power sources, one can translate the correlated functionality states of the two warehouses into independent disruptions of the shared power supplies.

"We showed that any number of infrastructure components with any type of disruption correlation among them can be described by a properly set-up system of such virtual stations, where each of them fails only independently of each other," Ouyang said.
This construct makes the calculations considerably more manageable because it significantly reduces the complexity of representing failure correlations in the design model.

"We now have a new way of describing the system," Ouyang said. "We go from a system where there is correlation into an equivalent system where there is no correlation - every failure is now independent of the others, so the probabilities are much easier to compute."

In order to accurately represent the behavior of real-world systems, the team had to introduce the concept of negative probability for station disruptions, which allows their models to address negatively correlated disruption risks among system components. While positive correlation indicates that components have dependencies driving their behaviors under disasters in the same direction, negative correlation expresses the idea that a disaster's effect on one component implies the opposite effect on another. For example, when two warehouses compete for limited resources, one benefits when its competitor suffers a loss or experiences difficulty. Similarly, if an area near a river is flooded, other areas downstream might be better off because the water pressure has been released.

Although negative correlation is a well-known concept, negative probability sounds somewhat unorthodox. At first the researchers were unaware that a similar concept was already in use in quantum mechanics; they just knew from the mathematics that they needed to represent the possibility of a disaster affecting competing entities in opposite ways. Because they had to translate correlation from the real-world system to the virtual structure of supporting stations, the likelihood of a supporting station being affected by a disaster had to incorporate the risk of multiple components, some of which would be negatively affected and some of which might be positively affected. The "failure propensity," as they originally called such a negative probability in a 2015 paper, of a supporting station could therefore be larger than 1, or equivalently, its complement could be negative.

To the best of the researchers' knowledge, using this concept for engineering applications is brand new, enabling them to solve problems that were previously prohibitively difficult. The team hopes engineering designers of all kinds of networked infrastructure systems will embrace it, leading to smarter designs with greater disaster resistance across a broad spectrum of system types.

Credit: 
University of Illinois Grainger College of Engineering

How to make self-driving cars safer on roads

image: In this example, a perception algorithm misclassifies the cyclist as a pedestrian

Image: 
Anand Balakrishnan

It's a big question for many people in traffic-dense cities like Los Angeles: When will self-driving cars arrive? But following a series of high-profile accidents in the United States, safety issues could bring the autonomous dream to a screeching halt.

At USC, researchers have published a new study that tackles a long-standing problem for autonomous vehicle developers: testing the system's perception algorithms, which allow the car to "understand" what it "sees."

Working with researchers from Arizona State University, the team's new mathematical method is able to identify anomalies or bugs in the system before the car hits the road.

Perception algorithms are based on convolutional neural networks, a deep learning technique. These algorithms are notoriously difficult to test, as we don't fully understand how they make their predictions. This can lead to devastating consequences in safety-critical systems like autonomous vehicles.

"Making perception algorithms robust is one of the foremost challenges for autonomous systems," said the study's lead author Anand Balakrishnan, a USC computer science PhD student.

"Using this method, developers can narrow in on errors in the perception algorithms much faster and use this information to further train the system. The same way cars have to go through crash tests to ensure safety, this method offers a pre-emptive test to catch errors in autonomous systems."

The paper, titled Specifying and Evaluating Quality Metrics for Vision-based Perception Systems, was presented at the Design, Automation and Test in Europe conference in Italy, Mar. 28.

Learning about the world

Typically autonomous vehicles "learn" about the world via machine learning systems, which are fed huge datasets of road images before they can identify objects on their own.

But the system can go wrong. In the case of a fatal accident between a self-driving car and a pedestrian in Arizona last March, the software classified the pedestrian as a "false positive" and decided it didn't need to stop.

"We thought, clearly there is some issue with the way this perception algorithm has been trained," said study co-author Jyo Deshmukh, a USC computer science professor and former research and development engineer for Toyota, specializing in autonomous vehicle safety.

"When a human being perceives a video, there are certain assumptions about persistence that we implicitly use: if we see a car within a video frame, we expect to see a car at a nearby location in the next video frame. This is one of several 'sanity conditions' that we want the perception algorithm to satisfy before deployment."

For example, an object cannot appear and disappear from one frame to the next. If it does, it violates a "sanity condition," or basic law of physics, which suggests there is a bug in the perception system.
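
As a rough sketch of what such a persistence check might look like in code (a toy example under simplified assumptions; the team's Timed Quality Temporal Logic is far more expressive and reasons over timed detections), one can flag any object that a detector reports for exactly one frame:

```python
# Each frame is represented by the set of object IDs the detector reported.
# An ID present in a single frame, absent from both neighbors, "blinks"
# in and out of existence -- a violation of the persistence sanity condition.

def persistence_violations(frames):
    """Return (frame_index, object_id) pairs that violate persistence."""
    violations = []
    for i, detections in enumerate(frames):
        for obj in detections:
            seen_before = i > 0 and obj in frames[i - 1]
            seen_after = i + 1 < len(frames) and obj in frames[i + 1]
            if not seen_before and not seen_after:
                violations.append((i, obj))
    return violations

# Hypothetical detections: "ped_7" flickers for a single frame, suggesting
# a phantom detection or misclassification worth flagging before deployment.
frames = [
    {"car_1"},
    {"car_1", "ped_7"},
    {"car_1"},
    {"car_1", "cyclist_2"},
    {"car_1", "cyclist_2"},
]
print(persistence_violations(frames))  # -> [(1, 'ped_7')]
```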

Deshmukh and his PhD student Balakrishnan, along with USC PhD student Xin Qin and master's student Aniruddh Puranic, teamed up with three Arizona State University researchers to investigate the problem.

No room for error

The team formulated a new mathematical logic, called Timed Quality Temporal Logic, and used it to test two popular machine-learning tools--SqueezeDet and YOLO--using raw video datasets of driving scenes.

The logic successfully homed in on instances of the machine learning tools violating "sanity conditions" across multiple frames in the video. Most commonly, the machine learning systems failed to detect an object or misclassified an object.

For instance, in one example, the system failed to recognize a cyclist from the back, when the bike's tire looked like a thin vertical line. Instead, it misclassified the cyclist as a pedestrian. In this case, the system might fail to correctly anticipate the cyclist's next move, which could lead to an accident.

Phantom objects--where the system perceives an object when there is none--were also common. This could cause the car to mistakenly slam on the brakes--another potentially dangerous move.

The team's method could be used to identify anomalies or bugs in the perception algorithm before deployment on the road, and it allows developers to pinpoint specific problems.

The idea is to catch issues with the perception algorithm in virtual testing, making the algorithms safer and more reliable. Crucially, because the method relies on a library of "sanity conditions," there is no need for humans to label objects in the test dataset--a time-consuming and often-flawed process.

In the future, the team hopes to use the logic to retrain the perception algorithms when it finds an error. The approach could also be extended to real-time use as a safety monitor while the car is driving.

Credit: 
University of Southern California

FAAH-OUT: Woman with novel gene mutation lives almost pain-free

A woman in Scotland can feel virtually no pain due to a mutation in a previously-unidentified gene, according to a research paper co-led by UCL.

She also experiences very little anxiety and fear, and may have enhanced wound healing due to the mutation, which the researchers say could help guide new treatments for a range of conditions. The findings are reported in the British Journal of Anaesthesia.

"We found this woman has a particular genotype that reduces activity of a gene already considered to be a possible target for pain and anxiety treatments," said one of the study's lead researchers, Dr James Cox (UCL Medicine).

"Now that we are uncovering how this newly-identified gene works, we hope to make further progress on new treatment targets."

At age 65, the woman sought treatment for an issue with her hip, which turned out to involve severe joint degeneration despite her experiencing no pain. At age 66, she underwent surgery on her hand, which is normally very painful, and yet she reported no pain after the surgery. Her pain insensitivity was diagnosed by Dr Devjit Srivastava, Consultant in Anaesthesia and Pain Medicine at an NHS hospital in the north of Scotland and co-lead author of the paper.

The woman tells researchers she has never needed painkillers after surgeries such as dental procedures.

She was referred to pain geneticists at UCL and the University of Oxford, who conducted genetic analyses and found two notable mutations. One was a microdeletion in a pseudogene, previously only briefly annotated in medical literature, which the researchers have described for the first time and dubbed FAAH-OUT. She also had a mutation in the neighbouring gene that controls the FAAH enzyme.

Further tests by collaborators at the University of Calgary, Canada, revealed elevated blood levels of neurotransmitters that are normally degraded by FAAH, further evidence for a loss of FAAH function.

The FAAH gene is well-known to pain researchers, as it is involved in endocannabinoid signalling central to pain sensation, mood and memory. The gene now called FAAH-OUT was previously assumed to be a 'junk' gene that was not functional. The researchers found there was more to it than previously believed, as it likely mediates FAAH expression.

Mice that do not have the FAAH gene have reduced pain sensation, accelerated wound healing, enhanced fear-extinction memory and reduced anxiety.

The woman in Scotland experiences similar traits. She notes that in her lifelong history of cuts and burns (sometimes unnoticed until she can smell burning flesh), the injuries tend to heal very quickly. She is an optimist who was given the lowest score on a common anxiety scale, and reports never panicking even in dangerous situations such as a recent traffic incident. She also reports memory lapses throughout life such as forgetting words or keys, which has previously been associated with enhanced endocannabinoid signalling.

The researchers say that it's possible there are more people with the same mutation, given that this woman was unaware of her condition until her 60s.

"People with rare insensitivity to pain can be valuable to medical research as we learn how their genetic mutations impact how they experience pain, so we would encourage anyone who does not experience pain to come forward," said Dr Cox.

The research team is continuing to work with the woman in Scotland and is conducting further tests on cell samples in order to better understand the novel pseudogene.

"We hope that with time, our findings might contribute to clinical research for post-operative pain and anxiety, and potentially chronic pain, PTSD and wound healing, perhaps involving gene therapy techniques," said Dr Cox.

"The implications for these findings are immense," said Dr Srivastava.

"One out of two patients after surgery today still experiences moderate to severe pain, despite all advances in pain killer medications and techniques since the use of ether in 1846 to first 'annul' the pain of surgery. There have already been unsuccessful clinical trials targeting the FAAH protein - while we hope the FAAH-OUT gene could change things particularly for post-surgical pain, it remains to be seen if any new treatments could be developed based on our findings."

"The findings point towards a novel pain killer discovery that could potentially offer post-surgical pain relief and also accelerate wound healing. We hope this could help the 330 million patients who undergo surgery globally every year," Dr Srivastava said.

"I would be elated if any research into my own genetics could help other people who are suffering," the woman in Scotland commented.

"I had no idea until a few years ago that there was anything that unusual about how little pain I feel - I just thought it was normal. Learning about it now fascinates me as much as it does anyone else."

Credit: 
University College London

Cities under pressure

Cities to swelter as planners face unenviable trade-off between tackling climate change and quality of life, new research has shown.

The study, led by experts at Newcastle University, UK, has shown the challenge we face in reducing greenhouse gas emissions, increasing cities' resilience to extreme weather, and giving people quality space to live in.

Publishing the research in the journal Cities, the team have for the first time analysed the trade-offs between different sustainability objectives. These include minimising climate risks such as heat waves and flooding, reducing emissions from transport, constraining urban sprawl, making best use of our brownfield sites, ensuring adequate living space, and protecting green space which is important for our health and wellbeing.
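
The core of the analysis is a multi-objective trade-off: no single development pattern minimises heat hazard, flood risk and transport emissions all at once. The toy Python sketch below illustrates the idea with a simple Pareto-front calculation; the site names, objectives and scores are invented for illustration, and the study's actual spatial optimisation over London is far more detailed.

```python
# Illustrative sketch only: candidate development patterns and their
# objective scores are hypothetical, not data from the Newcastle study.
from dataclasses import dataclass


@dataclass(frozen=True)
class Plan:
    name: str
    heat_hazard: float          # lower is better
    flood_risk: float           # lower is better
    transport_emissions: float  # lower is better

    def scores(self) -> tuple[float, float, float]:
        return (self.heat_hazard, self.flood_risk, self.transport_emissions)


def dominates(a: Plan, b: Plan) -> bool:
    """True if plan `a` is at least as good as `b` on every objective
    and strictly better on at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a.scores(), b.scores())) and any(
        x < y for x, y in zip(a.scores(), b.scores())
    )


def pareto_front(plans: list[Plan]) -> list[Plan]:
    """Keep only the plans not dominated by any other plan."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q is not p)]


# Hypothetical candidate patterns (scores invented for illustration):
candidates = [
    Plan("riverside", heat_hazard=0.2, flood_risk=0.9, transport_emissions=0.3),
    Plan("central",   heat_hazard=0.8, flood_risk=0.1, transport_emissions=0.2),
    Plan("suburban",  heat_hazard=0.4, flood_risk=0.2, transport_emissions=0.9),
]

for plan in pareto_front(candidates):
    print(plan)
```

In this invented example all three candidate patterns survive on the Pareto front, which mirrors the dilemma the study describes: improving one objective necessarily worsens another, so planners must make an explicit choice rather than find a single optimum.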

Focussing on London - an example of a large, rapidly growing city that is also at the forefront of tackling climate change - the team show the 'best case' scenario would be to increase development in a small number of central locations, such as East Barnet, Wood Green and Ealing.

Avoiding development along the Thames, this optimum plan would reduce flood risk, minimise transport emissions and reduce urban sprawl.

But, says author Dr Dan Caparros-Midwood, the trade-off will be more people exposed to extreme temperatures.

"Many of the lowest heat hazard areas coincide with the flood zone on the banks of the River Thames due to the cooling effect of blue infrastructure," explains Dr Caparros-Midwood, who carried out the work as part of his PhD at Newcastle University and is now a Senior GIS Specialist at Wood.

"But moving development away from the river while also protecting our green spaces and reducing sprawl really only leaves two options; either shrinking our homes or developing in higher heat risk areas.

"And while our study looked at London, this could apply to most cities in the world."

Building resilience in our cities

By 2050 it is estimated that two-thirds of the world's population will live in cities, highlighting the urgent need for urban development to be sustainable.

"Urban areas must radically transform if they are to reduce their greenhouse gas emissions and consumption of resources whilst also increasing their resilience to climate change and extreme weather," explains Professor Stuart Barr, co-author and part of the Geospatial Engineering group at Newcastle University.

Project lead Professor Richard Dawson, of the School of Engineering at Newcastle University, said the findings reinforced the scale of the challenge.

"We are already starting to see the impact of hotter summers and flooding on our cities," he says.

"Balancing trade-offs between these objectives is complex as it spans sectors such as energy, buildings, transport, and water.

"What our study shows in stark detail is this cannot be done using our current approach to planning and engineering our cities - difficult choices will have to be made."

Even in Europe, says Professor Dawson, only a quarter of cities have a comprehensive climate strategy. And yet, with the right impetus, we have the potential to accelerate and upscale action in our cities to tackle climate change.

"We have to be more creative about how we design and build our buildings and infrastructure," he says.

"This will include weaving green infrastructure into urban spaces; facilitating lifestyle choices such as walking and cycling that reduce energy demand, pollution and greenhouse gas emissions; and integrating new technologies that can shift carbon-intensive energy patterns by optimizing transport efficiency, vehicle sharing and reducing congestion.

"For the moment though, there are difficult, and often irreconcilable, trade-offs to be made in urban areas and we need to be making them now."

Credit: 
Newcastle University

Many NHS partnerships with drug companies are out of public sight

NHS organisations are entering into working partnerships with drug companies, but they are not making the details, and even the existence, of many of these deals available to the public, reveals an investigation by The BMJ today.

These partnerships are used to support a variety of initiatives, including several projects to review the medication of people with ADHD, and more than 20 projects that focus on patients with age-related macular degeneration.

The BMJ, working with a team of university researchers, sent freedom of information requests to all 194 acute care NHS trusts in England to find out how many were involved in joint working arrangements in 2016 and 2017 and what joint working policies trusts had in place.

'Joint working arrangement' is the term used for initiatives that involve shared investment by the NHS and drug companies. They are designed to bring benefits to patients, the NHS, and the companies.

Companies spent £3m in 2016 and £4.7m in 2017 on joint working arrangements, and under the NHS Long Term Plan, collaboration between health services and industry is set to treble over the next decade.

Yet the researchers found that a fifth of trusts would not release details of the deals, despite official guidance that joint working agreements must be conducted in an "open and transparent" manner.

Trusts are also expected to record and monitor all funding agreements related to the joint working projects, yet 13 (7%) said that they did not keep a central record of any such arrangements and so could not provide the information.
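
As a quick sanity check on the shares reported above (a back-of-the-envelope reconstruction from the figures in this article; The BMJ's underlying per-trust data are not reproduced here):

```python
# Back-of-the-envelope check of the proportions quoted in the article.
total_trusts = 194       # acute care NHS trusts in England sent FOI requests
no_central_record = 13   # trusts keeping no central record of arrangements

print(f"{no_central_record / total_trusts:.0%}")  # -> 7%, matching the article
print(round(total_trusts / 5))                    # -> 39, roughly the 'fifth' that withheld details
```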

Even when trusts did provide details, the information was often inaccurate or contradicted by other sources. Other trusts claimed not to know about joint working arrangements at all.

Cathy Augustine of the Keep Our NHS Public campaigning group argues that allowing industry to provide NHS services in this way helps to mask the degree of government under-resourcing.

Robert Morley, executive secretary of Birmingham Local Medical Committee, says the lack of transparency by trusts over these arrangements is "truly shocking" and a "blatant neglect of their obligations."

The industry says that joint working projects can accelerate the spread of new treatments for the benefit of patients. The BMJ found that many of the 93 projects running in 2016 and 2017 specifically referred to increasing the use of products marketed by the company funding the project.

Robert Morley says that the NHS should be taking the lead on determining the focus of projects to which it is committing investment, and he is concerned that these joint working arrangements "are being designed first and foremost around the interests of pharmaceutical companies."

John Puntis, a consultant paediatrician and secretary of Keep Our NHS Public, argues that NHS organisations' lack of expertise in negotiating contractual arrangements often leaves the health service at a disadvantage when collaborating with industry.

"My concern would always be, 'What's in it for the private sector?' They never do these things purely for the benefit of the NHS and the benefit of patients," he says.

"They're often buying goodwill as well. Doctors say, 'Well, we're not influenced by the drug companies,' but clearly they are, because otherwise the industry wouldn't be pouring all the money into it," he adds.

Credit: 
BMJ Group