Discharge incentives in emergency rooms could lead to higher patient readmission rates

In an effort to address emergency department overcrowding, pay-for-performance (P4P) incentive programs have been implemented in various regions around the world, including hospitals in Metro Vancouver. But a new study from the UBC Sauder School of Business shows that while such programs can reduce barriers to access for admitted patients, they can also lead to patient discharges associated with return visits and readmissions.

The study looked at over 800,000 patient visits to the four major emergency departments in Metro Vancouver over a three-year period from April 1, 2013, to March 31, 2016, focusing on patients with higher acuity levels (triage level 1, 2, or 3). During the first year of the study period, two P4P incentive programs funded by the BC provincial government were in effect: emergency departments received $100 for each discharged patient with a length-of-stay (LOS) of less than four hours, and $600 for each admitted patient who spent less than 10 hours in the emergency department.

The BC government terminated both P4P programs on March 31, 2014; however, the regional health authority governing all four emergency departments studied decided to fund the same $600 admission incentive internally, so it continued without interruption. Only the $100 discharge incentive disappeared when the government program ended.

"In the past, the extent to which these types of programs affected the length of stay of individual patients was not well understood, because previous studies have only examined aggregate performance metrics as they relate to length of stay," said Yichuan (Daniel) Ding, study co-author and assistant professor in the Operations and Logistics Division at the UBC Sauder School of Business. "Our study took a much more granular approach, where we focused specifically on patient discharges that took place within 20 minutes of the deadline for the incentive, because we wanted to know: were these patients discharged to catch the deadline?"

The study found that for patients discharged home, there was a significant discontinuity around the four-hour mark: a disproportionate number of patients were discharged just before four hours, after which the likelihood of discharge dropped off. This phenomenon was observed in only two of the four emergency departments; the other two showed no such discontinuity.

"Our study confirmed that this type of financial incentive altered system performance. And in the positive sense, that means that the program is effective, because it impacts length of stay, for both discharged and admitted patients," said Eric Park, study co-author and assistant professor in the Faculty of Business and Economics, University of Hong Kong. "But when we looked more granularly at the patients that were discharged within 20 minutes before the deadline, we found that one of the four emergency departments had a greater revisit and readmission rate within seven days - meaning that within seven days, those patients are more likely to come back and be admitted to hospital. It is possible that this is a signal of premature discharge."

"However, we cannot assert that discharge is premature using this metric alone, especially given that it was only observed in one of the four emergency departments; but it is a potentially worrisome finding," added Yuren Wang, study co-author with the National University of Defense Technology in Changsha, China.

The study also found that for admitted patients, the discontinuity at the 10-hour mark was even more pronounced, and it appeared in all four emergency departments rather than just two.

"Our recommendations based on this research are that setting an incentive for admitted patients improves length of stay, but the four-hour benchmark for discharged patients should be implemented with care," said Dr. Garth Hunte, study co-author and emergency physician at St. Paul's Hospital in Vancouver. "It makes no sense to incentivize the discharge of patients who may require admission to hospital." This is consistent with current hospital practice, thanks to the regional health authority's ongoing funding of the admission incentive.

Credit: 
University of British Columbia - Sauder School of Business

African elephants demonstrate movements that vary in response to ecological change

image: Results from this study bring new light to elephants' individuality, said Associate Professor George Wittemyer.

Image: 
Guillaume Bastille-Rousseau/Colorado State University

Wild African elephants, known for their intelligence, show markedly different movements and reactions to the same risks and resources. A new study led by Colorado State University and Save the Elephants reveals the magnitude and complexity of this variation in behavior and how it occurs in space and time, and among individual animals.

The findings, published in the September issue of Ecology Letters, indicate how elephants employ a diverse array of strategies that they adjust based on ecological changes. In particular, poaching causes elephants to switch their movements. The study results indicate that landscape conservation efforts should consider the needs of the different tactics elephants display.

The research team used GPS tracking data from more than 150 individual elephants followed over 17 years in the Samburu and Buffalo Springs National Reserves in northern Kenya as part of Save the Elephants' long-term monitoring project. The scientists evaluated the behavior of individual elephants to identify how each animal used various food and water resources. They then developed new analytical approaches to understand what drives variation among individual elephant behaviors.

The authors found that many elephants were targeting a specific resource, while others were avoiding that same resource - an unintuitive result, given that movement of most species is driven by the same factors of food, security and social interactions. The variation in elephant behaviors was stronger during the resource-limited dry season, compared with the resource-rich wet season, suggesting a key driver of the different movement strategies was avoidance of competition with other elephants.

"The extent and complexity of the variation among individuals was greater than we anticipated and demonstrated much more diversity than that found in other species," said Guillaume Bastille-Rousseau, a post-doctoral fellow at CSU and lead author of the study.

George Wittemyer, an associate professor of fish, wildlife and conservation biology at CSU and co-author of the study, noted that elephants are hyper-social, with social interactions structuring everything in their lives, including their movements and space use.

"The results from this study bring new light to elephants' individuality, where even when using the same location and facing the same constraints, elephants do not conform to a single behavior," said Wittemyer, who also serves as the chairman of the scientific board of Save the Elephants. "We found this individuality was most clear in the manner by which elephants interact with humans, with some more willing to take risks than others."

Bastille-Rousseau said the team was amazed to see that the elephants shifted their tactics over time, apparently adjusting their strategies relative to changes in the landscape.

"The next step for our research will be to try to understand if individuals displaying a given tactic are more successful than individuals using a different tactic," he said.

Credit: 
Colorado State University

BRCA1/2 genetic testing recommendations still leave issues unresolved

PHILADELPHIA - The U.S. Preventive Services Task Force (USPSTF) has released a new Recommendation Statement for BRCA1/2 evaluation, urging the medical community to widen the parameters used to assess BRCA1 and BRCA2 mutation risks and increase the use of genetic counseling and testing for those with the highest risk. While the changes are beneficial, the recommendations still fail to address many persisting problems in the modern world of genetic testing, according to a new JAMA editorial co-authored by Susan Domchek, MD, executive director of the Basser Center for BRCA at the Abramson Cancer Center at the University of Pennsylvania.

"Genetic testing is an area of medicine that is progressing very quickly, which means providers need to be nimble in order to keep up," Domchek says. "The medical community needs to consider what genetic health data is truly helpful to a patient, strive to test those who may be genetically predisposed to an increased risk of cancer, and work to educate patients and providers on how to correctly and effectively use their test results to make better healthcare decisions."

Mutations in BRCA1 and BRCA2 have been linked to significantly increased risks of breast, ovarian, prostate, and pancreatic cancers, and there are many commercially available tests that can reliably show whether someone has a BRCA1 and/or BRCA2 mutation. Domchek, and co-author Mark Robson, MD, a medical oncologist and chief of Breast Medicine Service at Memorial Sloan Kettering Cancer Center, write that one important point not included in the new recommendations is the link between genetic testing and treatment plans. They note that BRCA1/2 status can impact surgical decision making for patients newly diagnosed with early stage breast cancer and influence treatment plans for certain advanced cancers, such as metastatic breast cancer. The USPSTF does not include newly diagnosed breast or ovarian cancer patients or advanced cancer patients in its recommendations.

The authors express other concerns not addressed in the new recommendations, specifically relating to the large-panel genetic tests now available. Previous genetic tests analyzed a few specific genes at a time, but there are now tests that can sequence up to 80 genes at once. While that sounds like an invaluable innovation, many of the mutations covered have weak, questionable, or no links to cancer at all. Positive results for those types of mutations could create fear or distract from real genetic indicators like changes to BRCA1/2 genes. Additionally, the direct-to-consumer multi-panel tests one can do at home - such as those offered by companies like 23andMe - further remove people from genetic specialists trained to evaluate and explain how results may be more or less meaningful given an individual's health, history, and family history.

"We should think of genetic testing like the internet," Domchek says. "It's a tool, full of information, but there's nuance in making sense of that information and determining how to act on it."

Although the authors would have liked to see more from the new USPSTF recommendations, they say the two main changes to those recommendations are certainly valuable.

"The statement adds those who have previously been diagnosed with breast or ovarian cancer, but are now cancer free, to the list of those who should undergo careful genetic risk-assessment, which is a positive addition as finding a BRCA1/2 mutation in these patients could directly impact their medical care and have implications for their relatives. It also more explicitly includes ancestry as a risk factor," Domchek says.

The new recommendation urges that broader ancestry information be considered when deciding on genetic testing, not just family history of cancer. Certain populations, specifically those with Ashkenazi Jewish heritage, have a higher prevalence of BRCA1/2 mutations.

While these expansions are positive, Domchek notes that many individuals at the highest risk of having a BRCA1 or BRCA2 mutation do not undergo genetic testing. In addition, racial and socioeconomic disparities in the uptake of genetic testing remain.

"It's the duty of all health care professionals to help our patients effectively employ genetic testing," Domchek says. "These updates are a positive step forward, but we need to continue advancing BRCA-related research and ensure that those at the highest risk have access to testing."

Credit: 
University of Pennsylvania School of Medicine

More children suffer head injuries playing recreational sport than team sport

image: Study finds children who do recreational sports like bike riding are more likely to suffer serious head injuries than children who play contact sport like AFL or rugby.

Image: 
Murdoch Children’s Research Institute

An Australian/New Zealand study examining childhood head injuries has found that children who do recreational sports like horse riding, skateboarding and bike riding are more likely to suffer serious head injuries than children who play contact sports like AFL or rugby.

The research, conducted by the PREDICT research network and the Murdoch Children's Research Institute (MCRI), published online by Wiley ahead of print in the Medical Journal of Australia, examined data from 8,857 children presenting with head injuries to ten emergency departments in Australian and New Zealand hospitals.

A third of the children, who were aged between five and 18 years, injured themselves playing sport. Of these children, four out of five were boys.

Lead research author, MCRI's Professor Franz Babl, says the team looked at 'intracranial' injuries in children because, while there is a lot of interest in sport and concussion, less is understood about the severity of head injuries children suffer while playing sport.

"The study found that children who presented to the emergency departments after head injury having participated in recreational sports like horse riding, skateboarding and bike riding were more likely to sustain serious head injuries than children who played contact sports like AFL, rugby, soccer or basketball," he says.

"We found that 45 of the 3,177 sports-related head injuries were serious and classified as clinically important Traumatic Brain Injury (ciTBI), meaning the patient required neurosurgery, at least two nights in hospital and/or being placed on a breathing machine. One child died as a result of head injuries."

Prof Babl says the sports most frequently leading to emergency department presentation were bike riding (16 per cent), rugby (13 per cent), AFL (10 per cent), other football (9 per cent), and soccer (8 per cent).

The most frequent causes of serious injury were bike riding (44 per cent), skateboarding (18 per cent) and horse riding (16 per cent), with AFL and rugby resulting in one serious head injury each and soccer in none.

A total of 524 patients with sports-related head injuries (16 per cent) needed CT imaging, and 14 children required surgery.

Credit: 
Murdoch Childrens Research Institute

Simple computational models can help predict post-traumatic osteoarthritis

Knee joint injuries, such as ligament rupture, are common in athletes. Because intact ligaments are a precondition for joint stability, ligament injuries are often surgically reconstructed. However, in many cases these injuries or surgeries can lead to post-traumatic osteoarthritis. The articular cartilage, which serves to provide frictionless contact between bones, wears out completely, causing severe joint pain, lack of mobility and even social isolation. Currently, preventing the onset and development of osteoarthritis is still the best clinical course of action. Computational models can be used to predict locations susceptible to osteoarthritis; however, current models are too complicated for clinical use and their predictions lack verification.

Researchers from the University of Eastern Finland, in collaboration with the University of California in San Francisco, Cleveland Clinic, the University of Queensland, the University of Oulu and Kuopio University Hospital, have developed a method to predict post-traumatic osteoarthritis in patients with ligament ruptures using a simplified computational model. The researchers also verified the model predictions against measured structural and compositional changes in the knee joint between follow-up times. The findings were reported in Clinical Biomechanics.

In this proof-of-concept study, computational models were generated from patient clinical magnetic resonance images and measured motion. Articular cartilage was assumed to degenerate due to excessive tissue stresses, leading to collagen fibril degeneration, or excessive deformations, causing proteoglycan loss. These predictions were then compared against changes in MRI-specific parameters linked to each degeneration mechanism.

"Our results suggest that a relatively simple finite element model, in terms of geometry, motion and materials, can identify areas susceptible to osteoarthritis, in line with measured changes in the knee joint from MRI. Such methods would be particularly useful in assessing the effect of surgical interventions or in evaluating non-surgical management options for avoiding or delaying osteoarthritis onset and/or progression," Researcher Paul Bolcos, a PhD student at the University of Eastern Finland, says.

The findings are significant and could provide pathways for patient-specific clinical evaluation of osteoarthritis risks and reveal optimal and individual rehabilitation protocols.

"We are currently working on adding more patients in order to help tune the degeneration parameters and confirm the sensitivity of the mechanical parameters to the MRI parameters. Later, this method could be combined with a fully automated approach for generating these computational models, developed in our group, narrowing the gap between research and clinical application," Bolcos continues.

Credit: 
University of Eastern Finland

Embryology: a sequence of reflexive contractions triggers the formation of the limbs

image: Accelerated rolling of the future hindlimb (arrow) and abrupt formation of the amniotic sac (triangle) are observed. The lightning bolt represents the electrode.

Image: 
Fleury et al. / CNRS photo library

It normally takes about 21 days for chicken embryos to develop into chicks. By observing chicken hindlimb formation, a CNRS / Université de Paris research team has just discovered that the mechanism at the origin of this embryonic development consists of a sequence of reflexive contractions. The researchers were able to artificially recreate the same process and accelerate it by as much as a factor of 20. Their findings were published in the European Physical Journal on August 15, 2019.

In its first days of life, a chicken embryo may be likened to a flat disc internally organised into concentric rings. During its development, the embryo stretches, rolls up and twists; this segregates the concentric rings into as many folded tissues, which eventually give rise to various anatomical features. The scientists realised that during formation of the future chick's tail, one of these rings is stretched and mechanically deforms the posterior region of the embryo. This deformation sets off a series of reflexive contractions of the surrounding rings, exhibiting a domino effect. The contracting rings fold to yield the primitive contours of the hindlimbs.

In order to prove the physical nature of this phenomenon, the researchers designed an electric stimulator through which they administered brief low-intensity shocks (1 volt for 1-3 seconds) to the posterior portion of the embryo. These impulses mimicked the effect of a mechanical deformation like that produced during tail formation, triggered embryonic development in a cascading pattern, and even accelerated it up to 20-fold.

The scientists would like to pursue their research by investigating the technical limits of this discovery. Furthermore, this new method may be used outside the field of embryonic development, to study the effects certain diseases have on cells.

Credit: 
CNRS

Decades-old puzzle of the ecology of soil animals solved

image: Mealworms offered the mould Fusarium graminearum with aurofusarin (right) and a mutant without aurofusarin (left) prefer the mutant.

Image: 
Ruth Pilot

An international research team led by the University of Göttingen has deciphered the defence mechanism of filamentous fungi. Moulds are a preferred food source for small animals. As fungi cannot escape predation by running away, they produce defence metabolites that render them toxic or unpalatable. After decades of unsuccessful investigation, these defence compounds have now been identified. The results were published in Nature Communications.

Small soil animals such as worms, springtails, and mites constitute about 20% of the living biomass in soil. Since the 1980s, studies on fungal defence against animal predators have focused on mycotoxins. The toxicity of mycotoxins to insects has been documented in numerous studies; however, attempts to prove the ecological function of mycotoxins in defence against predation have failed. Researchers in Göttingen discovered that rather than mycotoxins, certain fungal pigments protect fungi from predation. These pigments are produced by many ascomycetes and belong to the class of dimeric naphthopyrones. The red pigment aurofusarin - which is produced by fungi of the genus Fusarium and by some tropical genera - was studied in detail.

Springtails and insect larvae recognised and avoided food modified to contain aurofusarin. Definitive proof of the ecological function of aurofusarin was obtained with the help of fungal mutants in which aurofusarin synthesis was disrupted by genetic engineering. Springtails, woodlice, and insect larvae accepted the mutants as food while avoiding fungal colonies with aurofusarin. Feeding experiments with different fungal species and mutants revealed that aurofusarin served as the major - or even only - defence compound in these fungi. Initial experiments with other bis-naphthopyrones, produced by the fungal genera Aspergillus and Penicillium, revealed that they also show antifeedant activity (ie they inhibit feeding).

Why do bis-naphthopyrones repel fungivores? According to the mycotoxin hypothesis, bis-naphthopyrones should be toxic. "We could not detect any toxicity when feeding springtails with food containing aurofusarin", explains Yang Xu, a PhD student in Göttingen and first author of the paper. "The animals survived feeding on aurofusarin for five weeks without apparent harm. Aurofusarin thus appears to be a non-toxic antifeedant."

Why has aurofusarin not lost its antifeedant effect after millions of years? Synthetic fungicides often lose efficiency after a couple of years, and plant defence chemicals do not protect their producers from adapted herbivores. Why have soil animals not adapted to fungal defence chemicals? "An explanation may lie in the very large amounts of defence chemicals that accumulate in fungal cultures", explains Professor Petr Karlovsky, head of the Molecular Phytopathology and Mycotoxin Research Lab. "Mutations that inactivate defence chemicals or reduce their binding to (as yet unknown) receptors would not abolish the effect of the defence chemicals."

If this hypothesis is proven correct, aurofusarin would be the first example of a new phenomenon in chemical ecology: the prevention of the adaptation of target organisms due to extremely high concentrations of defence chemicals.

Credit: 
University of Göttingen

Centuries-old Japanese family firms make history relevant to today's business world

Strategy-makers in long-lived Japanese firms face the challenge of matching generations of history and guidance with modern-day corporate challenges and change.

A study by researchers from Lancaster University, Politecnico di Milano, UCL and Aalto University, published in the Strategic Management Journal, reveals that in many Japanese firms, foundational ka-kun - loosely translated as family mottos - remain relevant for decades, or even centuries.

Revered founders and leaders laid out the statements, such as family lessons, testaments and open letters, for their successors, articulating values for personal and business conduct and expressing principles that ensured past prosperity.

The researchers found strategy-makers grapple with this history to turn the mottos from a potential source of inertia into a resource for change. Some ka-kun - in amended form - are still formally adhered to, despite changes within companies and their environments, while others are radically altered or no longer mentioned, reflecting the challenge of keeping them relevant many years after they were set down. Only one company - which had preserved the same core business, family ownership and scale - honoured its ancient motto in the original form.

"The ka-kun tend to become emotionally-laden symbols of historical commitments for these firms. When they are used effectively, they can create a shared sense of purpose, mobilise collective action and responsiveness to changing competitive conditions, and lay the groundwork for sustainable competitive advantages," said co-author Dr Innan Sasaki, of Lancaster University Management School.

"When they were first forged, these statements were future-oriented - looking at where the firms wanted to be, and channeling energy, effort and resource in that direction. However, the passage of time means many are no longer relevant, even though they have acquired symbolic status, charged with emotion and inextricably tied to the firms' collective sense of self and legacy.

"This creates a tension between looking to the future and recognising the past of the statements, a struggle which is likely to become more pronounced over time, presenting the challenge of what to do regarding the ka-kun."

Professor Davide Ravasi, of the UCL School of Management, added: "Corporate leaders use a variety of strategies to deal with a revered past when going through strategic change, strategies which address both the need to maintain continuity with the past and the need for strategic relevance now."

The researchers found three differing strategies in the usage of the ka-kun in the face of strategic change in modern Japan to establish a sense of continuity: elaborating, recovering and decoupling.

Elaborating sees the transfer of part of the content of the historical statement into a new one. This was seen with sake manufacturer Gekkeikan, which adapted ka-kun set out in 1933 in both 1955 and 1997.

Recovering forges a new statement based on the retrieval and re-use of historical references, as with Tokyo Keizai University, which looked back to its 1902 foundation in new mottos in 1992 and 2006.

Decoupling allows the co-existence of the historical statement and a contemporary one with different values, as seen with Yamanaka Hyoemon Shouten, founded in 1718 to commercialize food and sake, which adopted a new motto under a newly-appointed CEO in 2016.

Firms in Japan use all three methods to recognise their past while looking to the future, with recovering and decoupling often triggered by significant changes to the organization and/or its strategy.

"Elaborating helps maintain a sense of continuity by explicitly linking part of the revised statement with the original," said Dr Sasaki. "Revised statements are often presented as a development or an update of previous iterations, highlighting continuity while also refocusing attention on values managers view as important to keep the organisation viable in the present.

"The recovering strategy rests on the search for written, oral or even material memory. References to legendary leaders or a glorious past are used to emotionally energise and rally the organisation around a new strategy.

"Managers redirect attention to values they consider relevant to inspire and legitimise strategic change. At the same time, they claim continuity by reusing texts produced in the distant past. The new statement focuses on emerging issues and justifies changes, while the historical statement maintains a reassuring anchor in the past.

"Decoupling allows the maintenance of historical statements as a reassuring anchor with the past, maintaining a sense of stability and continuity in times of change. As with recovering, the new statement is associated with organisational or strategic change; however, decoupling is more frequently associated with emerging issues not addressed by historical statements."

Professor Eero Vaara, of Aalto University Business School and Lancaster University Management School, added: "All three strategies involve selective remembering and forgetting to varying degrees to bring the mottos into the modern business world. Change needs to be accommodated, but without threatening the integrity of the companies' historical identity, with values passed on from generation to generation through the ka-kun."

Credit: 
Lancaster University

Shedding light on the reaction mechanism of PUVA light therapy for skin diseases

image: Reaction stages when a psoralen molecule binds to DNA. The result is that the psoralen is permanently bound to the DNA via a cyclobutane ring. The cell is altered and thus damaged, and triggers the process of programmed cell death.

Image: 
ACS / Janina Diekmann

The term 'PUVA' stands for 'psoralen' and 'UV-A radiation'. Psoralens are natural plant-based compounds that can be extracted from umbelliferous plants such as giant hogweeds. Plant extracts containing psoralens were already used in Ancient Egypt for the treatment of skin diseases. Modern medical use began in the 1950s. From then on, they were applied for light-dependent treatment of skin diseases such as psoriasis and vitiligo. From the 1970s onwards, PUVA therapy was used to treat a type of skin cancer known as cutaneous T-cell lymphoma.

Psoralens insert between the crucial building blocks (bases) of DNA, the hereditary molecule. When subjected to UV radiation, they bind to thymine - a specific DNA base - and thus cause irreversible damage to the hereditary molecule. This in turn triggers programmed cell death, ultimately destroying the diseased cell.

Researchers working with Prof. Dr. Peter Gilch from HHU's Institute of Physical Chemistry have now collaborated with Prof. Dr. Wolfgang Zinth's work group from LMU Munich to analyse the precise mechanism of this binding reaction. They used time-resolved laser spectroscopy for this purpose.

They found that - after the psoralen molecule has absorbed UV light - the reaction takes place in two stages. First, a single bond between the psoralen molecule and thymine forms. A second bond formation then yields a four-membered ring (cyclobutane) permanently connecting the two moieties (see figure). The researchers in Düsseldorf and Munich were also able to demonstrate that the first stage takes place within a microsecond, while the second needs around 50 microseconds. They compared this process with the damaging of the 'naked' DNA by UV light. That process also frequently results in cyclobutane rings, but the process takes place considerably faster than when psoralens are present.

Prof. Gilch explains the background to the research: "If we can understand how the reactions take place in detail, we can change the psoralens chemically in a targeted way to make PUVA therapy even more effective." Together with his colleague in organic chemistry, Prof. Dr. Thomas Müller, he wants to develop these high-performance psoralen molecules at HHU within the scope of a DFG project.

Credit: 
Heinrich-Heine University Duesseldorf

New research explores the use of New Psychoactive Substances by young people

A research study into New Psychoactive Substances (NPS) - formerly referred to as 'legal highs' - provides new evidence about why young people were attracted to the drugs, and the health and social risks associated with taking them.

The study was carried out by an interdisciplinary team of researchers from Queen's University Belfast. The findings recommend supporting young people and high-risk populations through existing evidence-based interventions.

It follows official statistics released today by NHS Digital about smoking, drinking and drug use among young people. These figures show that 6% of 11-15 year olds said they had been offered NPS and 1% said they had taken them in the last year (3). Office for National Statistics figures released last week reported 125 deaths from NPS, double that of the previous year (4).

This newly published longitudinal study about NPS was commissioned and funded by the National Institute for Health Research (NIHR), the nation's largest funder of health and care research, and published in the NIHR Journals Library.

This research drew on data from 2,039 young people who were part of the larger Belfast Youth Development Study (BYDS), which tracked a cohort from age 11 and examined in detail how they used alcohol and drugs as they grew up.

In this NIHR report, serious side effects associated with NPS usage were reported by those who had taken this class of drugs, including significant mental health problems and heart, liver, stomach and bladder issues. The research team found that NPS were always used in a polydrug context (taking more than one drug at the same time), often with alcohol, and in a range of ways: with mephedrone, for example, most users snorted it, some made it into capsules and swallowed it, and a small number injected it. Drugs taken alongside it included cocaine, alcohol and other stimulants such as MDMA. Among 10% of NPS users surveyed, there was also evidence of movement between synthetic cannabinoids and heroin in both directions - something that has not previously been reported.

Chief Investigator, Dr Kathryn Higgins, Reader from the School of Social Sciences, Education and Social Work and the Centre for Evidence and Social Innovation at Queen's University, said: "Our research explored in detail the varied motives, characteristics and lived experiences of young people using NPS, ranging from experimental users who liked the buzz or the fact that they were cheaper than other drugs to those who had become dependent and needed help from health and social care services. We discovered that there was a lack of knowledge about the negative impacts of taking these drugs due to them being new and constantly changing as well as being marketed at the time as 'legal highs' and perceived as 'safe'."

NPS are synthetic alternatives to traditional illegal drugs. In the UK, most were 'legal' until they were banned in May 2016 under the Psychoactive Substances Act. They include drugs such as synthetic cannabinoids - sometimes referred to as 'spice' - and mephedrone - also known as 'meow meow'.

The researchers used the data from the BYDS and statistical models to examine whether those who reported using NPS had any different risk factors from those who used other drugs. The models, using the data from the 2,039 participants, showed that those who used NPS were mostly the same as those who were polydrug users of any type. To investigate further, 84 people then took part in in-depth interviews, sharing their experiences of growing up, the circumstances that led them to take NPS, and the age at which they first tried the drugs. As well as members of the BYDS cohort, individuals in this portion of the study included young people in prisons and those recruited from drug and alcohol services.

The research team categorised groups ranging from non-NPS users to those who used in a limited, experimental way and those who reported being dependent on NPS. They identified contributing risk factors for NPS use in each group, such as problems at school, peer pressure, alcohol use, family breakdown, trauma, and a lack of parental supervision and support.

In the report, the researchers make some suggestions about how best to respond to NPS use, including the use of peer educators in developing national drug education programmes, the expansion of harm reduction techniques, and research into the effectiveness of psychosocial and psychological interventions. They also call for public health interventions in high-risk populations, highlighting that Public Health England is already working to improve provision in prisons.

Co-investigator, Dr Nina O'Neill, Research Fellow from the School of Nursing and Midwifery at Queen's commented: "We were also able to look beyond the reported physiological effects of the drugs and learn more about the wider impact of NPS use on the individual, including their physical, psychological and social wellbeing."

"Our findings help to clearly explain why people use NPS in the ways that they do. We hope that this will help experts on NPS to consider interventions which would be most helpful in preventing people from using NPS in the future and reducing harms for people who already use NPS in the interests of better health across society as a whole," added Co-investigator, Dr Anne Campbell, Senior Lecturer from the School of Social Sciences, Education and Social Work, and the Centre for Evidence and Social Innovation at Queen's.

Credit: 
National Institute for Health Research

Skeletal shapes key to rapid recognition of objects

In the blink of an eye, the human visual system can process an object, determining whether it's a cup or a sock within milliseconds, and with seemingly little effort. It's well-established that an object's shape is a critical visual cue to help the eyes and brain perform this trick. A new study, however, finds that while the outer shape of an object is important for rapid recognition, the object's inner "skeleton" may play an even more important role.

Scientific Reports published the research by psychologists at Emory University, showing that a key visual tool for object recognition is the medial axis of an object, or its skeletal geometry.

"When we think of an object's shape, we typically imagine the outer contours," explains Vladislav Ayzenberg, first author of the paper and an Emory PhD candidate in psychology. "But there is also a deeper, more abstract property of shape that's described by skeletal geometry. Our research suggests that this inner, invisible mechanism may be crucial to recognizing an object so quickly."

"You can think of it like a child's stick drawing of a person," adds Stella Lourenco, senior author of the study and an associate professor of psychology at Emory. "Using a stick figure to represent a person gives you the basic visual information you need to immediately perceive the figure's meaning."

The Lourenco lab researches human visual perception, cognition and development.

Visual perception of an object begins when light hits our eyes and the object is projected as a two-dimensional image onto the photoreceptor cells of the retina. "A lot of internal machinery is whirring between the eyes and brain to facilitate perception and recognition within 70 milliseconds," Ayzenberg says. "I'm fascinated by the neural computations that go into that process."

Although most people take it for granted, object recognition is a remarkable feat. "You can teach a two-year-old what a dog is by pointing out a real dog or showing the child a picture in a book," Lourenco says. "After seeing such examples a child can rapidly and with ease recognize other dogs as dogs, despite variations in their individual appearances."

Human object recognition remains robust despite variation within a class of objects in outer contours, sizes, textures and colors. For the current paper, the researchers developed a series of experiments to test the role of skeletal geometry in the process.

In one experiment, participants were presented with paired images of 150 abstract 3D objects on a computer. The objects had 30 different skeletal structures. Each object was rendered with five different surface forms, to change the visible shape of the object without altering the underlying skeleton. The participants were asked to judge whether each pair of images showed the same or different objects. The results showed that skeletal similarity was a significant predictor of a correct response.
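The stimulus design described above (30 skeletons x 5 surface forms = 150 objects) can be sketched as follows. This is a hypothetical reconstruction for illustration; the object encoding, function names, and pairing logic are assumptions, not the authors' actual materials or code.

```python
# Hypothetical sketch of the stimulus design: 30 skeletal structures, each
# rendered with 5 surface forms, giving 150 objects. An object is encoded as
# a (skeleton_id, surface_id) pair purely for illustration.
N_SKELETONS, N_SURFACES = 30, 5
objects = [(s, f) for s in range(N_SKELETONS) for f in range(N_SURFACES)]

def same_object(a, b):
    """Ground truth for a 'same or different?' trial: identical skeleton
    and identical surface form."""
    return a == b

def same_skeleton(a, b):
    """Shared skeleton despite a different surface form -- the cue the
    study found predicted participants' judgments."""
    return a[0] == b[0]

# Example trial: same skeleton, different surface rendering. The study's
# finding is that pairs like this are judged 'same' far more reliably than
# pairs sharing only a surface form.
pair = ((3, 0), (3, 4))
```

The key design point is that surface form and skeleton vary independently, so the two cues can be pitted against each other in later experiments.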

A second experiment, based on adaptations of three of the objects, tested the effects of proportional changes to the shape skeleton. Participants were able to accurately predict object similarity at a rate significantly above chance at every level of skeletal change.

A third experiment tested whether an object's skeleton was a better predictor of object similarity than its surface form. Participants successfully matched objects by their skeletal structure or surface forms when each cue was presented in isolation. They showed a preference, however, to match objects by their skeletons, as opposed to their surface forms, when these cues conflicted with one another.

The results suggest that the visual system is not only highly sensitive to the skeletal structure of objects, but that this sensitivity may play an even bigger role in shape perception than object contours.

"Skeletal geometry appears to be more important than previously realized, but it is certainly not the only tool used in object recognition," Lourenco says. "It may be that the visual system starts with the skeletal structure, instead of the outline of an object, and then maps other properties, such as textures and colors, onto it."

In addition to adding to fundamental knowledge of the human vision system, the study may give insights into improving capabilities for artificial intelligence (AI). Rapid and accurate object recognition, for example, is vital for AI systems in self-driving cars.

"The best model for a machine-learning system is likely a human-learning system," Ayzenberg says. "The human vision system has solved the problem of object recognition through evolution and adapted quite well."

Credit: 
Emory Health Sciences

A lack of self control during adolescence is not uniquely human

Impulsiveness in adolescence isn't just a phase, it's biology. And despite all the social factors that define our teen years, the human brain and the brains of other primates go through very similar changes, particularly in the areas that affect self-control. Two researchers review the adolescent brain across species on August 20 in the journal Trends in Neurosciences.

"As is widely known, adolescence is a time of heightened impulsivity and sensation seeking, leading to questionable choices. However, this behavioral tendency is based on an adaptive neurobiological process that is crucial for molding the brain based on gaining new experiences," says Beatriz Luna of the University of Pittsburgh, who co-authored the review with Christos Constantinidis of Wake Forest School of Medicine.

Structural, functional, and neurophysiological comparisons between humans and macaque monkeys show that this difficulty in stopping reactive responses is similar in our primate counterparts, who, during puberty, also show limitations in tests where they have to stop a reactive response. "The monkey is really the most powerful animal model that comes closest to the human condition," says Constantinidis. "They have a developed prefrontal cortex and follow a similar trajectory with the same patterns of maturation between adolescence and adulthood."

Taking risks and having thrilling adventures during this period isn't necessarily a bad thing. "You don't have this perfect inhibitory control system in adolescence, but that's happening for a reason. It has survived evolution because it's actually allowing for new experiences to provide information about the environment that is critical for optimal specialization of the brain to occur," Luna says. "Understanding the neural mechanisms that underlie this transitional period in our primate counterparts is critical to informing us about this period of brain and cognitive maturation."

Human neurological development during this time is characterized by changes in structural anatomy: there is an active pruning of redundant and unused neural connections and a strengthening of white matter tracts throughout the brain that will determine the template for how the adult brain will operate. By adolescence, all foundational aspects of brain organization are in place, and during this time they undergo refinements that will enable the brain to operate optimally given the demands of its specific environment.

In particular, the development of neural activity patterns that allow for the preparation of a response seems to be a key element of this phase of development--and essential to successful performance on self-control tasks.

This all suggests that self-control isn't just about the ability, in the moment, to inhibit a behavior. "Executive function involves not only reflexive responses but actually being prepared ahead of time to create an appropriate plan. This is the change between the adolescent and adult brain and it is strikingly clear both in the human data and in the animal data," says Constantinidis.

Ultimately, the authors believe that this phase of development is essential to shaping the adult brain. "It is important for there to be a period where the animal or the human is actively encouraged to explore because gaining these new experiences will help mold what the adult trajectories are going to be," says Luna. "It's important to have this conversation and comparison between human and animal models so that we can understand the neural mechanisms that underlie vulnerability during this time for impaired development such as in mental illness, which often emerges in adolescence, but importantly to inform us in how to find ways to correct those trajectories."

Credit: 
Cell Press

Do hospital ads work?

New Marketing Science Study Key Takeaways:

Patients are positively influenced by hospital advertising.

Older patients and those with more restrictive forms of insurance are less sensitive to ads.

Wealthy patients and patients who live far from a hospital respond well to advertising.

A blanket ban on hospital advertising can lead to increased hospital readmissions.

CATONSVILLE, MD, August 20, 2019 - Should hospital advertising be banned? A few policymakers in Washington, D.C., have recently considered such an action based on a long-standing debate over whether it promotes the spread of misinformation and whether it is an effective or responsible use of an already limited healthcare budget. New research in the INFORMS journal Marketing Science studies the impact of a ban on hospital advertising, and whether those fears are justified.

Study authors Tongil "TI" Kim and Diwas KC, both of Emory University, believe the results can help guide discussions about legislation pertaining to hospital advertising in the changing landscape of healthcare delivery.

The results show that patients are positively influenced by hospital advertising. The researchers say seeing a television ad for a hospital makes a patient more likely to choose that hospital.

"There are differences in patient response to ads based on insurance status, medical conditions and demographic factors like age, race and gender," said Kim, a professor in the Goizueta Business School at Emory. "Older patients and those with more restrictive forms of insurance are less sensitive to ads while wealthy patients and patients who live far from a hospital are more likely to respond to advertising."

The study looked at 220,000 patient visits between September 2008 and August 2010 in Massachusetts, observing characteristics like hospital type, geographic location and advertisement dollars spent.

But, does that advertising facilitate the spread of misinformation? Is it an effective use of hospital operating funds?

"Our research found that banning hospital advertising can negatively affect population health outcomes by increasing hospital readmissions within 30 days. A blanket ban on hospital advertising can lead to an additional 1.2 hospital readmissions for every 100 hospital discharges," said Kim.

Hospital readmission rate is a measure of hospital quality, and, according to the study, high-quality (low-readmission) hospitals tend to advertise more. Thus, advertising allows these hospitals to draw and treat more patients, leading to overall population-level quality gains. A ban on advertising would then reduce the number of patients going to those hospitals and hence lower population-level health outcomes.
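The reported effect size translates directly into patient counts. The sketch below takes the study's estimate of 1.2 extra 30-day readmissions per 100 discharges at face value; applying it to the ~220,000 visits in the study's sample is a purely illustrative assumption (visits are not all discharges), not a figure from the paper.

```python
# Back-of-the-envelope sketch of the reported effect size: a blanket ad ban
# is associated with ~1.2 extra 30-day readmissions per 100 discharges.
EXTRA_READMISSIONS_PER_100 = 1.2  # study's estimate

def extra_readmissions(discharges):
    """Projected additional 30-day readmissions under a blanket ad ban."""
    return discharges * EXTRA_READMISSIONS_PER_100 / 100

# Hypothetical illustration: scaling the rate to the ~220,000 visits in the
# study's sample (treating each visit as a discharge, for illustration only).
projected = extra_readmissions(220_000)
```

Even a seemingly small per-discharge rate therefore compounds into thousands of additional readmissions at population scale, which is the study's core policy argument.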

Kim added, "In short, we found that when you inhibit a hospital's ability to attract new patients, you also negatively impact patient flow, and you contribute to an increase in hospital readmissions."

Credit: 
Institute for Operations Research and the Management Sciences

Examining the link between caste and under-five mortality in India

In India, children who belong to disadvantaged castes face a much higher likelihood of not living past their fifth birthday than their counterparts in non-deprived castes. IIASA researchers examined the association between caste and under-five mortality in an effort to help reduce the burden of under-five deaths in the country.

The world has made remarkable progress in terms of child survival rates over the past few decades, with a marked decline in deaths among children under five seen across the world since the mid-2000s. Although India has also seen a consistent decline in child deaths in recent years, the under-five mortality rate is still high in a few selected states and among the scheduled caste (previously referred to as "untouchables") and scheduled tribe population (communities living in tribal areas), thus representing a major public health concern. Despite continuing efforts by the Indian government to redress the effects of the caste system, it remains a significant line of social division in the country and people in disadvantaged castes continue to experience adverse circumstances such as caste-based discrimination while accessing the health care system.

In their study published in the journal PLOS ONE, IIASA researchers examined the disparities in under-five mortality in the so-called high focus states of India - designated as such due to their persistently high child mortality rates and relatively poor socioeconomic and health indicators. The study examined the association between castes and under-five mortality in these regions using the most recent data from the Indian Demographic Health Survey conducted between 2015 and 2016. In addition, the authors attempted to quantify the relative contribution of socioeconomic determinants to under-five deaths by explaining the gap between socially disadvantaged castes and non-disadvantaged castes in these regions in an effort to help reduce the absolute and relative burden of under-five deaths in the country.

"The main questions we wanted to address were whether the association between caste and under-five mortality had been fading away in recent years due to the government's health programs, particularly after the implementation of the country's National Rural Health Mission (NRHM) in 2005. We also wanted to determine which factors contribute to caste-based gaps in under-five mortality in India," explains Jayanta Bora, lead author of the study and previously a researcher with the IIASA World Population Program. "The novelty of our work lies therein that, to our knowledge, this is the first study in India that provides district-level estimates of the under-five mortality rate while systematically investigating the factors explaining under-five deaths by caste groups using nationally representative data."

The results show that in the high focus Indian states, the gap in under-five deaths between well-off and deprived caste children has declined since the NRHM was implemented, indicating the program's positive impact in reducing caste-based inequalities. Despite this reduction, children belonging to the scheduled caste population still experienced higher mortality rates than children in the rest of the population between 1992 and 2016. Both district-level mortality rates and individual-level analyses showed that children belonging to scheduled castes have the highest likelihood of dying before their fifth birthday. A further analysis of the data revealed that 83% of the caste-based gap in under-five deaths is related to the level of educational attainment of women and the difference in household wealth between the scheduled castes and tribes and the rest of the population. The study also found that program indicators such as place of birth and the number of antenatal care visits that women attended over the course of their pregnancy contributed significantly to widening caste-based gaps in the under-five death rate.
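The 83% figure above is a gap-decomposition result: it splits the total caste gap into an "explained" share (maternal education and household wealth) and a residual. The sketch below only illustrates how such a share is read; the total gap value is hypothetical, and only the 83% share comes from the study.

```python
# Illustrative reading of a gap decomposition. The study reports that
# maternal education and household wealth together account for 83% of the
# caste-based gap in under-five mortality; the remainder is attributed to
# other factors. The gap_total value here is hypothetical.
gap_total = 20.0          # hypothetical gap, deaths per 1,000 live births
share_explained = 0.83    # education + wealth, from the study

explained = gap_total * share_explained      # portion attributable to the
                                             # two socioeconomic determinants
unexplained = gap_total - explained          # residual (other factors)
```

Read this way, the result implies that policies raising maternal education and household wealth among scheduled castes and tribes could, in principle, close most of the mortality gap, which is the thrust of the researchers' recommendations.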

According to the researchers, much still needs to be done to improve access to health care facilities for mothers and children belonging to deprived caste groups in India, and continuous efforts to raise the level of maternal education and the economic status of people belonging to these groups should be pursued simultaneously.

"We strongly believe that this study is a crucial contribution to knowledge in the Indian context with respect to the UN's Sustainable Development Goal number three, which focuses on ensuring healthy lives and promoting wellbeing for all people at all ages. The findings of this study can help to understand the factors behind the pervasive gap in the under-five mortality rate between deprived and other caste groups in India. Creating awareness around preventive health care, maternal care, nutrition, awareness about infectious diseases, the benefits of hygiene and sanitation, and subsidized maternal health care services among the scheduled caste and scheduled tribe populations, should be increased through outreach programs," Bora concludes.

Credit: 
International Institute for Applied Systems Analysis

New immune system understanding may help doctors target cancer

image: Paul Norman, PhD, and colleagues identify a new way the immune system "sees" cancer

Image: 
University of Colorado Cancer Center

Your immune system's natural killer cells recognize and attack two major kinds of danger - cells infected by viruses and cells affected by cancer. When natural killer (NK) cells see a cancer cell, they kill it (naturally...). And a major research focus has been to define how NK cells do this "seeing." One way NK cells see cancer is by recognizing bits of mutated DNA displayed on "silver platters" made by human leukocyte antigen (HLA) genes.

In fact, there are two classes of HLA genes. HLA class 1 genes do exactly this task of making proteins that display a cell's DNA for examination and evaluation. But while HLA class 1 genes help to identify bad actors among the body's own cells, HLA class 2 genes help the body mark and target invaders from outside the body, rallying antibodies against things like bacteria.

So why has recent research shown that patients whose cancer cells are marked by high counts of HLA class 2 proteins have better outcomes? Sure, HLA class 1 brings NK cells to attack tumor tissue, but it's not like HLA class 2 interacts with NK cells, right?

Wrong.

A University of Colorado Cancer Center study recently published in the journal Nature Immunology shows that NK cells do, in fact, interact with HLA class 2. The implications may help researchers better harness the immune system to fight cancer, and, on the other side of the coin, may also help to calm the immune system's attack of healthy tissues in some autoimmune conditions.

"The understanding has been that NK cells interact with HLA class 1 but not class 2. The identification of a mechanism that class 2 uses to interact with NK cells changes our perception of NK cell biology," says study co-author Paul Norman, PhD, CU Cancer Center investigator and associate professor in the CU School of Medicine Division of Personalized Medicine, and Department of Microbiology and Immunology.

Basically, collaborators at the German Center for Infection Research screened many types of proteins created by HLA class 2 genes to see if any would activate NK cells.

"This is called ligand screening - test lots of potential ligands and see what sticks. But because it was established fact that HLA class 2 didn't interact with NK, nobody looked. They were smart enough to look when nobody else did," Norman says.

Specifically, the team found that a kind of HLA class 2 called HLA-DP401 does indeed activate NK cells - and HLA-DP401 is one of the most common variations (called alleles) found in Europeans.

"The problem is that not every one of these HLA class 2 molecules is recognized by NK cells. HLA is polymorphic and different individuals have different kinds. The next step is to identify which patients have the right combinations of HLA class 2 and NK cells - like we recently published for HLA class 1 in leukemia protection - and then we could potentially learn to engineer NK cells to interact with HLA class 2 alleles they don't interact with now," Norman says.

Norman's ongoing research hopes to further define the mechanism of interaction between the proteins created by HLA class 2 genes and NK cells.

"Eventually, this could be another way to direct the immune system toward cancer and away from healthy tissues," Norman says.

Credit: 
University of Colorado Anschutz Medical Campus