Free rides could lead to better health outcomes for seniors

image: This is Leslie Saxon, MD, executive director of the USC Center for Body Computing.

Image: 
Ricardo Carrasco III, Keck Medicine of USC

LOS ANGELES - Better access to on-demand transportation could help older adults miss fewer medical appointments and reduce their social isolation, improving their overall health, suggests a new study published in the Journal of mHealth by researchers at the USC Center for Body Computing (USC CBC) at the Keck School of Medicine of USC.

USC CBC researchers provided free and unlimited Lyft rides for three months to 150 older patients with chronic disease to see if better access to transportation could improve their access to medical centers and reduce their social isolation. The researchers also wanted to know its overall impact on their health.

Ninety-three percent of patients used the Lyft rides to access medical care, and a significant number learned to use smartphone apps and a telephone concierge service to do so. While visits to their doctors accounted for a third of all rides (31%), the seniors used the remaining rides to get to fitness, social and leisure activities, and 92% reported that this improved their perceived quality of daily life.

"Access to transportation is more than getting from point A to point B; it encompasses multiple human facets of life," says Leslie Saxon, MD, executive director of the USC Center for Body Computing and principal investigator of the study. "This research underscores how ride-sharing platforms can provide a significant benefit to the well-being of older adults, empowering them to become active participants in their own care as well as in other areas of their lives."

The researchers concluded that while older adults can be willing, even enthusiastic, adopters of novel technology, motivated to overcome barriers to their own health care, they need education and support to be successful.

In the study, 150 English-speaking Keck Medicine of USC patients over the age of 60 (63% female and 54% Caucasian) with chronic diseases were screened for psychological factors and to ensure that access to transportation was a factor in missed appointments. The patients lived an average of 20 miles away from Keck Medical Center of USC and over 45% relied on others for transportation.

The patients were offered free rides to both medical and non-medical destinations for three months and received personalized training in summoning a ride from a ride-sharing app. They also had the option to schedule rides via the Lyft phone concierge platform. Each participant was assigned a wrist-worn daily activity tracker.

The majority of older adults reported that cost was the primary barrier preventing them from continuing to use the ride shares after the study concluded, as the average cost of a month of unlimited rides in Los Angeles is $500. "Because study participants expressed challenges with the costs associated with ride-sharing, our next steps will be to compare the long-term costs of using these apps for health care needs versus the opportunity costs of inconsistent access to transportation," Saxon says. "Our findings on the benefits to seniors could incentivize similar programs on a larger scale by cities and insurance companies."

The study was supported by a $1 million grant from the AARP Foundation as part of a broader collaboration with UnitedHealthcare to address the needs of seniors. Lyft provided the rides and transportation data as well as the app and telephone concierge platform.

"Access to reliable, affordable transportation is essential for older adults," says Lisa Marsh Ryerson, president of AARP Foundation. "When older adults can't get to medical appointments or social activities, the negative effect on well-being -- whether from untreated medical conditions or lack of social interaction -- is enormous. This study highlights the vital connection between good transportation and good health in our communities, especially for seniors who can no longer drive."

Credit: 
University of Southern California - Health Sciences

What's at the 'heart' of a heartbeat?

image: Dr. Katherina M. Alsina on the left, and Dr. Xander Wehrens.

Image: 
Baylor College of Medicine

In the confines of the thoracic chamber, a heart has lost its rhythm. Its two upper chambers, the atria, are beating out of sync with the two lower chambers, the ventricles. The resulting chaos is called atrial fibrillation and is a major concern because it prevents the heart from pumping effectively and is associated with serious complications including heart failure, dementia and a fivefold increase in the risk of stroke.

The laboratories of Dr. Xander Wehrens at Baylor College of Medicine and Dr. Stephan Lehnart at the University of Goettingen in Germany are making headway into understanding the molecular mechanisms that underlie this devastating rhythm disorder.

What is at the 'heart' of a heartbeat?

"At the molecular level, calcium is essential for maintaining a healthy heartbeat," said Wehrens, professor of molecular physiology and biophysics and the Juanita P. Quigley Endowed Chair in Cardiology at Baylor. "Proper contraction and relaxation of the heart depends on the coordinated flux of calcium ions in and out of individual cardiac muscle cells. Research in our lab focuses on understanding how these calcium dynamics are regulated in normal and diseased hearts."

Cardiac cells store calcium in a specialized compartment called the sarcoplasmic reticulum, or SR. "Upon electrical activation of the cell, calcium released from these SR compartments binds to the contractile system and causes contraction," said first author Dr. Katherina M. Alsina, who was a graduate student during the development of this project and is currently senior project manager in the Cardiovascular Research Institute. "Then, the same calcium that was released has to be channeled back into the SR so the cell can relax."

It has been well established that the activities of the SR calcium release and reuptake systems are altered in cells from patients with atrial fibrillation, and that the resulting defects in calcium cycling contribute to the pathogenesis of the disease. Until now, though, these two processes (SR calcium release and reuptake) were thought to be mediated by separate systems, but the work from Wehrens and colleagues is changing that model.

Lehnart, professor of cardiology and pulmonology and co-corresponding author of this study, explains that the group's findings shift the long-standing paradigm that SR calcium release and reuptake complexes are separate and discrete functional entities. "Assemblies of multiple proteins, also called protein complexes, are at the center of cardiac function, regulation and disease. Using an unbiased strategy for analysis, we identified a super-complex of two essential protein machines, a calcium release channel and a calcium reuptake pump, and found that both are actually regulated by the same molecular mechanism in heart cells."

A new piece of the puzzle

As Wehrens, Alsina, Lehnart and their colleagues explored the molecular players involved in regulating SR calcium cycling, they discovered a new piece of the puzzle that has changed the field's understanding of the molecular mechanisms leading to atrial fibrillation.

Using an unbiased technique called complexome profiling, Wehrens and colleagues found that a phosphatase regulatory subunit known as PPP1R3A binds to both the SR calcium release and reuptake complexes, forming one big super-complex that is present in atrial cells from both mice and humans. Furthermore, using a new imaging technique called 'STimulated Emission Depletion' (STED) super-resolution microscopy, the team visually confirmed that these two complexes are very close to each other.

"With traditional confocal microscopy we see blurry spots, but with STED we can clearly see individual molecules and estimate the distance between them," said Wehrens, who is the director of the Cardiovascular Research Institute. "We found that components of the calcium release complex overlap with components of the calcium reuptake complex, and that this super-complex is also present in tissue samples from human atria."

PPP1R3A seems to be at least one of the molecular regulators involved in maintaining the integrity of this newly identified calcium-handling super-complex. Genetic deletion of PPP1R3A in mice disrupted the formation of the super-complex, promoted abnormal SR calcium cycling and increased susceptibility to atrial fibrillation.

In addition, the researchers found that levels of PPP1R3A were decreased and the super-complex disrupted in atria of human patients with atrial fibrillation. These findings may have exciting implications for the future of atrial fibrillation treatments. As Wehrens explained, "a reduction in the amount of PPP1R3A may explain two defects in calcium handling in atrial fibrillation, opening the possibility of treating both defects at the same time."

Lehnart emphasized that these findings were only possible through the combined efforts of multiple labs across the world. "Our joint work, which originated through independent efforts and methodologies in different laboratories, also shows the importance of international collaboration to understand and eventually develop new therapies for complex diseases such as atrial fibrillation."

Credit: 
Baylor College of Medicine

New rechargeable CCNY aqueous battery challenges Lithium-ion dominance

image: The newly designed high voltage aqueous Zn anode batteries can challenge Li-ion's current dominance.

Image: 
Photo: G.G. Yadav et al., ACS Energy Lett., 2019, 4, 2144-2146.

A new rechargeable high voltage manganese dioxide zinc battery, exceeding the 2 V barrier in aqueous zinc chemistry, is the latest invention by City College of New York researchers. With a voltage of 2.45-2.8 V, the alkaline MnO2|Zn battery, developed by Dr. Gautam G. Yadav and his group in the CCNY-based CUNY Energy Institute, could break the long dominance of flammable and expensive lithium (Li)-ion batteries in the market.

To break the previously daunting 2 V barrier in aqueous zinc chemistry, primary inventor Yadav and his team interfacially engineered two different aqueous electrolytes that deliver the theoretical capacity (308 mAh/g) reversibly for many cycles.

"The voltage of current commercially available alkaline MnO2|Zn batteries is around 1.2-1.3V, and this has been considered low compared to Li-ion which has a voltage >3V," said Yadav.

Voltage has been Li-ion's greatest asset and has helped fuel its rise in an energy-hungry world.

"Unfortunately it contains elements that are toxic and geopolitically sensitive with Asian countries having a monopoly on mining and manufacturing them," added Yadav.

"This has put the United States at a tremendous disadvantage and has lost its lead in energy storage industry, when in the past it was a world leader. With Mn and Zn being widely available elements, and with the U.S. being rich with them as well, it allows the U.S. to compete again. The manufacturing cost of these batteries will also be low, so it can kick start the growth of the energy storage industry in the U.S."

Credit: 
City College of New York

Discharge incentives in emergency rooms could lead to higher patient readmission rates

In an effort to address emergency department overcrowding, pay-for-performance (P4P) incentive programs have been implemented in various regions around the world, including hospitals in Metro Vancouver. But a new study from the UBC Sauder School of Business shows that while such programs can reduce barriers to access for admitted patients, they can also lead to patient discharges associated with return visits and readmissions.

The study looked at over 800,000 patient visits to the four major emergency departments in Metro Vancouver over a three-year period from April 1, 2013, to March 31, 2016. The study focused on patients with higher acuity levels (triage level 1, 2, or 3). During the first year of the study period, two P4P incentive programs were in effect, funded by the BC provincial government: emergency departments received $100 in compensation for each discharged patient with a length-of-stay (LOS) of less than four hours, and $600 for each admitted patient who spent less than 10 hours in the emergency department.
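
For clarity, the two incentive rules described above can be expressed as a short function; a minimal Python sketch, where the visit records are hypothetical examples rather than study data:

```python
# Minimal sketch of the two P4P incentive rules described above.
# The visit records below are hypothetical, not data from the study.

def incentive_payment(disposition: str, hours_in_ed: float) -> int:
    """Payment under the described scheme: $100 for a discharge with a
    length-of-stay under 4 hours, $600 for an admission after less than
    10 hours in the emergency department."""
    if disposition == "discharged" and hours_in_ed < 4:
        return 100
    if disposition == "admitted" and hours_in_ed < 10:
        return 600
    return 0

visits = [("discharged", 3.7), ("discharged", 4.2), ("admitted", 9.5)]
print(sum(incentive_payment(d, h) for d, h in visits))  # 100 + 0 + 600 = 700
```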

The BC government terminated both P4P programs on March 31, 2014; however, the regional health authority governing all four emergency departments studied decided to internally fund the exact same $600 admission incentive scheme, which continued without interruption. Only the $100 discharge incentive disappeared entirely when the government's P4P policy ended.

"In the past, the extent to which these types of programs affected the length of stay of individual patients was not well understood, because previous studies have only examined aggregate performance metrics as they relate to length of stay," said Yichuan (Daniel) Ding, study co-author and assistant professor in the Operations and Logistics Division at the UBC Sauder School of Business. "Our study took a much more granular approach, where we focused specifically on patient discharges that took place within 20 minutes of the deadline for the incentive, because we wanted to know: were these patients discharged to catch the deadline?"

The study found that for patients discharged home, there was a significant discontinuity around the four-hour mark: a disproportionate number of patients were discharged just before four hours, after which the likelihood of discharge fell. This phenomenon was observed in only two of the four emergency departments; the other two did not exhibit this discontinuity.
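
To illustrate the flavour of this analysis, one can compare discharge counts in the 20-minute windows on either side of the four-hour mark; a minimal sketch on synthetic length-of-stay values (not the study's data or its actual statistical method):

```python
# Illustrative discontinuity check at the 4-hour mark: compare discharges in
# the 20-minute window just before the deadline with the window just after.
# The length-of-stay values (hours) are synthetic toy data.

los_hours = [3.5, 3.8, 3.9, 3.95, 3.97, 3.99, 4.1, 4.4, 4.8]

WINDOW = 20 / 60  # 20 minutes, in hours
before = sum(1 for t in los_hours if 4 - WINDOW <= t < 4)
after = sum(1 for t in los_hours if 4 <= t < 4 + WINDOW)
print(f"discharges 20 min before the deadline: {before}, after: {after}")
# A count just before the deadline that is far larger than the count just
# after is the kind of jump the study reads as deadline-driven discharges.
```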

"Our study confirmed that this type of financial incentive altered system performance. And in the positive sense, that means that the program is effective, because it impacts length of stay, for both discharged and admitted patients," said Eric Park, study co-author and assistant professor in the Faculty of Business and Economics, University of Hong Kong. "But when we looked more granularly at the patients that were discharged within 20 minutes before the deadline, we found that one of the four emergency departments had a greater revisit and readmission rate within seven days - meaning that within seven days, those patients are more likely to come back and be admitted to hospital. It is possible that this is a signal of premature discharge."

"However, we cannot assert that discharge is premature using this metric alone, especially given that it was only observed in one of the four emergency departments; but it is a potentially worrisome finding," added Yuren Wang, study co-author with the National University of Defense Technology in Changsha, China.

The study also found that for admitted patients at the 10-hour mark, the discontinuity was even more pronounced, and it applied to all four emergency departments.

"Our recommendations based on this research are that setting an incentive for admitted patients improves length of stay, but the four-hour benchmark for discharged patients should be implemented with care," said Dr. Garth Hunte, study co-author and emergency physician at St. Paul Hospital in Vancouver. "There is no sense for an incentive to discharge patients that may require admission to hospital." This is consistent with what the hospitals are actually now doing, thanks to the regional health authorities' ongoing funding of the admission incentive.

Credit: 
University of British Columbia - Sauder School of Business

African elephants demonstrate movements that vary in response to ecological change

image: Results from this study bring new light to elephants' individuality, said Associate Professor George Wittemyer.

Image: 
Guillaume Bastille-Rousseau/Colorado State University

Wild African elephants, known for their intelligence, show markedly different movements and reactions to the same risks and resources. A new study led by Colorado State University and Save the Elephants reveals the magnitude and complexity of this variation in behavior and how it occurs in space and time, and among individual animals.

The findings, published in the September issue of Ecology Letters, indicate how elephants employ a diverse array of strategies that they adjust based on ecological changes. In particular, poaching causes elephants to change their movement tactics. The study results indicate that landscape conservation efforts should account for the different tactics elephants display.

The research team used GPS tracking data from more than 150 individual elephants followed over 17 years in the Samburu and Buffalo Springs National Reserves in northern Kenya as part of Save the Elephants' long-term monitoring project. The scientists evaluated individual behavior of elephants to identify how each animal used various food and water resources. They then developed new analytical approaches to understand what drives variations among individual elephant behaviors.

The authors found that many elephants were targeting a specific resource, while others were avoiding that same resource - an unintuitive result, given that movement of most species is driven by the same factors of food, security and social interactions. The variation in elephant behaviors was stronger during the resource-limited dry season, compared with the resource-rich wet season, suggesting a key driver of the different movement strategies was avoidance of competition with other elephants.

"The extent and complexity of the variation among individuals was greater than we anticipated and demonstrated much more diversity than that found in other species," said Guillaume Bastille-Rousseau, a post-doctoral fellow at CSU and lead author of the study.

George Wittemyer, an associate professor of fish, wildlife and conservation biology at CSU and co-author of the study, noted that elephants are hyper-social, with social interactions structuring everything in their lives, including their movements and space use.

"The results from this study bring new light to elephants' individuality, where even when using the same location and facing the same constraints, elephants do not conform to a single behavior," said Wittemyer, who also serves as the chairman of the scientific board of Save the Elephants. "We found this individuality was most clear in the manner by which elephants interact with humans, with some more willing to take risks than others."

Bastille-Rousseau said the team was amazed to see that the elephants shifted their tactics over time, apparently adjusting their strategies relative to changes in the landscape.

"The next step for our research will be to try to understand if individuals displaying a given tactic are more successful than individuals using a different tactic," he said.

Credit: 
Colorado State University

BRCA1/2 genetic testing recommendations still leave issues unresolved

PHILADELPHIA - The U.S. Preventive Services Task Force (USPSTF) has released a new Recommendation Statement for BRCA1/2 evaluation, urging the medical community to widen the parameters used to assess BRCA1 and BRCA2 mutation risks and increase the use of genetic counseling and testing for those with the highest risk. While the changes are beneficial, the recommendations still fail to address many persisting problems in the modern world of genetic testing, according to a new JAMA editorial co-authored by Susan Domchek, MD, executive director of the Basser Center for BRCA at the Abramson Cancer Center at the University of Pennsylvania.

"Genetic testing is an area of medicine that is progressing very quickly, which means providers need to be nimble in order to keep up," Domchek says. "The medical community needs to consider what genetic health data is truly helpful to a patient, strive to test those who may be genetically predisposed to an increased risk of cancer, and work to educate patients and providers on how to correctly and effectively use their test results to make better healthcare decisions."

Mutations in BRCA1 and BRCA2 have been linked to significantly increased risks of breast, ovarian, prostate, and pancreatic cancers, and there are many commercially available tests that can reliably show whether someone has a BRCA1 and/or BRCA2 mutation. Domchek and co-author Mark Robson, MD, a medical oncologist and chief of the Breast Medicine Service at Memorial Sloan Kettering Cancer Center, write that one important point not included in the new recommendations is the link between genetic testing and treatment plans. They note that BRCA1/2 status can impact surgical decision-making for patients newly diagnosed with early-stage breast cancer and influence treatment plans for certain advanced cancers, such as metastatic breast cancer. The USPSTF does not include newly diagnosed breast or ovarian cancer patients or advanced cancer patients in its recommendations.

The authors raise other concerns, not addressed in the new recommendations, specifically relating to the large-panel genetic tests now available. Previous genetic tests analyzed a few specific genes at a time, but there are now tests that can sequence up to 80 genes at once. While that sounds like an invaluable innovation, there are a plethora of genetic mutations with weak, questionable, or no links to cancer at all. Positive results for those types of mutations could create fear or distract from real genetic indicators like changes to BRCA1/2 genes. Additionally, direct-to-consumer multi-panel tests one can do at home - such as those offered by companies like 23andMe - further remove people from genetic specialists trained to educate and evaluate how results may be more or less meaningful given an individual's health, history, and family history.

"We should think of genetic testing like the internet," Domchek says. "It's a tool, full of information, but there's nuance in making sense of that information and determining how to act on it."

Although the authors would have liked to see more from the new USPSTF recommendations, they say the two main changes to those recommendations are certainly valuable.

"The statement adds those who have previously been diagnosed with breast or ovarian cancer, but are now cancer free, to the list of those who should undergo careful genetic risk-assessment, which is a positive addition as finding a BRCA1/2 mutation in these patients could directly impact their medical care and have implications for their relatives. It also more explicitly includes ancestry as a risk factor," Domchek says.

The new recommendation urges that broader ancestry information, not just family history of cancer, be considered when weighing genetic testing. Certain populations, specifically those with Ashkenazi Jewish heritage, have a higher prevalence of BRCA1/2 mutations.

While these expansions are positive, Domchek notes that many individuals at the highest risk of having a BRCA1 or BRCA2 mutation do not undergo genetic testing. In addition, racial and socioeconomic disparities in the uptake of genetic testing remain.

"It's the duty of all health care professionals to help our patients effectively employ genetic testing," Domchek says. "These updates are a positive step forward, but we need to continue advancing BRCA-related research and ensure that those at the highest risk have access to testing."

Credit: 
University of Pennsylvania School of Medicine

More children suffer head injuries playing recreational sport than team sport

image: Study finds children who do recreational sports like bike riding are more likely to suffer serious head injuries than children who play contact sport like AFL or rugby.

Image: 
Murdoch Children’s Research Institute

An Australian/New Zealand study examining childhood head injuries has found that children who do recreational sports like horse riding, skateboarding and bike riding are more likely to suffer serious head injuries than children who play contact sports like AFL or rugby.

The research, conducted by the PREDICT research network and the Murdoch Children's Research Institute (MCRI), published online by Wiley and soon to appear in the Australian Medical Journal, examined the data of 8,857 children presenting with head injuries to ten emergency departments in Australian and New Zealand hospitals.

A third of the children, who were aged between five and 18 years, injured themselves playing sport. Of these children, four out of five were boys.

Lead research author, MCRI's Professor Franz Babl, says the team looked at 'intracranial' injuries in children because while there is a lot of interest in sport and concussion, less is understood about the severity of head injuries children suffer while playing sport.

"The study found that in children who presented to the emergency departments after head injury and participated in recreational sports like horse riding, skate boarding and bike riding were more likely to sustain serious head injuries than children who played contact sport like AFL, rugby, soccer or basketball," he says.

"We found that 45 of the 3,177 sports-related head injuries were serious and classified as clinically important Traumatic Brain Injury (ciTBI), meaning the patient required either neuro-surgery, at least two nights in hospital and/or being placed on a breathing machine. One child died as a result of head injuries."

Prof Babl says the sports that most frequently led to presentation to emergency departments included bike riding (16 per cent), rugby (13 per cent), AFL (10 per cent), other football (9 per cent), and soccer (8 per cent).

The most frequent causes of serious injury were bike riding (44 per cent), skateboarding (18 per cent) and horse riding (16 per cent), with AFL and rugby resulting in one serious head injury each and soccer resulting in none.

A total of 524 patients with sports-related head injuries (16 per cent) needed CT imaging, and 14 children required surgery.

Credit: 
Murdoch Childrens Research Institute

Simple computational models can help predict post-traumatic osteoarthritis

Knee joint injuries, such as ligament rupture, are common in athletes. Because intact ligaments are a precondition for joint stability, ligament injuries are often surgically reconstructed. However, in many cases these injuries or surgeries can lead to post-traumatic osteoarthritis. The articular cartilage, which serves to provide frictionless contact between bones, wears out completely, causing severe joint pain, lack of mobility and even social isolation. Currently, preventing the onset and development of osteoarthritis is still the best clinical course of action. Computational modelling can be used to predict locations susceptible to osteoarthritis; however, current models are too complicated for clinical use and their predictions lack verification.

Researchers from the University of Eastern Finland, in collaboration with the University of California San Francisco, Cleveland Clinic, the University of Queensland, the University of Oulu and Kuopio University Hospital, have developed a method to predict post-traumatic osteoarthritis in patients with ligament ruptures using a simplified computational model. The researchers also verified the model predictions against measured structural and compositional changes in the knee joint between follow-up times. The findings were reported in Clinical Biomechanics.

In this proof-of-concept study, computational models were generated from patient clinical magnetic resonance images and measured motion. Articular cartilage was assumed to degenerate due to excessive tissue stresses, leading to collagen fibril degeneration, or excessive deformations, causing proteoglycan loss. These predictions were then compared against changes in MRI-specific parameters linked to each degeneration mechanism.
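
Conceptually, the two degeneration rules can be sketched as simple threshold checks; a minimal Python sketch, where the thresholds are hypothetical placeholders for illustration, not the calibrated tissue values used in the actual model:

```python
# Toy version of the two degeneration mechanisms described above. The
# thresholds are made-up placeholders, not the model's calibrated values.

STRESS_LIMIT_MPA = 7.0  # hypothetical stress limit for collagen damage
STRAIN_LIMIT = 0.30     # hypothetical deformation limit for proteoglycan loss

def degeneration_mechanisms(stress_mpa: float, strain: float) -> list:
    """Flag which degeneration mechanism(s) a cartilage region triggers."""
    flags = []
    if stress_mpa > STRESS_LIMIT_MPA:
        flags.append("collagen fibril degeneration")
    if strain > STRAIN_LIMIT:
        flags.append("proteoglycan loss")
    return flags

print(degeneration_mechanisms(stress_mpa=9.2, strain=0.35))
# ['collagen fibril degeneration', 'proteoglycan loss']
```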

"Our results suggest that a relatively simple finite element model, in terms of geometry, motion and materials, can identify areas susceptible to osteoarthritis, in line with measured changes in the knee joint from MRI. Such methods would be particularly useful in assessing the effect of surgical interventions or in evaluating non-surgical management options for avoiding or delaying osteoarthritis onset and/or progression," Researcher Paul Bolcos, a PhD student at the University of Eastern Finland, says.

The findings are significant and could provide pathways for patient-specific clinical evaluation of osteoarthritis risk and help identify optimal, individualised rehabilitation protocols.

"We are currently working on adding more patients in order to help tune the degeneration parameters and ensure the sensitivity of the mechanical to MRI parameters. Later, this method could be combined with a fully automated approach for generating these computational models developed in our group, narrowing the gap between research and clinical application," Bolcos continues.

Credit: 
University of Eastern Finland

Embryology: a sequence of reflexive contractions triggers the formation of the limbs

image: Accelerated rolling of the future hindlimb (arrow) and abrupt formation of the amniotic sac (triangle) are observed. The lightning bolt represents the electrode.

Image: 
Fleury et al. / CNRS photo library

It normally takes about 21 days for chicken embryos to develop into chicks. By observing chicken hindlimb formation, a CNRS / Université de Paris research team has just discovered that the mechanism at the origin of embryonic development consists of a sequence of reflexive contractions. The researchers were able to artificially recreate the same process and accelerate it by as much as a factor of 20. Their findings have been published in the European Physical Journal on August 15, 2019.

In its first days of life, a chicken embryo may be likened to a flat disc internally organised into concentric rings. During its development, the embryo stretches, rolls up and twists; this segregates the concentric rings into distinct folded tissues, which eventually give rise to various anatomical features. The scientists realised that during formation of the future chick's tail, one of these rings is stretched and mechanically deforms the posterior region of the embryo. This deformation sets off a series of reflexive contractions of the surrounding rings, exhibiting a domino effect. The contracting rings fold to yield the primitive contours of the hindlimbs.

In order to prove the physical nature of this phenomenon, the researchers designed an electric stimulator through which they administered brief low-intensity shocks (1 volt for 1-3 seconds) to the posterior portion of the embryo. These impulses mimicked the effect of a mechanical deformation like that produced during tail formation, triggered embryonic development in a cascading pattern, and even accelerated it up to 20-fold.

The scientists would like to pursue their research by investigating the technical limits of this discovery. Furthermore, this new method may be used outside the field of embryonic development, to study the effects certain diseases have on cells.

Credit: 
CNRS

Decades-old puzzle of the ecology of soil animals solved

image: Mealworms offered the mould Fusarium graminearum with aurofusarin (right) and its mutant without aurofusarin (left) prefer the mutant.

Image: 
Ruth Pilot

An international research team led by the University of Göttingen has deciphered the defence mechanism of filamentous fungi. Moulds are a preferred food source for small animals. As fungi cannot escape predation by running away, they produce defence metabolites, thereby rendering themselves toxic or unpalatable. After decades-long unsuccessful investigation, these defence compounds have now been identified. The results were published in Nature Communications.

Small soil animals such as worms, springtails, and mites constitute about 20% of the living biomass in soil. Since the 1980s, studies on fungal defence against animal predators have focused on mycotoxins. The toxicity of mycotoxins to insects has been documented in numerous studies; however, attempts to prove the ecological function of mycotoxins in defence against predation have failed. Researchers in Göttingen discovered that rather than mycotoxins, certain fungal pigments protect fungi from predation. These pigments are produced by many ascomycetes and belong to the class of dimeric naphthopyrones. The red pigment aurofusarin - which is produced by fungi of the genus Fusarium and by some tropical genera - was studied in detail.

Springtails and insect larvae recognised and avoided food modified to contain aurofusarin. Definitive proof of the ecological function of aurofusarin was obtained with the help of fungal mutants in which aurofusarin synthesis was disrupted by genetic engineering. Springtails, woodlice, and insect larvae accepted the mutants as food while avoiding fungal colonies with aurofusarin. Feeding experiments with different fungal species and mutants revealed that aurofusarin served as the major - or even only - defence compound in these fungi. Initial experiments with other bis-naphthopyrones, produced by the fungal genera Aspergillus and Penicillium, revealed that they also show antifeedant activity (i.e., they inhibit feeding).

Why do bis-naphthopyrones repel fungivores? According to the mycotoxin hypothesis, bis-naphthopyrones should be toxic. "We could not detect any toxicity when feeding springtails with food containing aurofusarin", explains Yang Xu, a PhD student in Göttingen and first author of the paper. "The animals survived feeding on aurofusarin for five weeks without apparent harm. Aurofusarin thus appears to be a non-toxic antifeedant."

Why has aurofusarin not lost its antifeedant effect after millions of years? Synthetic fungicides often lose efficacy after a couple of years, and plant defence chemicals do not protect their producers from adapted herbivores. Why have soil animals not adapted to fungal defence chemicals? "An explanation may lie in the very large amounts of defence chemicals that accumulate in fungal cultures", explains Professor Petr Karlovsky, head of the Molecular Phytopathology and Mycotoxin Research Lab. "Mutations that inactivate defence chemicals or reduce their binding to (yet unknown) receptors would not abolish the effect of defence chemicals."

If this hypothesis is proven correct, aurofusarin would be the first example of a new phenomenon in chemical ecology: the prevention of the adaptation of target organisms due to extremely high concentrations of defence chemicals.

Credit: 
University of Göttingen

Centuries-old Japanese family firms make history relevant to today's business world

Strategy-makers in long-lived Japanese firms face a challenge to match generations of history and guidance with modern-day corporate challenges and change.

A study by researchers from Lancaster University, Politecnico di Milano, UCL and Aalto University, published in the Strategic Management Journal, reveals that in many Japanese firms, foundational ka-kun - loosely translated as family mottos - remain relevant for decades, or even centuries.

Revered founders and leaders laid out the statements, such as family lessons, testaments and open letters, for their successors, articulating values for personal and business conduct and expressing principles that ensured past prosperity.

The researchers found strategy-makers grapple with these historical statements to turn them from a potential source of inertia into a resource for change. Some ka-kun - in amended form - are still formally adhered to, despite changes within companies and their environments, while others are radically altered or no longer mentioned, reflecting the challenge of keeping them relevant many years after they were set down. Only one company - which had preserved the same core business, family ownership and scale - honoured the ancient motto in its original form.

"The ka-kun tend to become emotionally-laded symbols of historical commitments for these firms. When they are used effectively, they can create a shared sense of purpose, mobilise collective action and responsiveness to changing competitive conditions, and lay the groundwork for sustainable competitive advantages," said co-author Dr Innan Sasaki, of Lancaster University Management School.

"When they were first forged, these statements were future-oriented - looking at where the firms wanted to be, and channeling energy, effort and resource in that direction. However, the passage of time means many are no longer relevant, even though they have acquired symbolic status, charged with emotion and inextricably tied to the firms' collective sense of self and legacy.

"This creates a tension between looking to the future and recognising the past of the statements, a struggle which is likely to become more pronounced over time, presenting the challenge if what to do regarding the ka-kun."

Professor Davide Ravasi, of the UCL School of Management, added: "Corporate leaders are using a variety of strategies to deal with the revered past when going through strategic change, strategies which address both the need to maintain continuity with the past and the need for strategic relevance now."

The researchers found three differing strategies in the usage of the ka-kun in the face of strategic change in modern Japan to establish a sense of continuity: elaborating, recovering and decoupling.

Elaborating sees the transfer of part of the content of the historical statement into a new one. This was seen with sake manufacturer Gekkeikan, which adapted ka-kun set out in 1933 in both 1955 and 1997.

Recovering forges a new statement based on the retrieval and re-use of historical references, as with Tokyo Keizai University, which looked back to its 1902 foundation in new mottos in 1992 and 2006.

Decoupling allows the co-existence of the historical statement and a contemporary one with different values, as seen with Yamanaka Hyoemon Shouten, founded in 1718 to commercialize food and sake, which adopted a new motto under a newly-appointed CEO in 2016.

Firms in Japan use all three methods to recognise their past while looking to the future, with recovering and decoupling often triggered by significant changes to the organization and/or its strategy.

"Elaborating helps maintain a sense of continuity by explicitly linking part of the revised statement with the original," said Dr Sasaki. "Revised statements are often presented as a development or an update of previous iterations, highlighting continuity while also refocusing attention on values managers view as important to keep the organisation viable in the present.

"The recovering strategy rests on the search of written, oral or even material memory. References to legendary leaders or a glorious past are used to emotionally energise and rally the organisation around a new strategy.

"Managers redirect attention to values they consider relevant to inspire and legitimise strategic change. At the same time, they claim continuity by reusing texts produced in the distant past. The new statement focuses on emerging issues and justifies changes, while the historical statement maintains a reassuring anchor in the past.

"Decoupling allows the maintenance of historical statements as a reassuring anchor with the past, maintaining a sense of stability and continuity in times of change. Like in the case of recovering, the new statement is associated with organisational or strategic, however, decoupling is more frequently associated with emerging issues not addressed by historical statements."

Professor Eero Vaara, of Aalto University School of Business and Lancaster University Management School, added: "All three strategies involve selective remembering and forgetting to varying degrees to bring the mottos into the modern business world. Change needs to be accommodated, but without threatening the integrity of the historical identity of companies, with values passed on from generation to generation through the ka-kun."

Credit: 
Lancaster University

Shedding light on the reaction mechanism of PUVA light therapy for skin diseases

image: Reaction stages when a psoralen molecule binds to DNA. The result is that the psoralen is permanently bound to the DNA via a cyclobutane ring. The cell is altered and thus damaged, and triggers the process of programmed cell death.

Image: 
ACS / Janina Diekmann

The term 'PUVA' stands for 'psoralen' and 'UV-A radiation'. Psoralens are natural plant-based compounds that can be extracted from umbelliferous plants such as giant hogweeds. Plant extracts containing psoralens were already used in Ancient Egypt for the treatment of skin diseases. Modern medical use began in the 1950s. From then on, they were applied for light-dependent treatment of skin diseases such as psoriasis and vitiligo. From the 1970s onwards, PUVA therapy was used to treat a type of skin cancer known as cutaneous T-cell lymphoma.

Psoralens insert between the crucial building blocks (bases) of DNA, the hereditary molecule. When subjected to UV radiation, they bind to thymine - a specific DNA base - and thus cause irreversible damage to the hereditary molecule. This in turn triggers programmed cell death, ultimately destroying the diseased cell.

Researchers working with Prof. Dr. Peter Gilch from HHU's Institute of Physical Chemistry have now collaborated with Prof. Dr. Wolfgang Zinth's work group from LMU Munich to analyse the precise mechanism of this binding reaction. They used time-resolved laser spectroscopy for this purpose.

They found that - after the psoralen molecule has absorbed UV light - the reaction takes place in two stages. First, a single bond between the psoralen molecule and thymine forms. A second bond formation then yields a four-membered ring (cyclobutane) permanently connecting the two moieties (see figure). The researchers in Düsseldorf and Munich were also able to demonstrate that the first stage takes place within a microsecond, while the second needs around 50 microseconds. They compared this process with the damaging of 'naked' DNA by UV light. That process also frequently results in cyclobutane rings, but it takes place considerably faster than when psoralens are present.
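
To get a feel for these timescales, the two stages can be modelled as sequential first-order steps with the reported time constants of about 1 and 50 microseconds; a minimal Python sketch (the first-order treatment is a simplifying assumption for illustration, not the kinetic model from the paper):

```python
# Sequential first-order sketch of the two-stage binding: intercalated
# psoralen -> single-bond adduct -> cyclobutane-linked adduct, using the
# ~1 us and ~50 us time constants reported above. The first-order treatment
# is a simplifying assumption, not the paper's kinetic model.
import math

TAU1_US, TAU2_US = 1.0, 50.0       # time constants in microseconds
k1, k2 = 1 / TAU1_US, 1 / TAU2_US  # corresponding rate constants

def populations(t_us: float):
    """Fractions (unreacted, single-bonded, cyclobutane-linked) at time t."""
    a = math.exp(-k1 * t_us)
    b = k1 / (k2 - k1) * (math.exp(-k1 * t_us) - math.exp(-k2 * t_us))
    return a, b, 1 - a - b

for t in (1, 10, 50, 200):
    a, b, c = populations(t)
    print(f"t = {t:>3} us: unreacted {a:.2f}, single bond {b:.2f}, ring {c:.2f}")
```

In this toy picture, the single-bonded intermediate peaks within a few microseconds and then drains into the permanently linked cyclobutane ring over tens of microseconds, matching the reported separation of the two stages.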

Prof. Gilch explains the background to the research: "If we can understand how the reactions take place in detail, we can change the psoralens chemically in a targeted way to make PUVA therapy even more effective." Together with his colleague in organic chemistry, Prof. Dr. Thomas Müller, he wants to develop these high-performance psoralen molecules at HHU within the scope of a DFG project.

Credit: 
Heinrich-Heine University Duesseldorf

New research explores the use of New Psychoactive Substances by young people

A research study into New Psychoactive Substances (NPS) - formerly referred to as 'legal highs' - provides new evidence about why young people were attracted to the drugs, and the health and social risks associated with taking them.

The study was carried out by an interdisciplinary team of researchers from Queen's University Belfast. The research findings recommend support using existing evidence-based interventions among young people and high risk populations.

It follows official statistics released today by NHS Digital about smoking, drinking and drug use among young people. These figures show that 6% of 11-15 year olds said they were offered NPS and 1% said they had taken them in the last year. Office for National Statistics figures released last week reported 125 deaths from NPS, double that of the previous year.

This newly published longitudinal study about NPS was commissioned and funded by the National Institute for Health Research (NIHR), the nation's largest funder of health and care research, and published in the NIHR Journals Library.

This research leveraged data from 2,039 young people who were part of the larger Belfast Youth Development Study (BYDS), which tracked a group of young people from age 11 and examined in detail how they used alcohol and drugs as they grew up.

In this NIHR report, serious side effects associated with NPS use were reported by those who had taken this class of drugs, including significant mental health problems and heart, liver, stomach and bladder issues. The research team found that NPS were always used within a polydrug use context (using more than one drug at the same time) and in a range of ways: with mephedrone, for example, most snorted it, some made it into capsules and swallowed it, and a small number injected it. Drugs taken alongside it included cocaine, alcohol and other stimulants such as MDMA. In 10% of NPS users surveyed, there was also evidence of moving from synthetic cannabinoids to heroin and vice versa - something that has not previously been reported.

Chief Investigator, Dr Kathryn Higgins, Reader from the School of Social Sciences, Education and Social Work and the Centre for Evidence and Social Innovation at Queen's University, said: "Our research explored in detail the varied motives, characteristics and lived experiences of young people using NPS, ranging from experimental users who liked the buzz or the fact that they were cheaper than other drugs to those who had become dependent and needed help from health and social care services. We discovered that there was a lack of knowledge about the negative impacts of taking these drugs due to them being new and constantly changing as well as being marketed at the time as 'legal highs' and perceived as 'safe'."

NPS are synthetic alternatives to traditional illegal drugs. In the UK, most were 'legal' until they were banned in May 2016 under the Psychoactive Substances Act. They include drugs such as synthetic cannabinoids - sometimes referred to as 'spice' - and mephedrone - also known as 'meow meow'.

The researchers used the data from the BYDS and statistical models to examine whether those who reported using NPS had any different risk factors than those who used other drugs. The models, using the data from the 2,039 participants, showed that those who used NPS were mostly the same as those who were polydrug users of any type. To investigate further, 84 people then took part in a series of in-depth interviews, sharing their experiences growing up, the circumstances that led them to taking NPS, and the age at which they first tried the drugs. As well as members of the BYDS cohort, individuals in this portion of the study included young people in prisons and those recruited from drug and alcohol services.

The research team categorised groups ranging from non-NPS users, to those who used in a limited experimental way, and those who reported being dependent on NPS. They were able to identify contributing risk factors for each group related to use of NPS, such as problems at school, peer pressure, alcohol use, family breakdown, trauma and lack of parental supervision and support.

In the report, the researchers make some suggestions about how to best respond to NPS use, including the use of peer educators in developing national drug education programmes, the expansion of harm reduction techniques, and research into the effectiveness of psychosocial and psychological interventions. They also call for public health interventions in high risk populations, highlighting that Public Health England are already working to improve things in prisons.

Co-investigator, Dr Nina O'Neill, Research Fellow from the School of Nursing and Midwifery at Queen's commented: "We were also able to look beyond the reported physiological effects of the drugs and learn more about the wider impact of NPS use on the individual, including their physical, psychological and social wellbeing."

"Our findings help to clearly explain why people use NPS in the ways that they do. We hope that this will help experts on NPS to consider interventions which would be most helpful in preventing people from using NPS in the future and reducing harms for people who already use NPS in the interests of better health across society as a whole," added Co-investigator, Dr Anne Campbell, Senior Lecturer from the School of Social Sciences, Education and Social Work, and the Centre for Evidence and Social Innovation at Queen's.

Credit: 
National Institute for Health Research

Skeletal shapes key to rapid recognition of objects

In the blink of an eye, the human visual system can process an object, determining whether it's a cup or a sock within milliseconds, and with seemingly little effort. It's well-established that an object's shape is a critical visual cue to help the eyes and brain perform this trick. A new study, however, finds that while the outer shape of an object is important for rapid recognition, the object's inner "skeleton" may play an even more important role.

Scientific Reports published the research by psychologists at Emory University, showing that a key visual tool for object recognition is the medial axis of an object, or its skeletal geometry.

"When we think of an object's shape, we typically imagine the outer contours," explains Vladislav Ayzenberg, first author of the paper and an Emory PhD candidate in psychology. "But there is also a deeper, more abstract property of shape that's described by skeletal geometry. Our research suggests that this inner, invisible mechanism may be crucial to recognizing an object so quickly."

"You can think of it like a child's stick drawing of a person," adds Stella Lourenco, senior author of the study and an associate professor of psychology at Emory. "Using a stick figure to represent a person gives you the basic visual information you need to immediately perceive the figure's meaning."

The Lourenco lab researches human visual perception, cognition and development.

Visual perception of an object begins when light hits our eyes and the object is projected as a two-dimensional image onto the photoreceptor cells of the retina. "A lot of internal machinery is whirring between the eyes and brain to facilitate perception and recognition within 70 milliseconds," Ayzenberg says. "I'm fascinated by the neural computations that go into that process."

Although most people take it for granted, object recognition is a remarkable feat. "You can teach a two-year-old what a dog is by pointing out a real dog or showing the child a picture in a book," Lourenco says. "After seeing such examples a child can rapidly and with ease recognize other dogs as dogs, despite variations in their individual appearances."

The human ability at object recognition is robust to variations within a class of objects, such as in outer contours, size, texture and color. For the current paper, the researchers developed a series of experiments to test the role of skeletal geometry in the process.

In one experiment, participants were presented with paired images of 150 abstract 3D objects on a computer. The objects had 30 different skeletal structures. Each object was rendered with five different surface forms, to change the visible shape of the object without altering the underlying skeleton. The participants were asked to judge whether each pair of images showed the same or different objects. The results showed that skeletal similarity was a significant predictor of a correct response.
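
As a conceptual illustration of this kind of analysis, one can compare "same object" response rates for skeletally similar versus dissimilar pairs; a minimal Python sketch on made-up trial records (not the study's data or analysis code):

```python
# Toy illustration: does skeletal similarity predict "same object" judgements?
# Each trial is (skeletal similarity 0..1, participant judged "same").
# The records are invented for illustration, not the study's data.

trials = [
    (0.95, True), (0.90, True), (0.88, True), (0.85, False),
    (0.40, False), (0.35, False), (0.30, True), (0.20, False),
]

def same_rate(lo: float, hi: float) -> float:
    """Fraction of trials in a similarity band judged 'same object'."""
    judged = [same for sim, same in trials if lo <= sim < hi]
    return sum(judged) / len(judged)

print(f"high skeletal similarity: {same_rate(0.8, 1.01):.2f} judged same")
print(f"low skeletal similarity:  {same_rate(0.0, 0.5):.2f} judged same")
# A higher "same" rate for skeletally similar pairs mirrors the reported
# result: skeletal similarity significantly predicted responses.
```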

A second experiment, based on adaptations of three of the objects, tested the effects of proportional changes to the shape skeleton. Participants were able to accurately predict object similarity at a rate significantly above chance at every level of skeletal change.

A third experiment tested whether an object's skeleton was a better predictor of object similarity than its surface form. Participants successfully matched objects by their skeletal structure or surface forms when each cue was presented in isolation. They showed a preference, however, to match objects by their skeletons, as opposed to their surface forms, when these cues conflicted with one another.

The results suggest that the visual system is not only highly sensitive to the skeletal structure of objects, but that this sensitivity may play an even bigger role in shape perception than object contours.

"Skeletal geometry appears to be more important than previously realized, but it is certainly not the only tool used in object recognition," Lourenco says. "It may be that the visual system starts with the skeletal structure, instead of the outline of an object, and then maps other properties, such as textures and colors, onto it."

In addition to adding to fundamental knowledge of the human vision system, the study may give insights into improving capabilities for artificial intelligence (AI). Rapid and accurate object recognition, for example, is vital for AI systems on self-driving cars.

"The best model for a machine-learning system is likely a human-learning system," Ayzenberg says. "The human vision system has solved the problem of object recognition through evolution and adapted quite well."

Credit: 
Emory Health Sciences

A lack of self-control during adolescence is not uniquely human

Impulsiveness in adolescence isn't just a phase, it's biology. And despite all the social factors that define our teen years, the human brain and the brains of other primates go through very similar changes, particularly in the areas that affect self-control. Two researchers review the adolescent brain across species on August 20 in the journal Trends in Neurosciences.

"As is widely known, adolescence is a time of heightened impulsivity and sensation seeking, leading to questionable choices. However, this behavioral tendency is based on an adaptive neurobiological process that is crucial for molding the brain based on gaining new experiences," says Beatriz Luna of the University of Pittsburgh, who co-authored the review with Christos Constantinidis of Wake Forest School of Medicine.

Structural, functional, and neurophysiological comparisons between us and macaque monkeys show that this difficulty in stopping reactive responses is similar in our primate counterparts--who, during puberty, also show limitations in tests where they have to stop a reactive response. "The monkey is really the most powerful animal model that comes closest to the human condition," says Constantinidis. "They have a developed prefrontal cortex and follow a similar trajectory with the same patterns of maturation between adolescence and adulthood."

Taking risks and having thrilling adventures during this period isn't necessarily a bad thing. "You don't have this perfect inhibitory control system in adolescence, but that's happening for a reason. It has survived evolution because it's actually allowing for new experiences to provide information about the environment that is critical for optimal specialization of the brain to occur," Luna says. "Understanding the neural mechanisms that underlie this transitional period in our primate counterparts is critical to informing us about this period of brain and cognitive maturation."

Human neurological development during this time is characterized by changes in structural anatomy--there is an active pruning of redundant and unused neural connections and a strengthening of white matter tracts throughout the brain that will determine the template for how the adult brain will operate. Specifically, by adolescence all foundational aspects of brain organization are in place; during this time they undergo refinements that enable the brain to operate optimally given the demands of its specific environment.

In particular, the development of neural activity patterns that allow for the preparation of a response seems to be a key element of this phase of development--and essential to successful performance on self-control tasks.

This all suggests that self-control isn't just about the ability, in the moment, to inhibit a behavior. "Executive function involves not only reflexive responses but actually being prepared ahead of time to create an appropriate plan. This is the change between the adolescent and adult brain and it is strikingly clear both in the human data and in the animal data," says Constantinidis.

Ultimately, the authors believe that this phase of development is essential to shaping the adult brain. "It is important for there to be a period where the animal or the human is actively encouraged to explore because gaining these new experiences will help mold what the adult trajectories are going to be," says Luna. "It's important to have this conversation and comparison between human and animal models so that we can understand the neural mechanisms that underlie vulnerability during this time for impaired development such as in mental illness, which often emerges in adolescence, but importantly to inform us in how to find ways to correct those trajectories."

Credit: 
Cell Press