Using stardust grains, a new model for nova eruptions

What do tiny specks of silicon carbide stardust, found in meteorites and older than the solar system, have in common with pairs of aging stars prone to eruptions?

A collaboration between two Arizona State University scientists -- cosmochemist Maitrayee Bose and astrophysicist Sumner Starrfield, both of ASU's School of Earth and Space Exploration -- has uncovered the connection and pinpointed the kind of stellar outburst that produced the stardust grains.

Their study has just been published in The Astrophysical Journal.

The microscopic grains of silicon carbide -- a thousand times smaller than the average width of a human hair -- were part of the construction materials that built the Sun and planetary system. Born in nova outbursts, which are repeated cataclysmic eruptions by certain types of white dwarf stars, the silicon carbide grains are found today embedded in primitive meteorites.

"Silicon carbide is one of the most resistant bits found in meteorites," Bose said. "Unlike other elements, these stardust grains have survived unchanged from before the solar system was born."

Violent birth

A star becomes a nova -- a "new star" -- when it suddenly brightens by many magnitudes. Novae occur in pairs of stars where one star is a hot, compact remnant called a white dwarf. The other is a cool giant star so large its extended outer atmosphere feeds gas onto the white dwarf. When enough gas collects on the white dwarf, a thermonuclear eruption ensues, and the star becomes a nova.

Although powerful, the eruption doesn't destroy the white dwarf or its companion, so novae can erupt over and over, repeatedly throwing into space gas and dust grains made in the explosion. From there the dust grains merge with clouds of interstellar gas to become the ingredients of new star systems.

The Sun and solar system were born about 4.6 billion years ago from just such an interstellar cloud, seeded with dust grains from earlier stellar eruptions by many different kinds of stars. Almost all the original grains were consumed in making the Sun and planets, yet a tiny fraction remained. Today these bits of stardust, or presolar grains, can be identified in primitive solar system materials such as chondritic meteorites.

"The key that unlocked this for us was the isotopic composition of the stardust grains," Bose said. Isotopes are varieties of a chemical element whose nuclei contain different numbers of neutrons. "Isotopic analysis lets us trace the raw materials that came together to form the solar system."

She added, "Each silicon carbide grain carries a signature of the isotopic composition of its parent star. This provides a probe of that star's nucleosynthesis -- how it made elements."

Bose collected published data on thousands of grains, and found that nearly all the grains grouped naturally into three main categories, each attributable to one kind of star or another.
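Grouping grains this way amounts to binning each grain by its measured isotope ratios. A minimal sketch of that idea follows; the category names echo the literature, but the numeric cutoffs are illustrative placeholders, not the published classification criteria:

```python
# Hypothetical sketch: binning presolar SiC grains by carbon and nitrogen
# isotope ratios. Cutoff values below are illustrative only.

def classify_grain(c12_c13: float, n14_n15: float) -> str:
    """Assign a grain to a broad category from two isotope ratios."""
    if 10 <= c12_c13 <= 100 and n14_n15 > 200:
        return "mainstream"        # the common population
    if c12_c13 < 10 and n14_n15 < 100:
        return "putative nova"     # low 12C/13C and low 14N/15N
    return "other"                 # fits neither toy bin

# Toy sample of (12C/13C, 14N/15N) measurements
grains = [(60.0, 500.0), (4.0, 40.0), (150.0, 50.0)]
counts = {}
for c_ratio, n_ratio in grains:
    label = classify_grain(c_ratio, n_ratio)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```

With thousands of real grains, the same tallying reveals the natural groupings Bose describes, plus the residue of grains that fit no known stellar source.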

But there were about 30 grains that couldn't be traced back to a particular stellar origin. In the original analyses, these grains were flagged as possibly originating in nova explosions.

But did they?

Making stardust

As a theoretical astrophysicist, Starrfield uses computer calculations and simulations to study various kinds of stellar explosions. These include novae, recurrent novae, X-ray bursts, and supernovae.

Working with other astrophysicists, he was developing a computer model to explain the ejected materials seen in the spectrum of a nova discovered in 2015. Then he attended a colloquium talk given by Bose before she had joined the faculty.

"I would not have pursued this if I hadn't heard Maitrayee's talk and then had our follow-up discussion," he said. That drew him deeper into the details of nova eruptions in general and what presolar grains could say about these explosions that threw them into space.

A problem soon arose. "After talking with her," Starrfield said, "I discovered our initial way of solving the problem was not agreeing with either the astronomical observations or her results.

"So I had to figure out a way to get around this."

He turned to multidimensional studies of classical nova explosions, and put together a wholly new way of doing the model calculations.

There are two major composition classes of nova, Starrfield said. "One is the oxygen-neon class which I've been working on for 20 years. The other is the carbon-oxygen class which I had not devoted as much attention to." The class designations for novae come from the elements seen in their spectra.

"The carbon-oxygen kind produce a lot of dust as part of the explosion itself," Starrfield said. "The idea is that the nova explosion reaches down into the white dwarf's carbon-oxygen core, bringing up all these enhanced and enriched elements into a region with high temperatures."

That, he said, can drive a much bigger explosion, adding, "It's really messy. It shoots out dust in tendrils, sheets, jets, blobs, and clumps."

Starrfield's calculations predicted the abundances of 35 isotopes, including those of carbon, nitrogen, silicon, sulfur, and aluminum, that would be created by carbon-oxygen nova outbursts.

It turned out that getting the right proportion of white dwarf core material and accreted material from the companion star was absolutely necessary for the simulations to work. Bose and Starrfield then compared the predictions with the published compositions of the silicon carbide grains.
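The core/accreted proportion can be pictured as a simple two-component mix: each isotope's abundance in the ejecta is a weighted average of its abundance in the white dwarf core material and in the gas accreted from the companion. A toy illustration (the abundance numbers are invented for the example, not taken from the simulations):

```python
def mix(core_abundance: float, accreted_abundance: float, f_core: float) -> float:
    """Ejecta abundance of one isotope when a fraction f_core of the
    ejected material is white dwarf core and (1 - f_core) is accreted gas."""
    if not 0.0 <= f_core <= 1.0:
        raise ValueError("f_core must be between 0 and 1")
    return f_core * core_abundance + (1.0 - f_core) * accreted_abundance

# Example: an isotope 100x more abundant in the core than in the accreted
# gas; a 25% core contribution lifts the ejecta abundance to 25.75x.
print(mix(100.0, 1.0, 0.25))  # 25.75
```

Shifting f_core shifts every predicted isotope ratio at once, which is why the simulations only matched the grain data for the right mixing proportion.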

This led them to a somewhat surprising conclusion. Said Bose, "We found that only five of the roughly 30 grains could have come from novae."

While this may seem a disappointing result, the scientists were actually pleased. Bose said, "Now we have to explain the compositions of the grains that didn't come from nova outbursts. This means there's a completely new stellar source or sources to be discovered."

And looking at the larger picture, she added, "We have also found that astronomical observations, computer simulations, and high-precision laboratory measurements of stardust grains are all needed if we want to understand how stars evolve. And this is exactly the kind of interdisciplinary science that the school excels at."

Credit: 
Arizona State University

Blood test could give two month warning of kidney transplant rejection

New research from the NIHR Guy's and St Thomas' Biomedical Research Centre has found a way to predict rejection of a kidney transplant before it happens, by monitoring the immune system of transplant patients.

The researchers have found that a signature combination of seven immune genes in blood samples can predict rejection earlier than current techniques. Monitoring these markers in transplant patients with regular blood tests could help doctors intervene before any damage to the organ occurs, and improve outcomes for patients.
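A multi-gene signature of this kind is typically collapsed into a single risk score, for instance a weighted sum of expression levels compared against a cutoff. The sketch below is purely illustrative: the gene names, weights, and threshold are invented, not the published seven-gene signature.

```python
# Illustrative only: hypothetical gene names, weights, and threshold,
# NOT the published seven-gene rejection signature.
WEIGHTS = {
    "GENE_A": 0.8, "GENE_B": -0.5, "GENE_C": 0.3, "GENE_D": 0.6,
    "GENE_E": -0.2, "GENE_F": 0.4, "GENE_G": 0.1,
}
THRESHOLD = 1.0  # made-up decision cutoff

def rejection_risk_score(expression: dict) -> float:
    """Weighted sum of (hypothetical) normalised expression values."""
    return sum(w * expression.get(gene, 0.0) for gene, w in WEIGHTS.items())

def flag_for_followup(expression: dict) -> bool:
    """True if a blood sample's score crosses the (made-up) threshold."""
    return rejection_risk_score(expression) >= THRESHOLD

sample = {"GENE_A": 1.5, "GENE_B": 0.2, "GENE_D": 0.5}
print(round(rejection_risk_score(sample), 2))  # 1.4
```

Monitoring then reduces to scoring each routine blood sample and flagging patients whose score trends toward the threshold before organ damage occurs.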

A renal transplant offers the best treatment for patients whose kidneys have failed, with around 3,000 carried out annually in the UK. Acute rejection occurs when the body's immune system begins to attack the donated organ. This is a common complication in the first year after the transplant, affecting around 2 in 10 patients. It can affect the lifespan of the transplanted organ.

Currently, acute rejection can only be confirmed by taking a biopsy of the transplanted organ. While acute rejection can be treated, this can only be done when the organ is already affected and damage has already occurred.

Once the new technique is validated further, it has the potential to offer clinicians the use of a simple blood test to predict rejection. Being able to intervene before the event will help prevent damage to patients, and extend the life of the transplanted organ.

Dr Paramit Chowdhury, a consultant nephrologist at Guy's and St Thomas' and author on the paper said: "This advance could make a huge difference to our ability to monitor kidney transplant patients and treat rejection earlier. It may also save some patients from an unnecessary biopsy. It is a first step in getting a better insight into the status of a patient's immune system, allowing better tailoring of the patient's anti-rejection treatment.

"A big challenge at the moment is that even the best transplanted organ has a limited lifespan of up to 30 years. By being able to pick up signs of rejection early, we might increase the lifespan of the organ and help patients have a better quality of life, for longer."

The team recruited 455 patients who received a kidney transplant at Guy's Hospital and followed these patients over the first year of their transplant, collecting regular blood and urine samples. Using these samples and analysing the data over time, they developed a signature combination of seven genes that differentiated patients who developed rejection from those who did not.

They then tested for the signature via a blood test in a separate cohort of patients, and validated that it predicted transplant rejection.

The team also identified a six-gene signature for a less common complication. BK-virus nephropathy can look clinically similar to acute rejection, but requires a very different therapy - reducing immunosuppression. Being able to distinguish between these complications would mean clinicians can ensure that patients receive the most appropriate treatment.

Dr Maria Hernandez Fuentes, visiting senior lecturer at King's College London and author on the study, said: "Biomarkers are naturally occurring genes or proteins that appear in the blood, which can tell us what is happening in the body. This is vital in determining the best course of treatment for patients. We were able to monitor the genes that were being expressed in transplant patients and map how these reflected their clinical outcomes.

"Being able to tell the difference between BK-virus nephropathy and acute rejection, which can look very similar in patients, just shows how we can use these molecular techniques to complement clinical practice.

"Further evaluation will be needed to fully validate the technique is reliable enough for clinical use, and it will be exciting to develop this research further."

The research is published in the journal EBioMedicine. It was supported by the National Institute for Health Research Biomedical Research Centre at Guy's and St Thomas' and King's College London, the Medical Research Council Centre for Transplantation and the EU (Framework Programme 7). Anonymised clinical data was also provided by the NIHR Health Informatics Collaborative.

Credit: 
NIHR Biomedical Research Centre at Guy’s and St Thomas’ and King’s College London

Could medical marijuana help grandma and grandpa with their ailments?

MINNEAPOLIS - Medical marijuana may bring relief to older people who have symptoms like pain, sleep disorders or anxiety due to chronic conditions including amyotrophic lateral sclerosis, Parkinson's disease, neuropathy, spinal cord damage and multiple sclerosis, according to a preliminary study released today that will be presented at the American Academy of Neurology's 71st Annual Meeting in Philadelphia, May 4 to 10, 2019. The study not only found medical marijuana may be safe and effective, it also found that one-third of participants reduced their use of opioids. However, the study was retrospective and relied on participants reporting whether they experienced symptom relief, so it is possible that the placebo effect may have played a role. Additional randomized, placebo-controlled studies are needed.

According to the Centers for Disease Control and Prevention, approximately 80 percent of older adults have at least one chronic health condition.

"With legalization in many states, medical marijuana has become a popular treatment option among people with chronic diseases and disorders, yet there is limited research, especially in older people," said study author Laszlo Mechtler, MD, of Dent Neurologic Institute in Buffalo, N.Y., and a Fellow of the American Academy of Neurology. "Our findings are promising and can help fuel further research into medical marijuana as an additional option for this group of people who often have chronic conditions."

The study involved 204 people with an average age of 81 who were enrolled in New York State's Medical Marijuana Program. Participants took various ratios of tetrahydrocannabinol (THC) to cannabidiol (CBD), the main active chemicals in medical marijuana, for an average of four months and had regular checkups. The medical marijuana was taken by mouth as a liquid extract tincture or capsule, or inhaled through an electronic vaporizer.

Initially, 34 percent of participants had side effects from the medical marijuana. After an adjustment in dosage, only 21 percent reported side effects. The most common side effects were sleepiness in 13 percent of patients, balance problems in 7 percent and gastrointestinal disturbances in 7 percent. Three percent of the participants stopped taking the medical marijuana due to the side effects. Researchers said a ratio of one-to-one THC to CBD was the most common ratio among people who reported no side effects.

Researchers found that 69 percent of participants experienced some symptom relief. Pain improved most often, with 49 percent of those participants experiencing relief, followed by sleep symptoms (18 percent), neuropathy (15 percent) and anxiety (10 percent).

Opioid pain medication was reduced in 32 percent of participants.

"Our findings show that medical marijuana is well-tolerated in people age 75 and older and may improve symptoms like chronic pain and anxiety," said Mechtler. "Future research should focus on symptoms like sleepiness and balance problems, as well as efficacy and optimal dosing."

Credit: 
American Academy of Neurology

The Lancet: Conceiving within a year of stillbirth does not increase risks for next pregnancy

Conceiving within a year of stillbirth is common and is not associated with increased risk of stillbirth, preterm birth, or small-for-gestational-age birth in the following pregnancy, compared with an interpregnancy interval of at least two years.

The results are from the first large-scale observational study to investigate the interval between stillbirth and subsequent pregnancy, including almost 14,500 births in women from Australia, Finland and Norway who had a stillbirth in their previous pregnancy. The findings are published in The Lancet.

The World Health Organization (WHO) recommends that women wait at least two years after a livebirth and at least six months after a miscarriage or induced abortion before conceiving again, but there is no guidance for the optimal interval after a stillbirth because there is limited evidence in this area.

"Our results consistently showed that an interpregnancy interval of less than one year was not associated with increased risk of adverse birth outcomes in the next pregnancy, compared with an interval of at least two years. Our findings provide valuable evidence for recommended pregnancy spacing after a stillbirth," says study author Dr Annette Regan, Curtin University, Australia. "Approximately 3.5 in every 1,000 births in high-income countries are stillborn, and there is limited guidance available for planning future pregnancies. We hope that our findings can provide reassurance to women who wish to become pregnant or unexpectedly become pregnant shortly after a stillbirth." [1]

The study used birth records spanning 37 years (1980-2016) from Finland, Norway, and Australia to investigate intervals between pregnancies and the risk of subsequent stillbirth, preterm birth, and small-for-gestational-age birth [2]. The authors note that these countries have access to universal health care and free antenatal care, and the populations are primarily white, so the findings might not be generalisable to low- or middle-income countries, countries without access to universal health care, or ethnic minority groups.

The study included singleton births only, and stillbirths following 22 or more weeks' gestation. The interpregnancy interval was calculated from the delivery date of the past birth or stillbirth and the start of the next pregnancy (delivery date of next pregnancy minus gestational age at birth), and was categorised as less than 6 months, 6-11 months, 12-23 months, 24-59 months, and more than 59 months.
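The interval definition above can be written out directly: conception of the next pregnancy is estimated as its delivery date minus gestational age, and the interval runs from the previous delivery to that conception date. A small sketch of the bookkeeping (function and field names are illustrative, not from the study's code):

```python
from datetime import date, timedelta

def interpregnancy_interval_months(prev_delivery: date,
                                   next_delivery: date,
                                   gestation_weeks: int) -> float:
    """Months from the previous delivery to conception of the next
    pregnancy, with conception estimated as delivery minus gestation."""
    conception = next_delivery - timedelta(weeks=gestation_weeks)
    return (conception - prev_delivery).days / 30.44  # mean month length

def interval_category(months: float) -> str:
    """The bins used in the study: <6, 6-11, 12-23, 24-59, >59 months."""
    if months < 6:
        return "<6 months"
    if months < 12:
        return "6-11 months"
    if months < 24:
        return "12-23 months"
    if months < 60:
        return "24-59 months"
    return ">59 months"

# Example: previous delivery 1 Jan 2015; next delivery 2 Jan 2016 at
# 39 weeks' gestation -> conception roughly 3 months after the loss.
m = interpregnancy_interval_months(date(2015, 1, 1), date(2016, 1, 2), 39)
print(interval_category(m))  # <6 months
```

Every birth record with a known prior delivery date and gestational age can be binned this way, which is how the cohort was split across the five interval categories.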

Overall, the study included 14,452 births among mothers who had a stillbirth in their previous pregnancy (4,170 in Finland, 6,761 in Norway, and 3,521 in Australia). Results were compared with 1,654,289 births following a previous livebirth from the three countries (536,392 in Finland, 854,999 in Norway, and 262,898 in Australia).

Of the 14,452 births in women whose previous pregnancy ended in stillbirth, 14,224 (98%) were livebirths, 2,532 (18%) were preterm births, and 1,284 (9%) were small-for-gestational-age births. Of the 228 stillbirths (2% of the total births), 201 (88%) were preterm and 27 (12%) were stillborn at term.

For women who had experienced stillbirth in their last pregnancy, intervals shorter than 12 months were not associated with increased risk of subsequent stillbirth, preterm birth, or small-for-gestational-age birth, compared with an interpregnancy interval of 24-59 months.

This trend remained the same when adjusted for maternal age, number of previous births, and decade of delivery. The authors also noted no difference in the association between interpregnancy interval and birth outcomes based on the gestational length of the previous stillbirth.

Short interpregnancy intervals were more common after stillbirth than after livebirth - the median interpregnancy interval after a stillbirth was 9 months, compared with 25 months after a livebirth. After stillbirth, 9,109 (63%) women conceived their next child within 12 months, with 5,393 of those (37% of all births) conceived within 6 months.

The authors note the difference in optimal intervals following livebirth and stillbirth. Dr Regan explains: "Although the mechanism linking interpregnancy interval and perinatal health is unclear, previous research offers several hypotheses, including depleted nutrition from past pregnancy, cervical insufficiency, and breastfeeding-pregnancy overlap in closely spaced pregnancies. Without sufficient time to recover from a previous pregnancy, women may be at increased risk of entering a reproductive cycle with poor nutritional status, which has been linked to increased risk of foetal growth restriction and birth defects. Such nutritional depletion might not occur to the same extent after a pregnancy loss, and this may affect the optimal interpregnancy interval, explaining why it may be different after stillbirth and livebirth." [1]

The authors note that other factors that they could not study (such as maternal chronic medical conditions, pregnancy intention, use of assisted reproductive technology, cause of previous stillbirth, or socioeconomic status) may have affected their findings. They also add that women who conceive soon after a previous pregnancy might be healthier and more fertile than women who conceive later and therefore could be less prone to adverse birth outcomes.

Within the study, information on miscarriages or induced abortions was not available, which could have led to overestimation of interpregnancy interval in some women.

Lastly, the authors note that although this is the largest study of its kind, only 228 women had recurrent stillbirths, which means the analyses for this group are limited by small numbers. Replication of the study in a larger group would be informative.

Writing in a linked Comment, Mark A Klebanoff, The Research Institute at Nationwide Children's Hospital, USA, says: "The results of this study, in conjunction with results of studies of pregnancy interval after early loss and with findings of studies using new approaches to study interval after a livebirth, suggest that interpregnancy interval might be less important than previously assumed, at least in women in high-income regions. Rather than adhering to hard and fast rules, clinical recommendations should consider a woman's current health status, her current age in conjunction with her desires regarding child spacing and ultimate family size, and particularly following a loss, her emotional readiness to become pregnant again."

Credit: 
The Lancet

Newly identified drug targets could open door for esophageal cancer therapeutics

image: Graphical abstract of TGF-beta pathway activity during progression of esophageal adenocarcinoma.

Image: 
Case Western Reserve School of Medicine

Blocking two molecular pathways that send signals inside cancer cells could stave off esophageal adenocarcinoma (EAC), the most common esophageal malignancy in the United States, according to new research out of Case Western Reserve University School of Medicine. Researchers identified the pathways using advanced computational and genetic analyses of tumor biopsies from EAC patients. They found 80 percent of tumors had unusually active genes related to two specific pathways, and that exposing the cells to pathway inhibitors stymied EAC tumor growth in mice.

Results published in Gastroenterology point to two signaling pathways (controlled by JNK and TGF-beta proteins, respectively) as contributing to EAC tumors. The pathways represent molecular chain reactions that were overactive in patient tumor cells, but not in biopsies from patients with non-cancerous esophageal conditions, including Barrett's Esophagus. Harmful effects of these pathways could be reduced by turning down JNK or TGF-beta activity. "These findings suggest a rationale for testing JNK/TGF-beta-targeted therapies as a new treatment approach in this increasingly prevalent and lethal cancer," said senior author Kishore Guda, DVM, PhD, associate professor in the Case Comprehensive Cancer Center.

Only 20 percent of patients diagnosed with EAC survive five years, according to recent National Cancer Institute estimates. Patients struggle to swallow as tumors and cancer cells narrow the esophagus. Some require nasogastric feeding tubes in the end stages of the disease. Available treatments to shrink tumors are limited to surgery, radiation, or chemotherapy, but the majority of EAC tumors are resistant.

"Targeted therapies are virtually non-existent," Guda said. "Treatment advancements are also slowed because we don't know exactly what molecular signals drive EAC pathogenesis."

In the new study, Guda and colleagues collected 397 biopsy specimens to find common mechanisms that underlie EAC tumor progression. They integrated computational and genetic analyses to identify signaling pathways highly active in EAC. They compared EAC biopsies to those collected from patients with conditions that often precede EAC, but who did not develop the cancer.

After finding JNK and TGF-beta pathways to be overactive only in EAC biopsies, they then incubated EAC tumor cells with therapeutic small molecules designed to block the pathways. Exposure to JNK or TGF-beta inhibitors reduced the ability of EAC cells to proliferate, migrate, or form tumors when transplanted into mice. Several mice had near total regression of tumor growth following treatment. Combining JNK and TGF-beta pathway inhibitor treatments further prevented cancer cell growth, but more studies are needed to understand synergy between the pathways during EAC progression.

EAC tumor cells' reliance on the TGF-beta pathway was unexpected given its widely recognized role as a cancer suppressor, said Guda's co-senior author on the study, Vinay Varadan, PhD, assistant professor in the Case Comprehensive Cancer Center. The difference, Varadan said, potentially lies in different roles for TGF-beta in different stages of EAC development.

"In normal esophageal cells, TGF-beta acts as a gatekeeper by inhibiting uncontrolled cell growth," Varadan said. "As EAC develops, TGF-beta switches from a growth suppressor to a growth promoter. This is unlike its function in other cancers such as those arising in the colon." Varadan added, "Our unique application of advanced mathematical modelling that we developed allowed us to tease out these intricate mechanisms, which would have otherwise been missed."

The results open a new targeted therapeutic avenue for EAC, and lay the foundation for studies in humans. Said Guda, "We are in the process of initiating a human clinical trial using an oral TGF-beta pathway inhibitor in EAC patients. Ultimately, we'll use findings from that trial to guide development of TGF-beta inhibitors as a potential new targeted treatment option for EAC patients."

Credit: 
Case Western Reserve University

Over 40 percent of GPs intend to quit within five years: New survey

A new survey of GPs has revealed that over 40% intend to leave general practice within the next five years, an increase of nearly a third since 2014.

The survey of 929 GPs, conducted by the University of Warwick, has revealed that recent national NHS initiatives are failing to address unmanageable workloads for GPs and have left them unconvinced that the NHS can respond to the increasing challenges facing general practice.

The survey conducted in the Wessex region follows up a similar survey in the same region in 2014, allowing the researchers to identify changes in attitude over time.

Published today (28 February) in the journal BMJ Open, it reveals that 42.1% of GPs intend to leave or retire from NHS general practice within the next five years compared to 31.8% of those surveyed in the same region in 2014, an increase of almost a third.

Workload was identified as the most significant issue, with 51% of GPs reporting that they were working longer hours than in 2014. This has been linked to the size of the GP workforce not keeping pace with the growing healthcare needs associated with the changing age profile of the UK population, with more people living with complex long-term conditions such as diabetes, hypertension and stroke. In addition, as community and social care services are cut back or stretched, more pressure is put on general practice because patients have fewer options to turn to.

The researchers argue that the survey paints a picture of GPs feeling increasingly demoralised and looking towards either reducing their hours or retiring altogether.

Lead author Professor Jeremy Dale, from Warwick Medical School, said: "GP morale and job satisfaction have been deteriorating for many years, and we have known that this is leading to earlier burnout, with GPs retiring or leaving the profession early. What this survey indicates is that this trend is continuing and growing despite a number of NHS measures and initiatives that have been put in place to address it over the last few years. Many GPs clearly feel that this is 'too little, too late', have failed to experience any benefit from these initiatives, and are unable to sustain working in NHS general practice.

"Intensity of workload, and volume of workload were the two issues that were most closely linked to intentions to leave general practice, followed by too much time being spent on unimportant bureaucratic and administrative tasks.

"There's a worsening crisis in general practice. The situation is bad, it is getting worse and GPs are feeling increasingly overworked and increasingly negative about the future."

Their paper highlights a number of national policy initiatives that since 2014 have sought to relieve pressures on general practice, such as recruiting large numbers of doctors from overseas, changes to governance such as the Quality and Outcomes Framework, an expanded role for allied health professionals and the streamlining of services through sustainability and transformation plans (STPs).

The NHS also launched its Long Term Plan in January 2019, with increased investment and support for primary care, a reduction in bureaucracy, and 22,000 proposed new allied health professionals and support staff working in general practice.

Professor Dale said: "Views from our survey would suggest that many of the changes in the Long Term Plan, such as greater funding for general practice, increasing the GP workforce, and increasing clinical and support staff in general practice, are desperately needed. But in the context of low and worsening morale and job satisfaction, the question is whether these can be introduced quickly enough to stem the flow of GPs who are bringing forward their plans to leave the NHS.

"Recent NHS schemes to recruit more GPs haven't paid dividends and the consequence is that GPs are still saying that their workload is getting more intense and increasingly difficult to cope with. It's not perceived that the NHS has taken seriously the crisis facing general practice, and that some policy-led changes in themselves are actually making the workload within general practice less sustainable.

"The point that came through repeatedly in the survey was that GPs felt that we've gone a long way down the road of insufficient investment and insufficient reward. Turning this around will be a mammoth task. The initiatives that were thought most likely to bring benefit included greater investment in practice nursing, closer working with and support from hospital specialists, investment in technology, expansion of the GP workforce, and streamlining CQC practices."

The survey received responses from 929 GPs working in the Wessex area; the sample is broadly representative of the demographic of GPs working in the NHS, with a slightly larger proportion of responses from older GPs.

Professor Dale added: "A number of recent surveys have shown similar issues to be prevalent across the whole country. Even in an area like Wessex, which in the past would have been considered an attractive place for GPs to work, we can see the effects of chronic under-investment in general practice, and how this is driving GPs to want to retire or reduce their hours of work."

Professor Helen Stokes-Lampard, Chair of the Royal College of GPs, said: "GPs are under intense strain - our workload has escalated in recent years, both in terms of volume and complexity, but we have fewer GPs than we did two years ago.

"There is some great work ongoing to increase recruitment into general practice, and we now have more GPs in training than ever before - but when more family doctors are leaving the profession than entering it we are fighting a losing battle.

"The NHS long-term plan has aspirations that will be good for patients - but we will need the workforce to deliver it. The forthcoming NHS workforce strategy for England must contain measures to help retain GPs in the workforce for longer - steps to reduce workload to make working in general practice more sustainable and removing incentives to retire early for GPs who might not necessarily want to would both be sensible places to start."

Credit: 
University of Warwick

Family businesses should prepare for the unexpected if the next generation is to succeed

Family businesses looking to the next generation to take over need to prepare themselves for unexpected events - such as Brexit - according to researchers at the University of East Anglia (UEA).

Rather than trying to protect firms from the outside world and excluding non-family members from taking up senior roles, modern family businesses should open themselves up to collaboration and external expertise.

They should prioritize equipping all their members for the "unexpected, the erratic and the external", rather than for the pursuit of longevity and amicable internal relationships. The authors argue that this is particularly important when rapid social and economic changes are taking place outside the business.

Passing a business down to the next generation is commonly referred to as succession planning. Using a family-run construction company in Scotland as a case study, Dr Zografia Bika and Dr Fahri Karakas from UEA, and Prof Peter Rosa of University of Edinburgh Business School, looked at the succession process over three generations.

As part of the worldwide STEP (Successful Transgenerational Entrepreneurship Practices) project, they investigated how entrepreneurial family values, knowledge and resources emerged or were transferred from the founders to subsequent generations. The findings, published in the journal Family Business Review, show how the traditional succession process has changed.

Where once younger family members were taught 'on the job' from a young age by the older generation, referred to as 'internal socialization', they now network with external stakeholders and peers, known as 'interactive socialization', bringing new skills and knowledge into the firm. This often involves working in a different or related industry before joining the family business.

A third process, 'experiential socialization', sees younger family members using their external experiences and their own participation in the business to better inform how to reproduce its values.

Lead author Dr Bika, associate professor in entrepreneurship at UEA's Norwich Business School, said Brexit was an example of an unusual external event that could affect family businesses.

"When the change taking place outside the business is fast, your new experiences and the way you experience the context become very important.

"Events such as Brexit could make family businesses concerned about succession, the older generation might think the younger generation can deal with it better and bring forward succession. For a business to adjust to something like Brexit could take years, but some businesses will only have had months."

Dr Bika added: "While an entrepreneurial mindset can be 'nurtured', that is gradually developed over time, or 'transmitted' through traditional socialization processes, it can also be nurtured organically through peer interaction and experiential learning.

"Instead of departmental boundaries, ground rules and training tools, we suggest that modern family businesses need more open spaces and collaborative events bringing together diverse stakeholders and recognizing a range of personal experiences, shifting roles and emergent strategies in a flexible and changing context."

The authors say the findings have implications for family business planning training, which should no longer be seen as an internal process revolving around systematically transferring values and knowledge from the older to the younger generation, but rather include peers, mentors, minority shareholders, professional advisers and non-family managers, who may not be driven by shared objectives or constitute a successor team.

They add that where change is rapid the older generation can benefit from 're-socialization'.

"This study provides a rationale for introducing more formal re-socialization training and mentorship for family business leaders," said Dr Bika. "The younger generation can teach the older family members what they need to know and do in the new business context. Indeed, in times of rapid change, attitudes, knowledge and skills of the older generation, conscientiously passed on to their children, may be active contributors to business failure.

"In our case study, re-socialization has become a conscious strategy in the business. A fast-moving board of family and non-family directors, a less self-sufficient growth strategy, a proactive approach to the creation of entrepreneurial opportunities, such as a new induction and other tailored programmes for apprentices, and adoption of new 'modern' managerial practices, such as new open plan offices, are among a raft of recent changes that have reversed years of more traditional family management practices.

"This is best illustrated by attempts to involve shareholders more in the running of the company, requiring many to reconsider and abandon older cherished assumptions."

'Multi-layered socialization processes in transgenerational family firms', Zografia Bika, Peter Rosa and Fahri Karakas, is published in Family Business Review.

Credit: 
University of East Anglia

Transcendental Meditation reduces compassion fatigue and improves resilience for nurses

image: Results showed statistically significant improvements in resilience and all three subscales of the compassion fatigue questionnaire after 4 months of Transcendental Meditation practice. In addition, resilience showed a large statistically significant inverse relationship with burnout and a moderate direct relationship with compassion satisfaction.

Image: 
Transcendental Meditation for Women

The Transcendental Meditation® technique helped to reduce "compassion fatigue" and burnout in a group of 27 nurses while also improving resilience, according to a study published today in the Journal for Nurses in Professional Development.

Standardized assessments showed a significant improvement after four months of practice.

"For years I watched nurses struggle to care for their patients and themselves," said lead author Jennifer Bonamer, PhD, RN-BC, Nursing Professional Development Specialist at Sarasota Memorial Health Care System, Sarasota, FL. "Working with people who are suffering trauma eventually takes a toll and produces what's come to be called 'compassion fatigue.'"

Study included mostly Registered Nurses

Dr. Bonamer searched the literature for self-care methods that could help nurses cope with burnout and hypothesized that Transcendental Meditation would help relieve compassion fatigue in nurses and improve their ability to bounce back from the challenges of work.

Most of the 27 nurses in the study were Registered Nurses working directly with patients. They had been working as nurses for a mean of 15.7 years, and in their current practice area for an average of 6.5 years.

Standardized assessments quantify benefits

The researchers used the Professional Quality of Life Scale, which includes a 30-item survey that measures compassion satisfaction and compassion fatigue on a 5-point scale. After four months of practicing Transcendental Meditation, the nurses experienced a 9.2% increase in compassion satisfaction and 18% reduction in burnout.

Resilience was measured via the Connor-Davidson Resilience Scale, a 25-item survey with statements that reflect resilient perspectives. It also uses a 5-point scale. Again, after four months of Transcendental Meditation, the nurses experienced a 16.9% increase in resilience.

"These surveys are widely used with demonstrated validity and reliability," Dr. Bonamer said. "They demonstrated quantitatively what the nurses reported: they felt better and enjoyed their work more."

Increasing importance of self-care techniques in nursing

There is an increasing trend toward appreciating the necessity of helping nurses in their careers by taking active steps to use self-care techniques to build resilience.

"We need to invest in our nursing staff and ensure that they have rewarding careers while also providing the best possible care for their patients," Dr. Bonamer said. "The Transcendental Meditation technique is one step that we could take. A variety of studies have shown its effectiveness in reducing stress and promoting health and well-being."

The nurses learned Transcendental Meditation from two certified teachers over a four-day period. They then practiced it for 20 minutes twice a day, though their demanding schedules sometimes made it challenging to fit it in. The technique is typically practiced once in the morning and then again in the late afternoon.

Previous qualitative study also found a benefit

The present study is the second of two that have used the Transcendental Meditation technique to improve the well-being of nurses. A study published in 2018 in the International Journal for Human Caring reported the experience of RNs in graduate school who practiced Transcendental Meditation for four months. In that qualitative study, the students kept journals, which the researchers then examined using Giorgi's descriptive phenomenological method.

The results showed that graduate students were more present and balanced, and experienced enhanced job performance. They also enjoyed greater feelings of bliss, peace, and integrity.

Credit: 
Transcendental Meditation for Nurses

New treatment offers potentially promising results for slowing, stopping, or even reversing Parkinson's disease

Amsterdam, NL, February 27, 2019 - A pioneering clinical trials program that delivered an experimental treatment directly to the brain offers hope that it may be possible to restore the cells damaged in Parkinson's disease. The study investigated whether boosting the levels of a naturally-occurring growth factor, Glial Cell Line Derived Neurotrophic Factor (GDNF), can regenerate dying dopamine brain cells in patients with Parkinson's and reverse their condition, something no existing treatment can do. Potentially promising results of the third arm of the trials, an open-label extension study, are reported in the Journal of Parkinson's Disease.

The three-part multimillion-pound GDNF study was funded by Parkinson's UK with support from The Cure Parkinson's Trust and in association with the North Bristol NHS Trust.

Six patients took part in the initial pilot study to assess the safety of the treatment approach. A further 35 individuals then participated in the nine-month double blind trial, in which half were randomly assigned to receive monthly infusions of GDNF and the other half placebo infusions. After the initial nine months on GDNF or placebo, the open-label extension study took place, which explored the effects and safety of continued exposure to GDNF for another 40 weeks in the patients previously receiving GDNF (80 weeks in total) and the effects of 40 weeks of open label GDNF in those subjects who had previously received placebo for the first 40 weeks. All 41 patients randomized and treated in the parent study (prior GDNF and placebo patients) were enrolled and completed the open label extension study.

A specially designed delivery system was implanted using robot-assisted neurosurgery. This delivery system allowed high flow rate infusions to be administered every four weeks and enabled so-called Convection Enhanced Delivery (CED) of the study drug. Four tubes were carefully placed into each patient's brain, which allowed GDNF to be infused directly to the affected areas with pinpoint accuracy via a skull-mounted transcutaneous port behind the ear. After implantation, and over the following several years, the trial team administered more than 1,000 brain infusions to study participants, once every four weeks over 18 months. The high compliance rate (99.1%) among participants recruited from throughout the UK suggests that this new administration process for repeated brain infusion is clinically feasible and tolerable.

After nine months, there was no change in the PET scans of those who received placebo, whereas the group who received GDNF showed an improvement of 100% in a key area of the brain affected in the condition, offering hope that the treatment was starting to reawaken and restore damaged brain cells.

"The spatial and relative magnitude of the improvement in the brain scans is beyond anything seen previously in trials of surgically delivered growth-factor treatments for Parkinson's," explained principal investigator Alan L. Whone, PhD, FRCP, Translational Health Sciences, Bristol Medical School, University of Bristol, and Neurological and Musculoskeletal Sciences Division, North Bristol NHS Trust, Bristol, UK. "This represents some of the most compelling evidence yet that we may have a means to possibly reawaken and restore the dopamine brain cells that are gradually destroyed in Parkinson's."

By 18 months, when all participants had received GDNF, both groups showed moderate to large improvements in symptoms compared to before they started the study and that GDNF was safe when administered over this length of time. However, no significant differences between the groups (placebo followed by GDNF versus GDNF for the entire study period) in the primary and secondary clinical endpoints were seen.

The question of whether clinical benefits lag behind biological changes seen in PET scans during disease reversal or need a longer period of repeated exposure to the drug to develop cannot be answered definitively on the basis of the extension study results. However, the integrated results of the two studies suggest that:

Attending on an out-patient basis over 18 months, to receive infusions every four weeks via a skull-mounted port, is feasible.

This treatment regimen and novel method of drug administration are well tolerated.

Further testing of GDNF in a larger-scale study, including the use of higher doses, is required to determine definitively whether GDNF has a future role as a neurorestorative treatment for Parkinson's.

According to Steven Gill, MB, MS(Lond.), FRCS, lead neurosurgeon and designer of the CED device, of the Neurological and Musculoskeletal Sciences Division, North Bristol NHS Trust, Bristol, and Renishaw plc, New Mills, Wotton-under-Edge, Gloucestershire, UK, "This trial has shown that we can safely and repeatedly infuse drugs directly into patients' brains over months or years. This is a significant breakthrough in our ability to treat neurological conditions, such as Parkinson's, because most drugs that might work cannot cross from the blood stream into the brain due to a natural protective barrier."

"It's essential to continue research exploring this treatment further - GDNF continues to hold potential to improve the lives of people with Parkinson's," commented Dr. Whone.

"I believe that this approach could be the first neuro-restorative treatment for people living with Parkinson's, which is, of course, an extremely exciting prospect," added Dr. Gill.

Credit: 
IOS Press

Biologists find the long and short of it when it comes to chromosomes

A team of biologists has uncovered a mechanism that determines faithful inheritance of short chromosomes during the reproductive process. The discovery, reported in the journal Nature Communications, elucidates a key aspect of inheritance--deviation from which can lead to infertility, miscarriages, or birth defects such as Down syndrome.

The research centers on how short chromosomes can secure a genetic exchange. Genetic exchanges are critical for chromosome inheritance, but are in limited supply.

How short chromosomes ensure a genetic exchange is of great interest to scientists given the vulnerability of short chromosomes.

"Short chromosomes are at a higher risk for errors that can lead to genetic afflictions because their innate short lengths leave less material for genetic exchange," explains Viji Subramanian, a post-doctoral researcher at New York University and the paper's lead author. "However, these chromosomes acquire extra help to create a high density of genetic exchanges -- but it hadn't been understood how short chromosomes received this assistance."

To explore this question, the researchers, who also included Andreas Hochwagen, an associate professor in NYU's Department of Biology, studied this process in yeast--a model organism that shares many fundamental processes of chromosome inheritance with humans.

Overall, they found that vast regions near the ends of both short and long chromosomes are inherently primed for a high density of genetic exchanges--the scientists labeled these end-adjacent regions (EARs). Of particular note, a high density of genetic exchanges in EARs is conserved in several organisms, including birds and humans.

Significantly, the researchers noted that EARs are of similar size on all chromosomes. This means that EARs only occupy a limited fraction of long chromosomes but almost the entirety of short chromosomes. This difference drives up the density of genetic exchanges, specifically on short chromosomes, and does so without cells having to directly measure chromosome lengths.

Credit: 
New York University

Pipistrellus nathusii: A Batmobile with cruise control

image: This is Nathusius' bat (Pipistrellus nathusii).

Image: 
Christian Giese

Aerial migration is the fastest, yet most energetically demanding way of seasonal movements between habitats. A new study led by scientists at the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) investigated the energy requirements and travel speeds of migrating Nathusius' bats (Pipistrellus nathusii). Using a wind tunnel experiment to determine the exact energy demands of different flying speeds and a field study to record actual travel speeds of migrating bats, the scientists demonstrated that bats travel at the speed where their range reaches a maximum, enabling them to cover long distances with a minimum amount of energy. How the researchers tracked down this cruise control is published in the Journal of Experimental Biology.

For many taxa, and bats in particular, scientists still lack a clear understanding of the energy requirements for migration. A team of scientists led by Sara Troxell and Christian Voigt from the Leibniz-IZW designed an ambitious experimental study to make substantial progress on this question. The first part of the study was a wind tunnel experiment combined with measurements in a respirometry chamber. The chamber allowed the scientists to precisely track the CO2 enrichment in the air from the breath of the bats, from which they calculated the metabolic rate during flight. By repeating these measurements directly before and after one-minute flights at various speeds in the wind tunnel, the scientists recorded flight metabolic rate in relation to air speed and then calculated the flight speed with the best energy to distance ratio. The second part of the study was conducted at a migratory corridor along the Baltic Sea coast in Latvia. Using the echolocation calls of migrating Nathusius' bats, the scientists established the flight trajectories of these bats, which allowed them to measure the actual speed of migration. "Our study confirms that the observed flight speeds are consistent with the expectation that migratory bats practice optimal flight speeds for covering the largest distance with the least amount of energy," Troxell and Voigt concluded. This speed is around 7.5 metres per second, equivalent to 27 kilometres (16 miles) per hour.
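The "flight speed with the best energy to distance ratio" is what flight physiologists call the maximum-range speed: the airspeed at which metabolic power divided by speed, P(v)/v, is smallest. A minimal sketch of that calculation, using a hypothetical U-shaped power curve whose coefficients are purely illustrative (chosen so the optimum lands near the reported value, not fitted to the Leibniz-IZW data):

```python
import numpy as np

# Hypothetical U-shaped flight power curve P(v), in watts: drag-related power
# grows with v^3, induced power falls as 1/v. The coefficients a and b are
# illustrative only, not measured values for Nathusius' bat.
def flight_power(v, a=0.002, b=6.3):
    return a * v**3 + b / v

speeds = np.linspace(3.0, 14.0, 1101)            # candidate airspeeds, m/s
cost_per_metre = flight_power(speeds) / speeds   # joules per metre of travel

v_max_range = speeds[np.argmin(cost_per_metre)]  # speed minimizing energy/distance
print(f"maximum-range speed ~ {v_max_range:.1f} m/s "
      f"({v_max_range * 3.6:.0f} km/h)")         # ~7.5 m/s (27 km/h)
```

Analytically, the minimum of a·v² + b/v² sits at v = (b/a)^(1/4); the grid search just makes the idea concrete.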

The field study also facilitated the comparison of the flight speed of migrating bats with the speed of bats foraging for insects. Foraging bats fly at significantly lower speeds than the most efficient speed determined in the wind tunnel experiments. "When foraging in a dune forest, bats performed sharp turns in order to catch insects," Troxell explains. "These tight turns require slower flight speeds and the overall speed might be reduced in anticipation of such turns." Previous studies in less confined habitats revealed average foraging speeds that were much closer to the calculated ideal speed.

Data on migratory flight speed and flight energy expenditure make it possible to estimate the energetic requirements of trans-continental migration in small-sized bats. "However, it is important to realise that our insights into the migratory behaviour of bats are still in their infancy," Voigt explains. Extrapolating the energy needed by a Nathusius' bat travelling a distance of 2,000 kilometres from northeastern Europe to hibernacula in western or southern France results in an estimated total energy demand of almost 300 kilojoules. A journey of this length needs at least 12 days to complete when flying in a straight line. Currently, the exact routes, flying hours and distances flown per night are still unknown, and need to be investigated in more detail.
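The extrapolation above is easy to check with back-of-envelope arithmetic; the distance, speed, energy and duration figures come from the text, while the per-night breakdown is derived:

```python
# Back-of-envelope check of the migration figures reported above.
distance_km = 2000        # northeastern Europe to hibernacula in France
speed_ms = 7.5            # maximum-range travel speed, m/s (27 km/h)
total_energy_kj = 300     # estimated total energy demand from the study
min_nights = 12           # reported minimum journey length, in days

flight_hours = distance_km * 1000 / speed_ms / 3600
hours_per_night = flight_hours / min_nights
energy_per_km_j = total_energy_kj * 1000 / distance_km

print(f"total time on the wing: {flight_hours:.0f} h")              # 74 h
print(f"~{hours_per_night:.1f} flight hours per night over {min_nights} nights")
print(f"energy cost ~{energy_per_km_j:.0f} J per kilometre flown")  # 150 J/km
```

So a straight-line journey implies roughly six hours on the wing per night, at an average cost of about 150 joules per kilometre.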

Credit: 
Forschungsverbund Berlin

How listening to music 'significantly impairs' creativity

video: The popular view that music enhances creativity has been challenged by researchers who say it has the opposite effect.

Image: 
Lancaster University

The popular view that music enhances creativity has been challenged by researchers who say it has the opposite effect.

Psychologists from the University of Central Lancashire, University of Gävle in Sweden and Lancaster University investigated the impact of background music on performance by presenting people with verbal insight problems that are believed to tap creativity.

They found that background music "significantly impaired" people's ability to complete tasks testing verbal creativity - but there was no effect for background library noise.

For example, a participant was shown three words (e.g., dress, dial, flower) and asked to find a single associated word (in this case "sun") that can be combined with each to make a common word or phrase (i.e., sundress, sundial and sunflower).
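The task described above (a remote-associates item) can be made concrete with a toy checker: a candidate answer must form a common compound with every cue word. The mini-dictionary below is a hand-made stand-in, not a real lexicon:

```python
# Toy check for a remote-associates item: a candidate answer must combine
# with all three cue words to form a known compound word or phrase.
# This mini-dictionary is illustrative only.
COMMON_COMPOUNDS = {"sundress", "sundial", "sunflower",
                    "moonflower", "redial", "hairdresser"}

def solves(candidate, cues):
    """True if candidate + cue is a known compound for every cue word."""
    return all(candidate + cue in COMMON_COMPOUNDS for cue in cues)

cues = ["dress", "dial", "flower"]
print(solves("sun", cues))   # True: sundress, sundial, sunflower
print(solves("moon", cues))  # False: "moondress" and "moondial" are not compounds
```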

The researchers used three experiments involving verbal tasks in either a quiet environment or while exposed to:

Background music with foreign (unfamiliar) lyrics

Instrumental music without lyrics

Music with familiar lyrics

Dr Neil McLatchie of Lancaster University said: "We found strong evidence of impaired performance when playing background music in comparison to quiet background conditions."

Researchers suggest this may be because music disrupts verbal working memory.

The third experiment - exposure to music with familiar lyrics - impaired creativity regardless of whether the music induced a positive mood, was liked by the participants, or whether participants typically studied in the presence of music.

However, there was no significant difference in performance of the verbal tasks between the quiet and library noise conditions.

Researchers say this is because library noise is a "steady state" environment which is not as disruptive.

"To conclude, the findings here challenge the popular view that music enhances creativity, and instead demonstrate that music, regardless of the presence of semantic content (no lyrics, familiar lyrics or unfamiliar lyrics), consistently disrupts creative performance in insight problem solving."

Credit: 
Lancaster University

How young adults experience pain affects self-injury

Teenagers and young adults who intentionally hurt themselves engage in such behavior based, in part, on how they experience pain and emotional distress, according to a Rutgers study.

The study, which examines physical pain in non-suicidal self-injury, appeared online ahead of its print publication in the March 2019 issue of the journal Clinical Psychological Science.

The Centers for Disease Control and Prevention reports that non-suicidal self-injury is relatively common in adolescents, with more than 10 percent of teenage boys and about 25 percent of teenage girls engaging in it each year.

"The experience of pain during non-suicidal self-injury remains a mystery and can be difficult for clinicians and families to understand because it challenges our assumption that people want to avoid or minimize pain," said author Edward Selby, an associate professor in psychology and a faculty member at the Rutgers Institute for Health, Health Care Policy and Aging Research. "However, people who engage in this behavior intentionally and repeatedly inflict physical injury on themselves despite -- or perhaps because of -- the physical pain it elicits."

People experience pain in different ways during non-suicidal self-injury: some experience little or no pain; however, others experience pain, which may be used to distract themselves from emotional distress.

The researchers studied 47 young adults between ages 15 and 21 who regularly hurt themselves and did so at least twice in the previous two weeks. None of the participants were at risk for suicide or had been diagnosed with a psychotic disorder, life-threatening anorexia or developmental delays. Nearly 70 percent were female, which reflects the higher rate of females versus males who self-injure.

Using a smartphone app designed at Rutgers specifically for this study, the researchers questioned the participants five times a day for two weeks. Each time, the participants were asked if they had thought about hurting themselves and if they had done so since the last assessment.

They rated the duration of each injury episode and described the behavior, such as cutting, biting, punching, hair pulling, head banging or burning. They also rated their physical pain on a scale of zero (no pain) to 10 (extremely painful) and the extent to which they were experiencing one of 21 emotions -- such as being overwhelmed, sad, angry, anxious, lonely --before, during and immediately after hurting themselves. The study tracked both the number and types of self-injuries.

In the 143 episodes tracked, most participants reported significant pain when they started to hurt themselves. Those who had high negative emotions at the start and experienced less pain reported repeated self-injuries during that episode. Those who had high negative emotions and felt more pain were more likely to have more overall episodes over the two-week period.

"These findings suggest that the individuals who had high emotional distress and instability sought to use physical pain from self-injury more frequently to relieve their emotional distress," said Selby. "It also shows that an absence of pain sensation during self-injury may arise as the behavior worsens and can lead these individuals to be less motivated to seek help."

Selby said the study shows people who hurt themselves experience pain differently and that clinicians should examine their experiences with pain to understand why they started injuring and predict how frequently they may hurt themselves in the future.

Credit: 
Rutgers University

Radiation-resistant E. coli evolved in the lab give view into DNA repair

image: This graph depicts when different mutations (colored lines) entered and left one of the four E. coli populations in the study, all the way up to cycle 50. Lines that make it all the way to the top are then present in 100% of the population. Some mutations don't quite make it there and are then outcompeted by different mutations that do a better job at conferring resistance.

Image: 
Michael Cox Lab

MADISON -- Scientists in the University of Wisconsin-Madison Department of Biochemistry are watching evolution happen in real time.

In a recent study published online in the Journal of Bacteriology, biochemistry professor Michael Cox and his team describe blasting E. coli bacteria with ionizing radiation once a week, causing the bacteria to become radiation resistant. In doing so, they have uncovered genetic mutations and mechanisms underlying this resistance.

The findings reveal ways to possibly engineer radiation-resistant bacteria for future applications, including environmental clean-up and protecting beneficial gut microbes during cancer radiation therapy.

Cox's lab has long been interested in DNA repair, a cellular process by which all organisms are able to piece back together bits of DNA that are broken by stresses like ionizing radiation. This type of radiation is high energy in nature and is associated with nuclear radiation and elements like uranium and plutonium. Astronauts are also exposed to this form of radiation in space. It is encountered at lower doses in some cancer therapies and in some medical imaging, such as x-rays.

A few organisms, mostly bacteria, are naturally resistant to high levels of ionizing radiation. However, many are notoriously difficult to study and little is known about them, especially compared to E. coli. This led Cox's team to turn to another way to study radiation-resistant bacteria -- evolve their own.

Their experiment in "directed" evolution is simple. Lead author and postdoctoral researcher Steven Bruckbauer split a population of E. coli into four groups. Once a week, he and a team of undergraduate researchers use equipment in the Department of Medical Physics to hit each population with ionizing radiation until 99 percent of the cells are dead. They then grow the survivors -- the one percent that best resisted the radiation -- in culture. Most of the new bacteria that grow from these carry beneficial mutations for radiation resistance.
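The kill-99%-and-regrow protocol described above lends itself to a toy simulation. The sketch below is not the lab's analysis code; it only illustrates how repeated bottleneck selection enriches a heritable "resistance" trait, with all parameters chosen for illustration:

```python
import random
import statistics

random.seed(1)
POP = 5000
population = [0.0] * POP  # each cell carries a heritable "resistance" trait

for cycle in range(50):
    # Dose escalates with the population's adaptation so ~99% still die.
    dose = 1.0 + statistics.mean(population)
    survivors = [r for r in population
                 if random.random() < 0.01 * (1.0 + r) / dose]
    if not survivors:                     # guard against total extinction
        survivors = [max(population)]
    # Survivors regrow the culture; offspring inherit resistance with
    # small random mutations (clipped so resistance stays non-negative).
    population = [max(0.0, random.choice(survivors) + random.gauss(0, 0.05))
                  for _ in range(POP)]

print(f"mean resistance after 50 cycles: {statistics.mean(population):.2f}")
```

Mean resistance climbs over the cycles: most mutations are neutral "along for the ride", but each bottleneck preferentially passes on whatever variation raises survival, which is the logic of comparing four parallel populations to separate drivers from passengers.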

"There are many mutations that pop up that might not be relevant to conferring the resistance and are just along for the ride," Cox explains. "We split the population into four so we could compare mutations across multiple populations and check for patterns and try to identify the main drivers of radiation resistance."

Over time, as the bacteria have become resistant, the researchers have been able to increase the level of radiation to which they expose the bacteria. The study takes a deep dive into the mutations they have accumulated after 50 cycles (meaning the bacteria were irradiated 50 times).

Sequence analysis conducted by the UW-Madison team's collaborators at the Joint Genome Institute, a U.S. Department of Energy facility, showed mutations in several genes responsible for more efficient DNA repair in E. coli, which can help confer resistance. Another mutation was in RNA polymerase, the enzyme responsible for transcribing DNA into RNA, which is ultimately used to make important proteins.

While the overall mechanisms, such as enhanced DNA repair, are the same as in naturally resistant bacteria, many of the mutations that caused those changes have never been seen before. Bruckbauer adds that beyond DNA repair and changes to RNA polymerase, there are entirely new ways of being resistant that could arise.

"These mechanisms for conferring resistance are just the ones we've seen," he says. "It's exciting to think about the novel possibilities we haven't identified or that haven't even evolved yet. There are some other mechanisms seen in nature that we expect to pop up eventually but then new ones might start evolving."

The group is currently passing cycle 125 of selection and it plans to look at the genetics of future milestone cycles to see how resistant the bacteria can become.

Though radiation-resistant bacteria do exist in nature, their usefulness for a variety of applications may depend on the particular mutations they harbor. Cox explains that trying to engineer such a bacterium without knowing these mutations and how they function to enhance cells' DNA repair systems would be extremely difficult.

Radiation-resistant bacteria could potentially be administered as probiotics to help alleviate some of the side effects of cancer therapies and could aid clean up at nuclear waste sites. Additionally, NASA is concerned about astronauts' exposure to radiation in space and Cox's work might uncover a mechanism by which they could be better protected.

"We've found that E. coli and other radiation-sensitive organisms possess this latent ability to become highly radiation resistant with modifications to a few existing DNA repair proteins," Bruckbauer says. "To our knowledge no one has made something this radiation resistant in the lab. It's a great example of how life is adaptable."

Credit: 
University of Wisconsin-Madison

Researcher finds data-driven evidence on warrior vs. guardian policing

image: Assistant Professor Kyle McLean

Image: 
FSU Photo/Bruce Palmer

The pros and cons of policing methods have been heavily debated for decades in the United States.

Now, a Florida State University-led team of researchers has created a model to measure the differences between two distinct approaches to policing -- the warrior approach and the guardian approach.

Assistant Professor Kyle McLean said the concepts -- which attracted interest after the release of former President Barack Obama's Task Force on 21st Century Policing report in May 2015 -- had largely been theory up until now. The findings were published in Justice Quarterly.

"The warrior vs. guardian concept always seemed like a cool idea," McLean said. "It sounded really good on paper, but we didn't know if it was true. We didn't have data to back it up. So, we decided to survey some officers to see if this was something real, that we could measure in an empirical way, and we did."

The warrior concept is associated with the idea of militarizing policing and is consistent with the traditional view of police work -- to search, chase and capture. However, the newer concept of guardian policing emphasizes social service, valuing community partnerships and establishing positive contacts.

For the study, researchers surveyed officers from police departments in Fayetteville, North Carolina, and Tucson, Arizona. Participants responded to nine survey items, indicating their agreement with each on a scale from 1 (strongly disagree) to 5 (strongly agree). For instance, one item read, "As a police officer, I see myself primarily as a civil servant."
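To make the scoring concrete, here is a minimal sketch of how Likert-scale responses like these can be aggregated into subscale means. The item groupings, item IDs, and example responses below are invented for illustration; the study's actual item assignments and statistical model are not given in this article.

```python
# Hypothetical sketch: averaging 1-5 Likert responses into guardian
# and warrior subscale scores. Item groupings here are invented.
from statistics import mean

# One officer's responses: item id -> agreement (1 = strongly
# disagree, 5 = strongly agree)
response = {
    "q1": 5, "q2": 4, "q3": 4,
    "q4": 2, "q5": 3, "q6": 2,
    "q7": 4, "q8": 5, "q9": 1,
}

# Hypothetical assignment of the nine items to the two subscales
GUARDIAN_ITEMS = ["q1", "q2", "q3", "q7", "q8"]
WARRIOR_ITEMS = ["q4", "q5", "q6", "q9"]

def subscale_score(resp, items):
    """Mean agreement (1-5) across the items in one subscale."""
    return mean(resp[i] for i in items)

guardian = subscale_score(response, GUARDIAN_ITEMS)
warrior = subscale_score(response, WARRIOR_ITEMS)
print(f"guardian={guardian:.2f} warrior={warrior:.2f}")
# -> guardian=4.40 warrior=2.00
```

A higher guardian mean than warrior mean for the same officer is consistent with the study's finding that the two mentalities are distinct yet can be held simultaneously.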

Officers were also given a hypothetical scenario of a suspicious person walking in a park at night and asked how they would respond.

The team found that the warrior and guardian models are indeed two distinct approaches to policing, though officers could hold both mentalities at once. Officers who scored higher on the guardian measure were more likely to value communication, while higher warrior scores were associated with greater emphasis on physical control and more favorable attitudes toward excessive use of force.

McLean said the warrior mentality often leads to more use of force, making it more likely that the officer or the citizen gets injured.

"Research has shown the guardian mentality has very positive outcomes," McLean said. "While we recognize that you can hold a guardian and a warrior mentality at the same time, if you're not already emphasizing guardianship in some aspects of your work, you're not doing it to the best of your ability and possibly to the detriment of community relationships and well-being."

McLean said future research could explore positive outcomes of the warrior mentality and whether guardian and warrior mentalities and behavior can change over time.

"My hope is that this research will inform the debate and shift the focus to how policing can be changed," McLean said. "As we come to understand these officer mentalities, we can better improve officer training and police-citizen encounters in the field, which ultimately provides better police-community relations."

Credit: 
Florida State University