
Making xylitol and cellulose nanofibers from paper paste

image: Figure 1. The quadruple advantage when using modified yeast.

Image: 
Kobe University

A Japanese research team has achieved the ecological bio-production of xylitol and cellulose nanofibers from material produced by the paper industry, using modified yeast cells. This discovery could contribute to the development of a greener and more sustainable society. The findings were published on March 4 in Green Chemistry.

The research was carried out by a group led by Assistant Professor Gregory Guirimand-Tanaka, Professor Tomohisa Hasunuma and Professor Akihiko Kondo from the Graduate School of Science, Technology and Innovation and the Engineering Biology Research Center of Kobe University.

In his effort to develop innovative processes to achieve a sustainable society, Professor Kondo has focused on a variety of bio-compounds such as xylitol, a highly valuable commodity chemical, which is widely used in both the food and pharmaceutical industries (for example, as a sugar substitute in chewing gum).

Professor Kondo's group is also interested in innovative nanomaterials such as cellulose nanofibers, which present huge economic potential due to the properties of nanocellulose (mechanical properties, film-forming properties, viscosity etc.), and significant applications in food, hygiene, absorbent, medical, cosmetic and pharmaceutical products.

The worldwide demand for both xylitol and cellulose nanofibers is constantly growing, and the cost and environmental impact of their industrial production remain very high.

The industrial production of xylitol and cellulose nanofibers from purified D-xylose and cellulose fibers, respectively, involves costly and polluting processes. To solve these issues and realize a sustainable and environmentally conscious society, we must make use of renewable biomass such as paper paste (Kraft pulp) and develop innovative processes.

Biotechnological production of xylitol and cellulose nanofibers using Kraft pulp derived from the paper industry could be an advantageous option, as this material is abundant, contains a reasonable amount (17%) of D-xylose, and can be converted into highly valuable commodity compounds and nanomaterials.

To release the D-xylose contained in Kraft pulp, a large amount of costly commercial enzymes (CE) is usually required. We therefore decided to use microorganisms such as modified yeast, which can produce these enzymes themselves, in order to reduce the amount of CE initially required. The modified yeast cells we developed carry these enzymes directly on their own cell surface, a strategy we call "cell surface display" technology.

In this study, xylitol and cellulose nanofibers were co-produced from Kraft pulp by using a modified strain of baker's yeast (Saccharomyces cerevisiae YPH499 strain) expressing three different enzymes (β-D-glucosidase (BGL), xylosidase (XYL) and xylanase (XYN)) co-displayed on the cell surface.

By using this strategy, we were not only able to produce xylitol and cellulose nanofibers, but also to considerably increase the purity of the cellulose itself and the cost efficiency of the process by reducing the amount of CE initially required (figure 1).

Last but not least, our team was able to successfully perform these experiments in larger volumes by using 2-liter jar fermenters, enabling us to further scale up bio-refinery industrial production of xylitol and cellulose nanofibers from Kraft pulp (figure 2).

Based on these findings, the team will continue to look for ways to increase the sustainable bio-production of xylitol and cellulose nanofibers through genetic engineering of yeast cells.

Credit: 
Kobe University

Study shows pressure induces unusually high electrical conductivity in polyiodide

Polyiodides exhibit useful electrochemical properties such as charge-carrier transportation, high electrolyte energy density, high redox reaction reversibility and a wide range of electrical conductivity, all depending on the forces exerted by the organic counter ions--chemical pressure. For this reason, polyiodides have been used in technical applications in electronic and electrochemical devices such as flow batteries, fuel cells, dye sensitized solar cells and optical devices.

In this study, researchers led by Prof. Piero Macchi, experimental group leader in MARVEL's Design and Discovery Project 4 and head of the laboratory of chemical crystallography at the Department of Chemistry and Biochemistry of the University of Bern, and Dr. Nicola Casati, group leader of the Materials Science group at PSI, used powder and single-crystal X-ray diffraction, electrical conductivity, and first principle calculations to investigate the response of one polyiodide, tetraethylammonium di-iodine triiodide (TEAI), to compression achieved by mechanical pressure.

Compared with chemical pressure, external mechanical pressure affects the crystal's inter- and intramolecular landscape more substantially: a large lattice strain may induce phase transformations and even chemical reactions. Using diamond anvil cells, it is possible to reach pressures on the order of tens of gigapascals, enough to change the Gibbs energy significantly by increasing the internal energy. Similarly large energy changes are not possible in solids through temperature alteration alone.
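For a rough sense of scale, an order-of-magnitude estimate (our own illustration with assumed values for the volume change, not figures from the study) compares the mechanical work term in the Gibbs energy with the thermal energy available in a solid:

```latex
% Order-of-magnitude illustration with assumed values (not from the study).
% With G = U + pV - TS, a pressure of ~10 GPa acting over a volume change of
% ~10 \AA^3 per formula unit contributes
p\,\Delta V \approx 10^{10}\,\mathrm{Pa} \times 10^{-29}\,\mathrm{m^3}
            = 10^{-19}\,\mathrm{J} \approx 0.6\,\mathrm{eV},
\qquad \text{whereas} \qquad
k_{\mathrm{B}}\,\Delta T \big|_{\Delta T = 300\,\mathrm{K}} \approx 0.026\,\mathrm{eV}.
```

Under these assumptions the pressure term outweighs any realistic temperature swing in a solid by more than an order of magnitude, which is why such large energy changes are accessible only through compression.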

Though the complementary I3- and I2 units are clearly separated and interact mainly electrostatically at ambient pressure, the researchers found that compression pushes them closer together: theoretical calculations show that the covalent contribution increases when the material is compressed. Ultimately, this leads to the formation of charge-transfer (CT) chains and a drastic increase in conductivity.

These features make TEAI a tunable, pressure-sensitive electric switch. Structural studies at high pressure can rationalize the synthesis of, and the search for, future organic and hybrid semiconductors based on polyiodides. The study results indicate that solid polyiodides may be used as solid electrolytes in dye-sensitized solar cells, eliminating the need for organic-based gelators and ionic liquids in general.

Credit: 
National Centre of Competence in Research (NCCR) MARVEL

New model IDs primate species with potential to spread Zika in the Americas

image: White-faced capuchins (Cebus capucinus) were among the primate species with Zika risk scores over 90%. They are well adapted to living among people.

Image: 
Carol Blyberg

(Millbrook, NY) In the Americas, primate species likely to harbor Zika - and potentially transmit the virus - are common, abundant, and often live near people. So reports a new study published today in Epidemics. Findings are based on an innovative model developed by a collaborative team of researchers from Cary Institute of Ecosystem Studies and IBM Research through its Science for Social Good initiative.

Lead author Barbara Han, a disease ecologist at Cary Institute, explains: "When modeling disease systems, data gaps can undermine our ability to predict where people are at risk. Globally, only two primate species have been confirmed positive for Zika virus. We were interested in how a marriage of two modeling techniques could help us overcome limited data on primate biology and ecology - with the goal of identifying surveillance priorities."

The recent Zika epidemic in the Americas was one of the largest outbreaks in modern times, infecting over half a million people. Like other mosquito-borne flaviviruses, Zika circulates in the wild. Primates can serve as disease reservoirs of spillover infection in regions where mosquitoes feed on both primates and people.

By analyzing data on flaviviruses and the primate species known to carry them, and comparing these traits to 364 primate species that occur globally, the model identified known flavivirus carriers with 82% accuracy and assigned risk scores to additional primate species likely to carry Zika virus. The end product includes an interactive map that takes into account primate geographic ranges to identify hotspots where people are most at risk of Zika spillover.

Primate species in the Americas with Zika risk scores over 90% included: the tufted capuchin (Cebus apella), the Venezuelan red howler (Alouatta seniculus), and the white-faced capuchin (Cebus capucinus) - species adapted to living among people in developed areas. Also on the list: white-fronted capuchins (Cebus albifrons), commonly kept as pets and captured for live trade, and squirrel monkeys (Saimiri boliviensis), which are hunted for bushmeat in parts of their range.

"These species are geographically widespread, with abundant populations that live near human population centers. They are notorious crop raiders. They're kept as pets. People display them in cities as tourist attractions and hunt them for bushmeat. In terms of disease spillover risk, this is a highly alarming result," says coauthor Subho Majumdar.

Adding to the concern: the mosquito species most likely to spread Zika are commonly found near humans, and are able to thrive in natural and altered landscapes.

The model
To overcome data gaps, the team combined two statistical tools - multiple imputation and Bayesian multi-label machine learning - to assign each primate species a risk score indicating its potential for Zika positivity.

The pathogens
Traits of six mosquito-borne diseases were assessed: yellow fever, dengue fever, Japanese encephalitis, St. Louis encephalitis, Zika virus, and West Nile virus. Three of these had known primate reservoirs.

The primates
Biological and ecological traits of the 18 primate species that have tested positive for any mosquito-borne flavivirus were compared to the traits of 364 primate species that occur globally. Thirty-three features were assessed, including metabolic rate, gestation period, litter size, and behavior. Features were weighted for importance in predicting Zika positivity.

Han explains: "Like all pathogens, Zika virus has unique requirements for what it needs in an animal host. To determine which species could harbor Zika, we need to know what these traits are, which species have these traits, and which of these species can transmit the pathogen to humans. This is a lot of information, much of which is unknown."

A statistical method called Multiple Imputation by Chained Equations (MICE) was used to overcome data limitations. MICE sets computer algorithms to the task of searching through datasets of organism traits to draw connections between organisms with similar or related traits. When the algorithm encounters a missing data entry, it uses these connections to infer the missing information and fill in the 'blanks' in the dataset.
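As an illustration only (not the researchers' code), a MICE-style imputation can be sketched with scikit-learn's IterativeImputer, which fills each missing value by modeling it from the other traits in a round-robin fashion; the trait names and values below are hypothetical stand-ins for the 33 features used in the study.

```python
# Illustrative MICE-style imputation sketch (not the study's code).
# Trait columns and values are hypothetical stand-ins for the 33 primate features.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

traits = pd.DataFrame({
    "body_mass_g":    [3500.0, np.nan, 800.0, 6500.0],
    "gestation_days": [160.0, 155.0, np.nan, 185.0],
    "litter_size":    [1.0, 1.0, 1.0, np.nan],
    "metabolic_rate": [np.nan, 12.4, 4.1, 20.3],
})

# Each incomplete trait is regressed on the others in turn ("chained
# equations"), and the model's predictions fill in the blanks.
imputer = IterativeImputer(max_iter=10, sample_posterior=True, random_state=0)
filled = pd.DataFrame(imputer.fit_transform(traits), columns=traits.columns)
print(filled.round(1))
```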

Machine learning was applied to this 'filled in' dataset to predict primate species most likely to carry Zika virus. The model produced a risk score for each species by combining flavivirus infection history and biological traits to predict the likelihood of Zika positivity.
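A minimal sketch of this second step might look like the following, using a standard classifier's predicted probabilities as stand-in risk scores (the study used a Bayesian multi-label model, which this simplification does not reproduce; the trait matrix and labels below are random placeholders).

```python
# Minimal risk-scoring sketch (a simplification, not the study's Bayesian
# multi-label model). X_filled stands for the imputed trait matrix; the labels
# mark the known flavivirus-positive species among all species considered.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_filled = rng.normal(size=(364, 33))   # placeholder: 364 species x 33 traits
y_known_positive = np.zeros(364, dtype=int)
y_known_positive[:18] = 1               # placeholder: 18 known carriers

model = GradientBoostingClassifier(random_state=0)
model.fit(X_filled, y_known_positive)

# "Risk score" = predicted probability of flavivirus positivity per species.
risk_scores = model.predict_proba(X_filled)[:, 1]
top10 = np.argsort(risk_scores)[::-1][:10]
print("Highest-scoring species (row indices):", top10)
```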

This method could help improve forecasting models for other disease systems, beyond Zika. Senior author Kush Varshney from IBM Research explains, "Data gaps are a reality, especially in infectious diseases that originate from wild animal hosts. Models like the one we developed can overcome some of these gaps and help pinpoint species of concern to fine-tune surveillance, forecast spillover events, and help guide efforts by the public health community."

With Varshney adding, "Conducting machine learning on small-sized, incomplete, and noisy datasets to support critical decision making is a challenge shared across many industries and sectors. We will surely use the experience gained from this project in many different application areas."

Han concludes, "This research was made possible by innovations provided by the broader scientific community. We relied on primate and pathogen data collected by hundreds of field researchers, and the base machine learning and imputation methods that we adapted in this research already existed. Partners at IBM Research took on a lion's share of the math and coding. It was an incredibly successful interdisciplinary collaboration - the kind we need more of if we want to find new solutions to complex problems."

Credit: 
Cary Institute of Ecosystem Studies

Ultrasound provides precise, minimally invasive way to measure heart function in children

Currently, a practical, precise, minimally invasive way to measure cardiac output or heart function in children undergoing surgery does not exist. New research published in the Online First edition of Anesthesiology, the peer-reviewed medical journal of the American Society of Anesthesiologists (ASA), illustrates how a novel minimally invasive method using catheter-based ultrasound to measure heart function performed with similar precision to a traditional highly invasive device.

Cardiac output, the volume of blood pumped by the heart per minute, is a crucial component of the vital signs monitored in surgical patients. Evaluation by physical examination of critically ill children is often imprecise. Most devices used to monitor cardiac output are adapted from those designed for adult patients and have limited use in children due to differences in size, technical limitations, and the risk of complications. Physicians have almost no alternatives for managing and measuring how a pediatric patient's heart is responding to different therapies, since there are no practical and precise minimally invasive ways to measure cardiac output in infants and young children.

"This new technology is less invasive than earlier technologies and can be used while patients are awake, which makes it more clinically practical for young children," said Theodor S. Sigurdsson, M.D., pediatric anesthesiologist, at Children's Hospital, University Hospital of Lund in Sweden. "Our results demonstrate that this technology was not only easily adaptable in young children but also very accurate and precise. It could aid further validation of the next generation of non-invasive hemodynamic monitors in the intensive care setting."

In the study, researchers used ultrasound sensors to produce precise measurements that were comparable to those obtained using a more traditional method of placing a probe around the patient's aorta to measure heart function. Forty-three children between one and 44 months of age scheduled for corrective cardiac surgery were studied. Researchers measured heart function using both the invasive perivascular flow probe and the new minimally invasive ultrasound technology. After administering a saline injection, researchers were able to detect blood dilution levels using ultrasound sensors attached to an arteriovenous loop connected to catheters in the patient's internal jugular vein and radial artery. The process is minimally invasive because it uses existing catheters and does not require additional invasive procedures.
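Dilution-based measurements of this kind generally rest on the Stewart-Hamilton indicator-dilution principle; the relation below is the standard textbook form, given only as background rather than as the exact formula used in the study.

```latex
% Stewart-Hamilton indicator-dilution relation (textbook form, for background only):
% V_inj is the injected indicator (saline) volume and C(t) the measured dilution
% curve; cardiac output is the injected dose divided by the area under the curve.
\mathrm{CO} = \frac{V_{\mathrm{inj}}}{\displaystyle\int_{0}^{\infty} C(t)\,dt}
```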

After surgery, five consecutive repeated cardiac measurements were performed using both methods simultaneously, for a total of 215 cardiac output measurements. The ultrasound sensors showed a statistically similar precision for measuring cardiac output when compared to the results obtained using the periaortic flow probe.

In an accompanying editorial, Christine T. Trieu, M.D., physician anesthesiologist, at the Ronald Reagan UCLA Medical Center, noted there are very few commercially available, precise cardiac output monitoring devices for infants and young children.

"Despite the encouraging results from this study, there are still many challenges in developing the ideal cardiac output monitor for pediatric patients," said Dr. Trieu. "This is the reason why we welcome and applaud the study by Dr. Sigurdsson et al; it offers the possibility of a simple and reliable method that uses arterial line and central line to measure cardiac output in children of all sizes."

Credit: 
American Society of Anesthesiologists

Prenatal testosterone linked to long-term effects in females who share womb with male twin

EVANSTON, Ill. --- Women who shared their mother's womb with a male twin are less likely to graduate from high school or college, earn less by their early 30s, and have lower fertility and marriage rates compared with twins who are both female, according to new Northwestern University research.

In the largest and most rigorous study of its kind, Northwestern University and Norwegian School of Economics researchers examined data on all twin births in Norway over a 12-year period to find that females exposed in utero to a male twin experienced adverse educational and labor outcomes along with altered patterns of marriage and fertility as adults.

"Nobody has been able to study how male twins impact their twin sisters at such a large scale," said study corresponding author Krzysztof Karbownik, an economist and research associate at Northwestern University's Institute for Policy Research (IPR). "This is the first study to track people for more than 30 years, from birth through schooling and adulthood, to show that being exposed in utero to a male twin influences important outcomes in their twin sister, including school graduation, wages and fertility rates."

The study, "Evidence that prenatal testosterone transfer from male twins reduces the fertility and socioeconomic success of their female co-twins," will be published the week of March 18 in the Proceedings of the National Academy of Sciences (PNAS).

The researchers used data on 13,800 twin births between 1967 and 1978 to show that females exposed in utero to a male twin are less likely to graduate from high school (-15.2 percent), to complete college (-3.9 percent) or to get married (-11.7 percent). They also have lower fertility rates (-5.8 percent) and life-cycle earnings (-8.6 percent).

The study supports the "twin testosterone-transfer hypothesis," which posits that females in male-female twin pairs are exposed to more testosterone in utero via the amniotic fluid or through the mother's bloodstream that they share with their twin brother. One explanation for the long-term effects the researchers discovered is changes in behavior, which have previously been demonstrated in girls with male twins. Unlike the females, the researchers found that male twins do not experience long-term consequences of being exposed to a female twin in utero.

"This is a story about the biology of sex differences," said co-author David Figlio, Dean of Northwestern's School of Education and Social Policy and IPR fellow. "We are not showing that exposed females are necessarily more 'male-like,' but our findings are consistent with the idea that passive exposure to prenatal testosterone changes women's education, labor market, and fertility outcomes."

During sensitive developmental periods in utero, steroids produced by the ovaries and testes, including testosterone, help establish biological differences between males and females. Previous, smaller studies have suggested that such exposure to opposite-sex hormones can lead to lasting changes in behavior and other traits. On the other hand, it has also been noted that socialization effects - or being a female raised alongside a twin brother - could likewise explain the different behaviors and outcomes shown by past studies.

To separate the effects of fetal testosterone from postnatal socialization, the research team repeated their analyses focusing only on female twins whose twin sibling -- either twin sister or twin brother -- died shortly after birth, and thus they were raised as singletons. The results were unchanged in this sample, providing strong evidence that the long-term effects that the study documents are due to prenatal exposure, rather than postnatal socialization.

The near-doubling of twinning rates in many countries since 1980 -- a result of women conceiving later in life and increased reliance on in vitro fertilization (IVF) -- means that an increasing number of females worldwide are exposed to prenatal testosterone from their male twin.

The researchers caution that they lack information on all possible outcomes, and it is possible that some positive effects from testosterone exposure also exist. Moreover, the long-term impacts of prenatal testosterone exposure, which likely involve behavioral changes, may shift as societal norms surrounding gender change.

"It is important to emphasize that our findings apply to Norwegian society during the timeframe of the study, but may not apply equally across other societies or cultural settings. For instance, if gender norms change within a society, acceptance of a wider array of behaviors could minimize later effects on outcomes like school completion or entering a marriage" said study co-author Christopher Kuzawa, professor of anthropology and an IPR fellow, whose research focuses on the roles that the intrauterine and early postnatal environments have on development and long-term health.

"Basically we find that there are some very interesting long-term biological effects of being a sister to a twin brother," Kuzawa said. "But whether we view those effects as 'positive' or 'negative' may be culturally dependent."

"While we found moderate effects at the national level, these results reflect mean differences," Karbownik said. "Not everyone will be affected in the same way, and some female twins may not be affected at all. And, these effects are highly unlikely to result from any individual fertility decision made by a woman or couple, given that twins are a small subset of births and male-female twin pairs even rarer yet."

"We certainly do not advocate against delayed reproduction or the use of IVF, which are complex decisions made by individuals balancing a range of personal factors," Karbownik said.

Caveats aside, "our results suggest that in utero testosterone transfer could present a hidden impact of practices that increase multiple zygote implantation, and provide long-term perspectives concerning the risks and returns of these fertility decisions," the researchers wrote.

Credit: 
Northwestern University

Does 'pay-to-play' put sports, extracurricular activities out of reach for some students?

image: The more likely parents were to perceive activities as too expensive for the return, the less likely their kids were to participate.

Image: 
C.S. Mott Children's Hospital National Poll on Children's Health at the University of Michigan

ANN ARBOR, Mich. -- From choir and cheerleading to soccer and student council, extracurricular school activities keep students engaged - but cost may be among barriers that prevent some children from participating, a new national poll suggests.

Eighteen percent of middle and high school-age children are not involved in any extracurricular activities this school year, according to the C.S. Mott Children's Hospital National Poll on Children's Health at the University of Michigan. And students from lower-income households ($100,000 a year or less) experience twice the rate of non-participation as peers from higher-income families.

"Extracurricular school activities have been shown to boost educational achievement, personal development and social opportunities," says poll co-director Sarah Clark. "But barriers to participation prevent some children from enjoying the benefits that these experiences offer."

About half of students are participating in school sports, ranging from intramurals to varsity teams, during the 2018-19 school year, while more than 40 percent are involved with arts activities such as music, theater, or dance. About half also participate in a club or other activity, including afterschool clubs and more formal groups such as student council.

But these activities often come with a cost. Required school participation fees average $161 for sports, $86 for arts, and $46 for clubs and other activities. For sports, 18 percent of students had school participation fees of $200 or more, compared to 12 percent for arts and 5 percent for clubs and other groups.

When combining participation fees with other expenses, such as equipment and travel, the total cost averaged $408 for sports, $251 for arts, and $126 for other activities.

And the more likely parents were to perceive activities as too expensive for the return, the less likely their kids were to participate, the poll suggests. Twenty-nine percent of parents say the cost of school extracurricular activities is higher than they expected, and 10 percent felt the benefits of activities are not worth the cost - with lower-income parents three times as likely to say so.

"Parent views about the cost of school activities is linked to the discrepancy in non-participation among families," Clark says. "Children of parents who didn't perceive benefits outweighing the cost were least likely to being involved in sports, arts or clubs."

"Most schools strive to offer a range of activities, including some that do not require participation fees," Clark adds. "Many schools also offer waivers or scholarships to make activities accessible to all students. Despite these efforts, we are still seeing lower participation among students whose parents perceive the cost is out of reach - perceptions that may be inaccurate."

Two thirds of students participating in arts or clubs had no participation fees, compared to only 46 percent for sports.

"For required participation fees, as well as total costs, school sports are on average more expensive for families than other types of activities," says Clark.

And parents may not know how to alleviate these costs. Just 7 percent of parents have ever requested a waiver or scholarship for participation fees. Nineteen percent didn't know how and 5 percent weren't comfortable requesting assistance.

"Parents concerned about the cost of school activities may not be aware of no-cost options, or strategies that would lower or eliminate fees," Clark says. "These missed opportunities for assistance can impede children from pursuing their interests."

The nationally representative Mott poll report is based on responses from 961 parents who answered questions about 1,323 children in middle or high school.

Over 80 percent of middle and high school-age children were expected to participate in at least one type of school activity for the 2018-19 school year.

The income gap in student non-participation is consistent with findings from previous Mott poll reports, and corresponds with income-related attitudes of parents.

Non-participation was also higher among boys than girls (21 percent versus 15 percent). Top reasons for non-participation among boys were cost, transportation, and having a job, while a lack of interest in activities was cited more often for girls.

"School officials may consider increasing awareness about no-cost activities and waivers as well as emphasizing the benefits of these experiences to hesitant parents," Clark says. "We should do our best to help every child have the opportunity to explore different interests, make new friends, develop skills and enjoy a well-rounded school experience."

Credit: 
Michigan Medicine - University of Michigan

Researchers create hydrogen fuel from seawater

image: A prototype device used solar energy to create hydrogen fuel from seawater.

Image: 
Courtesy of H. Dai, Yun Kuang, Michael Kenney

Stanford researchers have devised a way to generate hydrogen fuel using solar power, electrodes and saltwater from San Francisco Bay.

The findings, published March 18 in Proceedings of the National Academy of Sciences, demonstrate a new way of separating hydrogen and oxygen gas from seawater via electricity. Existing water-splitting methods rely on highly purified water, which is a precious resource and costly to produce.

Theoretically, to power cities and cars, "you need so much hydrogen it is not conceivable to use purified water," said Hongjie Dai, J.G. Jackson and C.J. Wood professor in chemistry at Stanford and co-senior author on the paper. "We barely have enough water for our current needs in California."

Hydrogen is an appealing option for fuel because it doesn't emit carbon dioxide, Dai said. Burning hydrogen produces only water and should ease worsening climate change problems.

Dai said his lab showed proof-of-concept with a demo, but the researchers will leave it up to manufacturers to scale and mass produce the design.

Tackling corrosion

As a concept, splitting water into hydrogen and oxygen with electricity - called electrolysis - is a simple and old idea: a power source connects to two electrodes placed in water. When power turns on, hydrogen gas bubbles out of the negative end - called the cathode - and breathable oxygen emerges at the positive end - the anode.
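For reference, the standard textbook half-reactions for water electrolysis in alkaline conditions (the regime relevant to hydroxide-based anodes; these are general chemistry, not equations taken from the paper) are:

```latex
% Alkaline water-splitting half-reactions (standard chemistry, not from the paper)
\text{Cathode (hydrogen evolution):}\quad 2\,\mathrm{H_2O} + 2\,e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-}
\text{Anode (oxygen evolution):}\quad 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^-
\text{Overall:}\quad 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
```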

But negatively charged chloride in seawater salt can corrode the positive end, limiting the system's lifespan. Dai and his team wanted to find a way to stop those seawater components from breaking down the submerged anodes.

The researchers discovered that if they coated the anode with layers that were rich in negative charges, the layers repelled chloride and slowed down the decay of the underlying metal.

They layered nickel-iron hydroxide on top of nickel sulfide, which covers a nickel foam core. The nickel foam acts as a conductor - transporting electricity from the power source - and the nickel-iron hydroxide sparks the electrolysis, separating water into oxygen and hydrogen. During electrolysis, the nickel sulfide evolves into a negatively charged layer that protects the anode. Just as the negative ends of two magnets push against one another, the negatively charged layer repels chloride and prevents it from reaching the core metal.

Without the negatively charged coating, the anode only works for around 12 hours in seawater, according to Michael Kenney, a graduate student in the Dai lab and co-lead author on the paper. "The whole electrode falls apart into a crumble," Kenney said. "But with this layer, it is able to go more than a thousand hours."

Previous studies attempting to split seawater for hydrogen fuel had run low amounts of electric current, because corrosion occurs at higher currents. But Dai, Kenney and their colleagues were able to conduct up to 10 times more electricity through their multi-layer device, which helps it generate hydrogen from seawater at a faster rate.

"I think we set a record on the current to split seawater," Dai said.

The team members conducted most of their tests in controlled laboratory conditions, where they could regulate the amount of electricity entering the system. But they also designed a solar-powered demonstration machine that produced hydrogen and oxygen gas from seawater collected from San Francisco Bay.

And without the risk of corrosion from salts, the device matched current technologies that use purified water. "The impressive thing about this study was that we were able to operate at electrical currents that are the same as what is used in industry today," Kenney said.

Surprisingly simple

Looking back, Dai and Kenney can see the simplicity of their design. "If we had a crystal ball three years ago, it would have been done in a month," Dai said. But now that the basic recipe is figured out for electrolysis with seawater, the new method will open doors for increasing the availability of hydrogen fuel powered by solar or wind energy.

In the future, the technology could be used for purposes beyond generating energy. Since the process also produces breathable oxygen, divers or submarines could bring devices into the ocean and generate oxygen down below without having to surface for air.

In terms of transferring the technology, "one could just use these elements in existing electrolyzer systems and that could be pretty quick," Dai said. "It's not like starting from zero - it's more like starting from 80 or 90 percent."

Credit: 
Stanford University

Experimental blood test accurately spots fibromyalgia

video: For the first time, researchers at The Ohio State University have evidence that fibromyalgia can be reliably detected in blood samples -- work they hope will pave the way for a simple, fast diagnosis.

Image: 
The Ohio State University Wexner Medical Center

COLUMBUS, Ohio - For the first time, researchers have evidence that fibromyalgia can be reliably detected in blood samples - work they hope will pave the way for a simple, fast diagnosis.

In a study that appears in the Journal of Biological Chemistry, researchers from The Ohio State University report success in identifying biomarkers of fibromyalgia and differentiating it from a handful of other related diseases.

The discovery could be an important turning point in care of patients with a disease that is frequently misdiagnosed or undiagnosed, leaving them without proper care and advice on managing their chronic pain and fatigue, said lead researcher Kevin Hackshaw, an associate professor in Ohio State's College of Medicine and a rheumatologist at the university's Wexner Medical Center.

Identification of biomarkers of the disease - a "metabolic fingerprint" like that discovered in the new study - could also open up the possibility of targeted treatments, he said.

To diagnose fibromyalgia, doctors now rely on patient-reported information about a multitude of symptoms and a physical evaluation of a patient's pain, focusing on specific tender points, he said. But there's no blood test - no clear-cut, easy-to-use tool to provide a quick answer.

"We found clear, reproducible metabolic patterns in the blood of dozens of patients with fibromyalgia. This brings us much closer to a blood test than we have ever been," Hackshaw said.

Though fibromyalgia is currently incurable and treatment is limited to exercise, education and antidepressants, an accurate diagnosis has many benefits, Hackshaw said. Those include ruling out other diseases, confirming for patients that their symptoms are real and not imagined, and guiding doctors toward disease recognition and appropriate treatment.

"Most physicians nowadays don't question whether fibromyalgia is real, but there are still skeptics out there," Hackshaw said.

And many undiagnosed patients are prescribed opioids - strong, addictive painkillers that have not been shown to benefit people with the disease, he said.

"When you look at chronic pain clinics, about 40 percent of patients on opioids meet the diagnostic criteria for fibromyalgia. Fibromyalgia often gets worse, and certainly doesn't get better, with opioids."

Hackshaw and co-author Luis Rodriguez-Saona, an expert in the advanced testing method used in the study, said the next step is a larger-scale clinical trial to determine if the success they saw in this research can be replicated.

The current study included 50 people with a fibromyalgia diagnosis, 29 with rheumatoid arthritis, 19 who have osteoarthritis and 23 with lupus.

Researchers examined blood samples from each participant using a technique called vibrational spectroscopy, which measures the energy level of molecules within the sample. Scientists in Rodriguez-Saona's lab detected clear patterns that consistently set fibromyalgia patients' blood sample results apart from those with other, similar disorders.

First, the researchers analyzed blood samples from participants whose disease status they knew, so they could develop a baseline pattern for each diagnosis. Then, using two types of spectroscopy, they evaluated the rest of the samples blindly, without knowing the participants' diagnoses, and accurately clustered every study participant into the appropriate disease category based on a molecular signature.
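As a rough illustration of what such pattern recognition on spectra can look like (a generic pipeline with synthetic placeholder data, not the Ohio State analysis), one common approach is dimensionality reduction followed by a discriminant classifier:

```python
# Generic spectral-classification sketch (illustrative only; synthetic data,
# not the Ohio State study's analysis). Each row is one blood sample's spectrum.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_group, n_wavenumbers = 30, 500
groups = ["fibromyalgia", "rheumatoid_arthritis", "osteoarthritis", "lupus"]

# Synthetic spectra: a shared noisy baseline plus a small group-specific shift.
X = np.vstack([
    rng.normal(loc=i * 0.05, scale=1.0, size=(n_per_group, n_wavenumbers))
    for i in range(len(groups))
])
y = np.repeat(groups, n_per_group)

# Compress each spectrum to a few components, then separate the disease groups.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```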

"These initial results are remarkable. If we can help speed diagnosis for these patients, their treatment will be better and they'll likely have better outlooks. There's nothing worse than being in a gray area where you don't know what disease you have," Rodriguez-Saona said.

His lab mostly concerns itself with using the metabolic fingerprinting technology for food-related research, focusing on issues such as adulteration of milk and cooking oils and helping agriculture companies figure out which plants are best suited to fight disease.

The chance to partner with medical experts to help solve the problem of fibromyalgia misdiagnosis was exciting, said Rodriguez-Saona, a professor of food science and technology at Ohio State.

Rodriguez-Saona said for the next study he'd like to examine 150 to 200 subjects per disease group to see if the findings of this research are replicable in a larger, more-diverse population.

Hackshaw said his goal is to have a test ready for widespread use within five years.

Fibromyalgia is the most common cause of chronic widespread pain in the United States, and disproportionately affects women. The U.S. Centers for Disease Control and Prevention estimates that about 2 percent of the population - around 4 million adults - has fibromyalgia. Other organizations estimate even higher numbers.

About three in four people with fibromyalgia have not received an accurate diagnosis, according to previous research, and those who do know they have the disease waited an average of five years between symptom onset and diagnosis. Common symptoms include pain and stiffness all over the body, fatigue, depression, anxiety, sleep problems, headaches and problems with thinking, memory and concentration.

Eventually, this work could lead to identification of a particular protein or acid - or combination of molecules - that is linked to fibromyalgia, Rodriguez-Saona said.

"We can look back into some of these fingerprints and potentially identify some of the chemicals associated with the differences we are seeing," he said.

In addition to identifying fibromyalgia, the researchers also found evidence that the metabolic fingerprinting technique has the potential to determine the severity of fibromyalgia in an individual patient.

"This could lead to better, more directed treatment for patients," Hackshaw said.

Credit: 
Ohio State University

Reattaching to work is just as important as detaching from work, study finds

Research has increasingly shown that an employee's ability to mentally detach from work and recoup during non-work hours is important for their well-being. But a new study co-authored by a Portland State University professor suggests the opposite is just as important: employees who mentally reattach to work in the morning are more engaged at work.

The study, published in the Journal of Management, showed that planning and mentally simulating the upcoming workday triggers work-related goals. During reattachment, employees think about what will happen during the day, the tasks that have to be accomplished, any potential challenges that might arise, as well as the support and resources they might need to accomplish their goals.

"We know that detachment from work during non-work hours is important because it creates positive outcomes like higher life satisfaction and lower burnout," said Charlotte Fritz, a co-author and associate professor of industrial-organizational psychology in PSU's College of Liberal Arts and Sciences. "Now we need to think about helping people mentally reconnect to work at the beginning of their work shift or day so they can create positive outcomes during their work day and be immersed in their work. It's not enough to just show up."

The study surveyed 151 participants from a broad range of industries, including finance, energy, public administration, information and communication, and health care.

Fritz said an employee's reattachment to work can vary from day to day and will depend on the person and their job. For example, she said employees can think about specific tasks that need to be done over breakfast or in the shower, mentally go over a conversation with a supervisor during their commute, or run through their to-do list while standing in line for a coffee.

"Through reattachment, employees are able to activate work-related goals, which then further creates positive experiences which allow people to be more engaged at work," Fritz said. "Engagement is a sense of energy, sense of feeling absorbed, feeling dedicated to work, and those are all very important motivational experiences that translate to positive outcomes for both employees and organizations. They're more satisfied with work, more committed to work, enjoy work tasks more, perform better and help out more with extra tasks."

The researchers suggest that organizations develop norms and routines that help employees reattach to work and support them in transitioning smoothly into the workday. This could include allowing them a few quiet minutes at the start of the day, initiating a short planning conversation about the upcoming workday, encouraging them to prioritize their most important goals, offering short checklists, or even providing more autonomy on the job to complete specific tasks.

"Organizations need employees who are highly engaged, and reattachment is key," Fritz said.

Credit: 
Portland State University

Commonly used heart drug associated with increased risk of sudden cardiac arrest

Lisbon, Portugal - 17 March 2019: A drug commonly used to treat high blood pressure and angina (chest pain) is associated with an increased risk of out-of-hospital sudden cardiac arrest, according to results from the European Sudden Cardiac Arrest network (ESCAPE-NET) presented today at EHRA 2019.1

Sudden cardiac arrest causes around half of cardiac deaths in Europe and one in five natural deaths. The heart stops pumping after a cardiac arrhythmia (ventricular fibrillation/tachycardia); this is lethal in minutes if untreated. ESCAPE-NET was set up to find the causes of these arrhythmias, so they can be prevented.

Dr Hanno Tan, ESCAPE-NET project leader and cardiologist, Academic Medical Centre, Amsterdam, the Netherlands, urged caution when interpreting these results. He said: "The findings need to be replicated in other studies before action should be taken by doctors or patients."

The study examined if nifedipine and amlodipine, dihydropyridines widely used for high blood pressure and angina, are linked with out-of-hospital cardiac arrest. The nifedipine doses most often used and studied in this investigation are 30 mg and 60 mg (90 mg is available but infrequently used) and the amlodipine doses are 5 mg and 10 mg. Standard practice is to start with a lower dose, then give the higher dose if blood pressure or chest pain are not sufficiently reduced.

The analysis was done using data from the Dutch Amsterdam Resuscitation Studies registry (ARREST, 2005-2011) and confirmed in the Danish Cardiac Arrest Registry (DANCAR, 2001-2014), both part of ESCAPE-NET. Patients with out-of-hospital cardiac arrest due to ventricular fibrillation/tachycardia were enrolled, plus up to five controls per patient matched for age and sex. Controls were from the Dutch PHARMO Database Network and the general population in Denmark. In total, the study included 2,503 patients and 10,543 controls in the ARREST analysis and 8,101 patients and 40,505 controls in the DANCAR analysis.

Current use of high-dose (≥60 mg/day), but not low-dose (<60 mg/day), nifedipine was associated with an increased risk of out-of-hospital cardiac arrest; amlodipine use was not associated with an increased risk at any dose.

The results were supported by a study in human cardiac cells. Dihydropyridines act by blocking L-type calcium channels. The laboratory study showed that at the dosages studied in this investigation, both drugs blocked these ion channels, thereby shortening the action potential of the cardiac cell. A shorter action potential can facilitate the occurrence of the fatal arrhythmias that cause sudden cardiac arrest. High-dose nifedipine (60 mg) caused more shortening of the action potential than high-dose amlodipine (10 mg).

"Nifedipine and amlodipine are often used by many cardiologists and other physicians, and the choice often depends on the prescriber's preference and personal experience," said Dr Tan. "Both drugs are generally considered to be equally effective and safe and neither has been associated with sudden cardiac arrest. This study suggests that high-dose nifedipine may increase the risk of sudden cardiac arrest due to fatal cardiac arrhythmia while amlodipine does not. If these findings are confirmed in other studies, they may have to be taken into account when the use of either drug is considered."

These findings may be surprising given that both drugs have been in use for many years and in many patients. A possible explanation why this discovery has only been made now is that out-of-hospital cardiac arrest is very difficult to study due to its rapid course, and requires dedicated datasets collected specifically for this purpose. Until now, there were insufficient patient records to test the impact of medications. ESCAPE-NET has made this possible by linking large cohorts across Europe, including ARREST and DANCAR.

The work of ESCAPE-NET will be discussed in two sessions at EHRA 2019, highlighting the importance of working across Europe2 and advances in preventing sudden cardiac arrest during sports.3

Dr Tan said: "As a European consortium we can validate our findings in different populations, and we bring together different expertise. For example, sudden cardiac arrest during sports is 19 times more common in men than women and the network enables us to comprehensively evaluate the potential biological (sex) and behavioural (gender) reasons."

ESCAPE-NET is backed by the European Heart Rhythm Association (EHRA) of the European Society of Cardiology (ESC) and the European Resuscitation Council (ERC).

Credit: 
European Society of Cardiology

World's oldest semen still viable

video: Sheep semen thawed after 50 years under a microscope.

Image: 
University of Sydney

Semen stored since 1968 in a laboratory in Sydney has been defrosted and successfully used to impregnate 34 Merino ewes, with the resulting live birth rate as high as that of sperm frozen for just 12 months.

"This demonstrates the clear viability of long-term frozen storage of semen. The results show that fertility is maintained despite 50 years of frozen storage in liquid nitrogen," said Associate Professor Simon de Graaf from the Sydney Institute of Agriculture and School of Life and Environmental Sciences at the University of Sydney.

"The lambs appear to display the body wrinkle that was common in Merinos in the middle of last century, a feature originally selected to maximise skin surface area and wool yields. That style of Merino has since largely fallen from favour as the folds led to difficulties in shearing and increased risk of fly strike," Associate Professor de Graaf said.

His colleague on this project, Dr Jessica Rickard, said: "We believe this is the oldest viable stored semen of any species in the world and definitely the oldest sperm used to produce offspring."

Associate Professor de Graaf said that it was the reproductive biology and genetic aspects of these as-yet unpublished findings that were of most interest to him.

"We can now look at the genetic progress made by the wool industry over past 50 years of selective breeding. In that time, we've been trying to make better, more productive sheep," he said. "This gives us a resource to benchmark and compare."

Dr Rickard is a post-doctoral McCaughey Research Fellow in the Sydney Institute of Agriculture. She is continuing the strong animal reproduction research tradition in veterinary and biological sciences at the University of Sydney through her work in the Animal Reproduction Group.

Dr Rickard did the original work to determine if the stored semen was viable for artificial insemination. This involved thawing the semen, which is stored as small pellets in large vats of liquid nitrogen at -196 degrees. She and her colleagues then undertook in vitro tests on the sperm quality to determine the motility, velocity, viability and DNA integrity of the 50-year-old sperm.

"What is amazing about this result is we found no difference between sperm frozen for 50 years and sperm frozen for a year," Dr Rickard said.

Out of 56 ewes inseminated, 34 were successfully impregnated. This compares to recently frozen semen from 19 sires used to inseminate 1048 ewes, of which 618 were successfully impregnated. This gives a pregnancy rate of 61 percent for the 50-year-old semen against 59 percent for recently frozen sperm, a statistically equivalent rate.
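A quick check of the rates reported above, using only the counts given in the article (the chi-square test here is our illustrative choice, not necessarily the analysis the researchers performed):

```python
# Comparing the two pregnancy rates reported above (counts from the article;
# the chi-square test is an illustrative choice, not necessarily the study's).
from scipy.stats import chi2_contingency

old_semen = (34, 56 - 34)        # pregnant, not pregnant (50-year-old semen)
recent    = (618, 1048 - 618)    # pregnant, not pregnant (recently frozen semen)

print(f"50-year-old semen: {34 / 56:.1%}")      # ~60.7%, reported as 61 percent
print(f"Recently frozen:   {618 / 1048:.1%}")   # ~59.0%

chi2, p, dof, expected = chi2_contingency([old_semen, recent])
print(f"p-value = {p:.2f}")      # well above 0.05, consistent with equivalence
```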

The original semen samples were donated in the 1960s from sires owned by the Walker family. Those samples, frozen in 1968 by Dr Steven Salamon, came from four rams, including 'Sir Freddie' born in 1963, owned by the Walkers on their then property at Ledgworth.

The Walkers now run 8000 sheep at 'Woolaroo', at Yass Plains, and maintain a close and proud relationship with the animal breeding program at the University of Sydney.

Credit: 
University of Sydney

Calcium in arteries is shown to increase patients' imminent risk of a heart attack

image: A new research study shows that identifying the presence or absence of coronary artery calcium (CAC) in a patient's arteries can help determine their future risk of a heart attack.

Image: 
Intermountain Healthcare Heart Institute

About six million people come into an emergency department every year with chest pain, but not all of them are having a heart attack -- and many are not even at risk or are at very low risk for having one.

Now, a new research study presented at the American College of Cardiology Scientific Sessions from the Intermountain Healthcare Heart Institute in Salt Lake City shows that identifying the presence or absence of coronary artery calcium (CAC) in a patient's arteries can help determine their future risk.

"Through these results, we're seeing more clearly that the presence of coronary artery calcium can help us to predict who is more likely to have a cardiac event, not only later in life, but when symptoms are present, in the near future and hopefully, medically intervene in time to stop it," said Viet T. Le, PA-C, principal investigator and researcher at the Intermountain Healthcare Heart Institute in Salt Lake City.

Results of the study were presented at the American College of Cardiology Scientific Sessions in Atlanta on March 16, 2019.

For the study, researchers identified 5,547 patients without a history of coronary artery disease who came to Intermountain Medical Center with chest pain between April 2013 and June 2016.

These patients had undergone PET/CT scans to assess for ischemia, a disruption of normal blood flow through the heart arteries to the muscle tissues of the heart. This scan also looks for the presence of CAC, which are calcium deposits on the walls of the heart's arteries, indicating atherosclerosis, or plaque, the hallmark of heart disease. The researchers then examined patients' medical outcomes for up to the next four years.

Researchers found that patients whose scans revealed CAC were at higher risk of having a heart event within 90 days compared with patients whose PET/CT showed they had no CAC. Researchers also found that patients with CAC were also more likely over the following years to have high-grade obstructive coronary artery disease, revascularization surgery, and/or other major adverse cardiac events than patients who had no coronary artery calcium.

The findings can be used in two different ways, said Le.

First, testing for CAC can help emergency departments quickly distinguish patients with chest pain who are not in acute distress but are at risk for a future heart event from those whose symptoms may not be heart related and who should follow up with their primary care physician to identify the true source of the chest pain, which may be as simple as a pulled muscle. These CAC scans are non-invasive, use only about as much radiation as a mammogram, and are relatively cheap, especially compared with PET/CT stress tests, Le said.

Second, CAC isn't easily identifiable at low or moderate levels in the arteries without a formal scan. Checking patients who are not actively experiencing a heart event but who have suspicious symptoms when they come to the ED can help physicians identify who is at risk for a future event. This allows early initiation of risk-reducing lifestyle changes in those found to have CAC, helping them avoid future events.

"We can have that discussion about improving their lifestyle a little sooner this way because they may not be having an acute event but they're looking down the barrel of one, so let's see if we can move that barrel away," said Le.

Future studies are needed to demonstrate whether a CAC first strategy in these symptomatic patients will better identify those who should have further stress testing as well as improve patient education and early implementation of risk reducing strategies.

Credit: 
Intermountain Medical Center

Patients with irreparable rotator cuff tears may have another surgical option

LAS VEGAS, NV - The arthroscopic superior capsule reconstruction (SCR) surgical technique offers patients with irreparable rotator cuff tears restored shoulder function and the opportunity to return to sports and physically-demanding work, according to research presented today at the AOSSM/AANA Specialty Day in Las Vegas, Nevada. The study, which examined patient outcomes up to five years after surgery, built upon earlier research which examined short-term patient results.

"We studied 30 patients who were treated with arthroscopic SCR, and consistently saw improvements in outcomes related to shoulder function and the daily lives of those treated," noted lead researcher Teruhisa Mihata, MD, PhD, from Osaka Medical College in Osaka, Japan. "The technique allowed 11 of 12 patients who had previously worked to return full-time at five-year follow-up, and all eight who had participated in sports to return to play."

The study measured both American Shoulder and Elbow Surgeons (ASES) and Japanese Orthopaedic Association (JOA) scores in patients, which improved significantly at both one and five years after surgery. The average ASES scores rose from 29.0 before surgery to 83.0 at one year and 92.3 at five years after surgery, with JOA scores rising from 51.5 before surgery to 85.9 and 91.4, respectively. Active elevation increased from 85 degrees prior to surgery to 151 degrees 5 years after the operation. Out of 30 patients followed for five years, only three (10%) experienced graft tears. Those who demonstrated graft healing also showed no sign of glenohumeral osteoarthritis during the five-year period.

"Our latest research shows continued promise for the arthroscopic SCR technique, particularly to restore a patient's shoulder function and allow them to return to work or sports if they so choose," said Mihata. "We plan to continue studying longer-term outcomes for patients, focusing on continued function and the appearance of osteoarthritis."

Credit: 
American Orthopaedic Society for Sports Medicine

Blood flow restriction therapy may protect against bone loss following ACL reconstruction

LAS VEGAS, NV - Anterior Cruciate Ligament (ACL) reconstruction patients often face bone and muscle loss immediately following the procedure. Researchers presenting their work today at the AOSSM/AANA Specialty Day note that combining blood flow restriction (BFR) therapy with traditional rehabilitation efforts may slow bone loss and reduce return to function time.

"Providing BFR as part of the rehabilitation efforts following ACL surgery, appears to help preserve the bone, recover muscle loss and improve function quicker, according to our research," said lead author, Bradley Lambert, PhD - Orthopedic Biomechanics Research Laboratory, Department of Orthopedics & Sports Medicine, Houston Methodist Hospital)

Dr. Lambert and his colleagues are presenting new results of a randomized prospective study initiated and directed by Patrick McCulloch, MD (principal investigator and chair of research for the department), in which 23 active young patients (mean age 23) were studied following ACL reconstruction. Participants were divided into two groups. Both groups received the same rehab protocol, but during select exercises the BFR group exercised with 80% arterial limb occlusion using an automated tourniquet. Bone mineral density, bone mass, and lean muscle mass were measured using DEXA. The addition of BFR therapy to standard rehab exercises was found to prevent muscle mass loss in the whole leg and thigh of the post-operative limb compared with rehab alone. Intriguingly, the addition of BFR was also observed to minimize losses in bone mineral content and preserve bone density in the limb compared with standard rehab alone. These findings coincided with improved functional outcomes observed by Corbin Hedt, DPT, who oversaw the therapy sessions.

"BFR is a suitable additive therapy to ACL rehabilitation for the purposes of minimizing the loss, and enhancing the recovery of muscle, bone, and physical function. While further research is needed to fully illuminate the physiologic mechanisms responsible for our results, these findings likely have wide ranging implications for fields outside of ACL rehab alone such as injury prevention, age-related muscle and bone loss, military rehabilitation, and potentially space flight," said Lambert.

Credit: 
American Orthopaedic Society for Sports Medicine

Early sports specialization tied to increased injury rates in college athletes

LAS VEGAS, NV - Sixty million kids participate in organized athletics each year, with ever-increasing numbers of children specializing in one sport before the age of 14 in hopes of a college scholarship or professional career. However, researchers presenting their work at the AOSSM/AANA Specialty Day today reveal that this early intense participation may come at the cost of increased injuries during their athletic careers.

"Our research indicated that athletes who specialized in their varsity sport before the age of 14 were more likely to report a history of injuries and multiple college injuries during the course of their athletic career," said author, Brian M. Cash, MD from the Department of Orthopaedic Surgery at the University of California at Los Angeles.

Cash and his colleagues sent a voluntary survey to 652 athletes who participated in athletics at a single institution. Participants were asked about demographics, scholarship status, reasons for sports specialization, age of specialization, training volume, and injury/surgical history. A total of 202 surveys were available for analysis after some were excluded due to incomplete or incorrect completion. Injuries were defined as those that kept an athlete out of participation for more than one week. High training volume was defined as more than 28 hours per week during pre-high school years. Among athletes who specialized early, 86.9% reported a history of injury versus 74% of those who did not, 64.6% versus 49.4% reported multiple injuries, and early specializers were held out of sport participation an average of 15.2 weeks versus 7.0 weeks for those who did not specialize early. However, early specializers were also more likely to receive a college scholarship (92.9% vs. 83.1%). Full-scholarship athletes were more likely to report multiple surgical injuries (11.7% vs. 3.5%).

In addition, those who trained more than 28 hours per week in their varsity sport before high school were more likely to report multiple injuries (90.0% vs. 56.7%). Individuals with a pre-high school training volume greater than 28 hours/week were not more likely to be recruited (90.0% vs. 89%) or to receive a scholarship (80% vs. 74.5%).

"Sports participation is an excellent way for kids to maintain their health and possibly even receive a college scholarship. However, our research further highlights that avoiding sports specialization before the age of 14 and minimizing training time to less than 28 hours per week, may significantly minimize a child's injury chances and promote long-term, athletic college or even elite success," said Cash.

Credit: 
American Orthopaedic Society for Sports Medicine