Outpatient treatment for cancer-related fluid on the lung offers effective new approach for patients

A novel approach to treating fluid build-up around the lungs of cancer patients could deliver a more effective home-based treatment for thousands of people who might be approaching the end of their lives, according to a new study led by the University of Bristol and North Bristol NHS Trust.

In patients with all types of cancer, excess fluid can start to collect between the thin layers of tissue lining the outside of the lung and the wall of the chest cavity. This phenomenon, called a "malignant pleural effusion", is particularly prevalent in lung and breast cancer patients and is estimated to affect at least 50,000 people in the UK each year, with numbers rising as both cancer survival and the number of cancer diagnoses increase year on year.

As the lung becomes compressed by the surrounding fluid, patients will usually experience breathlessness and a dramatic reduction in quality of life. The commonest treatment for malignant effusions involves inserting a temporary tube between the ribs to drain the fluid, which allows the lung to expand. Before it is removed, medical talcum powder can be inserted down the tube to try to "glue" the lung to the inside of the chest wall to prevent further build-up of fluid.

However, while this treatment is relatively effective, the fact that it must be delivered in hospital over a number of days means that patients can experience additional distress because they are unable to be at home or with family.

The alternative "indwelling pleural catheter", or IPC method, has become increasingly popular in the last two decades and is now offered by many hospitals in the UK. This approach involves patients only being in hospital for a few hours to have a long-term drainage tube tunnelled under a short section of skin in the chest. After this, their fluid can be drained at home as often as needed, rather than in the hospital environment. The main downside to this method, however, is that the tube may need to stay in place for many months or longer because, unlike talc, the method is not designed to prevent fluid formation.

In a recent study, published today [Wednesday 4 April] in the New England Journal of Medicine, researchers from the University of Bristol and North Bristol NHS Trust spent four years working with patients in 18 UK hospitals to assess whether an alternative treatment approach, which combined talc with an IPC, could be delivered to patients who preferred to remain at home rather than be admitted to hospital for their malignant pleural effusion.

One hundred and fifty-four patients were randomly treated as outpatients with either an IPC alone, or with an IPC in combination with talc. The study showed that those patients given talc through their IPC were more than twice as likely to have their fluid dry up as those who were treated as standard, with an IPC alone.

Dr Rahul Bhatnagar, Clinical Lecturer in Respiratory Medicine, who co-ordinated the trial, and Nick Maskell, Professor of Respiratory Medicine, who led the study, from the Bristol Medical School (THS) at the University of Bristol, said: "This could change how malignant effusions are treated around the world.

"Our study shows that by combining two common but previously separate treatments, those with cancer-related fluid around the lung can be more effectively managed at home than previously thought.

"For those who would prefer not to spend any time in hospital, this combination is at least twice as good as any previous option. Most patients who are having an IPC should now be considered for talc treatment as well."

The researchers plan to continue to find ways to minimise the impact of malignant effusions on cancer patients, particularly focusing on what patients feel is most important to them. The cost implications to the NHS of such a new treatment are also being investigated.

Credit: 
University of Bristol

International study is the largest brain imaging survey of epilepsy

An international research consortium used neuroimaging techniques to analyze the brains of more than 3,800 volunteers in different countries. The largest study of its kind ever conducted set out to investigate anatomical similarities and differences in the brains of individuals with different types of epilepsy and to seek markers that could help with prognosis and treatment.

Epilepsy's seizure frequency and severity, as well as the patient's response to drug therapy, vary with the part of the brain affected and other poorly understood factors. Data from the scientific literature suggests that roughly one-third of patients do not respond well to anti-epileptic drugs. Research has shown that these individuals are more likely to develop cognitive and behavioral impairments over the years.

The new study was conducted by a specific working group within an international consortium called ENIGMA, short for Enhancing NeuroImaging Genetics through Meta-Analysis, established to investigate several neurological and psychiatric diseases. Twenty-four cross-sectional samples from 14 countries were included in the epilepsy study.

Altogether, the study included data for 2,149 people with epilepsy and 1,727 healthy control subjects (with no neurological or psychiatric disorders). The Brazilian Research Institute for Neuroscience and Neurotechnology (BRAINN), which participated in the multicenter study, was the center with the largest sample, comprising 291 patients and 398 controls. Hosted in Brazil, at the State University of Campinas (UNICAMP), BRAINN is a Research, Innovation and Dissemination Center (RIDC http://cepid.fapesp.br/en/home/) supported by the Sao Paulo Research Foundation - FAPESP.

"Each center was responsible for collecting and analyzing data on its own patients. All the material was then sent to the University of Southern California's Imaging Genetics Center in the US, which consolidated the results and performed a meta-analysis," said Fernando Cendes, a professor at UNICAMP and coordinator of BRAINN.

A differential study

All volunteers were subjected to MRI scans. According to Cendes, a specific protocol was used to acquire three-dimensional images. "This permitted image post-processing with the aid of computer software, which segmented the images into thousands of anatomical points for individual assessment and comparison," he said.

According to the researcher, advances in neuroimaging techniques have enabled the detection of structural alterations in the brains of people with epilepsy that hadn't been noticed previously.

Cendes also highlighted that this is the first epilepsy study built on a really large number of patients, which allowed researchers to obtain more robust data. "There were many discrepancies in earlier studies, which comprised a few dozen or hundred volunteers."

The patients included in the study were divided into four subgroups: mesial temporal lobe epilepsy (MTLE) with left hippocampal sclerosis, MTLE with right hippocampal sclerosis, idiopathic (genetic) generalized epilepsy, and a fourth group comprising various less common subtypes of the disease.

The analysis covered both patients who had had epilepsy for years and patients who had been diagnosed recently. According to Cendes, the analysis - whose results were published in the international journal Brain - aimed to identify atrophied brain regions in which cortical thickness was smaller than in the control group.
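
To make the comparison concrete, here is a minimal, hypothetical sketch in Python - not the consortium's actual pipeline - of the kind of analysis described above: each centre computes a standardised difference in cortical thickness between patients and controls for a region, and the per-centre effect sizes are then pooled with an inverse-variance-weighted meta-analysis. All sample sizes and thickness values below are simulated purely for illustration.

```python
# Hypothetical sketch: per-centre effect sizes pooled by a fixed-effect meta-analysis.
import numpy as np

rng = np.random.default_rng(42)

def cohens_d(patients, controls):
    """Standardised mean difference (patients minus controls) and its variance."""
    n1, n2 = len(patients), len(controls)
    pooled_sd = np.sqrt(((n1 - 1) * patients.var(ddof=1) +
                         (n2 - 1) * controls.var(ddof=1)) / (n1 + n2 - 2))
    d = (patients.mean() - controls.mean()) / pooled_sd
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # standard approximation
    return d, var_d

# Simulated cortical thickness (mm) for three centres; patients slightly thinner.
effects, variances = [], []
for n_pat, n_ctl in [(291, 398), (120, 110), (80, 95)]:
    patients = rng.normal(loc=2.45, scale=0.15, size=n_pat)
    controls = rng.normal(loc=2.50, scale=0.15, size=n_ctl)
    d, var_d = cohens_d(patients, controls)
    effects.append(d)
    variances.append(var_d)

# Inverse-variance weighting: more precise centres count for more in the pooled estimate.
weights = 1.0 / np.array(variances)
pooled = np.sum(weights * np.array(effects)) / weights.sum()
print(f"Pooled effect size (negative = thinner cortex in patients): {pooled:.3f}")
```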

First analysis

The researchers first analyzed data from the four patient subgroups as a whole and compared them with the controls to determine whether there were anatomical alterations common to all forms of epilepsy. "We found that all four subgroups displayed atrophy in areas of the sensorimotor cortex and also in some parts of the frontal lobe," Cendes said.

"Ordinary MRI scans don't show anatomical alterations in cases of genetic generalized epilepsy," Cendes said. "One of the goals of this study was to confirm whether areas of atrophy also occur in these patients. We found that they do."

This finding, he added, shows that in the case of MTLE, there are alterations in regions other than those in which seizures are produced (the hippocampus, parahippocampus, and amygdala). Brain impairment is, therefore, more extensive than previously thought.

Cendes also noted that a larger proportion of the brain was compromised in patients who had had the disease for longer. "This reinforces the hypothesis that more brain regions atrophy and more cognitive impairment occurs as the disease progresses."

The next step was a separate analysis of each patient subgroup in search of alterations that characterize each form of the disease. The findings confirmed, for example, that MTLE with left hippocampal sclerosis is associated with alterations in different neuronal circuits from those associated with MTLE with right hippocampal sclerosis.

"Temporal lobe epilepsy occurs in a specific brain region and is therefore termed a focal form of the disease. It's also the most common treatment-refractory subtype of epilepsy in adults," Cendes said. "We know it has different and more severe effects when it involves the left hemisphere than the right. They're different diseases."

"These two forms of the disease are not mere mirror-images of each other," he said. "When the left hemisphere is involved, the seizures are more intense and diffuse. It used to be thought that this happened because the left hemisphere is dominant for language, but this doesn't appear to be the only reason. Somehow, it's more vulnerable than the right hemisphere."

In the genetic generalized epilepsy (GGE) group, the researchers observed atrophy in the thalamus, a central deep-lying brain region above the hypothalamus, and in the motor cortex. "These are subtle alterations but were observed in patients with epilepsy and not in the controls," Cendes said.

Genetic generalized epilepsies (GGEs) may involve all brain regions but can usually be controlled by drugs and are less damaging to patients.

Future developments

According to Cendes, who coordinates the FAPESP-funded center, the findings published in the article will benefit research in the area and will also have future implications for the diagnosis of the disease. In parallel with their anatomical analysis, the group is also evaluating genetic alterations that may explain certain hereditary patterns in brain atrophy. The results of this genetic analysis will be published soon.

"If we know there are more or less specific signatures of the different epileptic subtypes, instead of looking for alterations everywhere in the brain, we can focus on suspect regions, reducing cost, saving time and bolstering the statistical power of the analysis. Next, we'll be able to correlate these alterations with cognitive and behavioral dysfunction," Cendes said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Smart ink adds new dimensions to 3-D printing

image: An example from the research shows how a 3-D-printed object composed of hydrogel (G1) can change size after printing. While this example serves to demonstrate the result, other objects can be used as filters or storage devices.

Image: 
Chenfeng Ke

HANOVER, N.H. - April 4, 2018 - Researchers at Dartmouth College have developed a smart ink that turns 3D-printed structures into objects that can change shape and color. The innovation promises to add even more functionality to 3D printing and could pave the way to a new generation of printed material.

The advancement in the area of form-changing intelligent printing - also known as 4D printing - provides a low-cost alternative to printing precision parts for uses in areas ranging from biomedicine to the energy industry.

"This technique gives life to 3D-printed objects," said Chenfeng Ke, an assistant professor of chemistry at Dartmouth. "While many 3D-printed structures are just shapes that don't reflect the molecular properties of the material, these inks bring functional molecules to the 3D printing world. We can now print smart objects for a variety of uses."

Many 3D printing protocols rely on photo-curing resins and result in hard plastic objects with rigid but random molecular architectures. The new process allows designers to retain specific molecular alignments and functions in a material and converts those structures for use in 3D printing.

By using a combination of new techniques in the pre-printing and post-printing processes, researchers were able to reduce printed objects to 1 percent of their original size while achieving 10 times the resolution. The 3D-printed objects can even be animated to repeatedly expand and contract in size through the use of supramolecular pillars. With fluorescent trackers, the objects can be made to change color in response to an external stimulus such as light.

The ability to reduce the size of an object after printing while preserving functional features and increasing resolution allows inexpensive printers to print high-resolution objects that were once only possible with much more sophisticated printers.

According to the study, which was selected as a VIP paper by Angewandte Chemie, the journal of the German Chemical Society, the smart ink prints at a relatively coarse 300-micron resolution, but the finished product features a much finer line width of 30 microns.

"This process can use a $1,000 printer to print what used to require a $100,000 printer," said Ke. "This technique is scalable, widely adaptable and can dramatically reduce costs."

To create the smart ink, researchers used a polymer-based "vehicle" that integrates intelligent molecular systems into printing gel and allows for the transformation of their functions from the nanoscale to the macroscale.

While most materials are readily hardened during the 3D printing process, the new process introduces a series of post-printing reactions which lock the active ingredients together and retain the form of the molecular structure throughout the printing process.

The result is a printed object with a molecular design that is programmed to transform itself: If you provide it with chemical fuel, it changes shape. If you shine a light on it, it can change color.

"This is something we've never seen before. Not only can we 3D print objects, we can tell the molecules in those objects to rearrange themselves at a level that is viewable by the naked eye after printing. This development could unleash the great potential for the development of smart materials," Ke said.

While researchers believe the technology is still far away from intelligent 3D systems that can dynamically change their configuration, current uses for the technology could be to print precision filters and storage devices. Over time, researchers expect that the process could result in a new class of macroscale 3D printed objects that can be used to deliver medicine or produce high resolution bone replacements.

According to the research team involved in the study: "We believe this new approach will initiate the development of small molecule-based 3D printing materials and greatly accelerate the development of smart materials and devices beyond our current grasp that are capable of doing complex tasks in response to environmental stimuli."

In the immediate term, researchers expect the smart inks to be useful to materials chemists, 3D printing engineers and others interested in bringing functional materials into 3D printing.

Credit: 
Dartmouth College

Pulling valuable metals from e-waste makes financial sense

Electronic waste -- including discarded televisions, computers and mobile phones -- is one of the fastest-growing waste categories worldwide. For years, recyclers have gleaned usable parts, including metals, from this waste stream. That makes sense from a sustainability perspective, but it's been unclear whether it's reasonable from an economic viewpoint. Now researchers report in ACS' journal Environmental Science & Technology that recovering gold, copper and other metals from e-waste is cheaper than obtaining these metals from mines.

Projections indicate that about 50 million tons of e-waste will be discarded around the world in 2018, according to the United Nations' Global E-waste Monitor report. This type of waste contains a surprising amount of metal. For example, a typical cathode-ray tube TV contains almost a pound of copper and more than half a pound of aluminum, though it only holds about 0.02 ounces of gold. Xianlai Zeng, John A. Mathews and Jinhui Li obtained data from eight recycling companies in China to calculate the cost for extracting such metals from e-waste, a practice known as "urban mining." Expenses included the costs for waste collection, labor, energy, material and transportation, as well as capital costs for the recyclers' equipment and buildings. These expenses are offset by government subsidies and by revenue from selling recovered materials and components. The researchers conclude that with these offsets, it costs 13 times more to obtain these metals from ore than from urban mining. The researchers also draw implications for the economic prospects of urban mining as an alternative to virgin mining of ores, based on the "circular economy," or recirculation of resources.

Credit: 
American Chemical Society

Who's smarter in the classroom -- men or women?

image: In the college biology classroom, men perceive themselves as smarter, even when compared to women whose grades prove they are just as smart. And, ASU researchers were surprised to find that women were far more likely to underestimate their own intelligence than men.

Image: 
Sandra Leander/ASU

If you believe it, you can achieve it.

You've probably heard this motivational phrase more than once. But what if your beliefs about your own intelligence compared to others come down to your gender?

A first-of-its-kind study shows that in the college biology classroom, men perceive themselves as smarter, even when compared to women whose grades prove they are just as smart. The study, published April 4 in the journal Advances in Physiology Education, shows that gender greatly impacts students' perceptions of their own intelligence, particularly when they compare themselves to others.

Katelyn Cooper, a doctoral student in the Arizona State University School of Life Sciences and lead author of the study, has talked with hundreds of students as an academic advisor and those conversations led to this project.

"I would ask students about how their classes were going and I noticed a trend," shared Cooper. "Over and over again, women would tell me that they were afraid that other students thought that they were 'stupid.' I never heard this from the men in those same biology classes, so I wanted to study it."

The ASU research team asked college students enrolled in a 250-person biology course about their intelligence. Specifically, the students were asked to estimate their own intelligence compared to everyone in the class and to the student they worked most closely with in class.

The researchers were surprised to find that women were far more likely to underestimate their own intelligence than men. And, when comparing a female and a male student, both with a GPA of 3.3, the male student is likely to say he is smarter than 66 percent of the class, and the female student is likely to say she is smarter than only 54 percent of the class.

In addition, when asked whether they are smarter than the person they worked most with in class, the pattern continued. Male students are 3.2 times more likely than females to say they are smarter than the person they are working with, regardless of whether their class partners are men or women.

A previous ASU study has shown that male students in undergraduate biology classes perceive men to be smarter than women about course material, but this is the first study to examine undergraduate student perceptions about their own intelligence compared to other people in the class.

Is this a problem?

"As we transition more of our courses into active learning classes where students interact more closely with each other, we need to consider that this might influence how students feel about themselves and their academic abilities," shared Sara Brownell, senior author of the study and assistant professor in the school. "When students are working together, they are going to be comparing themselves more to each other. This study shows that women are disproportionately thinking that they are not as good as other students, so this a worrisome result of increased interactions among students."

Brownell added that in a world where perceptions are important, female students may choose not to continue in science because they may not believe they are smart enough. These false perceptions of self-intelligence could be a negative factor in the retention of women in science.

Cooper said: "This is not an easy problem to fix. It's a mindset that has likely been engrained in female students since they began their academic journeys. However, we can start by structuring group work in a way that ensures everyone's voices are heard. One of our previous studies showed us that telling students it's important to hear from everyone in the group could be enough to help them take a more equitable approach to group work."

Credit: 
Arizona State University

New 'NanoZymes' use light to kill bacteria

image: A 3-D rendering of dead bacteria after they have come into contact with the NanoZymes.

Image: 
Dr. Chaitali Dekiwadia/ RMIT Microscopy and Microanalysis Facility

Researchers from RMIT University have developed a new artificial enzyme that uses light to kill bacteria.

The artificial enzymes could one day be used in the fight against infections, and to keep high-risk public spaces like hospitals free of bacteria like E. coli and Golden Staph.

E. coli can cause dysentery and gastroenteritis, while Golden Staph is the major cause of hospital-acquired secondary infections and chronic wound infections.

Made from tiny nanorods - 1,000 times thinner than a human hair - the "NanoZymes" use visible light to create highly reactive oxygen species that rapidly break down and kill bacteria.

Lead researcher Professor Vipul Bansal, an Australian Future Fellow and Director of RMIT's Sir Ian Potter NanoBioSensing Facility, said the new NanoZymes offer a major advantage over nature's own ability to kill bacteria.

"For a number of years we have been attempting to develop artificial enzymes that can fight bacteria, while also offering opportunities to control bacterial infections using external 'triggers' and 'stimuli'," Bansal said. "Now we have finally cracked it.

"Our NanoZymes are artificial enzymes that combine light with moisture to cause a biochemical reaction that produces OH radicals and breaks down bacteria. Nature's antibacterial activity does not respond to external triggers such as light.

"We have shown that when shined upon with a flash of white light, the activity of our NanoZymes increases by over 20 times, forming holes in bacterial cells and killing them efficiently.

"This next generation of nanomaterials are likely to offer new opportunities in bacteria free surfaces and controlling spread of infections in public hospitals."

The NanoZymes work in a solution that mimics the fluid in a wound. This solution could be sprayed onto surfaces.

The NanoZymes are also produced as powders to mix with paints, ceramics and other consumer products. This could mean bacteria-free walls and surfaces in hospitals.

Public toilets -- places with high levels of bacteria, and in particular E. coli -- are also a prime location for the NanoZymes, and the researchers believe their new technology may even have the potential to create self-cleaning toilet bowls.

While the NanoZymes currently use visible light from torches or similar light sources, in the future they could be activated by sunlight.

The researchers have shown that the NanoZymes work in a lab environment. The team is now evaluating the long-term performance of the NanoZymes in consumer products.

"The next step will be to validate the bacteria killing and wound healing ability of these NanoZymes outside of the lab," Bansal said.

"This NanoZyme technology has huge potential, and we are seeking interest from appropriate industries for joint product development."

The NanoZyme breakthrough has recently been published in the journal ACS Applied Nano Materials.

Credit: 
RMIT University

Obesity impacts liver health in kids as young as 8 years old

New York, NY [April 4, 2018]--A new study published today in the Journal of Pediatrics is the first to show that weight gain may have a negative impact on liver health in children as young as 8 years old. The study found that bigger waist circumference at age 3 raises the likelihood that by age 8, children will have markers for nonalcoholic fatty liver disease.

"With the rise in childhood obesity, we are seeing more kids with nonalcoholic fatty liver disease in our pediatric weight management practice," said Jennifer Woo Baidal, MD, MPH, assistant professor of pediatrics at Columbia University Vagelos College of Physicians and Surgeons and lead author of the paper. "Many parents know that obesity can lead to type 2 diabetes and other metabolic conditions, but there is far less awareness that obesity, even in young children, can lead to serious liver disease."

Nonalcoholic fatty liver disease occurs when too much fat accumulates in the liver and triggers inflammation, causing liver damage. The condition affects an estimated 80 million people in the U.S. and is the most common chronic liver condition in children and adolescents. While the disease is generally symptomless, progression of nonalcoholic fatty liver disease can lead to cirrhosis (scarring) of the liver and, in some instances, liver cancer.

Previous studies have focused on fatty liver disease in adolescents and young adults. In the current study, Woo Baidal and colleagues looked for fatty liver risk factors in younger children.

The researchers measured blood levels of a liver enzyme called ALT--elevated ALT is a marker for liver damage and can occur in individuals with nonalcoholic fatty liver disease and other conditions that affect the liver--in 635 children from Project Viva, an ongoing prospective study of women and children in Massachusetts.

By age 8, 23 percent of children in the study had elevated ALT levels. Children with a bigger waist circumference (a measure of abdominal obesity) at age 3 and those with greater gains in obesity measures between ages 3 and 8 were more likely to have elevated ALT. Approximately 35 percent of 8-year-olds with obesity had elevated ALT versus 20 percent of those with normal weight.

"Some clinicians measure ALT levels in at-risk children starting at around 10 years old, but our findings underscore the importance of acting earlier in a child's life to prevent excess weight gain and subsequent liver inflammation," says Woo Baidal, who is also director of pediatric weight management and a pediatric gastroenterologist in the Center for Adolescent Bariatric Surgery at NewYork-Presbyterian Morgan Stanley Children's Hospital. "Currently, the best way for kids and adults to combat fatty liver disease is to lose weight, by eating fewer processed foods and getting regular exercise. We urgently need better ways to screen, diagnose, prevent, and treat this disease starting in childhood."

Credit: 
Columbia University Irving Medical Center

Relaxation response may reduce blood pressure by altering expression of a set of genes

image: This is Towia Libermann, Ph.D., Director of the Genomics, Proteomics, Bioinformatics, and Systems Biology Center at BIDMC pictured with Manoj Bhasin, Ph.D., Co-Director of the Genomics, Proteomics, Bioinformatics, and Systems Biology Center at BIDMC.

Image: 
Beth Israel Deaconess Medical Center

BOSTON - High blood pressure -- or hypertension -- is a major risk factor for heart attack and stroke that affects as many as 100 million Americans and 1 billion people worldwide. Decades of research have demonstrated that the relaxation response -- the physiological and psychological opposite of the well-known fight-or-flight stress response, achievable through relaxation techniques like yoga or meditation -- can reduce blood pressure in people with hypertension. Exactly how these interventions act on the body to lower blood pressure remains unclear.

A new study led by investigators at Beth Israel Deaconess Medical Center (BIDMC), Massachusetts General Hospital (MGH), and the Benson-Henry Institute for Mind Body Medicine at MGH identified genes associated with the body's response to relaxation techniques and sheds light on the molecular mechanisms by which these interventions may work to lower blood pressure. The findings were published today in the Journal of Alternative and Complementary Medicine.

"Traditionally, hypertension is treated with pharmacologic therapy, but not all patients respond to drug therapy, and many experience treatment-limiting side effects," said co-senior author Randall Zusman, MD, Director of the Division of Hypertension at MGH's Corrigan Minehan Heart Center. "In these patients, alternative strategies are invaluable. In this study, we found that the relaxation response can successfully help reduce blood pressure in hypertensive patients who are not taking medication."

Towia Libermann, PhD, Director of the Genomics, Proteomics, Bioinformatics, and Systems Biology Center at BIDMC said, "To our knowledge, this is the first study to test such a mind-body intervention for a population of unmedicated adults with carefully documented, persistent hypertension, and this is the first study to identify gene expression changes specifically associated with the impact of a mind-body intervention on hypertension. Our results provide new insights into how integrative medicine - especially mind-body approaches -- influences blood pressure control at the molecular level."

First described more than four decades ago by Herbert Benson, MD, Director Emeritus of the Benson Henry Institute and a co-author of the current study, the relaxation response is characterized by a set of measurable changes to the body, including decreased respiration rate and heart rate, all of which can be induced by mind-body techniques including meditation and yoga. Long-term relaxation response practice has been associated with increased brain cortical thickness and specific changes in gene expression.

In this study, Libermann, Zusman and colleagues enrolled 58 people with Stage 1 essential hypertension - defined as having a systolic (top number) blood pressure between 140 and 159 mm Hg and a diastolic (bottom number) pressure between 90 and 104 mm Hg. Participants were either not taking medications to control their blood pressure or had tapered off them for five weeks prior to the outset of the study. Participants also filled out standardized questionnaires about stress, depression and anxiety.

Over the next eight weeks, participants attended eight weekly training sessions at which they were guided through mind-body interventions designed to elicit the relaxation response -- including diaphragmatic breathing, mantra repetition and mindfulness meditation -- while passively ignoring intrusive thoughts. Participants were also given an audio CD that guided them through the same sequence for use at home once a day.

After the eight weeks of training, patients filled out the same stress, depression and anxiety questionnaires and had blood drawn for gene expression testing along with blood pressure measurement. Overall, 13 of the 24 participants who completed the eight-week intervention experienced a clinically relevant drop in blood pressure -- that is, specific reductions in both systolic and diastolic blood pressure readings that moved participants below 140/90 mm Hg, the clinical definition of stage 1 hypertension.

Patients who demonstrated significant reductions in both systolic and diastolic blood pressure -- enough so that their blood pressure was below the definition of Stage 1 essential hypertension -- were classified as "responders." Those whose blood pressure still fell within the definition of Stage 1 hypertension -- and those who did not see reduction in both numbers -- were classified as "non-responders."
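
As a minimal sketch of that classification rule - with a hypothetical function name and made-up readings, not code from the study - a participant counts as a responder only if both numbers fell and the final reading sits below the 140/90 mm Hg threshold:

```python
# Hypothetical illustration of the responder rule described above.
def is_responder(baseline, final, threshold=(140, 90)):
    base_sys, base_dia = baseline
    final_sys, final_dia = final
    both_dropped = final_sys < base_sys and final_dia < base_dia
    below_threshold = final_sys < threshold[0] and final_dia < threshold[1]
    return both_dropped and below_threshold

print(is_responder((148, 96), (134, 86)))  # True: responder
print(is_responder((148, 96), (142, 88)))  # False: systolic reading still in the Stage 1 range
```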

When Libermann and colleagues ran gene expression analyses comparing blood samples from the two groups, they found that specific gene expression changes had occurred in the responders over the course of the eight-week relaxation response intervention that were not observed in the non-responders. Specifically, among responders the expression of 1,771 genes differed between the baseline blood tests and those taken after the eight weeks of relaxation response practice. Further, Libermann and colleagues determined that the reduction in blood pressure was correlated with genes linked to immune regulatory pathways, metabolism (including glucose metabolism), cardiovascular system development and circadian rhythm.

"Interactive network analysis of the gene signature identified several molecules, particularly immune system-linked genes, as critical molecules for blood pressure reduction," said first author Manoj Bhasin, PhD, Co-Director of the Genomics, Proteomics, Bioinformatics, and Systems Biology Center at BIDMC.

"Our results suggest that the relaxation response reduced blood pressure - at least in part -- by altering expression of genes in a select set of biological pathways," co-first author John Denninger, MD, PhD, Director of Research at the Benson-Henry Institute, noted. "Importantly, the changes in gene expression associated with this drop in blood pressure are consistent with the physical changes in blood pressure and inflammatory markers that one would anticipate and hope to observe in patients successfully treated for hypertension."

Credit: 
Beth Israel Deaconess Medical Center

Scholarly snowball: Deep learning paper generates big online collaboration

MADISON-- Bioinformatics professors Anthony Gitter and Casey Greene set out in summer 2016 to write a paper about biomedical applications for deep learning, a hot new artificial intelligence field striving to mimic the neural networks of the human brain.

They completed the paper, but also triggered an intriguing case of academic crowdsourcing. Today, the paper has been extensively expanded and revised with the help of more than 40 online collaborators, most of whom contributed enough to become co-authors.

The updated study, "Opportunities and obstacles for deep learning in biology and medicine," was published April 4, 2018 in the Journal of the Royal Society Interface.

Gitter, of the Morgridge Institute for Research and the University of Wisconsin-Madison, and Greene, of the University of Pennsylvania, both work on applying computational tools to big challenges in health and biology. They wanted to see where deep learning was making a difference and where untapped potential lies in the biomedical world.

Gitter likened the process to how the open source software community works.

"We are basically taking a software engineering approach to writing a scholarly paper," he says. "We're using the GitHub website as our primary writing platform, which is the most popular place online for people to collaborate on writing code."

Adds Gitter: "We also adopted the software engineering mentality of getting a big team of people to work together on one product, and coordinating what needs to be done next."

The new authors frequently provided examples of how deep learning is impacting their corner of science. For example, Gitter says one scientist contributed a section on cryo-electron microscopy, a new must-have tool for biological imaging that is adopting deep learning techniques. Others rewrote portions to make the paper more accessible to non-biologists or provided ethical background on medical data privacy.

Deep learning is part of a broader family of machine learning tools that has made breakthrough gains in recent years. It feeds inputs through multiple layers of a neural network to train the algorithm, building up ways to identify and describe recurring features in data while also predicting some outputs. Deep learning can also work in "unsupervised" mode, in which it identifies or explains interesting patterns in data without being directed.
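
As a rough, generic illustration of that idea - a toy example in Python, not code from the paper or from any biomedical application - the sketch below feeds inputs through two layers of weights and nudges those weights from labelled examples; real biomedical systems use far larger networks built with dedicated frameworks.

```python
# Toy "deep" network: two layers learn XOR, a pattern a single layer cannot capture.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden -> output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 2.0
for step in range(10000):
    # Forward pass: feed the inputs through both layers.
    h = sigmoid(X @ W1 + b1)      # hidden-layer features
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= learning_rate * h.T @ d_out / len(X); b2 -= learning_rate * d_out.mean(axis=0)
    W1 -= learning_rate * X.T @ d_h / len(X);   b1 -= learning_rate * d_h.mean(axis=0)

print(np.round(out, 2))  # predictions should approach [0, 1, 1, 0]
```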

One famous example of unsupervised deep learning is when a Google-produced neural network identified that the three most important components of online videos were faces, pedestrians and cats -- without being told to look for them.

Deep learning has transformed programs like face recognition, speech patterns and language translation. Among the scores of clever applications is a program that learns the signature artistic traits of famous painters, and then transforms everyday pictures into a Van Gogh, Picasso or Monet.

Greene says deep learning has not yet revealed the "hidden cats" in healthcare data, but there are some promising developments. Several studies are using deep learning to better categorize breast cancer patients by disease subtype and most beneficial treatment option. Another program trains deep learning on huge natural-image databases so that it can diagnose diabetic retinopathy and melanoma. These applications have surpassed some state-of-the-art tools.

Deep learning is also contributing to better clinical decision-making, improved success rates for clinical trials, and tools that can better predict the toxicity of new drug candidates.

"Deep learning tries to integrate things and make predictions about who might be at risk to develop certain diseases, and how we can try to circumvent them early on," Gitter says. "We could identify who needs more screening or testing. We could do this in a preventative, forward thinking manner. That's where my co-authors and I are excited. We feel like the potential payoff is so great, even if the current technology cannot meet these lofty goals."

Credit: 
Morgridge Institute for Research

Magnetic hot spots on neutron stars survive for millions of years

image: A tightly wound-up magnetic field used as the initial state in the simulation.

Image: 
K. Gourgouliatos, R. Hollerbach, U. Durham, U. Leeds

A study of the evolution of magnetic fields inside neutron stars shows that instabilities can create intense magnetic hot spots that survive for millions of years, even after the star's overall magnetic field has decayed significantly. The results will be presented by Dr Konstantinos Gourgouliatos of Durham University at the European Week of Astronomy and Space Science (EWASS) in Liverpool on Wednesday, 4th April.

When a massive star consumes its nuclear fuel and collapses under its own gravity in a supernova explosion, it can leave behind a neutron star. These very dense objects have a radius of about 10 kilometres yet are about 1.5 times as massive as the Sun. They have very strong magnetic fields and are rapid rotators, with some neutron stars spinning more than 100 times per second round their axis. Neutron stars are typically modelled with a magnetic field that has a north and a south magnetic pole, like the Earth's. However, a simple 'dipole' model does not explain puzzling aspects of neutron stars, such as why some parts of their surface are much hotter than their average temperature.

Gourgouliatos and Rainer Hollerbach, of the University of Leeds, used the ARC supercomputer at the University of Leeds to run numerical simulations to understand how complex structures form as the magnetic field evolves inside a neutron star.

Gourgouliatos explains: "A newborn neutron star does not rotate uniformly - various parts of it spin with different speeds. This winds up and stretches the magnetic field inside the star in a way that resembles a tight ball of yarn. Through the computer simulations, we found that a highly wound magnetic field is unstable. It spontaneously generates knots, which emerge from the surface of the neutron star and form spots where the magnetic field is much stronger than the large-scale field. These magnetic spots produce strong electric currents, which eventually release heat, in the same way heat is produced when an electric current flows in a resistor."
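
In physical terms, the resistor analogy in the quote corresponds to ordinary Ohmic (Joule) dissipation - a standard textbook relation rather than a formula quoted from the study - in which the heat released per unit volume grows with the square of the current density, so the concentrated currents at a magnetic spot are efficient heaters:

\[
q_{\mathrm{Ohm}} = \frac{|\mathbf{J}|^{2}}{\sigma} = \eta\,|\mathbf{J}|^{2},
\]

where \(\mathbf{J}\) is the electric current density, \(\sigma\) the electrical conductivity and \(\eta = 1/\sigma\) the resistivity of the material carrying the current.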

The simulations show that it is possible to generate a magnetic spot with a radius of a few kilometres and a magnetic field strength in excess of 10 billion Tesla. The spot can last several million years, even if the total magnetic field of the neutron star has decayed.

The study may have wide implications for our understanding of neutron stars. Even neutron stars with weaker overall magnetic fields could still form very intense magnetic hot spots. This could explain the strange behaviour of some magnetars, for example the exotic SGR 0418+5729, which has an unusually low spin rate and a relatively weak large-scale magnetic field but erupts sporadically with high-energy radiation.

Credit: 
Royal Astronomical Society

UCLA scientists merge statistics, biology to produce important new gene computational tool

image: UCLA researchers Wei 'Vivian' Li, left, and Jingyi 'Jessica' Li have designed statistical analysis software called 'scImpute,' which is more precise and reliable than previous tools.

Image: 
Reed Hutchinson/UCLA

The cells in our bodies express themselves in different ways. One cell might put a chunk of genetic code to work, while another cell ignores the same information entirely. Understanding why could spur new stem cell therapies, or lead to a more fundamental understanding of how organisms develop. But zeroing in on these cell-to-cell differences can be challenging.

Now, two UCLA researchers have come up with a computational tool that increases the reliability of measuring how strongly genes are expressed in an individual cell, even when the cell is barely reading certain genes. The research was published last month in the journal Nature Communications.

"The DNA sequence is the same in a brain cell, a liver cell and a heart cell," said Jingyi "Jessica" Li , the study's corresponding author and a UCLA assistant professor of statistics. "Why do those cells look so different? The key thing is gene expression."

DNA encodes the information needed to create and operate an organism. But the task of reading and acting on that information falls to RNA, long strands of mobile molecules that transport genetic instructions to other parts of a cell. By tallying the various RNA molecules in a cell, researchers can tell which genes are active -- or "expressed" -- and to what degree.

However, if RNA molecules are present only in trace amounts, analysis tools can be fooled into thinking that the corresponding genes aren't active at all. Unless corrected for, these "dropouts" can paint a misleading picture about actual differences between cells.

"If you want to obtain useful biological information at the individual cell level, then you need to do some statistical inferences," said Li, who is also head of the Junction of Statistics and Biology laboratory. "Otherwise your conclusions may be wrong."

Li and Wei "Vivian" Li, a doctoral candidate in the UCLA department of statistics, have designed statistical analysis software for handling dropouts in RNA sequencing. Their tool, called "scImpute," estimates which genes in a cell are most likely to drop out based on studying all individual cells in an experiment. The tool then uses information from similar cells to make an educated guess about what the level of gene expression should be.
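
The underlying idea - flag zeros that are probably dropouts, then borrow values from similar cells to fill them in - can be shown with a deliberately simplified sketch. The Python code below is hypothetical and is not the scImpute package (which, as noted below, is distributed as an R add-on); the toy matrix, the neighbour count and the crude "all neighbours express the gene" rule are stand-ins for scImpute's statistical model.

```python
# Hypothetical, simplified dropout imputation: not the scImpute algorithm itself.
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: rows are cells, columns are genes (made-up counts).
expr = rng.poisson(lam=5.0, size=(6, 4)).astype(float)
expr[0, 2] = 0.0  # simulate a dropout: cell 0 failed to capture gene 2

def impute_dropouts(matrix, n_neighbors=2):
    imputed = matrix.copy()
    for i, cell in enumerate(matrix):
        # Rank the other cells by similarity (Euclidean distance across all genes).
        dists = np.linalg.norm(matrix - cell, axis=1)
        dists[i] = np.inf
        neighbors = np.argsort(dists)[:n_neighbors]
        for g in range(matrix.shape[1]):
            # Treat a zero as a likely dropout only if the similar cells express the gene.
            if cell[g] == 0 and np.all(matrix[neighbors, g] > 0):
                imputed[i, g] = matrix[neighbors, g].mean()
    return imputed

print(impute_dropouts(expr)[0])  # cell 0 now carries an estimate for gene 2
```

scImpute itself decides which zeros are dropouts with a statistical model rather than a hard rule, but the borrow-from-similar-cells step follows the same spirit described above.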

Utilizing estimates isn't new. But available tools are either too broad -- swapping out the entire gene-expression profile of one cell for another's -- or hyper-specialized for a particular type of study. The advantages of scImpute are "flexibility and universality," Jessica Li said. The tool acts with surgical precision to replace only abundances that have most likely dropped out and can be used in any type of single-cell gene-expression analysis.

In Vivian Li's comprehensive tests on both simulated and actual data -- some of which provide empirical evidence for actual levels of gene expression -- scImpute is more accurate than other methods. The software reliably distinguishes dropout genes from those that aren't expressed at all, and it provides accurate estimates of the actual abundances.

The open-source software is available for free online as an add-on for a widely used scientific computing platform for statistical analysis known as the R programming environment.

The two researchers have proven that scImpute works well in small groups of cells when dropout rates are low. But in large populations, dropout rates can exceed 90 percent of the genes. Their next goal is to make the tool just as reliable in those situations. By borrowing information from other genes -- not just other cells -- and from online databases, they believe that scImpute can become a robust tool for all situations.

Credit: 
University of California - Los Angeles

Consumers who engage with trends may be less open to advertising than others

CATONSVILLE, MD, April 2, 2018 - One common assumption in digital marketing is that individuals who are mindful of what's trending on social media, and propagate these trends, will be responsive to social media advertising and marketing, thus sharing branded messages with their network on a wide scale. As a result, firms increasingly try to mesh their brand or product with an emerging trend to get the attention of those who propagate these trends. A new study conducted by researchers from London Business School, MIT Sloan School of Management, and Cass Business School at City, University of London, may change that assumption.

The study "Advertising to Early Trend Propagators: Evidence from Twitter," which will be published in the INFORMS journal Marketing Science, is co-authored by Anja Lambrecht of London Business School, Catherine Tucker of MIT Sloan School of Management, and Caroline Wiertz of Cass Business School.

The researchers found after extensive testing that, "early propagators of trends are less responsive to advertising than consumers who embrace trends later."

The study centered on how firms target marketing and advertising to consumers who are identified as embracing and propagating the spread of new information on emerging and "trending" topics on social media. What the researchers sought to clarify is whether those early trend propagators are truly responsive to firm-sponsored messages or advertising.

"We define early trend propagators as individuals predisposed to participate in an online conversation on a topic that is about to, or has just started, trending on social media," said Caroline Wiertz.

They used data from two field tests that were conducted by a charity and a fashion firm to target ads at consumers who embraced a Twitter trend early in its lifecycle by tweeting about it. The researchers then compared the behavior of these 'early trend propagators' against that of consumers who posted about the same topic later in the trending lifecycle.

"In both field tests, we targeted ads in the form of 'promoted tweets' to Twitter users who had posted messages containing phrases related to trending topics," said Catherine Tucker. "We then continued targeting ads to users who posted on the same topic when it was no longer trending. We then compared the response of both groups to identical ads."

In their field studies, the researchers operationalized early trend propagators as those who post on Twitter using a keyword or hashtag that is trending on a given day. "Throughout our field tests, engagement with the ads was lowest when targeting early trend propagators and higher when targeting individuals who embraced the trend on subsequent days. Engagement on Twitter mainly refers to clicks and retweets so these results mean that early trend propagators are less likely - and not more likely - to respond to advertising messages than other individuals and also less likely to share them" added Anja Lambrecht.

"It is plausible that early posting related to emerging trends may be driven by a desire to provide content that leads to recognition and acclaim from followers," said Caroline Wiertz. "Twitter users who engage with trending topics may be extrinsically motivated and particularly care about status rewards. They use trending discussions to conspicuously present themselves as the rapid pace of Twitter makes being on top of the latest trends one way to signal that they are 'in the know.' Early trend propagators engage with and propagate content that serves this purpose and therefore have little reason to engage with advertising."

What this means for marketers is that targeting advertising to early trend propagators, that is, to those individuals who engage with a trend when it originates, may be less effective in spreading advertising messages than previously thought.

Credit: 
Institute for Operations Research and the Management Sciences

Non-toxic filamentous virus helps quickly dissipate heat generated by electronic devices

image: This image shows (a) phage and (b) hexagonally assembled structures of the phages in the film.

Image: 
Scientific Reports

A research team at Tokyo Institute of Technology (Tokyo Tech) has discovered that a film constructed by assembling a nontoxic filamentous virus functions as a heat-dissipation material, and that it can be prepared simply by drying an aqueous solution of the virus at room temperature. The discovery is expected to help elucidate new heat-transport mechanisms in electronics.

Organic polymeric materials generally have low thermal conductivity and have therefore not been suitable for the rapid heat dissipation required by electrical and electronic equipment. To improve their thermal conductivity, two approaches have been considered effective: "orientation processing", in which molecules are aligned in the same direction so that heat can travel along covalent bonds, and compositing with inorganic materials.

A research team led by Assistant Professor Toshiki Sawada and Professor Takeshi Serizawa has been focusing on the ability, observed in natural systems, to form regularly assembled structures across scales from nano to macro (so-called hierarchical assembly[1]). To prepare such a hierarchically assembled film, the team exploited the tendency of dissolved molecules to accumulate around the perimeter of a droplet as the aqueous solution evaporates (the coffee-ring effect[2]), using it to assemble a filamentous virus into a film. The thermal diffusivity at the edge of the resulting film was drastically enhanced, reaching a value comparable to that of inorganic glass, which demonstrates the usefulness of hierarchically assembled biomacromolecules[3]. This could aid the future development of electrical and electronic devices composed not only of viruses but also of various other naturally derived molecules.

Until now, orientation processing and compositing with inorganic materials have been considered necessary to achieve high thermal conductivity in organic polymeric materials. Because the virus film can be prepared simply by evaporating an aqueous solution of a filamentous virus at room temperature, the work is expected to lead to a method for easily constructing heat-dissipation materials under mild conditions, without any special processing steps.

Credit: 
Tokyo Institute of Technology

Bowhead whales, the 'jazz musicians' of the Arctic, sing many different songs

image: A bowhead whale surfaces in Fram Strait, to the northwest of Norway.

Image: 
Kit Kovacs/Norwegian Polar Institute

Spring is the time of year when birds are singing throughout the Northern Hemisphere. Far to the north, beneath the ice, another lesser-known concert season in the natural world is just coming to an end.

A University of Washington study has published the largest set of bowhead whale song recordings to date, revealing that these marine mammals have a surprisingly diverse, constantly shifting vocal repertoire.

The study, published April 4 in Biology Letters, a journal of the United Kingdom's Royal Society, analyzed audio recordings gathered year-round east of Greenland. This population of bowhead whales was hunted almost to extinction in the 1600s and was recently estimated at about 200 animals. Audio recordings gathered from 2010 to 2014 indicate a healthy population, and include 184 different songs.

"If humpback whale song is like classical music, bowheads are jazz," said lead author Kate Stafford, an oceanographer at the UW's Applied Physics Laboratory. "The sound is more freeform. And when we looked through four winters of acoustic data, not only were there never any song types repeated between years, but each season had a new set of songs."

Stafford has recorded whale sounds throughout the world's oceans as a way to track and study marine mammals. She first detected bowhead whales singing off the other side of Greenland in 2007. A previous study by Stafford of the Spitsbergen whales, reported in 2012, found that the whales were singing continuously during the winter breeding season, the first hint that there may be a healthy population in that area.

"We were hoping when we put the hydrophone out that we might hear a few sounds," Stafford said of the earlier study. "When we heard, it was astonishing: Bowhead whales were singing loudly, 24 hours a day, from November until April. And they were singing many, many different songs."

The new paper extends that initial five-month dataset, and confirms that bowhead whales sing in this region regularly from late fall to early spring. In fact the hydrophones, which are underwater microphones, picked up slightly more singing in the later years of the study. But what was most remarkable was the relentless variety in the animals' songs, or distinct musical phrases.

The only other whales that sing elaborate songs -- humpback whales -- are widely studied in their breeding grounds off Hawaii and Mexico. The humpback's melodious song is common to each population of males and shifts slightly during the winter breeding season. Each population debuts a new tune in the spring.

"It was thought that bowhead whales did the same thing, based on limited data from springtime," Stafford said. "But those 2008 recordings were the first hint, and now this data confirms that bowhead whale songs are completely different from the humpbacks'."

Animal songs are not the same as animal calls because songs are complex, distinct musical phrases that must be learned. Many birds and mammals use songs to identify themselves as individuals or as members of a group, among other uses.

"For marine mammals, acoustics is how they do everything," Stafford said. "Humans are mostly visual animals, but marine mammals live in a three-dimensional habitat where sound and acoustic information is how they navigate, how they find food, how they communicate."

Singing whales, like birds, may be doing some combination of acoustic competition with other animals and attracting mates, Stafford said. But little is known about the bowhead whales' singing: whether only males make these sounds, whether individuals can share songs, and, most importantly, why their tune changes all the time.

"Why are they changing their songs so much?" Stafford said. "In terms of behavioral ecology, it's this great mystery."

The new data suggest bowhead whales may be similar to cowbirds and meadowlarks, birds that learn a diverse, ever-changing repertoire of songs, maybe because novelty offers some advantage.

"Bowhead whales do this behavior in the winter, during 24-hour darkness of the polar winter, in 95 to 100 percent sea ice cover. So this is not something that's easy to figure out," Stafford said. "We would never have known about this without new acoustic monitoring technology."

Current research placing radio tags on bowhead whales may someday explain why this whale has evolved to become such a versatile virtuoso.

"Bowheads are superlative animals: they can live 200 years, they've got the thickest blubber of any whale, the longest baleen, they can break through ice," Stafford said. "And you think: They've evolved to do all these amazing things. I don't know why they do this remarkable singing, but there must be a reason."

Credit: 
University of Washington

Professor makes legal case for schools to challenge cyberbullies

image: Benjamin Holden, a U. of I. journalism professor who teaches media law, wants to balance the First Amendment rights of children with the need to keep students safe from cyberbullies.

Image: 
Photo by L. Brian Stauffer, University of Illinois News Bureau

CHAMPAIGN, Ill. -- Student bullying on the internet could be headed for a showdown with a 50-year-old U.S. Supreme Court case that granted expansive First Amendment rights to kids in public school.

When it does, University of Illinois journalism professor Benjamin Holden, through a two-part legal study, is ready to make the case for challenging the offenders.

Part one of Holden's study, published this week by the Fordham Intellectual Property, Media & Sports Law Journal, argues for new standards under which K-12 public school officials can punish cyberbullying.

Part two, published last November by the Akron Law Review, uses case law from around the country to suggest a new legal rule for when an anonymous cyberbully, preying on a public school victim, can be legally "unmasked" by a court.

The articles were published out of order due to the publishing schedules of the two independent journals.

The new standards are needed, Holden argues, because the 1969 Supreme Court ruling that currently applies, Tinker v. Des Moines, came years before the internet.

"Social media has taken over the lives of these kids," Holden said, and online bullying often disrupts schooling and students' academic success. "Whether a teacher or a school district can manage the cruel cyberbullying of kids in their classes is really the most pressing issue in the area of student discipline in American education."

Holden's Fordham article, or part one of his study, addresses "The Wisniewski problem," coined for the 2007 2nd U.S. Circuit Court of Appeals case Wisniewski v. Board of Education. The problem refers to the dilemma faced by courts and schools when a student's online bullying speech contains "elements of parody cloaked in violence," Holden writes.

His argument for unmasking, presented in his Akron article, may be more controversial, but he still thinks it is important. "Some very high percentage of really foul bullying online is anonymous," he said.

Holden is a professor of journalism who teaches media law. He's also an attorney and a former journalist. As such, his legal research and suggested solutions attempt to balance the First Amendment speech rights of kids with the duty of schools to keep students safe, which he knows can be a challenge.

"Given the toxic mix of immature bravado, anti-establishment machismo and plain juvenile silliness found in the cases, it is often difficult to separate potentially dangerous student cyberspeech from that which is merely stupid," he writes.

Holden brings added perspective to the issue as the founder of a Columbus, Georgia, nonprofit that provides mentoring and funding for low-income kids seeking to attend college. That puts him in contact with many teenagers, and he has seen the pervasive influence of social media and the corrosive effects of cyberbullying.

Courts have disagreed for decades on Tinker's application to out-of-school speech, he said. The internet and the rapid development of phone apps have further complicated the issue.

Determining how and when school officials can address such off-campus speech is "one of the biggest unanswered questions left sort of festering by the Supreme Court," Holden said.

The question has actually been addressed by half of the country's 12 federal circuit courts, but by applying inconsistent legal standards, he said. "It's not that there's no decision, it's that there are conflicting decisions." And the other six circuit courts have been silent.

"The Supreme Court has a responsibility to resolve the conflicts among the courts on the question of when 'off-campus speech' such as hateful Facebook posts, phony caricature websites or bullying Twitter messages can be punished by public schools," Holden said. The First Amendment, like the U.S. Constitution generally, does not limit the ability of private schools to discipline students, he noted.

Holden hopes the Supreme Court eventually sees the need to update the Tinker ruling "to extrapolate or extend its reasoning or its logic into the social media era" - giving school officials and schoolchildren a single standard for dealing with bullies online.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau