
Careless with a helmet

image: The psychologist Dr Barbara Schmidt with a bicycle helmet on which an eye tracker is mounted.

Image: 
Jan-Peter Kasper/FSU

(Jena, Germany) The significance of some objects is so deeply entrenched in our psyche that we rely on them even when they are not actually helpful. This is the case with the bike helmet. From childhood, we learn that we are better protected in traffic when wearing a helmet. The hard, stable headgear suggests safety - even when the wearer is not sitting on a bike and the helmet cannot fulfil its function. These are the findings of psychologists from Friedrich Schiller University Jena, Germany, in cooperation with the University of Victoria in Canada.

Play a risk game and wear a helmet

During an experiment, the research team had 40 people play a card game on a computer, in which participants chose between a high-risk and a lower-risk gambling option in each trial. Half of the participants wore a bike helmet under the cover story that the eye tracker mounted on it was measuring their eye movements. During the game, the Jena scientists used EEG to observe what was happening in participants' brains, which led them to an exciting discovery: the so-called "Frontal Midline Theta Power" - the brain activity that characterises the weighing up of alternatives in the decision-making process - was much less pronounced in the helmet wearers. "Therefore, we conclude that the helmet clearly has an impact on decision-making in the risk game. Obviously, participants associate a feeling of safety with wearing the bike helmet," explains Dr Barbara Schmidt, head of the study. Cognitive control, as psychologists call the neuronal mechanism of weighing things up, is less pronounced when wearing a helmet. "It is possible that this is a priming effect," said Schmidt. "This means that the significance we associate with a helmet automatically has a cognitive effect that is also measurable in the brain."

Influence on risk behaviour

The helmet and no-helmet groups were comparable in trait anxiety, so the finding cannot be attributed to a pre-existing difference between the groups.

Barbara Schmidt continues her research on psychological factors influencing risk behaviour. In an earlier study, she had already clearly identified the "Frontal Midline Theta Power" as an indicator of weighing up alternatives in the decision-making process and thus laid the foundation for her current work. "Investigating neuronal parameters allows us to learn more about why we act the way we do - and how this can be influenced," says the expert from Jena. "In the present study, we used the very subtle manipulation of wearing a bike helmet. But safety can also be suggested more clearly, for example during hypnosis."

This is the connection to another central field of work of the Jena psychologist. Schmidt is investigating the effect of hypnosis. "It is stunning to observe how suggestions can influence brain activity," she says. "In the hypnotic state, participants are very open to suggestions, for example, the suggestion of a safe place. Wearing a bike helmet can also be interpreted as a suggestion on a subconscious level. The current study shows that even such a subtle intervention significantly affects decision-making processes. Experiments like this help us to understand the mechanisms behind the effect of suggestions on decision-making processes in more depth."

Credit: 
Friedrich-Schiller-Universitaet Jena

The composition of fossil insect eyes surprises researchers

image: Fossil crane-fly from the 54-million-year-old Fur Formation of Denmark (overall width of specimen is about 50 mm). Note distinct compound eyes preserved as dark stains.

Image: 
René Lyng Sylvestersen

Eumelanin - a natural pigment found for instance in human eyes - has, for the first time, been identified in the fossilized compound eyes of 54-million-year-old crane-flies. It was previously assumed that melanic screening pigments did not exist in arthropods.

"We were surprised by what we found because we were not looking for, or expecting it", says Johan Lindgren, an Associate Professor at the Department of Geology, Lund University, and lead author of the study published this week in the journal Nature.

The researchers went on to examine the eyes of living crane-flies, and found additional evidence for eumelanin in the modern species as well.

By comparing the fossilized eyes with optic tissues from living crane-flies, the researchers were able to look more closely at how the fossilization process has affected the preservation of compound eyes across geological time.

The fossilized eyes further possessed calcified ommatidial lenses, and Johan Lindgren believes that this mineral has replaced the original chitinous material.

This, in turn, led the researchers to conclude that another widely held hypothesis may need to be reconsidered. Previous research has suggested that trilobites - an exceedingly well-known group of extinct seagoing arthropods - had mineralized lenses in life.

"The general view has been that trilobites had lenses made from single calcium carbonate crystals. However, they were probably much more similar to modern arthropods in that their eyes were primarily organic", says Johan Lindgren.

Compound eyes are found in arthropods, such as insects and crustaceans, and are the most common visual organ seen in the animal kingdom. They are made up of multiple tiny and light-sensitive ommatidia, and the perceived image is a combination of inputs from these individual units.

Credit: 
Lund University

National narcissism rears its head in study of WWII

image: In a survey of people from countries on both sides of World War II, researchers from Washington University in St. Louis show that across the board, people ascribe an inflated weight to their country's contribution to the war effort.

Image: 
Washington University in St. Louis

World War II was, by any measure, a massive undertaking that involved huge loss and suffering.  The countries involved -- Allied and Axis -- committed substantial resources and sacrificed an astounding number of human lives. 

No matter how much a particular country contributed, however, the sum of all contributions cannot exceed 100%. Nonetheless, in a survey of people from countries on both sides of the war, researchers from Washington University in St. Louis show that across the board, people ascribe an inflated weight to their country's contribution to the war effort.

The results were published this week in Proceedings of the National Academy of Sciences.

There was another interesting finding. "Russians view World War II very differently than, basically, people from every other country in our study," said lead author Henry Roediger, the James S. McDonnell Distinguished University Professor in Arts & Sciences. "When you ask people to list the top 10 important events, people from China and Australia, and everyone else, they all list Pearl Harbor and D-Day. Russians don't.

"Well they do list D-Day," Roediger said, "but they call it the 'Opening of the Second Front,' and they see it as relieving some of the pressure on Soviet forces who then drove to Berlin." 

In the first part of the study, researchers surveyed 1,338 people, 18 and older, from 11 countries. They received at least 100 usable surveys from natives of eight of the former Allied powers -- Australia, Canada, China, France, New Zealand, Russia (as a proxy for the former Soviet Union), the United Kingdom and the United States -- and three former Axis powers: Germany, Italy and Japan.

The study was designed to gauge how important people thought their country's contribution was in the war effort. Participants from the Allied countries were asked, "In terms of percentage, what do you think was (your country's) contribution to the victory in World War II?"

Among just three Allied countries -- Russia, the U.K. and the U.S. -- participants' claims for their own countries summed to 180%: Russians claimed 75%, U.K. respondents 51%, and U.S. respondents 54%. Across all eight Allied countries, the claimed total effort amounted to about 300%.

And that's only eight of the dozens of countries that signed onto the Declaration of the United Nations before the end of the war. According to the National World War II Museum website in New Orleans, 12 additional countries had at least 1,000 military deaths during the war.

"Collective memory refers to the way people remember history, what they believe about it, and these beliefs arise from your background, your society, your education and your media," Roediger said. "And, of course, all these influences create the memories about your country as it was involved in World War II."

Even when the question was changed to require respondents to provide estimates for all eight Allied countries as well as other countries -- so that their totals had to equal 100% (or less, if they were considering countries not included in the survey) -- the scores that people in a given country gave their own country still totaled more than 191%.

National narcissism

Overestimating one's contribution is not just a feature of the "winning" side. Residents of former Axis countries were asked what percentage of the war effort their country contributed. The German respondents claimed 64%, Japanese 47% and Italians 29% for a total of 140%.
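The over-attribution is easy to see in the raw arithmetic. As a quick sketch (using only the percentages quoted in this article), the self-reported claims of just three countries on each side already exceed what any consistent accounting would allow:

```python
# Self-reported national contributions quoted in the article. A consistent
# accounting would require each side's contributions to sum to at most 100%.
allied_claims = {"Russia": 75, "U.K.": 51, "U.S.": 54}
axis_claims = {"Germany": 64, "Japan": 47, "Italy": 29}

allied_total = sum(allied_claims.values())
axis_total = sum(axis_claims.values())

print(f"Three Allied countries alone claim {allied_total}% of the victory")  # 180%
print(f"Three Axis countries claim {axis_total}% of their side's effort")    # 140%
```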

The results of the study illustrate what Roediger terms national narcissism, borrowing the term from the personality disorder in individuals. Although such results may seem esoteric, they have real consequences that we can see all around us. 

Take Russia. In this second framing of the question, when respondents were forced to consider answers for their country alongside those for the other seven countries, most people rated their country's contribution much lower than in the first question.

Except Russia. Its respondents lowered their estimate by just 11 percentage points, from 75% to 64%. "They may be right," Roediger said, "at least for the war in Europe."

Estimates put the loss of Soviet Union soldiers between 8 million and 11 million. In stark contrast to Time Magazine's coverage of D-Day as "24 Hours That Saved the World," many historians would argue that the Soviet Union played the most important role in the outcome of the war in Europe. Overall, the Soviet Union lost between 22 million and 28 million people in the war, or roughly 14% of its population.

But most students in the U.S. learn the Time Magazine version of the war, both in school and through cultural representations, where the U.S. played the leading role. 

"The Russian story is told in practically none of our movies, none of our novels," Roediger said. "It's all about us. 

"But we can understand how the Soviet Union, now Russia, always sees itself as embattled," Roediger said. "We see them as aggressive, but they often see themselves in terms of this narrative going back hundreds of years: Russia is minding its own business, it is viciously attacked and looks like it's going to lose everything, but they come together, against all odds, and win. My co-author, Jim Wertsch, refers to this script as the Russian narrative template."

Today, the Russians may see the United States in that same context.

"Now, we have NATO countries that surround them," Roediger said. "The idea is that if you can understand the historical memory of a people, you can be more successful in understanding their viewpoint and in your negotiations with them."

Credit: 
Washington University in St. Louis

In product design, imagining end user's feelings leads to more original outcomes

image: In new product design, connecting with an end user's heart, rather than their head, can lead to more original and creative outcomes, says published research co-written by Ravi Mehta, a professor of business administration at Illinois and an expert in product development and marketing.

Image: 
Gies College of Business

CHAMPAIGN, Ill. -- Developing original and innovative products is critical to a company's long-term success and competitive advantage. Thus, gaining a better idea of what factors impact how designers cultivate product originality can have important - and potentially profitable - consequences for businesses.

Research co-written by a University of Illinois expert in new product development and marketing indicates that connecting with the end user's heart rather than their head can lead to more original and creative outcomes in product design.

Ravi Mehta, a professor of business administration at the Gies College of Business, shows that adopting an approach that imagines how an end user would feel while using a product leads designers to experience greater empathy, which enhances creativity and, in turn, outcome originality for new product design.

Previous research argues that product designers ought to study how consumers would use a product - and then tailor the product to those specifications.

"There are two ways that the product designer can imagine the consumer's product usage. One focuses on objective utility of the product - how consumers might use the product. The other focuses on feelings - how the product makes the consumer feel," Mehta said.

"You always want to have new products that solve problems more efficiently, more effectively and at a lesser cost. So product designers fall into this trap of being very objective in focusing on the utility of a product. That's important, but the objectivity of the thought process only takes them so far, because they're not imagining how the product will ultimately make consumers feel."

When designers start incorporating what they perceive the end user's feelings will be into product design, "what that does is enhance empathy for the consumer - and that, in turn, produces more out-of-the-box ideas. That's our big takeaway: When you imagine consumers and focus on their feelings, that's powerful and will lead to something much more innovative than only focusing on a product's utility."

Across five experiments, Mehta and co-author Kelly B. Herd of the University of Connecticut differentiated between a "feelings-imagination" approach versus an "objective-imagination" approach of incorporating an end user during the new product ideation process.

"We found that the feelings-imagination approach leads individuals to experience greater empathic concern, which makes them more receptive to multiple perspectives," Mehta said. "This is reflected in higher levels of cognitive flexibility, which, in turn, leads to greater outcome originality."

Taken together, the five experiments demonstrated "consistent support" of the framework that a feelings-based approach is superior to the more commonly used objective-based approach, the researchers wrote.

"It turns out that using the heart has more downstream benefits than just using the head," Mehta said. "It not only helps product designers build a better product, but it also helps them create more innovative products."

The implications of the findings extend to everyday consumers, who now play a role in shaping companies' product lines, Mehta said. Other research has estimated that, in the near future, more than half of consumer goods manufacturers will receive the bulk of their new product ideas from crowdsourcing.

"Marketers are increasingly tapping consumers for new product ideas," he said. "For example, there was a very successful campaign a few years ago that focused on getting consumers to create a new potato chip flavor by submitting ideas through their website.

"Our third experiment in the paper demonstrated a positive effect of adopting a feelings-imagination approach in the context of everyday consumers generating ideas in response to a crowdsourcing campaign. That suggests that these consumers - particularly given their lack of access to observe end users - may benefit from imagining end users' feelings when developing original ideas for products and services that could appeal to the masses. Companies utilizing crowdsourcing techniques can easily adopt this process and prompt feelings-imagination exercises through their websites or social media."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Research suggests glyphosate lowers pH of dicamba spray mixtures below acceptable levels

image: Soybeans exhibiting dicamba injury in a research plot at the West Tennessee AgResearch and Education Center in Jackson. A recent study conducted at the center found that mixing glyphosate and dicamba herbicides lowered the spray mixture pH below acceptable levels.

Image: 
Ginger Rowsey

KNOXVILLE, Tenn. - A University of Tennessee Institute of Agriculture study published in the journal Weed Technology found that mixing glyphosate with formulations of dicamba consistently lowered the pH of the spray solution below 5.0--a critical value according to the latest dicamba application labels. These labels recommend maintaining a spray solution pH above 5.0 to reduce the potential for dicamba volatility.

The study, titled "Spray Mixture pH as Affected by Dicamba, Glyphosate and Spray Additives," was authored by Tom Mueller and Larry Steckel, both professors in the UT Department of Plant Sciences. Their research consisted of four experiments that examined the effect of different components on spray mixture pH, including formulations of dicamba (XtendiMax and Engenia), glyphosate (Roundup PowerMax II and Cornerstone Plus), ammonium sulfate (AMS), and several pH modifiers.

According to their data, adding glyphosate to XtendiMax and Engenia always decreased the measured pH, by anywhere from 1.0 to 2.1 pH units. Averaged across water sources, the pH for XtendiMax + Roundup PowerMax was 4.8, while that for Engenia + Roundup PowerMax was 4.6. Conversely, when no glyphosate was added to these dicamba formulations, the final pH was always above 5.0.

"If pH is the main driver for dicamba volatility, then the substantial pH changes we're seeing from the addition of glyphosate could have profound effects on volatility, as well as herbicide efficacy," says Mueller.

Many products containing glyphosate are presently approved to be mixed with dicamba before spray applications. Combining herbicides with different modes of action is a common practice among crop producers to control a wider range of weed species. However, based on Mueller and Steckel's research, UT weed experts are discouraging the addition of glyphosate to XtendiMax and Engenia.

"Based on this research, we believe glyphosate in the tank mix could be a culprit in why we're seeing some of the drift in the fields these past three years," says Steckel.

Dicamba drift has been a hot-button issue in the agricultural community since new and expanded uses for this herbicide were approved in 2017. Off-target dicamba movement, occurring either through physical drift or volatility, has been blamed for the damage of millions of acres of crops, trees and ornamental plants.

In an effort to decrease the potential for dicamba drift through volatilization, the Environmental Protection Agency (EPA) added language to the 2019-2020 labels advising applicators to maintain a spray mixture pH above 5.0 while avoiding the addition of products that would further decrease pH. According to Steckel, the lower the spray mixture pH, the higher the probability that dicamba can dissociate to the acid form, which is the most volatile form of the herbicide.
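The dissociation behaviour Steckel describes follows the standard Henderson-Hasselbalch relationship for weak acids. The sketch below is illustrative only: the pKa of roughly 1.9 for dicamba is an assumed literature value, not a figure from the UT study.

```python
# Illustrative Henderson-Hasselbalch calculation: fraction of a weak acid
# present in its neutral (protonated, volatile) form at a given spray pH.
# PKA_DICAMBA is an assumed literature value, not a number from the study.
PKA_DICAMBA = 1.9

def acid_fraction(ph: float, pka: float = PKA_DICAMBA) -> float:
    """Fraction of dicamba present as the volatile free acid at this pH."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (5.0, 4.8, 4.6):
    print(f"pH {ph}: {acid_fraction(ph):.3%} in the volatile acid form")
```

Although the acid fraction is small at all of these pH values, dropping from pH 5.0 to 4.6 increases it roughly two-and-a-half-fold, which is the direction of the volatility concern the researchers describe.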

Another experiment in the UT Institute of Agriculture study evaluated spray mixture pH levels when AMS was added to dicamba formulations. AMS is commonly added to spray mixtures containing glyphosate to improve weed control effectiveness. However, new-generation dicamba labels have always forbidden the addition of AMS due to volatility concerns.

Interestingly, researchers found that while the addition of AMS did always lower dicamba spray mixture pH levels in this study, it did not lower pH levels as much as the addition of glyphosate.

"That was somewhat surprising given the attention AMS has received as a dicamba volatility enhancer," says Mueller. "Now, there could be some other aspects of ammonium sulfate besides pH that are enhancing volatility, but if pH is the main driver, our data would suggest that adding glyphosate, which is currently an approved tank mix partner, would increase dicamba volatility more than the prohibited AMS."

One proposed solution for addressing low-pH spray mixtures is adding a pH modifier to the mix. Mueller and Steckel tested three pH modifiers: Chempro CP-70, Novus K 20-0-6, and SoyScience. All three products raised the pH of the respective XtendiMax + Roundup PowerMax and Engenia + Roundup PowerMax mixtures above 5.0.

"That is certainly promising," says Steckel, "and the addition of a pH modifier could decrease the probability of dicamba leaving a treated field via volatility. A follow-up study really needs to test that theory. Another question raised by the results of this research--if you artificially raise the pH are you going to lose the weed control from glyphosate? That topic also needs more research."

Until more research is done, UT weed scientists are recommending leaving glyphosate out of the dicamba spray mixture and adding a graminicide, like Clethodim, for control of grass weed species.

"Glyphosate is an important herbicide with many uses. Despite the continued evolution of glyphosate-resistant weeds, farmers would be lost without glyphosate, as it still provides excellent and economical control of many troublesome weed species," says Steckel. "It just doesn't belong in a tank mix with dicamba."

Credit: 
University of Tennessee Institute of Agriculture

Moon glows brighter than sun in images from NASA's Fermi

image: These images show the steadily improving view of the Moon's gamma-ray glow from NASA's Fermi Gamma-ray Space Telescope. Each 5-by-5-degree image is centered on the Moon and shows gamma rays with energies above 31 million electron volts, or tens of millions of times that of visible light. At these energies, the Moon is actually brighter than the Sun. Brighter colors indicate greater numbers of gamma rays. This image sequence shows how longer exposure, ranging from two to 128 months (10.7 years), improved the view.

Image: 
NASA/DOE/Fermi LAT Collaboration

If our eyes could see high-energy radiation called gamma rays, the Moon would appear brighter than the Sun! That's how NASA's Fermi Gamma-ray Space Telescope has seen our neighbor in space for the past decade.

Gamma-ray observations are not sensitive enough to clearly see the shape of the Moon's disk or any surface features. Instead, Fermi's Large Area Telescope (LAT) detects a prominent glow centered on the Moon's position in the sky.

Mario Nicola Mazziotta and Francesco Loparco, both at Italy's National Institute of Nuclear Physics in Bari, have been analyzing the Moon's gamma-ray glow as a way of better understanding another type of radiation from space: fast-moving particles called cosmic rays.

"Cosmic rays are mostly protons accelerated by some of the most energetic phenomena in the universe, like the blast waves of exploding stars and jets produced when matter falls into black holes," explained Mazziotta.

Because the particles are electrically charged, they're strongly affected by magnetic fields, which the Moon lacks. As a result, even low-energy cosmic rays can reach the surface, turning the Moon into a handy space-based particle detector. When cosmic rays strike, they interact with the powdery surface of the Moon, called the regolith, to produce gamma-ray emission. The Moon absorbs most of these gamma rays, but some of them escape.

Mazziotta and Loparco analyzed Fermi LAT lunar observations to show how the view has improved during the mission. They rounded up data for gamma rays with energies above 31 million electron volts -- more than 10 million times greater than the energy of visible light -- and organized them over time, showing how longer exposures improve the view.
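The energy comparison is straightforward to check. A quick sketch, assuming a typical visible-light photon energy of about 2 electron volts (a textbook value, not a figure from the article):

```python
# Compare the LAT analysis threshold quoted in the article (31 MeV) with a
# typical visible-light photon energy (~2 eV, an assumed textbook value).
gamma_energy_ev = 31e6
visible_energy_ev = 2.0  # assumed

ratio = gamma_energy_ev / visible_energy_ev
print(f"These gamma rays carry about {ratio:.1e} times the energy of visible light")
```

The ratio of about 1.6 x 10^7 is consistent with both phrasings in the article: "tens of millions of times" and "more than 10 million times greater."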

"Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full," said Loparco.

As NASA sets its sights on sending humans to the Moon by 2024 through the Artemis program, with the eventual goal of sending astronauts to Mars, understanding various aspects of the lunar environment takes on new importance. These gamma-ray observations are a reminder that astronauts on the Moon will require protection from the same cosmic rays that produce this high-energy gamma radiation.

While the Moon's gamma-ray glow is surprising and impressive, the Sun does shine brighter in gamma rays with energies higher than 1 billion electron volts. Cosmic rays with lower energies do not reach the Sun because its powerful magnetic field screens them out. But much more energetic cosmic rays can penetrate this magnetic shield and strike the Sun's denser atmosphere, producing gamma rays that can reach Fermi.

Although the gamma-ray Moon doesn't show a monthly cycle of phases, its brightness does change over time. Fermi LAT data show that the Moon's brightness varies by about 20% over the Sun's 11-year activity cycle. Variations in the intensity of the Sun's magnetic field during the cycle change the rate of cosmic rays reaching the Moon, altering the production of gamma rays.

Credit: 
NASA/Goddard Space Flight Center

Nanoscale 'glass' bottles could enable targeted drug delivery

image: Jichuan Qiu, a postdoctoral fellow at Georgia Tech and Younan Xia, professor and Brock Family Chair in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

Image: 
Allison Carter

Tiny silica bottles filled with medicine and a special temperature-sensitive material could be used for drug delivery to kill malignant cells only in certain parts of the body, according to a study published recently by researchers at the Georgia Institute of Technology.

The research team devised a way to create silica-based hollow spheres around 200 nanometers in size, each with one small hole in its surface, enabling the spheres to encapsulate a wide range of payloads for later release only at certain temperatures.

In the study, which was published on June 4 in the journal Angewandte Chemie International Edition, the researchers describe packing the spheres with a mixture of fatty acids, a near-infrared dye, and an anticancer drug. The fatty acids remain solid at human body temperature but melt a few degrees above it. When light from a near-infrared laser is absorbed by the dye, the fatty acids quickly melt and release the therapeutic drug.

"This new method could allow infusion therapies to target specific parts of the body and potentially negate certain side effects, because the medicine is released only where there's an elevated temperature," said Younan Xia, professor and Brock Family Chair in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "The rest of the drug remains encapsulated by the solid fatty acids inside the bottles, which are biocompatible and biodegradable."

The researchers also showed that the size of the hole could be changed, enabling nanocapsules that release their payloads at different rates.

"This approach holds great promise for medical applications that require drugs to be released in a controlled fashion and has advantages over other methods of controlled drug release," Xia said.

An earlier method for achieving controlled drug release involves loading the temperature-sensitive material into low-density lipoproteins, often referred to as "bad cholesterol." Another method involves loading the mixture into gold nanocages. Both have disadvantages in how the material used to encapsulate the drugs interacts with the body, according to the study.

To make the silica-based bottles, the research team started by fabricating polystyrene spheres, each with a small gold nanoparticle embedded in its surface. The spheres were then coated with a silica-based material everywhere except where the gold nanoparticle was embedded. Once the gold and polystyrene were removed, only a hollow silica sphere with a small opening remained. To adjust the size of the opening, the researchers simply changed the size of the gold nanoparticle.

The process to load the bottles with their payload involves soaking the spheres in a solution containing the mixture, removing the trapped air, then washing away the excess material and payload with water. The resulting nanocapsules contain an even mixture of the temperature-sensitive material, the therapeutic drug, and the dye.

To test the release mechanism, the researchers then put the nanocapsules in water and used a near-infrared laser to heat the dye while tracking the concentration of the released therapeutic. The test confirmed that without the use of the laser, the medicine remains encapsulated. After several minutes of heating, concentrations of the therapeutic rose in the water.

"This controlled release system enables us to deal with the adverse impacts associated with most chemotherapeutics by only releasing the drug at a dosage above the toxic level inside the diseased site," said Jichuan Qiu, a postdoctoral fellow in the Xia group.

Credit: 
Georgia Institute of Technology

When the cardiology patient ends up in the oncology care ward

Management Science Study Key Takeaways:

Believe it or not, approximately 1 in 5 patients is placed 'off service,' or in a hospital ward designated for a different specialty of care than what they require.

Off-service patient placement leads to a hospital stay that is 23% longer and a higher chance of having to be readmitted within 30 days after initial discharge.

In this study, off-service placements contribute nearly 4,000 additional patient-days per year in the studied hospital. This makes hospitals more crowded and patients worse off.

CATONSVILLE, MD, August 15, 2019 - If you end up needing to go to the hospital, you're often hoping to get a bed without having to wait hours. But a new study shows you may want to wait a little longer so that you are placed in the best ward for your needs. New research in the upcoming INFORMS journal Management Science shows that among patients admitted to the hospital, 19.6% are placed in beds in a ward outside the area of care they require. Patients who are placed 'off service' end up experiencing a 23% longer hospital stay and a higher chance of being readmitted within a month.

The bottom line for patients is that it may be worth the wait if you can be placed in a bed in the right ward for your specific clinical needs. For hospitals, improving the match between patients and wards means fewer patient days and helps free up beds for patients who need them.

The study entitled "Capacity Pooling in Hospitals: The Hidden Consequences of Off-Service Placement," was conducted by Hummy Song of The Wharton School at the University of Pennsylvania and Anita Tucker of the Questrom School of Business at Boston University. They analyzed data from a large academic medical center in the northeastern U.S. from 2016 to 2019.

According to the researchers, off-service placement contributes an additional 3,995 patient-days per year in the studied hospital, or an additional 11 beds occupied each day. These unintended consequences can further worsen the capacity constraints hospitals already face.
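The conversion from patient-days to occupied beds is simple arithmetic, sketched here with the figures quoted above:

```python
# 3,995 extra patient-days spread over a 365-day year works out to roughly
# 11 additional beds occupied on any given day.
extra_patient_days_per_year = 3995
beds_per_day = extra_patient_days_per_year / 365

print(f"About {beds_per_day:.1f} additional beds occupied each day")
```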

"At first glance, it may not seem like there is a significant difference in outcomes between patients who are placed on service versus off service," said Song. "This is because, typically, patients who are placed off service are significantly healthier than the patients who are placed on service. Without accounting for this selection bias, we would significantly underestimate the negative effects of off-service placement."

"As patients, we expect to receive the same type and quality of care regardless of where the bed is located within the hospital. However, the location of the bed does have significant implications for how physicians and nurses coordinate patient care, which in turn impacts the length of stay and the likelihood of readmission to the hospital following discharge," added Song.

In that spirit, this research finds that when there are no more available beds on the right ward and therefore a patient must be placed off service, the placement should be based on physical proximity to where they should be. Surprisingly, it is less important to try to find a bed in a ward that cares for patients with more clinically similar conditions.

Ultimately, the study has found that through the proper accounting of the effect of off-service placement on patient flow, hospital leadership can begin to work towards better management of capacity by specialty and begin to shift away from the off-service placement, bed-pooling model to address capacity challenges.

Credit: 
Institute for Operations Research and the Management Sciences

Screening for cervical spine risk factors could reduce CT scans by half

An estimated 8 million children suffer blunt trauma annually, and while cervical spine injury (CSI) is serious, it is uncommon. Screening children suffering from blunt trauma for CSI risk factors could cut unnecessary computed tomography (CT) scans -- and radiation exposure -- by half, a prospective study of more than 4,000 children found.

The findings, published in Pediatrics, largely confirm those made in an earlier retrospective study. Together, the studies support a larger ongoing study aimed at developing a CSI risk assessment tool clinicians can use to decide which children need imaging.

"Injury is the leading cause of morbidity and mortality in children, so we see a lot of children and have to evaluate them for injuries in the prehospital and emergency department settings," said the study's author Julie Leonard, MD, MPH, an Emergency Medicine physician and principal investigator in the Center for Injury Research and Policy at the Abigail Wexner Research Institute at Nationwide Children's Hospital.

"We need to be able to quickly decide which children have serious injuries and provide them the highest level of care and service," said Dr. Leonard, who is also an associate professor of Pediatrics at The Ohio State University College of Medicine. "But we don't want to subject the large majority of children, who have experienced a traumatic event but have minor injuries, to unnecessary and potentially harmful testing and interventions."

Studies estimate that for every 1,000 CT scans in children, one to two new cancers are induced, Dr. Leonard said. "If you cut the current number of CT scans for cervical spine injury in half, each year you spare hundreds of children from cancer."

Testing children for CSI has skyrocketed 400% in the past two decades, and imaging for these patients has shifted away from X-rays, which emit some radiation, to CT scans, which expose kids to far more, Dr. Leonard adds.

"Historically, the line of reasoning given by general physicians and adult emergency physicians for these scans is that children are difficult to evaluate and don't have identifiable risk factors," said Dr. Leonard. "This study shows there are risk factors physicians can use to screen children for CSI that can help aid their decision-making as to which children warrant radiographic testing and which children can be cleared based on history and a physical exam."

In this study, of 4,091 children enrolled at four children's hospitals, 74 had CSIs. The researchers found 14 factors had bivariable associations with CSIs and of these, seven were the most useful in predicting who had a CSI and who didn't. The model based on the smaller list of factors was 92% sensitive and 50.3% specific. A model based on the retrospective study was 90.5% sensitive and 45% specific.
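As a rough illustration of what those percentages mean in practice (this is not the study's code, and the confusion-matrix counts below are hypothetical, chosen only to be consistent with the cohort of 4,091 children and 74 CSIs described above):

```python
# Illustrative only: how reported sensitivity and specificity map onto
# counts from a screening model.
def sensitivity(true_pos, false_neg):
    """Fraction of children WITH a CSI that the model flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of children WITHOUT a CSI that the model clears."""
    return true_neg / (true_neg + false_pos)

tp, fn = 68, 6        # 68 of 74 injured children flagged -> ~92% sensitive
tn, fp = 2021, 1996   # 2,021 of 4,017 uninjured cleared -> ~50.3% specific

print(f"sensitivity: {sensitivity(tp, fn):.1%}")   # 91.9%
print(f"specificity: {specificity(tn, fp):.1%}")   # 50.3%
```

The trade-off is visible in the numbers: a highly sensitive model misses few true injuries, while the roughly 50% specificity is what lets clinicians safely clear about half of the uninjured children without a CT scan.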

The seven factors that were the most informative are:

injury acquired while diving

axial load, from landing on the top of the head or other blunt force directly in line with the spine

any self-reported neck pain

reported inability to move neck

altered mental status on examination in the emergency department

intubation due to airway injury, respiratory failure or inability to protect their airway, or respiratory distress

Dr. Leonard and colleagues are now in the first year of a five-year national study that will include more than 20,000 children and observational data from emergency department and emergency medical services providers.

The researchers expect to increase the sensitivity, specificity and confidence factor of the mathematical model, with the goal of providing a Pediatric CSI Risk Assessment Tool that clinicians in all settings can use with children presenting with traumatic injury.

Credit: 
Nationwide Children's Hospital

Best of both worlds: Asteroids and massive mergers

image: The Searches after Gravitational Waves Using ARizona Observatories, or SAGUARO, logo.

Image: 
Michael Lundquist

The race is on. Since the construction of technology able to detect the ripples in space and time triggered by collisions from massive objects in the universe, astronomers around the world have been searching for the bursts of light that could accompany such collisions, which are thought to be the sources of rare heavy elements.

The University of Arizona's Steward Observatory has partnered with the Catalina Sky Survey, which searches for near-Earth asteroids from atop Mount Lemmon, in an effort dubbed Searches after Gravitational Waves Using ARizona Observatories, or SAGUARO, to find optical counterparts to massive mergers.

"Catalina Sky Survey has all of this infrastructure for their asteroid survey. So we have deployed additional software to take gravitational wave alerts from LIGO (the Laser Interferometer Gravitational-Wave Observatory) and the Virgo interferometer then notify the survey to search an area of sky most likely to contain the optical counterpart," said Michael Lundquist, postdoctoral research associate and lead author on the study published today in the Astrophysical Journal Letters.

"Essentially, instead of searching the next section of sky that we would normally, we go off and observe some other area that has a higher probability of containing an optical counterpart of a gravitational wave event," said Eric Christensen, Catalina Sky Survey director and Lunar and Planetary Laboratory senior staff scientist. "The main idea is we can run this system while still maintaining the asteroid search."

The ongoing campaign began in April, and in that month alone, the team was notified of three massive collisions. Because it is difficult to tell the precise location from which the gravitational wave originated, locating optical counterparts can be difficult.

According to Lundquist, two strategies are being employed. In the first, teams with small telescopes target galaxies that are at the right approximate distance, according to the gravitational wave signal. Catalina Sky Survey, on the other hand, utilizes a 60-inch telescope with a wide field of view to scan large swaths of sky in 30 minutes.

Three alerts, on April 9, 25 and 26, triggered the team's software to search nearly 20,000 objects. Machine learning software then trimmed down the total number of potential optical counterparts to five.

The first gravitational wave event was a merger of two black holes, Lundquist said.

"There are some people who think you can get an optical counterpart to those, but it's definitely inconclusive," he said.

The second event was a merger of two neutron stars, the incredibly dense core of a collapsed giant star. The third is thought to be a merger between a neutron star and a black hole, Lundquist said.

While no teams confirmed optical counterparts, the UA team did find several supernovae. They also used the Large Binocular Telescope Observatory to spectroscopically classify one promising target from another group. It was determined to be a supernova and not associated with the gravitational wave event.

"We also found a near-Earth object in the search field on April 25," Christensen said. "That proves right there we can do both things at the same time."

They were able to do this because Catalina Sky Survey has observations of the same swaths of sky going back many years. Many other groups don't have easy access to past photos for comparison, offering the UA team a leg up.

"We have really nice references," Lundquist said. "We subtract the new image from the old image and use that difference to look for anything new in the sky."

"The process Michael described," Christensen said, "starting with a large number of candidate detections and filtering down to whatever the true detections are, is very familiar. We do that with near-Earth objects, as well."

The team is planning on deploying a second telescope in the hunt for optical counterparts: Catalina Sky Survey's 0.7-meter Schmidt telescope. While the telescope is smaller than the 60-inch telescope, it has an even wider field of view, which allows astronomers to quickly search an even larger chunk of sky. They've also improved their machine learning software to filter out stars that regularly change in brightness.

"Catalina Sky Survey takes hundreds of thousands of images of the sky every year, from multiple telescopes. Our survey telescopes image the entire visible nighttime sky several times per month, then we are looking for one kind of narrow slice of the pie," Christensen said. "So, we've been willing to share the data with whoever wants to use it."

Credit: 
University of Arizona

Tweaked CRISPR in neurons gives scientists new power to probe brain diseases

A team of scientists at UC San Francisco and the National Institutes of Health has achieved another CRISPR first, one that may fundamentally alter the way scientists study brain diseases.

In a paper published August 15 in the journal Neuron, the researchers describe a technique that uses a special version of CRISPR developed at UCSF to systematically alter the activity of genes in human neurons generated from stem cells, the first successful merger of stem cell-derived cell types and CRISPR screening technologies.

Though mutations and other genetic variants are known to be associated with an increased risk for many neurological diseases, technological bottlenecks have thwarted the efforts of scientists working to understand exactly how these genes cause disease.

"Prior to this study, there were significant limitations that restricted what scientists could do with human neurons in the lab," said Martin Kampmann, PhD, associate professor in UCSF's Institute for Neurodegenerative Diseases, a CZ Biohub Investigator, and co-senior author of the new study.

For one thing, until fairly recently, there was no way for scientists to reliably obtain human brain cells that could be used in advanced lab experiments, explained Kampmann, also a member of the UCSF Weill Institute for Neurosciences. "It was possible to get neurons donated by patients who had undergone procedures that involve removing brain tissue to treat epilepsy or brain cancer. But these samples can only survive for a few days. You can't perform experiments to probe gene function on short-lived neurons."

Instead, scientists have generally relied on animal models of brain disease, which can fail to capture many nuances of human neurobiology.

A breakthrough came in 2006 when Shinya Yamanaka, MD, PhD, of Kyoto University and the UCSF-affiliated Gladstone Institutes, discovered a way to rewind the developmental clock and turn adult cells into stem cells that could, with some coaxing, be transformed into any type of cell found in the body -- including neurons. These "induced pluripotent stem cells" (iPSCs) made human brain cells widely available for lab research.

When the CRISPR gene-editing system arrived six years later, scientists thought they finally had all the tools they would need to manipulate genes in human neurons and determine how they contribute to neurological disease.

But scientists quickly discovered that the DNA-cutting machinery of the CRISPR system, an enzyme known as Cas9, didn't mix well with iPSCs. "Stem cells have a very active DNA damage response. When Cas9 produces even just one or two DNA cuts, it can lead to toxicity that causes the cells to die," Kampmann said.

So Kampmann decided to tackle the toxicity problem. As a postdoc in the lab of UCSF Professor Jonathan Weissman, PhD, Kampmann co-invented a tool known as CRISPRi (for "interference"), a modified form of CRISPR technology in which the Cas9 enzyme has been deactivated. When CRISPRi finds the gene it's seeking, it suppresses its activity without making any cuts. As a result, unlike standard CRISPR-Cas9, Kampmann predicted, CRISPRi shouldn't be toxic to iPSCs or stem cell-derived neurons.

In the new paper, Kampmann and his collaborators describe how they adapted CRISPRi for use in human iPSCs and iPSC-derived neurons, and found that it could target and interfere with genes without killing the cell -- a feat that had long eluded scientists.

Using this system, the researchers demonstrated how their technique can be used to find genes that may cause or contribute to brain diseases. For example, they identified genes that specifically extend the lifespan of neurons, but have no comparable effect on iPSCs or cancer cells. They also found genes that increased the number of neurites -- projections that grow from neurons and transmit nerve signals -- and determined how frequently they branched.

But one of the most surprising findings was the discovery that "housekeeping" genes -- known to be essential for survival, but thought to perform the same function in all cells -- actually behave differently in neurons and stem cells. When the researchers interfered with the same housekeeping genes in these two cell types, the cells responded by activating (or inactivating) a vastly different set of genes. This result suggests that, contrary to received wisdom, housekeeping genes may not work the same way in different cell types, an idea that Kampmann and his lab are eager to explore further, as these differences may play important roles in disease.

Kampmann is now using the technology to study different types of neurons in an effort to determine why certain diseases selectively affect just a subset of neurons, such as the way motor neurons are selectively damaged in ALS. He's also expanding his investigations into other types of brain cells -- including cells known as astrocytes and microglia -- which scientists only recently figured out how to produce from human iPSCs.

But ultimately, the goal is to turn this technology that combines CRISPRi and iPSCs into a tool that uncovers much-needed new therapeutic approaches to treating brain diseases.

"One of the big challenges facing the field is that, for most of these disorders, the precise molecular pathways that we should target for drug development remain unclear," said Michael Ward, MD, PhD, co-senior author of the new study and a physician-scientist at the National Institutes of Health.

"With this technology, we can take skin or blood cells from a patient with a neurodegenerative disease like Alzheimer's, turn them into neurons or other brain cells, and figure out which genes control the cellular defects associated with the disease," said Kampmann. "The information may allow us to identify effective therapeutic targets."

Credit: 
University of California - San Francisco

Stanford develops wireless sensors that stick to the skin to track our health

image: Using metallic ink, researchers screen-print an antenna and sensor onto a stretchable sticker designed to adhere to skin and track pulse and other health indicators, and beam these readings to a receiver on a person's clothing.

Image: 
Bao lab

We tend to take our skin's protective function for granted, ignoring its other roles in signaling subtleties like a fluttering heart or a flush of embarrassment.

Now, Stanford engineers have developed a way to detect physiological signals emanating from the skin with sensors that stick like band-aids and beam wireless readings to a receiver clipped onto clothing.

To demonstrate this wearable technology, the researchers stuck sensors to the wrist and abdomen of one test subject to monitor the person's pulse and respiration by detecting how their skin stretched and contracted with each heartbeat or breath. Likewise, stickers on the person's elbows and knees tracked arm and leg motions by gauging the minute tightening or relaxation of the skin each time the corresponding muscle flexed.
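The idea of reading a pulse from skin stretch can be illustrated with a toy signal. This sketch assumes an idealized sinusoidal strain trace and simple peak counting; it is not the BodyNet hardware or its processing:

```python
import numpy as np

# Toy illustration: estimate pulse rate from a skin-strain trace by
# counting peaks per second. The signal here is simulated.
fs = 50.0                      # sensor sample rate, Hz (assumed)
duration = 10.0                # seconds of recording
t = np.arange(0, duration, 1 / fs)

bpm_true = 72                  # simulated heart rate
strain = np.sin(2 * np.pi * (bpm_true / 60) * t)  # stretch per heartbeat

# A peak is a sample above both neighbours and above a noise threshold.
s = strain
is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > 0.5)
peaks = np.flatnonzero(is_peak) + 1

bpm_est = len(peaks) / duration * 60
print(f"estimated pulse: {bpm_est:.0f} beats per minute")  # 72
```

A real skin-strain signal is far noisier and would need filtering before peak detection, but the principle is the same: each heartbeat stretches the skin once, so counting stretches counts beats.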

Zhenan Bao, the chemical engineering professor whose lab described the system in an Aug. 15 article in Nature Electronics, thinks this wearable technology, which they call BodyNet, will first be used in medical settings such as monitoring patients with sleep disorders or heart conditions. Her lab is already trying to develop new stickers to sense sweat and other secretions to track variables such as body temperature and stress. Her ultimate goal is to create an array of wireless sensors that stick to the skin and work in conjunction with smart clothing to more accurately track a wider variety of health indicators than the smart phones or watches consumers use today.

"We think one day it will be possible to create a full-body skin-sensor array to collect physiological data without interfering with a person's normal behavior," said Bao, who is also the K.K. Lee Professor in the School of Engineering.

Stretchable, comfortable, functional

Postdoctoral scholars Simiao Niu and Naoji Matsuhisa led the 14-person team that spent three years designing the sensors. Their goal was to develop a technology that would be comfortable to wear, with no batteries or rigid circuits that would prevent the stickers from stretching and contracting with the skin.

Their eventual design met these parameters with a variation of the RFID - radiofrequency identification - technology used to control keyless entry to locked rooms. When a person holds an ID card up to an RFID receiver, an antenna in the ID card harvests a tiny bit of RFID energy from the receiver and uses this to generate a code that it then beams back to the receiver.

The BodyNet sticker is similar to the ID card: It has an antenna that harvests a bit of the incoming RFID energy from a receiver on the clothing to power its sensors. It then takes readings from the skin and beams them back to the nearby receiver.

But to make the wireless sticker work, the researchers had to create an antenna that could stretch and bend like skin. They did this by screen-printing metallic ink on a rubber sticker. However, whenever the antenna bent or stretched, those movements made its signal too weak and unstable to be useful.

To get around this problem, the Stanford researchers developed a new type of RFID system that could beam strong and accurate signals to the receiver despite constant fluctuations. The battery-powered receiver then uses Bluetooth to periodically upload data from the stickers to a smartphone, computer or other permanent storage system.

The initial version of the stickers relied on tiny motion sensors to take respiration and pulse readings. The researchers are now studying how to integrate sweat, temperature and other sensors into their antenna systems.

To move their technology beyond clinical applications and into consumer-friendly devices, the researchers need to overcome another challenge - keeping the sensor and receiver close to each other. In their experiments, the researchers clipped a receiver on clothing just above each sensor. One-to-one pairings of sensors and receivers would be fine in medical monitoring, but to create a BodyNet that someone could wear while exercising, antennas would have to be woven into clothing to receive and transmit signals no matter where a person sticks a sensor.

Credit: 
Stanford University

Numbers count in the genetics of moles and melanomas

image: Three key mole classes - reticular, globular and non-specific - were magnified under a dermoscope to assess their pattern and risk factors.

Image: 
The University of Queensland

University of Queensland scientists have identified a way to help dermatologists determine a patient's risk of developing melanoma.

UQ Diamantina Institute researcher Associate Professor Rick Sturm said the team uncovered the specific gene variations affecting the number and types of moles on the body and their role in causing skin cancer.

"The goal was to investigate the genetic underpinnings of different mole classes or 'naevi types' and understand how these affect melanoma risk," Dr Sturm said.

"Based on our work, the number of moles in each category can give a more complete assessment of melanoma risk rather than just the number of moles alone."

Three key mole classes - reticular, globular and non-specific - were magnified under a dermoscope to assess their pattern and risk factors.

"We found people who had more non-specific mole patterns increased their melanoma risk by two per cent with every extra mole carried," he said.

"As we age, we tend to increase the amount of non-specific moles on our body, and the risk of developing melanoma increases."

Dr Sturm said globular and reticular mole patterns were also found to change over time.

"Globular patterns were shown to decrease as we get older, typically petering out after the age of 50 to 60," he said.

"Reticular moles also decreased over time but were likely to head down a more dangerous path and develop into the non-specific pattern."

A cohort of more than 1200 people, half of them melanoma patients, was recruited into the almost nine-year study.

Their results were then overlaid with genetic testing, which found variations in four major genes.

"We found some major relationships between genes and the number of moles and patterns when looking at the DNA," Dr Sturm said.

"Certain gene types influenced the number of different naevi types -- for example, the IRF4 gene was found to strongly influence the number of globular naevi found on the body."

The findings will help dermatologists to better understand mole patterns and provide more holistic care to patients who may be at risk of melanoma.

"For a long time, clinicians have been interested in how pigmented moles relate to melanoma and melanoma risk," he said.

"With the availability of dermoscopes and imaging, these results provide a new layer of understanding to guide clinical practice."

Credit: 
University of Queensland

Ice sheets impact core elements of the Earth's carbon cycle

image: Leverett Glacier - SW Greenland Ice Sheet - vast volumes of meltwater and associated carbon and nutrient are exported from ice sheets every year during melt.

Image: 
Dr Stefan Hofer

The Earth's carbon cycle is crucial in controlling the greenhouse gas content of our atmosphere, and ultimately our climate.

Ice sheets, which cover about 10 percent of the Earth's land surface at present, were thought 20 years ago to be frozen wastelands, devoid of life and with suppressed chemical weathering - irrelevant parts of the carbon cycle.

Now a world-leading international team, led by Professor Jemma Wadham from the University of Bristol's School of Geographical Sciences and Cabot Institute for the Environment, has pulled together a wealth of evidence published over the last 20 years to demonstrate that ice sheets can no longer be regarded as frozen and passive parts of Earth's carbon cycle.

Their findings are published today in the journal Nature Communications.

Professor Wadham said: "A unique set of conditions present beneath ice sheets make them important reactors in the Earth's carbon cycle.

"Here, grinding of rock by moving ice is high, liquid water is abundant and microbes thrive in melt zones despite inhospitable conditions - the ice sheets erode their bedrock, cold-adapted microbes process the ground rock and boost nutrient release and glacial meltwaters export this nutrient to the oceans, also stimulating the upwelling of further nutrient from depth at glacier marine margins.

"All this nutrient supports fisheries and stimulates drawdown of carbon dioxide (CO2) from the atmosphere."

Co-author Professor Rob Spencer from Florida State University added: "Ice sheets are also very effective at storing vast amounts of carbon as they grow over marine sediments, soils and vegetation.

"The Antarctic Ice Sheet alone potentially stores up to 20,000 billion tonnes of organic carbon - ten times more than that estimated for Northern Hemisphere permafrost.

"Some of this carbon is released in meltwaters and fuels marine food webs. The carbon that is left behind in deep parts of ice sheets is converted to methane gas by microbial and/or geothermal activity, which has the potential to be stored as solid methane hydrate under low temperatures and high pressure conditions.

"We have no idea how stable potential methane hydrate will be in a warming climate if ice sheets thin. There is evidence from past phases of ice sheet wastage in Europe that sub-ice sheet methane hydrate has existed and can be released rapidly if ice thins."

The study also takes a walk back in time to the last transition from glacial (cold) to interglacial (warm) conditions of the present day, analysing ocean cores around Antarctica for clues which might link ice sheet nutrient (iron) export via Antarctic icebergs to the changing productivity of the Southern Ocean - an important global sink for carbon.

Co-author, Dr Jon Hawkings from Florida State University/GFZ-Potsdam, said: "One important way that the Southern Ocean takes carbon out of the atmosphere is by growth of phytoplankton in its surface waters.

"However, these tiny ocean dwelling plants are limited by availability of iron. We have long thought that atmospheric dust was important as a supplier of iron to these waters, but we now know that icebergs host iron-rich sediments which also fertilise the ocean waters as the bergs melt."

Professor Karen Kohfeld, a palaeo-oceanographer and co-author from Simon Fraser University, added: "What you see in ocean cores from the sub-Antarctic is that as the climate warmed at the end of the last glacial period, iceberg sediment (and therefore, iron) supply to the sub-Antarctic Southern Ocean falls, as does marine productivity while CO2 rises.

"While there are many possible causes for the CO2 rise, the data tantalizingly suggests that falling iron supply to the Southern Ocean via icebergs could have been a contributing factor."

What is important about this study is that it brings together the work of hundreds of scientists from all over the world, published over three decades, to show, via a landmark paper, that we can no longer ignore ice sheets in models of the carbon cycle and under scenarios of climate change.

Professor Wadham added: "Ice sheets are highly sensitive parts of our planet - we change temperatures in the air and ocean waters around them and thinning and retreat are inevitable.

"The evidence we present here suggests that ice sheets may have important feedbacks to the carbon cycle which require further study as the uncertainty is still huge.

"Gaining access to some of the most inaccessible and challenging parts of ice sheet beds, for example via deep drilling, alongside building numerical models which can represent biogeochemical processes in ice sheets will be key to future progress in this field."

Credit: 
University of Bristol

IRS budget cuts result in $34.3 billion in lost tax revenue from large firms

BLOOMINGTON, Ind. -- Budget cuts at the Internal Revenue Service threaten the agency's effectiveness and have led to billions of dollars in lost tax revenue, new research from the Indiana University Kelley School of Business shows.

The research is among the first to quantify the amount of tax revenue lost -- not just because the IRS is auditing fewer returns but because it has fewer resources to identify potential errors and to follow up on returns with taxpayers. The research is also the first to quantify the amount of corporate tax revenue lost during the audit process per dollar of IRS budget cuts.

"We're quantifying the effect of budget cuts on collections by trying to better understand how cuts impact the entire enforcement process -- from audit rates to ultimate settlements between taxpayers and tax authorities," said Casey Schwab, an associate professor of accounting at Kelley.

The research study, "How do IRS resources affect the corporate audit process?," accepted by The Accounting Review, relies on confidential IRS audit data from tax return years 2000 through 2010 for large, publicly traded corporations. It estimates that the IRS could have increased collections from just this subsample of firms by $34.3 billion if provided with an additional $13.7 billion in overall resources. The $34.3 billion estimate accounts for approximately 19.3 percent of the estimated corporate tax gap from 2002 through 2014.
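A back-of-the-envelope check of those figures (illustrative arithmetic only; the paper's estimation method is far more involved):

```python
# Headline figures reported from the study, in $ billions.
extra_budget = 13.7    # additional IRS resources
extra_revenue = 34.3   # estimated additional collections

return_per_dollar = extra_revenue / extra_budget
print(f"${return_per_dollar:.2f} collected per extra budget dollar")  # $2.50

# If $34.3B is ~19.3% of the 2002-2014 corporate tax gap, the implied
# total gap for this period is roughly:
implied_gap = extra_revenue / 0.193
print(f"implied corporate tax gap: ~${implied_gap:.0f} billion")  # ~$178 billion
```

In other words, by the study's estimates each additional budget dollar would have returned about $2.50 in collections from these firms alone.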

"The scope of the audits is substantially reduced," Schwab said. "The IRS has fewer resources to actually dig into the details. While the IRS appears to still target the most aggressive positions, they can't audit as many positions within the return. They just don't have the resources."

Adjusted for inflation, the IRS budget of $11.3 billion in 2019 is smaller than in 2000 and 19 percent below its highest level of funding in 2010, according to the Government Accountability Office. It now has 21 percent fewer employees than it did eight years ago. The number of examiners has declined by 38 percent since 2010.

"Given the continued cuts to the IRS budget, the amount of lost tax revenue from public companies could be even higher than what we estimate," said Bridget Stomberg, also an associate professor of accounting at Kelley. She jointly conducted the study with Schwab; Michelle Nessa, an assistant professor at Michigan State University; and Erin Towery, an associate professor at the University of Georgia and an academic research consultant to the Internal Revenue Service.

The professors said their research should be of interest to Congress when it decides on the amount of resources to allocate to the IRS. Their findings are particularly relevant given that any resource constraints the IRS currently faces will be magnified by the increased responsibilities it has as a result of tax reform.

"We're seeing a lot less regulation in many areas," Stomberg said. "Cutting budgets is one way to handicap an agency without eliminating it altogether."

"By eliminating the role of the IRS, you're effectively reducing corporate tax burdens. On the one hand, this could be used to spur economic growth," Schwab said. "On the other hand, there's a notion that everyone should pay their fair share. The IRS is fundamental in preventing businesses from engaging in transactions that aggressively reduce their tax liability."

Credit: 
Indiana University