Artificial intelligence improves heart attack risk assessment

OAK BROOK, Ill. - When used with a common heart scan, machine learning (ML), a type of artificial intelligence, does better than conventional risk models at predicting heart attacks and other cardiac events, according to a study published in the journal Radiology.

Heart disease is the leading cause of death for both men and women in the United States. Accurate risk assessment is crucial for early interventions including diet, exercise and drugs like cholesterol-lowering statins. However, risk determination is an imperfect science, and popular existing models like the Framingham Risk Score have limitations, as they don't directly consider the condition of the coronary arteries.

Coronary computed tomography angiography (CCTA), a kind of CT that gives highly detailed images of the heart vessels, is a promising tool for refining risk assessment--so promising that a multidisciplinary working group recently introduced a scoring system for summarizing CCTA results. The decision-making tool, known as the Coronary Artery Disease Reporting and Data System (CAD-RADS), emphasizes stenoses, or blockages and narrowing in the coronary arteries. While CAD-RADS is an important and useful development in the management of cardiac patients, its focus on stenoses may leave out important information about the arteries, according to study lead author Kevin M. Johnson, M.D., associate professor of radiology and biomedical imaging at the Yale School of Medicine in New Haven, Conn.

Noting that CCTA shows more than just stenoses, Dr. Johnson recently investigated an ML system capable of mining the myriad details in these images for a more comprehensive prognostic picture.

"Starting from the ground up, I took imaging features from the coronary CT," he said. "Each patient had 64 of these features and I fed them into a machine learning algorithm. The algorithm is able to pull out the patterns in the data and predict that patients with certain patterns are more likely to have an adverse event like a heart attack than patients with other patterns."
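The workflow Dr. Johnson describes — a fixed set of per-patient imaging features fed into an algorithm that learns risk-associated patterns — can be illustrated with a minimal sketch. This is not the study's actual model or data: the features below are synthetic, and a simple logistic regression stands in for whatever algorithm the team used.

```python
import numpy as np

# Synthetic stand-in for the study's data: 64 imaging features per patient
# and a binary adverse-event outcome driven by a few of those features.
rng = np.random.default_rng(0)
n_patients, n_features = 1000, 64
X = rng.normal(size=(n_patients, n_features))
true_w = np.zeros(n_features)
true_w[:3] = [1.0, 0.5, -0.5]                      # hidden "risk pattern"
y = (X @ true_w + rng.normal(size=n_patients) > 0.5).astype(float)

# Logistic regression fit by gradient descent: the model learns which
# feature patterns are associated with events and outputs a risk score.
w, b, lr = np.zeros(n_features), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted event probability
    w -= lr * (X.T @ (p - y)) / n_patients
    b -= lr * float(np.mean(p - y))

risk = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # per-patient risk score
acc = float(np.mean((risk > 0.5) == y))
print(f"training accuracy: {acc:.2f}")
```

In the study itself, scores like these would then be compared against CAD-RADS for how well they separate patients who go on to have events from those who do not.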

For the study, Dr. Johnson and colleagues compared the ML approach with CAD-RADS and other vessel scoring systems in 6,892 patients. They followed the patients for an average of nine years after CCTA. There were 380 deaths from all causes, including 70 from coronary artery disease. In addition, 43 patients reported heart attacks.

Compared with CAD-RADS and other scores, the ML approach better discriminated patients who went on to have a cardiac event from those who did not. When used to decide whether to start statins, the ML score would have ensured that 93 percent of patients with events received the drug, compared with only 69 percent if CAD-RADS were relied on.

"The risk estimate that you get from doing the machine learning version of the model is more accurate than the risk estimate you're going to get if you rely on CAD-RADS," Dr. Johnson said. "Both methods perform better than just using the Framingham risk estimate. This shows the value of looking at the coronary arteries to better estimate people's risk."

If machine learning can improve vessel scoring, it would enhance the contribution of noninvasive imaging to cardiovascular risk assessment. Additionally, the ML-derived vessel scores could be combined with non-imaging risk factors such as age, gender, hypertension and smoking to develop more comprehensive risk models. This would benefit both physicians and patients.

"Once you use a tool like this to help see that someone's at risk, then you can get the person on statins or get their glucose under control, get them off smoking, get their hypertension controlled, because those are the big, modifiable risk factors," he said.

Dr. Johnson is currently working on a paper that takes results from this study and folds them into the bigger picture with non-imaging risk factors.

"If you add people's ages and particulars like smoking, diabetes and hypertension, that should increase the overall power of the method and improve the overall results," he said.

Credit: 
Radiological Society of North America

Seizures in Alzheimer's mouse model disrupt adult neurogenesis

image: Dr. Jeannie Chin.

Image: 
Baylor College of Medicine

Memory impairment and mood changes are typically observed in patients with Alzheimer's disease but what disturbs these neuronal functions is unclear. Some researchers have proposed that alterations in the production of new neurons in the brain, or neurogenesis, may be involved; however, whether neurogenesis happens in humans, much less those with Alzheimer's disease, has been debated. A discovery published today in the journal Cell Reports provides a possible explanation for this debate and may shed light on what happens in Alzheimer's disease.

Working with animal models of Alzheimer's disease, a team led by researchers at Baylor College of Medicine discovered that seizures that are associated with the disease both in animal models and humans alter the normal dynamics of neurogenesis in adult brains. Administering anti-seizure medication restored neurogenesis and improved performance in a spatial discrimination task.

"Whether neurogenesis is altered at all in patients with Alzheimer's disease is a controversial topic in the field. While some groups present evidence supporting a decrease in neurogenesis, others claim that it increases or that there is no change," said corresponding author Dr. Jeannie Chin, associate professor of neuroscience at Baylor College of Medicine. "Working with mouse models of the disease we have identified a mechanism that we propose may integrate all of these various findings."

Neurogenesis is altered in animal models of Alzheimer's disease

Scientists have found that in normal mice neurogenesis occurs throughout life and decreases as the animals age. Neurogenesis is a multistep process that begins with the proliferation of neural stem cells, continues with the production of intermediary cells such as progenitor cells, and ends with new neurons being born. Findings from Chin's collaborators demonstrated that the pool of neural stem cells in the brain is finite. After a certain number of cell divisions, a neural stem cell stops being a stem cell, and is not replaced. Consequently, throughout the life of normal mice, the number of neural stem cells progressively decreases.

"In this study, we compared the process of neurogenesis in normal mice with that in our mouse model of Alzheimer's disease, and found that the processes are similar but, when seizures occur in mice with the disease, neurogenesis is acutely stimulated," said first author Chia-Hsuan Fu, neuroscience graduate student in the Chin lab.

The researchers observed that early in the disease, when seizures begin, there is an increase in neurogenesis as neural stem cells are stimulated to divide. As the disease progresses and seizures recur, stimulating more neural stem cells, the cell pool is depleted faster than in animals without the disease, and neurogenesis decreases. Later in the disease, when the pool of cells is exhausted, there is very little neurogenesis.

"These findings indicate that in the mouse model of the disease, seizures accelerate the normal process of neurogenesis. The pool of neural stem cells that generates new neurons is exhausted faster than in mice without the condition," Fu said.

"Although our study was conducted in mice, we suggest that the level of neurogenesis in a human brain with Alzheimer's disease may similarly depend on the stage of the disease, being higher at very early stages and then declining more rapidly than normal as disease continues," Chin said. "The level of neurogenesis would depend on when during the disease the samples were taken. These findings may explain the seemingly contradictory findings about whether and how neurogenesis is affected in Alzheimer's disease."

Chin also speculated that the mechanism they discovered may also help explain controversial studies regarding whether adult neurogenesis, which has primarily been studied in rodents, occurs at all in humans. "Depending on the medical history of the patients from whom the brain samples were obtained, they might have high, low or even nonexistent levels of neurogenesis at the time of death," Chin said.

Interestingly, when the researchers treated their mouse model with anti-seizure drugs they normalized the dynamics of neurogenesis. Further, animals treated with anti-seizure drugs improved their performance in a spatial discrimination task when compared with non-treated mice. Taken together, the findings suggest that the seizures are deeply connected to the alterations in neurogenesis and cognitive functions both in Alzheimer's disease and epilepsy.

The researchers had expected that accumulation of amyloid-beta in the brain, a major hallmark of the disease, would also affect neurogenesis, as experiments from other labs had suggested, Chin said. However, her team's observations suggest that the effect of seizures on neurogenesis is larger than that of amyloid-beta accumulation.

The finding that anti-seizure medications can normalize the process of neurogenesis in a model of the disease suggests that it also might be possible to do so in human patients.

"However," Chin remarks, "more work is needed and caution is necessary because the medication would have to be administered early in the disease, just as seizures are beginning to occur."

Credit: 
Baylor College of Medicine

Levänluhta jewellery links Finland to a European exchange network

image: Archaeological findings of Levänluhta in the Finnish National Museum's exhibition. In the front arm rings and necklaces found from the burial site, made out of copper alloy.

Image: 
Elisabeth Holmqvist-Sipilä

The Levänluhta water burial site, dating back to the Iron Age (300-800 CE), is one of Finland's most famous archaeological sites. Nearly one hundred individuals, mainly women and children, were buried in a lake at Isokyrö in southwestern Finland. Some of the deceased were accompanied by arm rings and necklaces made of copper alloy, bronze or brass.

Style of jewellery domestic but material from abroad

"The origin of the metals used in these pieces of jewellery was determined on the basis of the objects' geochemical and lead isotope compositions. The jewellery of the deceased is stylistically typical Finnish Iron Age jewellery, making it probable that they were cast in local workshops. However, the metals used to make these objects are unlikely to be originally from the region, since copper ores had not yet been discovered here during the Iron Age," says Elisabeth Holmqvist-Sipilä, a postdoctoral researcher.

Up to now, archaeologists have assumed that copper used in the Iron Age came mainly from the copper ores discovered in southern Scandinavia. However, this interpretation has in recent years been called into question, since the copper found in archaeological metal discoveries in Sweden has also been determined to be imported.

In a study conducted in collaboration between archaeologists at the University of Helsinki and the Geological Survey of Finland, the origin of the bronze and brass jewellery found at Levänluhta was investigated by comparing their geochemical composition and lead isotope ratios to known copper ores in Finland, Sweden and elsewhere in Europe. The study was published in the Journal of Archaeological Science: Reports.

Copper tracks lead to southern Europe

"The results demonstrate that the copper used in the objects was not from Finland or the nearby regions; rather, it has travelled to Finland along extensive exchange networks, most likely from southern Europe," says Holmqvist-Sipilä.

Based on the lead isotope ratios, the copper in the objects has its origins in copper ores found in Greece and Bulgaria. These regions produced large quantities of copper during the Bronze and Iron Ages, which spread around Europe in the form of various objects, distributed as gifts, loot and merchandise. Metals were also recycled by melting old objects into raw material for new casts. It is possible that metals that ended up in Finland during the Bronze Age were recycled in the Levänluhta region.

The findings of this project, funded by the Emil Aaltonen Foundation, demonstrate that products of the copper exchange network of continental Europe also reached Finland across the Baltic Sea, thus making it possible to link the region with the extensive copper exchange system known to have extended throughout Europe. The results also illustrate the temporally and technologically multi-layered nature of prehistoric metal artefacts: raw materials found their way here through a number of hands, most likely over a long period of time and across very great distances. In domestic artisan workshops, these metals of international origin were manufactured into pieces of jewellery in domestic Iron Age fashion, perhaps embodying the local identity and place of residence of the bearer.

Credit: 
University of Helsinki

Study looks at opioid use after knee surgery

Bottom Line: A small study looked at whether reducing the number of opioid tablets prescribed after knee surgery would reduce postoperative use and whether preoperative opioid-use education would reduce it even more. The study included 264 patients who underwent anterior cruciate ligament (ACL) surgery at a single academic ambulatory surgery center. They were divided into three groups: 109 were prescribed 50 opioid tablets after surgery; 78 patients were prescribed 30 tablets and before surgery received education on appropriate opioid use and alternative pain control strategies; and 77 patients received 30 tablets and no education. Patients were surveyed about their opioid use three weeks after surgery. Researchers report that patients who received 50 tablets consumed more tablets (an average of 25) over more days (nearly 6) than those given 30 tablets and no education, who consumed fewer (an average of about 16) over fewer days (about 4½). Patients who received 30 tablets and preoperative education used fewer tablets (an average of about 12) over fewer days (about 3½) than patients who received 30 tablets but no opioid-use education. A limitation of the study to consider is that it was conducted at just one surgery center and the patient group was young.

Authors: John Xerogeanes, M.D., Emory University School of Medicine, Atlanta, and coauthors

(doi:10.1001/jama.2019.6125)

Editor's Note: The article includes conflict of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Researchers model how octopus arms make decisions (+ video)

video: The graphs represent angular speed of the arms over time. These graphs can indicate synchronous or asynchronous movement patterns between arms, which indicates when arms are receiving a common signal from the brain or the environment.

Image: 
Dominic Sivitilli/ University of Washington

BELLEVUE, WA--Researchers studying the behavior and neuroscience of octopuses have long suspected that the animals' arms may have minds of their own.

A new model being presented here is the first attempt at a comprehensive representation of information flow between the octopus's suckers, arms and brain, based on previous research in octopus neuroscience and behavior, and new video observations conducted in the lab.

The new research supports previous findings that octopus' suckers can initiate action in response to information they acquire from their environment, coordinating with neighboring suckers along the arm. The arms then process sensory and motor information, and muster collective action in the peripheral nervous system, without waiting on commands from the brain.

The result is a bottom-up, or arm-up, decision mechanism rather than the brain-down mechanism typical of vertebrates, like humans, according to Dominic Sivitilli, a graduate student in behavioral neuroscience and astrobiology at the University of Washington in Seattle who will present the new research Wednesday at the 2019 Astrobiology Science Conference (AbSciCon 2019).

The researchers ultimately want to use their model to understand how decisions made locally in the arms fit into the context of complex behaviors like hunting, which also require direction from the brain.

"One of the big picture questions we have is just how a distributed nervous system would work, especially when it's trying to do something complicated, like move through fluid and find food on a complex ocean floor. There are a lot of open questions about how these nodes in the nervous system are connected to each other," said David Gire, a neuroscientist at the University of Washington and Sivitilli's advisor for the project.

Long an inspiration for science-fictional, tentacled aliens from outer space, the octopus may be as alien an intelligence as we can meet on Earth, Sivitilli said. He believes understanding how the octopus perceives its world is as close as we can come to preparing to meet intelligent life beyond our planet.

"It's an alternative model for intelligence," Sivitilli said. "It gives us an understanding as to the diversity of cognition in the world, and perhaps the universe."

The octopus exhibits many behaviors similar to those of vertebrates, like humans, but its nervous system architecture is fundamentally different, because it evolved after vertebrates and invertebrates parted evolutionary ways more than 500 million years ago.

Vertebrates arranged their central nervous system in a cord up the backbone, leading to highly centralized processing in the brain. Cephalopods, like the octopus, evolved multiple concentrations of neurons called ganglia, arranged in a distributed network throughout the body. Some of these ganglia grew more dominant, evolving into a brain, but the underlying distributed architecture persists in the octopus's arms, and throughout its body.

"The octopus' arms have a neural ring that bypasses the brain, and so the arms can send information to each other without the brain being aware of it," Sivitilli said. "So while the brain isn't quite sure where the arms are in space, the arms know where each other are and this allows the arms to coordinate during actions like crawling locomotion."

Of the octopus' 500 million neurons, more than 350 million are in its eight arms. The arms need all that processing power to manage incoming sensory information, to move and to keep track of their position in space. Processing information in the arms allows the octopus to think and react faster, like parallel processors in computers.

Sivitilli works with the largest octopus in the world, the Giant Pacific octopus, as well as the smaller East Pacific red, or ruby, octopus. Both species are native to Puget Sound off Seattle's coast and the Salish Sea, and have learning and problem-solving capabilities analogous to those studied in crows, parrots and primates.

To entertain the octopuses and study their movements, Sivitilli and his colleagues gave the octopuses interesting, new objects to investigate, like cinder blocks, textured rocks, Legos and elaborate mazes with food inside. His research group is looking for patterns that reveal how the octopus' nervous system delegates among the arms as the animal approaches a task or reacts to new stimuli, looking for clues to which movements are directed by the brain and which are managed from the arms.

Sivitilli employed a camera and a computer program to observe the octopus as it explored objects in its tank and looked for food. The program quantifies movements of the arms, tracking how the arms work together in synchrony, suggesting direction from the brain, or asynchronously, suggesting independent decision-making in each appendage.
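The synchrony idea can be illustrated with a minimal sketch: given angular-speed traces for two arms, their correlation is one rough measure of whether they share a common drive. This is an illustrative toy, not the lab's actual tracking software, and the traces here are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)                       # 10 s of samples

# Two arms receiving a shared signal (e.g. from the brain), plus noise,
# and a third arm moving independently under local control.
common = np.sin(2 * np.pi * 0.5 * t)
arm_a = common + 0.2 * rng.normal(size=t.size)
arm_b = common + 0.2 * rng.normal(size=t.size)
arm_c = rng.normal(size=t.size)

def synchrony(x, y):
    """Pearson correlation of two angular-speed traces."""
    return float(np.corrcoef(x, y)[0, 1])

print(synchrony(arm_a, arm_b))   # high: consistent with a common signal
print(synchrony(arm_a, arm_c))   # near zero: consistent with independent control
```

In practice a pipeline like the one described would work on arm trajectories extracted from video rather than simulated sinusoids, but the interpretive logic — high synchrony suggesting central direction, low synchrony suggesting local decision-making — is the same.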

"You're seeing a lot of little decisions being made by these distributed ganglia, just by watching the arm move, so one of the first things we're doing is trying to break down what that movement actually looks like, from a computational perspective," Gire said. "What we're looking at, more than what's been looked at in the past, is how sensory information is being integrated in this network while the animal is making complicated decisions."

Credit: 
American Geophysical Union

Analyzing the tweets of Republicans and Democrats

New Stanford linguistics research has analyzed how Republicans and Democrats use different language when discussing mass shootings on social media and found that Republicans talk more about the shooter and Democrats focus more on the victims.

Focusing on posts shared on the social media platform Twitter, the researchers found that Republicans tended to concentrate on breaking news reports and on event-specific facts in their tweets while Democrats centered on discussing potential policy changes, according to the new study, presented at a computational linguistics conference in June.

"We live in a very polarized time," said the study's co-author Dan Jurafsky, professor of linguistics and of computer science. "Understanding what different groups of people say and why is the first step in determining how we can help bring people together. This research can also help us figure out how polarization spreads and how it changes over time."

Researchers examined 4.4 million tweets posted in response to 21 different mass shooting events, including the Orlando nightclub shooting in 2016, to determine what words and emotions people with different political leanings expressed.

They found that Republicans were more likely to express fear and disgust in their tweets than Democrats, who were more likely to communicate sadness and calls for action. Republicans were also 25 percent more likely than Democrats to write "terrorist" in tweets about the shootings in which the shooter was African American, Hispanic or Middle Eastern. Democrats were 25 percent more likely to use the same word when they tweeted about shootings in which the shooter was white.

Studying tweets

Researchers launched the study because they had three main questions: What is different about how Democrats and Republicans talk on Twitter? Could Republicans or Democrats be identified based on particular words they use in their tweets? How could these differences help understand the causes and consequences of social media polarization?

To answer those questions, researchers used a method developed by Stanford economist Matthew Gentzkow together with Brown University economist Jesse Shapiro, who are co-authors on the new study, and economist Matt Taddy. The method determines the degree of polarization in speech, and it was used in previous research that examined the speech of members of Congress.

The researchers applied the method, along with a language processing framework they created, to a database of 4.4 million tweets about 21 mass shooting events that happened between 2015 and 2018. The researchers excluded retweets, and they determined whether a Twitter user was a Republican or Democrat by analyzing whether they followed more Republican or more Democratic politicians' accounts.
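The party-assignment step described above — labeling a user by whichever party's politicians they follow more of — can be sketched as follows. The account lists here are invented placeholders, not the study's actual data or rules.

```python
# Hypothetical rosters of politicians' accounts (placeholders, not real handles).
REPUBLICAN_ACCOUNTS = {"rep_account_1", "rep_account_2", "rep_account_3"}
DEMOCRAT_ACCOUNTS = {"dem_account_1", "dem_account_2", "dem_account_3"}

def infer_party(followed):
    """Label a user by which party's politicians they follow more of.

    `followed` is the set of accounts the user follows. Returns None on a
    tie or no signal, in which case the user would be excluded.
    """
    rep = len(followed & REPUBLICAN_ACCOUNTS)
    dem = len(followed & DEMOCRAT_ACCOUNTS)
    if rep > dem:
        return "Republican"
    if dem > rep:
        return "Democrat"
    return None

print(infer_party({"rep_account_1", "rep_account_2", "dem_account_1"}))
```

A simple majority count like this yields a binary label, which matches the limitation the authors note later: users are categorized as Republican or Democrat rather than placed along an ideological spectrum.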

Researchers chose to focus on responses to mass shootings because "they are events with objective facts, the meanings of which people twist in different ways," said Dora Demszky, the lead author on the study and a Stanford linguistics graduate student. The interdisciplinary team of co-authors also includes James Zou, assistant professor of biomedical data science, linguistics graduate student Rob Voigt and electrical engineering graduate student Nikhil Garg.

Researchers found that when people mentioned an earlier shooting as a way to contextualize the new shooting, Democrats were 2.7 times more likely than Republicans to mention a previous school shooting, most often the 2012 Sandy Hook Elementary School shooting. But Republicans were 2.5 times more likely to mention an event of mass violence that involved a perpetrator who was a person of color, which most often involved a mention of the Sept. 11 attacks.

Researchers saw that the degree of polarization in the tweets increased over time in the hours and days following the events. For the three events where there was sufficient long-term data to draw conclusions, polarization plateaued usually after about three to four days, Demszky said.

"Ideological polarization happens very fast," Demszky said. "As soon as an event like a mass shooting happens, people react very differently right away. This research gives a large-scale insight into how polarization works linguistically."

Among other findings, researchers found that Democrats were more likely than Republicans to use phrases like "need to," "should," "have to" and "must" as part of their calls for political action.

The study also confirms previous research on the relationship between people's beliefs, personalities and worldviews, revealing that people with different political leanings express different emotions.

Limitations and further research

While some of the difference in speech patterns between Republicans and Democrats may be intuitive, the new study is one of the first to quantify polarization of language on social media in the hours and days after major events, Jurafsky and Demszky said.

"In order to think about how we could fix the echo chambers that social media creates, we need data on how polarization happens," Demszky said.

Further research is needed to understand the linguistic differences among Republicans and Democrats.

One limitation of the new study is that researchers categorized each Twitter user they analyzed as either Republican or Democrat rather than locating them along an ideological spectrum.

Demszky said she hopes that talking about language bias could be helpful in and of itself.

"It's easy to not reflect on the words you use daily," Demszky said. "But I think it's a good step forward if people are just aware of their own biases."

Credit: 
Stanford University

Massachusetts General study identifies pathway linking socioeconomic status to cardiovascular risk

A biological pathway previously found to contribute to the impact of stress on the risk of cardiovascular disease also may underlie the increased incidence of such disease experienced by individuals with lower socioeconomic status. The report from investigators at Massachusetts General Hospital (MGH), published online in the Journal of the American College of Cardiology, is a follow-up to a 2017 Lancet paper by some of the same authors that, for the first time in humans, linked activity of the stress-responsive brain structure the amygdala to elevated risk of events such as heart attack and stroke.

"This new study identifies a potentially modifiable biological pathway that contributes to the increased burden of cardiovascular disease that encumbers socioeconomically disadvantaged individuals," says Ahmed Tawakol, MD, director of Nuclear Cardiology in the MGH Division of Cardiology, lead author of the paper. "These observations point to a mechanism that may be an attractive target for future therapies aimed at reducing disparities in health outcomes."

The increased incidence of cardiovascular disease among those of lower socioeconomic status is well known. While some of that risk can be attributed to known risk factors - including rates of smoking and obesity and limited access to care - the greater risk persists even after adjusting for those factors, indicating that additional biological factors may be in play. The 2017 study used PET/CT brain imaging with a radiopharmaceutical that measures both activity within the brain and arterial inflammation to define a pathway that led from amygdala activation, to elevated production of immune cells, to increased arterial inflammation resulting in an increased risk of heart attack, stroke or angina among 293 individuals who had brain imaging for cancer screening.

The current study focused on 289 of the participants in the 2017 study for whom data was available reflecting their socioeconomic status, based on census data covering their home ZIP codes. The team's analysis revealed that individuals from neighborhoods with lower household incomes or higher crime rates had a significantly increased risk of experiencing a major adverse cardiac event - such as a heart attack, unstable angina, heart failure or cardiac death - in the four years after imaging. While many of those experiencing a cardiovascular event had traditional risk factors, living in a low-income neighborhood was strongly associated with increased resting amygdalar activity, elevated immune cell production and arterial inflammation. A similar association of those factors with neighborhood crime rates was observed, although it was not statistically significant.

"These results provide further support for considering socioeconomic status when assessing an individual's risk for cardiovascular disease and suggest new approaches to helping reduce cardiovascular risk among those patients," says Tawakol, an associate professor of Medicine at Harvard Medical School (HMS). "The multiple nodes of the biological pathway that we have defined - brain stress centers, immune cell production and arterial inflammation - could each be targeted by lifestyle approaches such as sufficient sleep, exercise and meditation; statins to reduce arterial inflammation, and novel treatments targeting this path."

He and his team hope to further define this biological pathway, evaluate interventions that could reduce its activity and investigate the resiliency observed in a few individuals with low socioeconomic status who were found to have lower than average amygdalar activity and cardiovascular risk.

Katrina Armstrong, MD, chief of the MGH Department of Medicine and senior author of the Journal of the American College of Cardiology paper, says, "These analyses highlight the incredible opportunity we have to apply recent advances in the understanding of human biology to addressing disparities in health." Armstrong is the Jackson Professor of Clinical Medicine at HMS.

Credit: 
Massachusetts General Hospital

Goat milk kefir is proven to be good for your health

Kefir is a fermented dairy product that is gradually becoming more common on the shelves of Spanish shops and supermarkets. Since it is a milk-based product, produced by lactic acid and alcoholic fermentation, it is thought to have several health-enhancing properties resulting from its content of proteins and bioactive peptides (molecules made up of amino acids, smaller than proteins, that are beneficial for one's health).

However, to date there had never been a complete analysis of what kinds of peptides goat milk kefir contains. So, a University of Cordoba research team made up of researchers from the Biochemistry, Proteomics and Biology of Plant and Agroforestry Systems Group as well as from the Central Service for Research Support (abbreviated to SCAI in Spanish), led by Professor Manuel Rodríguez, decided to characterize the peptidome (the set of peptides) of this product in order to open the doors to the study of kefir's positive characteristics.

To accomplish this detailed analysis, they focused on 22 proteins and applied tandem mass spectrometry to kefir at three fermentation times (12, 24 and 36 hours) to detect, in addition to the advantageous compounds, the concentration peaks depending on fermentation time. A gradual increase in peptide content was found to occur during the first 24 hours of fermentation. At the 24-hour mark, the concentration was highest and then began to decrease.

Once the peptides present in goat milk kefir and their quantities according to fermentation time were determined, the University of Cordoba team identified 11 beneficial compounds related to antihypertensive, antioxidant and antibacterial activity.

These kinds of exploratory studies will enable different research teams to continue delving deeper into understanding the health benefits of this product, and may well breathe new life into the goat sector.

Credit: 
University of Córdoba

Helping physics teachers who don't know physics

COLUMBUS, Ohio--A shortage of high school physics teachers has led to teachers with little-to-no physics training taking over physics classrooms, causing additional stress and job dissatisfaction for those teachers--and a difficult learning experience for their students.

But new research indicates that focused physics professional development for teachers--even those who have no prior physics training--can lead to better experiences for both students and teachers, and can improve students' understanding of physics concepts.

The study, published last month in the Journal of Science Teacher Education, followed two groups of advanced-placement science teachers as they went through three years of training. The program was designed to improve their understanding of physics concepts and to assist them in developing teaching strategies to help their students better retain what they learn about physics.

Justina Ogodo, the study's author and postdoctoral researcher at The Ohio State University's Department of Teaching and Learning, said that when she launched this project, she remembered being a physics student in high school, and being uninspired by the education she received.

"I truly hated physics, because my teacher would speak to the board--he would teach to the board," she said. "I imagined students were having the same experience I had, because the teachers don't have the content knowledge or pedagogical skills to teach physics."

Ogodo wanted to understand how a teacher's subject-matter knowledge could affect a student's ability to learn and understand. She followed a group of advanced-placement physics teachers through intensive physics professional development funded by the National Science Foundation, then compared their teaching practices and student outcomes with AP teachers who did not attend the courses.

To evaluate the teachers, Ogodo used the Reformed Teaching and Observation Protocol (RTOP) instrument, which has been in use as a teacher-evaluation tool since 2000. Ogodo used the instrument to measure each teacher's effectiveness in five categories: lesson design and implementation, content, classroom culture, communicative interactions and student/teacher relationships. She found that teachers who completed the training earned scores about 40 percent higher than teachers who did not participate in the professional development.

Prior to the training, Ogodo found, most teachers used "traditional, teacher-centered methods" to teach. Those methods include lectures, note-taking and problem-solving activities--methods designed to complete the AP curriculum and focused on the AP exam.

Ogodo observed that teachers who completed the course were more likely to teach their students using conceptual learning techniques and the Socratic method--an approach driven by inquiry-based teaching and learning--along with hands-on labs to help students see the real-world applications of the theories they learned.

The teachers who did not complete the training, Ogodo found, continued to fall back on lectures and standardized labs.

The shortage of physics teachers is severe. Across the United States, just 47 percent of physics teachers have physics degrees or physics education, according to the National Science Foundation.

And in Alabama, where this study was conducted, the problem is worse: Just 9 percent of physics teachers there have physics degrees or certification in physics education.

"They are just thrown into the physics classrooms to teach," Ogodo said. "That means they are not equipped to teach physics, and that can be frustrating for both teachers and students."

The results can be harmful, Ogodo found. Some teachers in Ogodo's study reported feeling a lack of confidence in their abilities, especially when teaching physics concepts they did not understand, and suggested that these feelings could lead to teacher burn-out. Ogodo also found that teachers' lack of knowledge can diminish students' interest in physics.

But in classrooms led by teachers who participated in the intensive physics education training, teachers reported feeling greater satisfaction in teaching physics and greater trust in their abilities.

Previous studies about science and education have shown that students' ability to achieve in any subject is directly connected to the quality and effectiveness of their teachers.

Ogodo said this study shows that increasing training for teachers will likely lead to better outcomes for students and to greater numbers of students seeking futures in the sciences.

"One student told me she likes to write, and that she wanted to be a creative writer, but that after taking this physics class with her teacher who had learned these better techniques, she wants to be a physics teacher," Ogodo said. "That just made my day."

Credit: 
Ohio State University

Hubble finds tiny 'electric soccer balls' in space, helps solve interstellar mystery

image: This is an artist's concept depicting the presence of buckyballs in space. Buckyballs, which consist of 60 carbon atoms arranged like soccer balls, have been detected in space before by scientists using NASA's Spitzer Space Telescope. The new result is the first time an electrically charged (ionized) version has been found in the interstellar medium.

Image: 
NASA/JPL-Caltech

Scientists using NASA's Hubble Space Telescope have confirmed the presence in space of electrically charged molecules shaped like soccer balls, shedding light on the mysterious contents of the interstellar medium (ISM) - the gas and dust that fills interstellar space.

Since stars and planets form from collapsing clouds of gas and dust in space, "The diffuse ISM can be considered as the starting point for the chemical processes that ultimately give rise to planets and life," said Martin Cordiner of the Catholic University of America, Washington. "So fully identifying its contents provides information on the ingredients available to create stars and planets." Cordiner, who is stationed at NASA's Goddard Space Flight Center in Greenbelt, Maryland, is lead author of a paper on this research published April 22nd in the Astrophysical Journal Letters.

The molecules identified by Cordiner and his team are a form of carbon called "Buckminsterfullerene," also known as "Buckyballs," which consists of 60 carbon atoms (C60) arranged in a hollow sphere. C60 has been found in some rare cases on Earth in rocks and minerals, and can also turn up in high-temperature combustion soot.

C60 has been seen in space before. However, this is the first time an electrically charged (ionized) version has been confirmed to be present in the diffuse ISM. The C60 gets ionized when ultraviolet light from stars tears off an electron from the molecule, giving the C60 a positive charge (C60+). "The diffuse ISM was historically considered too harsh and tenuous an environment for appreciable abundances of large molecules to occur," said Cordiner. "Prior to the detection of C60, the largest known molecules in space were only 12 atoms in size. Our confirmation of C60+ shows just how complex astrochemistry can get, even in the lowest density, most strongly ultraviolet-irradiated environments in the Galaxy."

Life as we know it is based on carbon-bearing molecules, and this discovery shows complex carbon molecules can form and survive in the harsh environment of interstellar space. "In some ways, life can be thought of as the ultimate in chemical complexity," said Cordiner. "The presence of C60 unequivocally demonstrates a high level of chemical complexity intrinsic to space environments, and points toward a strong likelihood for other extremely complex, carbon-bearing molecules arising spontaneously in space."

Most of the ISM is hydrogen and helium, but it's spiked with many compounds that haven't been identified. Since interstellar space is so remote, scientists study how it affects the light from distant stars to identify its contents. As starlight passes through space, elements and compounds in the ISM absorb and block certain colors (wavelengths) of the light. When scientists analyze starlight by separating it into its component colors (spectrum), the colors that have been absorbed appear dim or are absent. Each element or compound has a unique absorption pattern that acts as a fingerprint allowing it to be identified. However, some absorption patterns from the ISM cover a broader range of colors, which appear different from any known atom or molecule on Earth. These absorption patterns are called Diffuse Interstellar Bands (DIBs). Their identity has remained a mystery ever since they were discovered by Mary Lea Heger, who published observations of the first two DIBs in 1922.

A DIB can be assigned by finding a precise match with the absorption fingerprint of a substance in the laboratory. However, there are millions of different molecular structures to try, so it would take many lifetimes to test them all.

"Today, more than 400 DIBs are known, but (apart from the few newly attributed to C60+), none has been conclusively identified," said Cordiner. "Together, the appearance of the DIBs indicate the presence of a large amount of carbon-rich molecules in space, some of which may eventually participate in the chemistry that gives rise to life. However, the composition and characteristics of this material will remain unknown until the remaining DIBs are assigned."

Decades of laboratory studies had failed to find a precise match with any DIBs until the work on C60+. In the new work, the team was able to match the absorption pattern seen from C60+ in the laboratory to that from Hubble observations of the ISM, confirming the recently claimed assignment by a team from the University of Basel, Switzerland, whose laboratory studies provided the required C60+ comparison data. The big problem in detecting C60+ with conventional, ground-based telescopes is that atmospheric water vapor blocks the view of the C60+ absorption pattern. Orbiting above most of the atmosphere, however, the Hubble telescope has a clear, unobstructed view. Nevertheless, the team still had to push Hubble far beyond its usual sensitivity limits to stand a chance of detecting the faint fingerprints of C60+.

The observed stars were all blue supergiants, located in the plane of our Galaxy, the Milky Way. The Milky Way's interstellar material is primarily located in a relatively flat disk, so lines of sight to stars in the Galactic plane traverse the greatest quantities of interstellar matter, and therefore show the strongest absorption features due to interstellar molecules.

The detection of C60+ in the diffuse ISM supports the team's expectation that very large, carbon-bearing molecules are likely candidates to explain many of the remaining, unidentified DIBs. This suggests that future laboratory efforts should measure the absorption patterns of compounds related to C60+ to help identify some of the remaining DIBs.

The team is seeking to detect C60+ in more environments to see just how widespread buckyballs are in the Universe. According to Cordiner, based on their observations so far, it seems that C60+ is very widespread in the Galaxy.

Credit: 
NASA/Goddard Space Flight Center

These neurons affect how much you do, or don't, want to eat

Like a symphony, multiple brain regions work in concert to regulate the need to eat. University of Arizona researchers believe they have identified a symphony conductor - a brain region that regulates appetite suppression and activation - tucked within the amygdala, the brain's emotional hub.

The UA Department of Neuroscience team found the neurocircuitry controlling appetite loss, called anorexia, said assistant professor Haijiang Cai, who is a member of the BIO5 Institute and heads up the neuroscience lab that ran the study.

Anorexia can be triggered by disease-induced inflammation, and can negatively impact recovery and treatment success. It is harmful to quality of life and increases morbidity in many diseases, the authors wrote. The paper, "A bed nucleus of stria terminalis microcircuit regulating inflammation-associated modulation of feeding," was published June 24 in Nature Communications.

To determine whether specific neurons within the amygdala control feeding behavior, the researchers inhibited the neurons, which increased appetite. They then activated the neurons, causing a decrease in appetite.

"By silencing the neurons within the circuit, we can effectively block feeding suppression caused by inflammation to make patients eat more," Cai said. "We used anorexia for simplification, but for people with obesity, we can activate those neurons to help them eat less. That's the potential impact of this kind of study."

Feeding sounds simple, but it's not, Cai related. People feel hunger either to satisfy nutritional deficits or for the reward of eating something good. Once food is found, we check that it's good before chewing and swallowing. After a certain point, we feel satisfied.

Theoretically, each step is controlled by different neurocircuitry.

"This circuitry we found is really exciting because it suggests that many different parts of brain regions talk to each other," Cai said. "We can hopefully find a way to understand how these different steps of feeding are coordinated."

The brain region was identified in mouse models. The next step is to find it in humans and validate that the same mechanisms exist. If they do, scientists can then look for ways to control feeding activity, Cai said.

Credit: 
University of Arizona

Santorini volcano, a new terrestrial analogue of Mars

image: On the island of Santorini, basaltic rocks similar to those found by the Curiosity rover in Gale crater on Mars have been discovered.

Image: 
NASA/JPL-Caltech/MSSS

One of the great attractions of the island of Santorini, in Greece, lies in its spectacular volcanic landscape, which also contains places similar to those on Mars. A team of European and U.S. scientists discovered this after analysing basaltic rocks collected in one of the island's coves.

The Greek island of Santorini is now one of the most popular tourist destinations in the Mediterranean, but 3,600 years ago it suffered one of the largest volcanic eruptions recorded in history. Among the material that has been exposed, scientists have now found rocks similar to those of Mars.

"In the Balos Cove -located to the south of the island - we have discovered basalts such as those that have been identified by the rovers on Mars and with properties similar to those of certain meteorites from the red planet and those of terrestrial rocks classified as Martian analogues," points out Ioannis Baziotis, a researcher at the Agricultural University of Athens and co-author of the study, recently published in Icarus journal.

More specifically, the authors have confirmed that this basaltic material is equivalent to that located by the Spirit and Curiosity rovers in the Gusev and Gale craters of the red planet, and that its chemical and mineralogical composition resembles that of genuinely Martian meteorites (olivine-phyric shergottites) and similar Martian samples included in The International Space Analogue Rockstore (ISAR), a collection of terrestrial rocks used to test and calibrate instruments that will fly on space missions.

"The basalts of this cove and other, similar ones that we have also found in two areas northeast of Santorini are quite abundant," explains Baziotis, "so they can serve as an accessible and low-cost resource for experiments, instead of using the rare and expensive olivine-phyric shergottites collected on Earth or material laboriously prepared from synthetic mixes".

"Optical microscopy and geochemical analyses show that the basalts of Balos Cove are viable analogues for characterising geological processes and chemical and mineralogical properties of materials present on the Martian surface," says another author, Anezina Solomonidou, a researcher at the European Space Astronomy Centre (ESAC) run by the European Space Agency (ESA) near Madrid.

"In addition -she adds-, this area of the island is easily accessible and offers excellent logistics for sampling, testing and calibration instruments, field training and other activities related to current and future Mars exploration."

Along with its tourist appeal, Santorini has thus become an excellent destination for comparative planetary studies, a field which, according to Solomonidou, "plays an important role both in characterising geologically distant exotic worlds, such as planets and moons, and in better understanding our own planet."

Credit: 
Spanish Foundation for Science and Technology

Biochip advances enable next-generation sequencing technologies

image: Biochips are driving next-generation DNA sequencing technologies, and this powerful combination is capable of solving unique and important biological problems, such as single-cell, rare-cell or rare-molecule analysis, which next-generation sequencing can't do on its own. In APL Bioengineering, researchers from Seoul National University explore the role advancements in biochip technology are playing in driving groundbreaking scientific discoveries and breakthroughs in medicine via next-generation sequencing, aka high-throughput sequencing. This image shows perspectives on potential biochips used for next generation sequencing for promising applications in biotechnology.

Image: 
Amos Chungwon Lee

WASHINGTON, D.C., June 25, 2019 -- Biochips are essentially tiny laboratories designed to function inside living organisms, and they are driving next-generation DNA sequencing technologies. This powerful combination is capable of solving unique and important biological problems, such as single-cell, rare-cell or rare-molecule analysis, which next-generation sequencing can't do on its own.

Now that biochip technologies have demonstrated their power in scaling and throughput, the next trend will involve biochips capable of providing applications across a wide spectrum -- from identifying rare bacteria to population-based clinical studies.

In APL Bioengineering, from AIP Publishing, a group of researchers from Seoul National University explore the role advancements in biochip technology are playing in driving groundbreaking scientific discoveries and breakthroughs in medicine via next-generation sequencing, aka high-throughput sequencing.

"There have been many experimental successes and failures within the biochip field, and one of the most important things we point out in our review is that these were not done in vain," said Amos Chungwon Lee, lead author and a graduate student in Sunghoon Kwon's Biophotonics and Nano Engineering Laboratory. "These technologies are now moving forward and being applied in real-world settings."

One example of real-world use is biochips that isolate single cells from a heterogeneous mix of a complex biological mass -- they are enabling preprocessing for massively parallel single-cell analysis.

"When these biochips meet next-generation sequencing, the net value of both technologies will increase exponentially," Lee said. "If biochips that allow single-cell analysis prevailed for the last decade, there are many more biochips with different functions that will now innovate the bio field -- including drug-screening biochips."

In their paper, the researchers provide a review of current state-of-the-art biochip technologies, separated into two main categories: microfluidic- and micromanipulation-based methods.

"Microfluidic-based methods use micrometer-sized fluidic channels to compartmentalize biological targets to analyze each and every target, while micromanipulation-based methods use optical devices to pick out the biological target of interest," explained Lee. "The biggest differences between these methods are the throughput and depth. Microfluidic-based methods have higher throughput, but the micromanipulation-based methods can provide deeper data for the biological targets."

In the future, biochips will become widely available to the public.

"Like smartphones, biochips will be used by people on a daily basis to check their health or nutritional status," said Lee. "In the same way that smartphones have shifted the paradigm for information flow, smart biochips will play a key role in revolutionizing the system for the acquisition and interpretation of Mother Nature's information flow."

Credit: 
American Institute of Physics

Study funded by NIH supports optimal threshold for diagnosing COPD

image: COPD is the 4th leading cause of death in the US.

Image: 
National Heart, Lung, and Blood Institute

A new study provides evidence to support a simple measurement for diagnosing clinically significant airflow obstruction, the key characteristic of chronic obstructive pulmonary disease (COPD), the fourth leading cause of death in the United States. The study found that a 70% ratio between two indicators of lung function proved as accurate as, or more accurate than, other thresholds for predicting COPD-related hospitalizations and deaths.

The study was funded by the National Heart, Lung, and Blood Institute (NHLBI), part of the National Institutes of Health, and its findings were published online today in the Journal of the American Medical Association. Approximately 16 million Americans have COPD, and it is estimated that millions more have the disease and do not know it.

The research, which draws on a wide range of multi-ethnic studies, validates current guidelines from major respiratory societies and contributes to identifying a fixed threshold of disease severity. This approach has led to great strides in the early detection and treatment of other conditions such as hypertension and diabetes.

"Diagnosis of airflow obstruction remains a major hurdle to improving care for patients with COPD," said James Kiley, Ph.D., director of the NHLBI Division of Lung Diseases. "This validation of a fixed threshold confirms the usefulness of a simple approach for assessment of the disease. As we celebrate the 50th anniversary of the Division of Lung Diseases, this rigorous analysis of populations-based, multiethnic studies is yet another example of research we fund that improves clinical practice, public health, and patient care."

To monitor lung function and gauge the severity of a lung disease, doctors use spirometry, a test that measures several indicators. Those include the ratio of forced expiratory volume in one second (FEV1) - the amount of air exhaled forcefully in one second - over forced vital capacity (FVC) - the full amount of air that can be forcefully exhaled in a complete breath. The two values are usually proportional, and lower ratios are seen in individuals with obstructive lung diseases, such as asthma or COPD.
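The fixed-threshold rule the study validates can be written down directly: an FEV1/FVC ratio below 0.70 flags airflow obstruction. The lung-volume figures in this sketch are illustrative, not patient data.

```python
# Sketch of the fixed 70% spirometry threshold described above.
# FEV1 and FVC are in litres; the example values are hypothetical.

def airflow_obstruction(fev1, fvc, threshold=0.70):
    """Return True if the FEV1/FVC ratio falls below the fixed threshold."""
    return (fev1 / fvc) < threshold

print(airflow_obstruction(fev1=2.1, fvc=3.5))  # True: ratio 0.60, obstruction
print(airflow_obstruction(fev1=3.2, fvc=4.0))  # False: ratio 0.80, no obstruction
```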

The researchers aimed to determine how accurate various thresholds were in predicting COPD-related hospitalizations and mortality. To that end, the NHLBI Pooled Cohorts Study analyzed data from four U.S. population-based studies that collected spirometry results and followed participants for COPD-related clinical events. The study included 24,207 adult participants, of whom 54% were women, 69% white, and 24% black.

"The selection of a threshold for defining airflow obstruction has major implications for patient care and public health, as the prevalence of the condition could vary by more than a third depending on the metric used," said study author Elizabeth C. Oelsner, M.D., M.P.H., the Herbert Irving Assistant Professor of Medicine at Columbia University, New York City. "Defining 'normal' lung function is very challenging in diverse and changing populations, and certain approaches might interpret low levels of lung function as normal in women, non-whites, or the elderly. We were able to show that a simple fixed threshold worked well in our study's very diverse sample, which improves the generalizability of our results."

The researchers said that establishing an easy-to-use diagnostic threshold is not only critical to improving the adoption of spirometry in primary care; it may also result in earlier detection and treatment for patients.

Credit: 
NIH/National Heart, Lung and Blood Institute

Lifelong ill-health after exposure to chemical weapons

image: This is Faraidoun Moradi, PhD student of occupational and environmental medicine at Sahlgrenska Academy, University of Gothenburg, registered pharmacist and specialist resident doctor in general medical practice.

Image: 
Siri Sjolin

People exposed to chemical warfare agents (CWAs) often incur chronic damage to their lungs, skin and eyes, for example. They also frequently suffer from depression, anxiety and suicidal thoughts. This is shown by research on survivors of the 1988 gas attacks against the Kurdish city of Halabja in Iraq.

"The findings show that exposure to chemical warfare agents, especially sulfur mustard, results in lifelong physical and mental ill-health," says Faraidoun Moradi, a doctoral student of occupational and environmental medicine at Sahlgrenska Academy, University of Gothenburg, Sweden.

Faraidoun Moradi is a registered pharmacist, a specialist resident doctor in general medical practice, and the first author of the current article in the journal Plos One.

Sulfur mustard (SM, also known as mustard gas) and other chemical warfare agents (CWAs) remain a threat to human safety. Today, there are tens of thousands of patients, mainly in the Middle East, suffering from lasting damage after exposure to chemical weapons.

In the second half of the 1980s, SM was used on a large scale in Iraq. The most notorious and severe gas attacks were against the city of Halabja, where some 5,000 people died and tens of thousands were injured.

The qualitative study now published is based on in-depth interviews with 16 patients diagnosed with chronic pulmonary complications, aged 34 to 67, in Halabja. The group who conducted the study comprised researchers in medicine, psychology, and anthropology at the University of Gothenburg and Martin Luther University Halle-Wittenberg, Germany.

The victims suffer from severely impaired health, both physical and mental. As well as respiratory problems, insomnia, fatigue and eye problems, they also have depressive symptoms, anxiety, suicidal thoughts and post-traumatic stress disorder (PTSD).

The researchers refer to "chemical contamination anxiety," a powerful reaction to exposure among these people that has limited their family lives, social relations and work capacity. Unemployment and loss of social capital have, in turn, led to social isolation.

"Our conclusion is that holistic care of the victims and, above all, detection of their somatic and mental ill-health, can minimize the deterioration in their health," Moradi thinks.

He also emphasizes the fact that hundreds of Kurdish and Syrian victims of gassing with SM have migrated to Sweden, and may need care and monitoring in the Swedish primary care services.

"Studies of SM-exposed patients in Sweden, and their symptoms, experience and care needs, are lacking. We need more knowledge in this area to be able to improve their reception and clinical treatment by the care services, and be prepared to deal with incidents in the future," Moradi concludes.

Credit: 
University of Gothenburg