
Awareness without a sense of self

In the context of meditation practice, meditators can experience a state of "pure awareness" or "pure consciousness," in which they perceive consciousness itself. This state can be experienced in various ways, but it evidently incorporates specific sensations as well as non-specific accompanying perceptions, feelings, and thoughts. These are just some of the findings of the most extensive survey of meditators ever conducted on the experience of pure consciousness, recently published in PLOS ONE.

The study was conducted by Professor Thomas Metzinger from the Department of Philosophy at Johannes Gutenberg University Mainz (JGU) and Dr. Alex Gamma from the Psychiatric Hospital of the University of Zurich. They designed an online questionnaire comprising more than a hundred questions and asked thousands of meditators worldwide to answer it. "The goal of our research was not to learn more about meditation. We are interested in human consciousness," said Metzinger. "Our working hypothesis was that pure consciousness is the simplest form of conscious experience. And our goal was to develop a minimal model explanation of human conscious experience on the basis of this hypothesis." The study is part of the international Minimal Phenomenal Experience (MPE) project led by Metzinger.

The online questionnaire was made available in five languages - German, English, French, Spanish, and Italian - and was completed by approximately 3,600 meditators in 2020. In addition to questions about the participants themselves, such as gender, age, and meditation techniques used, the questionnaire consisted of 92 questions about their experience of pure awareness. Instructing the meditators to select and focus on one particular experience of pure awareness, the questionnaire included questions like: "Did you experience sensations of temperature?", "Were you in a positive mood?", or "Did you experience thoughts?". Each of these could be answered on a scale from "no" to "yes, very intensely" via a slider bar. Of the questionnaires Metzinger and Gamma received back, 1,400 were filled out in full and could therefore be used for a factor analysis, a statistical evaluation they employed to identify groups of questions that were frequently answered in a similar manner. "This led us to identify twelve groups, which in turn allowed us to name twelve factors that characterize pure consciousness," Metzinger explained. "According to this scheme, typical characteristics of pure consciousness seem to be, for example, the perception of silence, clarity, and an alert awareness without egoic self-consciousness. Time, effort, and desire, which can certainly occur in parallel, are experienced somewhat less explicitly."
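As a rough illustration of the statistical step described above, the sketch below runs an exploratory factor analysis on simulated questionnaire data. The response matrix, preprocessing, and factor-extraction choices are stand-in assumptions, not the study's actual pipeline.

```python
# Minimal sketch of an exploratory factor analysis like the one described
# above. The responses are simulated stand-ins for the ~1,400 complete
# questionnaires (92 items rated on a "no" to "yes, very intensely" slider).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 1400, 92, 12

# Simulated responses: latent factor scores times item loadings, plus noise.
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(responses)

# Items loading strongly on the same factor were answered similarly;
# inspecting such groups is how named factors like "silence" or "clarity"
# would emerge from the real data.
for k in range(3):  # show the first three factors
    top_items = np.argsort(-np.abs(fa.components_[k]))[:5]
    print(f"Factor {k + 1}: top questionnaire items {top_items.tolist()}")
```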

"Based on these twelve factors, we can now develop a prototypical minimal model of human consciousness," said Metzinger. In addition, the study opens up numerous avenues for further research. Neuroscientists from the USA, Australia, and Switzerland, for instance, have already inquired whether they can use the questionnaire for their own research. For his own part, Metzinger hopes to discover whether pure consciousness - that is, the quality of consciousness in itself - is also experienced in situations other than meditation: "The responses we received also included personal reports suggesting that pure consciousness is also experienced in other situations, such as during accidents and serious illness, at the threshold between sleep and wakefulness, or when immersed in play as a child."

Credit: 
Johannes Gutenberg Universitaet Mainz

Researchers develop tool that may help in understanding inaccuracy in eyewitness testimony

video: University of Toronto Department of Psychology researchers have developed a new tool called a "scene wheel" to help understand how we perceive and remember visual experiences such as witnessing crimes or accidents.

Image: 
Faculty of Arts & Science/University of Toronto

TORONTO, ON - Researchers at the University of Toronto have developed an innovative tool to aid in the investigation of how we perceive and remember visual experiences.

The new tool, referred to as a "scene wheel," will help researchers study how accurately we construct mental representations of visual experiences for later retrieval -- for example, how well an eyewitness recalls details of a crime or accident.

"We know that eyewitness testimony is not reliable," says Gaeun Son. "With the new scene wheel, we can start to characterize the specific nature of those memory failures."

Son is a PhD student in the Faculty of Arts & Science's Department of Psychology and lead author of a paper published in Behavior Research Methods that describes the scene wheel methodology.

"Studying how people perceive and remember the world requires careful control of the physical stimuli presented in experiments," says Michael Mack. "This kind of control isn't difficult in experiments using simple stimuli like colour. But it's very challenging for more complex, realistic scenes."

Mack and Dirk Bernhardt-Walther are both professors of psychology in the department and co-authors of the study.

Traditional experiments in this field involve test subjects performing tasks such as identifying which colour or which arrangement of graphic symbols most resembles a previously viewed colour or graphic. While these methods provide some insight, their simplicity imposes a fundamental limit to what they can reveal.

The scene wheel moves into a whole new experimental realm by using highly realistic images that more closely simulate our day-to-day visual experiences -- while still providing the rigorous control needed.

The wheel is a continuous, looping series of gradually changing images depicting typical domestic spaces: dining rooms, living rooms and bedrooms. The images are detailed and realistic, and vary continuously in subtle ways: tables subtly transform into desks, mirrors become framed pictures, walls become windows, etc.

The collaborators used deep-learning methods in computer vision -- specifically, generative adversarial networks (GANs) -- to create the images and arrange them in a continuous "spectrum" analogous to a 360-degree colour wheel.

"The success of this project is all thanks to the recent revolution in deep-learning fields," says Son. "Especially in GANs which is the same sort of approach used in creating so-called 'deep fake' videos in which one person's face is very realistically replaced with someone else's."

To test whether their approach worked, the researchers had subjects view a still image of a scene from the wheel for one second, followed by a blank screen. Next, the subjects were presented with a scene similar to the one they just viewed.

The subjects then altered the second image by moving their cursor in a circle around it. As they moved their cursor, the scene changed. Subjects were asked to stop their cursor when the image matched their memory of the original image.

"With the scene wheel, we've provided a new experimental bridge that brings more of the richness of everyday experience into a controlled experimental setting," says Son. "We anticipate that our method will allow researchers to test the validity of classic findings in the field that are based on experiments using simple stimuli."

What's more, the approach could lead to other applications. For example, it could potentially lead to a wheel that uses faces instead of rooms. Such a "face wheel" could take the place of police lineups, which are not particularly reliable in identifying individuals.

Says Mack, "Our method will allow for a better understanding of how precise that identification of individuals actually is."

Credit: 
University of Toronto

Traditional Japanese food may hold building blocks of COVID-19 treatments

image: Researchers from the Tokyo University of Agriculture and Technology found that an extract made from natto appears to digest the binding site of SARS-CoV-2, inhibiting the virus's ability to infect cells. More research is needed to identify the molecular mechanism of this inhibition.

Image: 
Part of figure adapted from Biochemical and Biophysical Research Communications Volume 570, 17 September 2021, Pages 21-25. © 2021 The Authors. Published by Elsevier Inc.

Natto, a fermented soybean dish often served for breakfast in Japan, originated at the turn of the last millennium but may hold an answer to a modern problem: COVID-19, according to a new study based on cell cultures.

Long thought to contribute to longer, healthier lives across Japan -- the country with the longest life expectancy on Earth, where more than a quarter of the population is aged 65 years or older -- natto was previously found to be a diet staple among those least likely to die from stroke or cardiac disease. Now, researchers have found that an extract made from the sticky, strong-smelling natto may inhibit the ability of the virus that causes COVID-19 to infect cells.

The team published its results on July 13th in Biochemical and Biophysical Research Communications.

"Traditionally, Japanese people have assumed that natto is beneficial for their health," said paper author Tetsuya Mizutani, director of the Center for Infectious Disease Epidemiology and Prevention Research at the Tokyo University of Agriculture and Technology (CEPiR-TUAT). "In recent years, research studies have revealed scientific evidence for this belief. In this study, we investigated natto's antiviral effects on SARS-CoV-2, the virus that causes COVID-19, and bovine herpesvirus 1 (BHV-1), which causes respiratory disease in cattle."

Natto is made by fermenting soybeans with Bacillus subtilis, a bacterium found in plants and soil. The researchers prepared two natto extracts from the food, one with heat and one without. They applied the extracts to sets of lab-cultured cells from cattle and from humans. One set was infected with SARS-CoV-2, while the other set was infected with BHV-1.

When treated with the natto extract made without heat, both SARS-CoV-2 and BHV-1 lost the ability to infect cells. However, neither virus appeared to be affected by the heat-treated natto extract.

"We found what appears to be a protease or proteases -- proteins that metabolize other proteins -- in the natto extract directly digests the receptor binding domain on the spike protein in SARS-CoV-2," Mizutani said, noting that the protease appears to break down in heat, losing the ability to digest proteins and letting the virus remain infectious.

The spike protein sits on the virus's surface and binds to a receptor on host cells. With an inactive spike protein, SARS-CoV-2 cannot infect healthy cells. The researchers found a similar effect on BHV-1.

"We also confirmed that the natto extract has the same digestive effects on the receptor binding domain proteins of the SARS-CoV-2 mutated strains, such as the Alpha variant," Mizutani said.

While the results are promising, Mizutani said, he also cautioned that further studies are needed to identify the exact molecular mechanisms at work. He also stressed that the research does not provide any evidence of reduced viral infection simply by eating natto. Once the components are identified and their functions verified, the researchers plan to advance their work to clinical studies in animal models.

"Although there are vaccines for COVID-19, we do not know how they effective they may be against every variant," Mizutani said. "It will also take time to vaccinate everyone, and there are still reports of breakthrough cases, so we need to make treatments for those who develop COVID-19. This work may offer a big hint for such pharmaceutical design."

Credit: 
Tokyo University of Agriculture and Technology

Researchers discover nucleotide sequence responsible for effectively fighting pathologies

image: The pattern discovered by the researchers. Letters that grow upwards represent error-free microRNA processing, while those growing downwards represent a processing pattern with errors. The bigger the letter, the stronger the correlation.

Image: 
Nersisyan S. et al.

Researchers from HSE University have discovered nucleotide sequences characteristic of microRNA isoforms (microRNAs with errors). The discovery will help predict errors in microRNA behaviour and create drugs that can detect targets (such as viruses) more effectively. The results of the study have been published in the RNA Biology journal.

MicroRNAs (miRNAs) are very small molecules that regulate all the processes in a cell, including the transformation of inherited information into RNA or proteins (gene expression). Each microRNA has its own unique set of targets--genes whose activity it can suppress. Recent studies show that even slight changes in microRNA nucleotide sequences (so-called microRNA isoforms or isomiRs) can completely reshape the set of targets. This can drastically alter the biological function of the molecule. However, until recently, researchers did not know why some microRNAs have isoforms, while others do not.

HSE Faculty of Biology and Biotechnology researchers Anton Zhiyanov, Stepan Nersisyan, and Alexander Tonevitsky applied bioinformatics methods to find the answer to this question. The team managed to create an algorithm that characterizes the fundamental differences between microRNAs that have isoforms and those that do not.

Their study also has important applications for the creation of artificial molecules similar to microRNAs. Dozens of research teams across the globe are currently working to solve this problem. Researchers artificially synthesize molecules that are similar to microRNAs (so-called short hairpin RNAs or shRNAs) in order to 'knock down' the gene they are interested in. In addition to having academic applications, this technology is also used in therapy to suppress 'bad' genes that cause diseases.

The authors of the study demonstrated that such artificially synthesized molecules can also have isoforms.

'Some combinations of nucleotides (AGCU, AGUU) are most often found in microRNAs where no errors occur. Combinations such as CCAG and some of its variations can predict changes and target failure with up to 70% precision. Sequencing short hairpin RNAs from our own experiments revealed that they also have isoforms. This means that it is possible to have a situation where we invent a molecule with a specific list of targets, but in practice, isoforms appear with unintended targets of their own. Our algorithm helps predict such events at the computer analysis stage without having to carry out costly experiments,' said Stepan Nersisyan, Junior Research Fellow at the HSE International Laboratory of Microphysiological Systems.
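The published algorithm is not reproduced here, but a toy version of the motif screen described in the quote might look like the sketch below. The motif lists and decision rule are illustrative assumptions built only from the tetramers mentioned above.

```python
# Toy sketch: flag candidate small-RNA sequences whose processing may be
# error-prone, based on the tetramers mentioned in the article. The real
# algorithm and its scoring are more involved; this is illustrative only.
CLEAN_MOTIFS = {"AGCU", "AGUU"}  # associated with error-free processing
ERROR_MOTIFS = {"CCAG"}          # associated with isoform formation

def motif_counts(seq, motifs, k=4):
    """Count occurrences of length-k motifs in an RNA sequence."""
    return sum(seq[i:i + k] in motifs for i in range(len(seq) - k + 1))

def likely_has_isoforms(seq):
    """Crude heuristic: more error-associated than clean-associated motifs."""
    return motif_counts(seq, ERROR_MOTIFS) > motif_counts(seq, CLEAN_MOTIFS)

for rna in ["UGAGCUAGUUAGG", "UCCAGCCAGGAUU"]:
    verdict = "check for isoforms" if likely_has_isoforms(rna) else "likely clean"
    print(rna, "->", verdict)
```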

Credit: 
National Research University Higher School of Economics

Toxic facility relocation depends on community pressure

URBANA, Ill. - No one wants to live near a toxic plant. Toxic-releasing facilities such as paper, pulp, and other manufacturing plants negatively affect human health, environmental quality, and property values. And communities with lower income and educational attainment are more likely to house such facilities.

Since mandatory reporting about toxic facilities became publicly available in 1990, affected communities have increasingly expressed concern through the media, and engaged in targeted collective action and "toxic torts" lawsuits for health and environmental damages.

New research from the University of Illinois explores the effects of community pressure on the relocation of toxic-releasing facilities.

"Current studies usually focus on the question of where new plants choose to locate. Our research looks at whether facilities make relocation decisions based on the socioeconomic characteristics of the community," explains Xiao Wang, graduate student alum from the Department of Agricultural and Consumer Economics (ACE) at U of I and lead author on the paper. Co-authors include Madhu Khanna, ACES distinguished professor in ACE; George Deltas, Department of Economics at U of I; and Xiang Bi, University of Florida.

Using the Toxics Release Inventory (TRI), an environmental disclosure program that makes information about these facilities publicly available, the study looks at how much community pressure can "push and pull" facilities to relocate, thus adding to environmental injustice when they move into disadvantaged communities. This effect results in a "redistribution of pollution," the researchers note.

Information disclosure programs like the TRI help to empower communities and the public to impose pressure on facilities to improve environmental performance. However, even with the TRI available to the general public, there are imbalances across communities, due to unintended consequences of the TRI as well as more ingrained socioeconomic inequities.

"Our study finds the extent to which communities can push out toxic plants differs based on their socioeconomic characteristics. After the disclosure of toxic release information, socially disadvantaged communities were less able to do so, and this resulted in facilities moving out of communities with higher educational status and income to those with lower," Wang explains.

The researchers found that toxic-releasing facilities are more likely to move from communities with high population density, income, and education level. On the flipside, a plant is more likely to migrate into communities with lower population density, income, and education level. Furthermore, the move is often associated with the plant's growth in both size and emissions. A facility will even plan to move in anticipation of the public disclosure of the TRI and the expected community backlash.

The authors conclude that policymakers need to consider the side effects of public disclosure programs like the TRI.

"Our study shows that TRI has the effect of redistributing pollution across locations. Policy makers need to consider the potential side effect of such a regulatory tool on distributional justice and strengthen the channels for vulnerable populations to voice their concerns about facility location in addition to strengthening zoning laws and regulations," Khanna states.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Sandia designs better batteries for grid-scale energy storage

image: Postdoctoral researcher Martha Gross works in an argon glove box with a test battery cell illustrating a lab-scale sodium iodide battery. The Sandia National Laboratories research team developed a new sodium iodide catholyte solution (purple liquid) and a special ceramic separator to allow the molten sodium battery to operate at 230 degrees Fahrenheit (110 degrees Celsius).

Image: 
Photo by Randy Montoya/Sandia National Laboratories

ALBUQUERQUE, N.M. -- Researchers at Sandia National Laboratories have designed a new class of molten sodium batteries for grid-scale energy storage. The new battery design was shared in a paper published today in the scientific journal Cell Reports Physical Science.

Molten sodium batteries have been used for many years to store energy from renewable sources, such as solar panels and wind turbines. However, commercially available molten sodium batteries, called sodium-sulfur batteries, typically operate at 520-660 degrees Fahrenheit. Sandia's new molten sodium-iodide battery instead operates at a much cooler 230 degrees Fahrenheit (110 degrees Celsius).

"We've been working to bring the operating temperature of molten sodium batteries down as low as physically possible," said Leo Small, the lead researcher on the project. "There's a whole cascading cost savings that comes along with lowering the battery temperature. You can use less expensive materials. The batteries need less insulation and the wiring that connects all the batteries can be a lot thinner."

However, the battery chemistry that works at 550 degrees doesn't work at 230 degrees, he added. Among the major innovations that allowed this lower operating temperature was the development of what he calls a catholyte. A catholyte is a liquid mixture of two salts, in this case, sodium iodide and gallium chloride.

Basics of building better batteries

A basic lead-acid battery, commonly used as a car ignition battery, has a lead plate and a lead dioxide plate with a sulfuric acid electrolyte in the middle. As energy is discharged from the battery, the lead plate reacts with sulfuric acid to form lead sulfate and electrons. These electrons start the car and return to the other side of the battery, where the lead dioxide plate uses the electrons and sulfuric acid to form lead sulfate and water. For the new molten sodium battery, the lead plate is replaced by liquid sodium metal, and the lead dioxide plate is replaced by a liquid mixture of sodium iodide and a small amount of gallium chloride, said Erik Spoerke, a materials scientist who has been working on molten sodium batteries for more than a decade.

When energy is discharged from the new battery, the sodium metal produces sodium ions and electrons. On the other side, the electrons turn iodine into iodide ions. The sodium ions move across a separator to the other side where they react with the iodide ions to form molten sodium iodide salt. Instead of a sulfuric acid electrolyte, the middle of the battery is a special ceramic separator that allows only sodium ions to move from side to side, nothing else.
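As a sketch of the discharge chemistry this description implies, the textbook-style half-reactions below are consistent with the paragraph above; they are an illustration, not equations taken from the paper.

```latex
\begin{align*}
\text{anode (sodium side):}     \quad & \mathrm{Na \longrightarrow Na^{+} + e^{-}} \\
\text{cathode (catholyte side):}\quad & \mathrm{I_{2} + 2\,e^{-} \longrightarrow 2\,I^{-}} \\
\text{overall discharge:}       \quad & \mathrm{2\,Na + I_{2} \longrightarrow 2\,NaI}
\end{align*}
```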

"In our system, unlike a lithium ion battery, everything is liquid on the two sides," Spoerke said. "That means we don't have to deal with issues like the material undergoing complex phase changes or falling apart; it's all liquid. Basically, these liquid-based batteries don't have as limited a lifetime as many other batteries."

In fact, commercial molten sodium batteries have lifetimes of 10-15 years, significantly longer than standard lead-acid batteries or lithium ion batteries.

Long-lasting batteries that are safer

Sandia's small, lab-scale sodium-iodide battery was tested for eight months inside an oven. Martha Gross, a postdoctoral researcher who has worked on the laboratory tests for the past two years, conducted experiments charging and discharging the battery more than 400 times over those eight months.

Because of the COVID-19 pandemic, they had to pause the experiment for a month and let the molten sodium and the catholyte cool down to room temperature and freeze, she said. Gross was pleased that after warming the battery up, it still worked.

This means that if a large-scale energy disruption were to occur, like what occurred in Texas in February, the sodium-iodide batteries could be used, and then allowed to cool until frozen. Once the disruption was over, they could be warmed up, recharged and returned to normal operation without a lengthy or costly start-up process, and without degradation of the battery's internal chemistry, Spoerke added.

Sodium-iodide batteries are also safer. Spoerke said, "A lithium ion battery catches on fire when there is a failure inside the battery, leading to runaway overheating of the battery. We've proven that cannot happen with our battery chemistry. Our battery, if you were to take the ceramic separator out, and allow the sodium metal to mix with the salts, nothing happens. Certainly, the battery stops working, but there's no violent chemical reaction or fire."

If an outside fire engulfs a sodium-iodide battery, it is likely the battery will crack and fail, but it shouldn't add fuel to the fire or cause a sodium fire, Small added.

Additionally, at 3.6 volts, the new sodium-iodide battery has a 40% higher operating voltage than a commercial molten sodium battery. This voltage leads to higher energy density, and that means that potential future batteries made with this chemistry would need fewer cells, fewer connections between cells and an overall lower unit cost to store the same amount of electricity, Small said.

"We were really excited about how much energy we could potentially cram into the system because of the new catholyte we're reporting in this paper," Gross added. "Molten sodium batteries have existed for decades, and they're all over the globe, but no one ever talks about them. So, being able to lower the temperature and come back with some numbers and say, 'this is a really, really viable system' is pretty neat."

The future of sodium-iodide batteries

The next step for the sodium-iodide battery project is to continue to tune and refine the catholyte chemistry to replace the gallium chloride component, Small said. Gallium chloride is very expensive, more than 100 times as expensive as table salt.

The team is also working on various engineering tweaks to get the battery to charge and discharge faster and more fully, Spoerke added. One previously identified modification to speed up the battery charging was to coat the molten sodium side of the ceramic separator with a thin layer of tin.

Spoerke added that it would likely take five to 10 years to get sodium-iodide batteries to market, with most of the remaining challenges being commercialization challenges, rather than technical challenges.

"This is the first demonstration of long-term, stable cycling of a low-temperature molten-sodium battery," Spoerke said. "The magic of what we've put together is that we've identified salt chemistry and electrochemistry that allow us to operate effectively at 230 degrees Fahrenheit. This low-temperature sodium-iodide configuration is sort of a reinvention of what it means to have a molten sodium battery."

The development of the new sodium battery was supported by the Department of Energy's Office of Electricity Energy Storage Program.

Credit: 
DOE/Sandia National Laboratories

A new model of coral reef health

image: The Global Reef Expedition provided the data needed to model and map coral reef health and resiliency across the South Pacific.

Image: 
© Keith A. Ellenbogen/iLCP

Scientists have developed a new way to model and map the health of coral reef ecosystems using data collected on the Khaled bin Sultan Living Oceans Foundation's Global Reef Expedition. This innovative method, presented today at the International Coral Reef Symposium (ICRS), can determine which natural and anthropogenic factors are most likely to lead to persistently vibrant coral and fish communities. Their findings can help scientists identify the reefs most likely to survive in a changing world.

The new models are a first step in being able to produce maps of global coral reef resilience.

To create these models, scientist Anna Bakker needed a lot of data on coral reefs from many countries across the South Pacific. She found what she needed in the data collected on the Khaled bin Sultan Living Oceans Foundation's (KSLOF's) Global Reef Expedition--a 10-year research mission that circumnavigated the globe to study the health and resiliency of coral reefs in an attempt to address the coral reef crisis. This expedition was an ambitious undertaking, involving more than 200 scientists from around the world who worked together with local experts to survey more than 1,000 reefs in 16 countries. In addition to a treasure-trove of standardized and geo-referenced data on coral cover, fish biomass, and algae, the expedition produced over 65,000 square kilometers of high-resolution marine habitat maps that KSLOF verified with data collected in the field.

"This research would not have been possible without the massive amount of field data collected on the Global Reef Expedition," said Anna Bakker, a Ph.D. student at the University of Miami's Rosenstiel School of Marine and Atmospheric Science and the lead author of the study. "The sheer amount of benthic and fish observations collected in the field from around the world enabled us to take a holistic look at indicators of coral reef health--such as coral cover or fish biomass--and figure out what drives one coral reef to be healthier than another."

Bakker pooled all of the data collected in the South Pacific--from Palau, the Solomon Islands, New Caledonia, Fiji, Tonga, the Cook Islands, and French Polynesia--into her models. She correlated fish biomass as well as coral and algal cover with dozens of natural and anthropogenic factors such as wave energy, degree heating weeks, human density, and protected areas to determine which factors were driving differences observed in the marine environment.
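A minimal sketch of this kind of driver screen, using a made-up per-reef table, appears below; the study's models go well beyond pairwise correlations, and every column name and value here is illustrative.

```python
# Sketch of correlating reef-health indicators with candidate natural and
# human drivers. The table is invented; the expedition's real dataset has
# far more reefs and variables.
import pandas as pd

reefs = pd.DataFrame({
    "fish_biomass":   [120.0, 85.5, 200.1, 60.3],  # kg/ha (hypothetical)
    "coral_cover":    [0.45, 0.30, 0.55, 0.20],    # fraction of benthos
    "algal_cover":    [0.10, 0.25, 0.08, 0.40],
    "sst_mean":       [27.1, 28.9, 26.5, 29.4],    # water temperature, deg C
    "wave_energy":    [3.2, 1.1, 4.0, 0.9],
    "human_density":  [5.0, 120.0, 2.0, 300.0],    # people per km^2 nearby
    "protected_area": [1, 0, 1, 0],                # any protection, yes/no
})

indicators = ["fish_biomass", "coral_cover", "algal_cover"]
drivers = ["sst_mean", "wave_energy", "human_density", "protected_area"]

# Correlate each health indicator with each candidate driver.
print(reefs.corr().loc[indicators, drivers].round(2))
```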

She found that there were only a few major factors that drove variations in the health of the coral reef community. Fish biomass was primarily driven by biophysical drivers such as water temperature. In contrast, the algal cover was driven mainly by anthropogenic factors, with significantly less macroalgae in places with more terrestrial and marine protected areas. Coral cover was more complicated and correlated with many natural and anthropogenic factors, chiefly human population, development, and water temperature.

This short list of the most important drivers can help conservationists prioritize what factors need to be addressed to protect coral reefs and identify which reefs are more likely to thrive in a rapidly changing world.

"Anna's work is an important step in the scaling of local measures of reef resilience to a global appraisal," said Sam Purkis, KSLOF's Chief Scientist as well as Professor and Chair of the Department of Marine Geosciences at the University of Miami's Rosenstiel School of Marine and Atmospheric Science.

The model results presented today covered coral reefs in the South Pacific. In future research, Bakker intends to use the same modeling approach to identify the primary drivers of coral reef ecosystem health and of the status of their fish and coral communities, and to model and map the potential resilience of coral reefs in the Indian and Pacific Oceans and their associated seas.

These coral reef resiliency maps will provide marine managers with critical information on where to focus their limited time and resources to protect coral reefs. At the same time, the analysis of the primary socio-environmental factors that influence the health of coral and reef fish communities will provide conservationists with insight into what measures they can use to preserve these reefs before it is too late.

"When we embarked on the Global Reef Expedition in 2006, we had no idea our data could be used in this way. The technology to develop and run these kinds of models did not exist yet. But we knew we were collecting valuable data that needed to stand the test of time," said Alexandra Dempsey, the Director of Science Management at KSLOF and one of the authors of today's research presentation. "The Global Reef Expedition mission gave us the chance to study some of the most remote and pristine coral reefs in the world. Now, it is providing the conservation community with the information we need to identify the reefs that can be saved before it is too late to save them. Let's not waste this opportunity."

Credit: 
Khaled bin Sultan Living Oceans Foundation

Toward one drug to treat all coronaviruses

Safe and effective vaccines offer hope for an end to the COVID-19 pandemic. However, the possible emergence of vaccine-resistant SARS-CoV-2 variants, as well as novel coronaviruses, make finding treatments that work against all coronaviruses as important as ever. Now, researchers reporting in ACS' Journal of Proteome Research have analyzed viral proteins across 27 coronavirus species and thousands of samples from COVID-19 patients, identifying highly conserved sequences that could make the best drug targets.

Drugs often bind inside "pockets" on proteins that hold the drug snugly, causing it to interfere with the protein's function. Scientists can identify potential drug-binding pockets from the 3D structures of viral proteins. Over time, however, viruses can mutate their protein pockets so that drugs no longer fit. But some drug-binding pockets are so essential to the protein's function that they can't be mutated, and these sequences are generally conserved over time in the same and related viruses. Matthieu Schapira and colleagues wanted to find the most highly conserved drug-binding pockets in viral proteins from COVID-19 patient samples and from other coronaviruses, revealing the most promising targets for pan-coronavirus drugs.

The team used a computer algorithm to identify drug-binding pockets in the 3D structures of 15 SARS-CoV-2 proteins. The researchers then found corresponding proteins in 27 coronavirus species and compared their sequences in the drug-binding pockets. The two most conserved druggable sites were a pocket overlapping the RNA binding site of the helicase nsp13, and a binding pocket containing the catalytic site of the RNA-dependent RNA polymerase nsp12. Both of these proteins are involved in viral RNA replication and transcription. The drug-binding pocket on nsp13 was also the most highly conserved across thousands of SARS-CoV-2 samples taken from COVID-19 patients, with not a single mutation. The researchers say that novel antiviral drugs targeting the catalytic site of nsp12 are currently in phase II and III clinical trials for COVID-19, and that the RNA binding site of nsp13 is a previously underexplored target that should be a high priority for drug development.
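As a rough sketch of that conservation comparison, the snippet below scores per-position conservation in a toy alignment of pocket residues. The sequences are invented; the study's analysis spans 27 coronavirus species and thousands of patient samples.

```python
# Score how conserved each position of a drug-binding pocket is across an
# alignment. A position where every sequence shares one residue scores 1.0.
from collections import Counter

# Hypothetical aligned pocket residues (one row per virus/sample).
pocket_alignment = [
    "KDCTR",
    "KDCTR",
    "KDCSR",
    "KDCTR",
]

def conservation(column):
    """Fraction of sequences sharing the most common residue at a position."""
    counts = Counter(column)
    return counts.most_common(1)[0][1] / len(column)

for i, column in enumerate(zip(*pocket_alignment)):
    print(f"position {i + 1}: {conservation(column):.2f} conserved")
```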

Credit: 
American Chemical Society

Researchers find immune component to rare neurodegenerative disease

image: Immunofluorescent staining of Npc1-deficient mouse cerebellum, with Purkinje neurons in red, activated microglia in green and nucleus in blue. Massive microglia infiltration and loss of Purkinje neurons cause severe neurological disease in Niemann-Pick disease type C.

Image: 
Ting-Ting Chu

UT Southwestern researchers have identified an immune protein tied to the rare neurodegenerative condition known as Niemann-Pick disease type C. The finding, made in mouse models and published online in Nature, could offer a powerful new therapeutic target for Niemann-Pick disease type C, a condition that was identified more than a century ago but still lacks effective treatments.

"Niemann-Pick disease has never been considered an immune disorder," says study leader Nan Yan, Ph.D., associate professor of immunology and microbiology. "These findings put it in a whole new light."

Niemann-Pick disease type C, which affects about 1 in every 150,000 people worldwide, has long been considered a disease of cholesterol metabolism and distribution, a topic well-studied at UT Southwestern, where faculty members Michael Brown, M.D., and Joseph Goldstein, M.D., won the Nobel Prize in 1985 for their discovery of low-density lipoprotein (LDL) receptors, which led to the development of statin drugs.

When the Npc1 gene is mutated, cholesterol isn't sent where it's needed in cells, causing the progressive decline in motor and intellectual abilities that characterize Niemann-Pick. Yan's lab, which doesn't study cholesterol metabolism, made its discovery by chance while researching an immune protein known as STING, short for stimulator of interferon genes.

STING is a critical part of the body's defense against viruses, typically relying on another protein known as cyclic GMP-AMP synthase (cGAS) to sense DNA and turn on immune genes to fight off viral invaders. The cGAS enzyme was identified at UT Southwestern.

STING journeys to different organelles to perform various tasks before it ends up in lysosomes, which serve as cellular garbage dumps. Disposal of STING is critical for an appropriate immune response, explains Yan; research from his lab and others has shown that when STING isn't properly discarded, it continues to signal immune cells, leading to a variety of autoimmune conditions.

To determine what proteins interact with STING as it travels through cells, Yan and his colleagues used a technique called proximity labeling, which causes other proteins around a protein of interest to glow. After analyzing their data, Yan's team was surprised to find that STING interacts with a protein that's located on the surface of lysosomes and is produced by the Npc1 gene.

Because STING had never been implicated in Niemann-Pick disease type C, Yan and his team investigated whether it might play a role. The researchers removed the gene for STING from mice in which the Npc1 gene had also been deleted. Deleting Npc1 typically causes progressive problems in motor function, but animals with both the Npc1 and Sting genes deleted remained healthy.

Further research suggested that the protein produced by Npc1 has a binding site for STING that allows it to enter lysosomes for disposal. When the protein produced by Npc1 is missing, STING remains in cells, propagating Niemann-Pick disease type C. When Yan and his colleagues analyzed cells from human Niemann-Pick disease type C patients, they found that several immune-stimulating genes were overactive, as would be expected if STING disposal was defective.

In addition, Yan found that STING signaling is activated independently of cGAS in Niemann-Pick disease. This expands STING biology beyond its conventional role in host defense against infection.

Yan says that his lab and others are investigating the use of experimental drugs that inhibit STING to treat various autoimmune conditions. These compounds may also be useful for Niemann-Pick disease type C.

"If we can demonstrate that these compounds are effective in our animal models," Yan says, "we may be able to offer an effective therapy to Niemann-Pick disease patients."

Credit: 
UT Southwestern Medical Center

How does the structure of cytolysins influence their activity?

image: First author and van der Donk lab member Imran Rahman (left) with Richard E. Heckert Professor of Chemistry Wilfred van der Donk.

Image: 
Jillian Nickel

Although Enterococcus faecalis is usually an innocuous member of the bacterial community in the human gut, it can also cause several infections, including liver disorders. The bacteria produce cytolysins, which are molecules that destroy cells. In a new study, researchers have uncovered how they do so.

"Your chances of dying increase by 5-fold when you get infected by E. faecalis that can make cytolysin compared to those that cannot," said Wilfred van der Donk (MMG), a professor of chemistry and investigator of the Howard Hughes Medical Institute. "Cytolysin is an important molecule and it has been known since the 1930s, our lab determined the cytolysin structure only in 2013."

Concerningly, E. faecalis is resistant to vancomycin, which is used as a last resort to treat bacterial infections. By understanding how cytolysins affect cells, the researchers hope to prevent its production and reduce the number of lethal infections.

Cytolysin is made up of two subunits, CylLL" and CylLS", which have been previously shown to kill both mammalian and bacterial cells. The structure of the subunits is stabilized with the help of rings, called macrocycles, that staple the ends and prevent the structure from unfolding. To understand how they work, the researchers replaced each amino acid in both these subunits to determine which amino acids are important.

"After mutating the amino acid residues, we purified and tested each of the mutants to see whether they had anti-bacterial activity or if they could lyse rabbit blood cells," said Imran Rahman, a graduate student in the van der Donk lab and the first author of the paper. "We found that the macrocyclizations in both subunits are important for both activities."

Additionally, the researchers discovered that CylLL" contains a hinge region which is also important. "The hinge contains three consecutive glycine residues and if we delete them, CylLL" becomes so unstable that we can no longer purify it," Rahman said.

The residues help CylLL" switch between two different shapes: a jackknife and an elongated form that can span across the bacterial membrane. "When it's longer, the subunit can make holes in the membrane and we think that's why the glycine residues are required for its activity," van der Donk said.

The researchers are interested in identifying the targets of cytolysin. "These molecules are unusual because unlike our current antibiotics, which bind to big cellular targets, these bind to small molecules and seem to use them to make holes in the membrane," van der Donk said. "We don't know what their targets are and we're working to find them."

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

New simulator helps robots sharpen their cutting skills

video: The team devised a unique approach to simulate cutting by introducing springs between the two halves of the object being cut, represented by a mesh.

Image: 
Eric Heiden/NVIDIA.

Researchers from the University of Southern California (USC) Department of Computer Science and NVIDIA have unveiled a new simulator for robotic cutting that can accurately reproduce the forces acting on a knife as it slices through common foodstuffs, such as fruit and vegetables. The system could also simulate cutting through human tissue, offering potential applications in surgical robotics. The paper was presented at the Robotics: Science and Systems (RSS) Conference 2021 on July 16, where it received the Best Student Paper Award.

In the past, researchers have had trouble creating intelligent robots that replicate cutting. One challenge: in the real world, no two objects are the same, and current robotic cutting systems struggle with variation. To overcome this, the team devised a unique approach to simulate cutting by introducing springs between the two halves of the object being cut, represented by a mesh. These springs are weakened over time in proportion to the force exerted by the knife on the mesh.
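A toy version of that spring-weakening rule appears below; the constants and the per-step knife-force model are invented, and the team's differentiable simulator is far more detailed.

```python
# Toy sketch of the spring-damage idea: springs connecting the two halves
# of the cut mesh lose stiffness in proportion to the force the knife
# exerts on them, and a fully softened spring counts as cut.
import numpy as np

rng = np.random.default_rng(0)
n_springs = 8
stiffness = np.full(n_springs, 1000.0)  # N/m, initial stiffness per spring
softening_rate = 0.01                   # fractional loss per newton of force

for step in range(200):
    # Hypothetical knife force on each spring at this time step.
    knife_force = np.abs(rng.normal(5.0, 1.0, size=n_springs))
    # Weaken each spring in proportion to the force exerted on it.
    stiffness *= np.maximum(0.0, 1.0 - softening_rate * knife_force)

print("springs effectively cut:", int(np.sum(stiffness < 1.0)))
```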

"What makes ours a special kind of simulator is that it is 'differentiable,' which means that it can help us automatically tune these simulation parameters from real-world measurements," said lead author Eric Heiden, a PhD in computer science student at USC. "That's important because closing this reality gap is a significant challenge for roboticists today. Without this, robots may never break out of simulation into the real world."

To transfer skills from simulation to reality, the simulator must be able to model a real system. In one of the experiments, the researchers used a dataset of force profiles from a physical robot to produce highly accurate predictions of how the knife would move in real life. In addition to applications in the food processing industry, where robots could take over dangerous tasks like repetitive cutting, the simulator could improve force haptic feedback accuracy in surgical robots, helping to guide surgeons and prevent injury.

"Here, it is important to have an accurate model of the cutting process and to be able to realistically reproduce the forces acting on the cutting tool as different kinds of tissue are being cut," said Heiden. "With our approach, we are able to automatically tune our simulator to match such different types of material and achieve highly accurate simulations of the force profile." In ongoing research, the team is applying the system to real-world robots.

Credit: 
University of Southern California

New framework applies machine learning to atomistic modeling

Northwestern University researchers have developed a new framework using machine learning that improves the accuracy of interatomic potentials -- the guiding rules describing how atoms interact -- in new materials design. The findings could lead to more accurate predictions of how new materials transfer heat, deform, and fail at the atomic scale.

Designing new nanomaterials is an important aspect of developing next-generation devices used in electronics, sensors, energy harvesting and storage, optical detectors, and structural materials. To design these materials, researchers create interatomic potentials through atomistic modeling, a computational approach that predicts how these materials behave by accounting for their properties at the smallest level. The process of establishing a material's interatomic potential -- called parameterization -- has required significant chemical and physical intuition, leading to less accurate predictions in new materials design.

The researchers' platform minimizes user intervention by employing multi-objective genetic algorithm optimization and statistical analysis techniques to screen promising interatomic potentials and parameter sets.
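One building block of such a multi-objective search is a Pareto screen that keeps only the parameter sets no other set beats on every objective at once. The sketch below illustrates that filter on invented (energy error, force error) pairs; the genetic operators and statistical analyses layered on top are omitted.

```python
# Pareto screening sketch: keep candidate parameter sets that are not
# dominated, i.e., no other candidate is at least as good on all objectives
# and strictly better on one. Objectives here are errors (lower is better).
def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Each tuple: (energy error, force error) for one interatomic-potential
# parameter set, in arbitrary units.
population = [(0.10, 0.40), (0.20, 0.20), (0.30, 0.10), (0.25, 0.35), (0.15, 0.45)]
print(pareto_front(population))  # -> [(0.1, 0.4), (0.2, 0.2), (0.3, 0.1)]
```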

"The computational algorithms we developed provide analysts with a methodology to assess and avoid traditional shortcomings," said Horacio Espinosa, James N. and Nancy J. Farley Professor in Manufacturing and Entrepreneurship and professor of mechanical engineering and (by courtesy) biomedical engineering and civil and environmental engineering, who led the research. "They also provide the means to tailor the parameterization to applications of interest."

The findings were published in a study titled "Parametrization of Interatomic Potentials for Accurate Large Deformation Pathways Using Multi-Objective Genetic Algorithms and Statistical Analyses: A Case Study on Two-Dimensional Materials" on July 21 in Nature Partner Journals - Computational Materials.

Xu Zhang and Hoang Nguyen, both students in Northwestern Engineering's Theoretical and Applied Mechanics (TAM) graduate program, were co-first authors of the study. Other co-authors included Jeffrey T. Paci of the University of Victoria, Canada, Subramanian Sankaranarayanan of Argonne National Laboratory, and Jose Mendoza of Michigan State University.

The researchers' framework uses training and screening datasets obtained from density functional theory simulation results, followed by an evaluation step that includes principal component analysis and correlation analysis.

"We defined a sequence of steps to reach an iterative learning approach given specific optimization objectives," said Espinosa, who directs the TAM program. "Our statistical approach enables users to realize conflicting optimization goals that are important in setting limits of applicability and transferability to the parametrized potentials." These relations can reveal underlying physics behind some phenomena that seem to be irrelevant to each other.

The team identified a positive correlation between the accuracy of an interatomic potential and the complexity and number of its parameters -- a phenomenon believed to be true in the field, but previously unproven using quantitative methods. This level of complexity must be met by a commensurate amount of training data; failing to supply enough data, especially data carrying critical information, leads to reduced accuracy.

The researchers found, for example, that to improve the fidelity of interatomic potentials, non-equilibrium properties and force-matching data are required.

"This included a better description of large deformation pathways and failure in materials," Nguyen said.

"While these are not conventional properties that people target during parametrization, they are critical in understanding the reliability and functionality of materials and devices," Zhang said.

The new approach also helps remove the barrier of user experience to enter this research field. "Through this work, we hope to make a step forward by making the simulation techniques more accurately reflect the property of materials. That knowledge can be expanded upon and eventually impact the design of devices and technology we all use," Zhang said.

Next, the researchers will use their models to expand their investigation to study fracture and deformation in 2D materials, as well as the role of defect engineering in toughness enhancements. They are also developing in situ electron microscopy experiments that will reveal atomistic failure modes, providing a way to assess the predictive capabilities of the parameterized potentials.

Credit: 
Northwestern University

A history of African dust

image: Interest in African dust began over 50 years ago, when it was discovered that the dust was frequently transported across the Atlantic in great quantities. A new study chronicles the initial discovery of African dust and our current state of knowledge.

Image: 
Image: NOAA, Atlantic Oceanographic and Meteorological Laboratory

In a recently published paper, a research team, led by University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science Professor Emeritus Joseph M. Prospero, chronicles the history of African dust transport, including three independent "first" discoveries of African dust in the Caribbean Basin in the 1950s and 1960s.

Every year, mineral-rich dust from North Africa's Sahara Desert is lifted into the atmosphere by winds and carried on a 5,000-mile journey across the North Atlantic to the Americas. African dust contains iron, phosphorus and other important nutrients that are essential for life in marine and terrestrial ecosystems, including the Amazon Basin. Wind-borne mineral dust also plays an important role in climate by modulating solar radiation and cloud properties.

The researchers also discuss the discovery in the 1970s and 1980s of the link between dust transport and African climate following an increase in dust transport to the Caribbean due to the onset of severe drought in the Sahel. Much of today's dust research is focused on North Africa as it is Earth's largest and most persistent source of dust.

Today, Prospero, nicknamed the "father of dust," is using a system of ground stations and satellites to study the effect that the global transport from the Sahara has on the atmospheric composition above the Caribbean.

Credit: 
University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science

Team streamlines neural networks to be more adept at computing on encrypted data

BROOKLYN, New York, Wednesday, July 21, 2021 - This week, at the 38th International Conference on Machine Learning (ICML 21), researchers at the NYU Center for Cyber Security at the NYU Tandon School of Engineering are revealing new insights into the basic functions that drive the ability of neural networks to make inferences on encrypted data.

In the paper, "DeepReDuce: ReLU Reduction for Fast Private Inference," the team focuses on linear and non-linear operators, key features of neural network frameworks that, depending on the operation, introduce a heavy toll in time and computational resources. When neural networks compute on encrypted data, many of these costs are incurred by rectified linear activation function (ReLU), a non-linear operation.

Brandon Reagen, professor of computer science and engineering and of electrical and computer engineering, and a team of collaborators -- including Nandan Kumar Jha, a Ph.D. student, and Zahra Ghodsi, a former doctoral student under the guidance of Siddharth Garg -- developed a framework called DeepReDuce. It offers a solution through the rearrangement and reduction of ReLUs in neural networks.

Reagen explained that this shift requires a fundamental reassessment of where, and in what numbers, these components are distributed in neural network systems.

"What we are trying to do is rethink how neural nets are designed in the first place," he explained. "You can skip a lot of these time- and computationally-expensive ReLU operations and still get high performing networks at 2 to 4 times faster run time."

The team found that, compared to the state-of-the-art for private inference, DeepReDuce improved accuracy and reduced ReLU count by up to 3.5% and 3.5×, respectively.

The inquiry is not merely academic. As the use of AI grows in concert with concerns about the security of personal, corporate, and government data, neural networks are increasingly making computations on encrypted data. In such scenarios involving neural networks generating private inferences (PI) on hidden data without disclosing inputs, it is the non-linear functions that exert the highest "cost" in time and power. Because these costs increase the difficulty and time it takes for learning machines to do PI, researchers have struggled to lighten the load ReLUs exert on such computations.

The team's work builds on innovative technology called CryptoNAS. Described in an earlier paper whose authors include Ghodsi and a third Ph.D. student, Akshaj Veldanda, CryptoNAS optimizes the use of ReLUs much as one might rearrange rocks in a stream to optimize the flow of water: it rebalances the distribution of ReLUs in the network and removes redundant ones.

DeepReDuce expands on CryptoNAS by streamlining the process further. It comprises a set of optimizations for the judicious removal of ReLUs after CryptoNAS reorganization functions. The researchers tested DeepReDuce by using it to remove ReLUs from classic networks, finding that they were able to significantly reduce inference latency while maintaining high accuracy.

Reagen, with Mihalis Maniatakos, research assistant professor of electrical and computer engineering, is also part of a collaboration with data security company Duality to design a new microchip that can handle computation on fully encrypted data.

Credit: 
NYU Tandon School of Engineering

Dynamic heart model mimics hemodynamic loads, advances engineered heart tissue technology

Efforts to understand cardiac disease progression and develop therapeutic tissues that can repair the human heart are just a few areas of focus for the Feinberg research group at Carnegie Mellon University. The group's latest dynamic model, created in partnership with collaborators in the Netherlands, mimics physiologic loads on engineered heart muscle tissues, yielding an unprecedented view of how genetics and mechanical forces contribute to heart muscle function.

"Our lab has been working for a long time on engineering and building human heart muscle tissue, so we can better track how disease manifests and also, create therapeutic tissues to one day repair and replace heart damage," explains Adam Feinberg, a professor of biomedical engineering and materials science and engineering. "One of the challenges is that we have to build these small pieces of heart muscle in a petri dish, and we've been doing that for many years. What we've realized is that these in-vitro systems do not accurately recreate the mechanical loading we see in the real heart due to blood pressure."

Hemodynamic loads, or the preload (the stretch on heart muscle during chamber filling) and afterload (the load the muscle contracts against as the heart pumps), are important not only for healthy heart muscle function but can also contribute to cardiac disease progression. Preload and afterload can lead to maladaptive changes in heart muscle, as in the case of hypertension, myocardial infarction, and cardiomyopathies.

In new research published in Science Translational Medicine, the group introduces a system composed of engineered heart muscle tissue (EHT) attached to an elastic strip designed to mimic physiologic preloads and afterloads. This first-of-its-kind model shows that recreating exercise-like loading drives the formation of more functional heart muscle that is better organized and generates more force each time it contracts. However, using cells from patients with certain types of heart disease, these same exercise-like loads can result in heart muscle dysfunction.

"One of the really important things about this work is that it's a collaborative effort between our lab and collaborators in the Netherlands, including Cardiologist Peter van der Meer," says Feinberg. "Peter treats patients that have genetically-linked cardiovascular disease, including a type called arrhythmogenic cardiomyopathy (ACM) that often becomes worse with exercise. We have been able to get patient-specific induced pluripotent stem cells, differentiate these into heart muscle cells, and then use these in our new EHT model to recreate ACM in a petri dish, so we can better understand it."

Jacqueline Bliley, a biomedical engineering graduate student and co-first author of the recently published paper, adds, "The collaborative nature of this work is so important, to be able to ensure reproducibility of the research and compare findings across the world."

Looking to the future, the collaborators aim to use their model and findings to study a wide range of other heart diseases with genetic mutations, develop new therapeutic treatments and test drugs to gauge their effectiveness.

"We can take lessons learned from building the EHT in a dish to create larger pieces of heart muscle that could be used therapeutically. By combining these new results with our previous work involving 3D bioprinting heart muscle (published in Science in 2019), we hope to one day engineer tissues large and functional enough to implant, and repair the human heart," projects Feinberg.

Credit: 
College of Engineering, Carnegie Mellon University