3D reconstructions of individual nanoparticles

video: 3D density maps, atomic position maps, and strain maps of 8 reconstructed nanocrystals show critical differences between the individual particles.

Image: 
IBS

What do you see in the picture above (Figure 1)? Merely a precisely drawn three-dimensional picture of nanoparticles? Far more than that, nanotechnologists will say, thanks to a new study published in the journal Science. Whether a material catalyzes chemical reactions or impedes any molecular response is all about how its atoms are arranged. The ultimate goal of nanotechnology is centered around the ability to design and build materials atom by atom, thus allowing scientists to control their properties in any given scenario. However, atomic imaging techniques have not been sufficient to determine the precise three-dimensional atomic arrangements of materials in liquid solution, which would tell scientists how materials behave in everyday life, such as in water or blood plasma.

Researchers at the Center for Nanoparticle Research within the Institute for Basic Science (IBS, South Korea), in collaboration with Dr. Hans Elmlund at Monash University's Biomedicine Discovery Institute in Australia and Dr. Peter Ercius at Lawrence Berkeley National Laboratory's Molecular Foundry in the USA, have reported a new analytic methodology that can resolve the 3D structure of individual nanoparticles with atomic-level resolution. The 3D atomic positions of individual nanoparticles can be extracted with a precision of 0.02 nm--six times smaller than the smallest atom: hydrogen. In other words, this high-resolution method detects individual atoms and how they are arranged within a nanoparticle.

The researchers call their development 3D SINGLE (Structure Identification of Nanoparticles by Graphene Liquid cell Electron microscopy) and utilize mathematical algorithms to derive 3D structures from a set of 2D imaging data acquired by one of the most powerful microscopes on Earth. First, a nanocrystal solution is sandwiched in-between two graphene sheets which are each just a single atom thick (Figure 2.1). "If a fish bowl were made of a thick material, it would be hard to see through it. Since graphene is the thinnest and strongest material in the world, we created graphene pockets that allow the electron beam of the microscope to shine through the material while simultaneously sealing the liquid sample," explains PARK Jungwon, one of the corresponding authors of the study (assistant professor at the School of Chemical and Biological Engineering in Seoul National University).

The researchers obtain movies at 400 images per second of each nanoparticle freely rotating in liquid using a high-resolution transmission electron microscope (TEM). The team then applies their reconstruction methodology to combine the 2D images into a 3D map showing the atomic arrangement. Locating the precise position of each atom tells researchers how the nanoparticle was created and how it will interact in chemical reactions.
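The paper's reconstruction algorithm is not described here, but the basic idea of combining many 2D projections into a 3D volume can be illustrated with a naive back-projection sketch, assuming the viewing angle of each movie frame has already been estimated. The function and array names below are illustrative stand-ins, not the authors' code.

# Illustrative sketch only: naive back-projection of 2D TEM projections into a
# 3D density volume, assuming the viewing angle of each frame is already known.
# The hard part of 3D SINGLE (estimating orientations of a freely rotating
# particle) is not shown; angles_deg stands in for the output of that step.
import numpy as np
from scipy.ndimage import rotate

def backproject(projections, angles_deg, size):
    """Smear each 2D projection along its viewing axis, rotate the smear into a
    common reference frame (single tilt axis assumed), and average."""
    volume = np.zeros((size, size, size))
    for img, angle in zip(projections, angles_deg):
        # Repeat the 2D image along the viewing axis to form a 3D "smear".
        smear = np.repeat(img[np.newaxis, :, :], size, axis=0)
        # Rotate the smear so all views share one coordinate frame, then accumulate.
        volume += rotate(smear, angle, axes=(0, 1), reshape=False, order=1)
    return volume / len(projections)

# Toy usage: ten 64x64 frames from a hypothetical movie, one angle per frame.
frames = [np.random.rand(64, 64) for _ in range(10)]
angles = np.linspace(0, 180, 10, endpoint=False)
density = backproject(frames, angles, size=64)
print(density.shape)  # (64, 64, 64)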

The study defined the atomic structures of eight platinum nanoparticles - platinum is the most valuable of the precious metals, used in a number of applications such as catalytic materials for energy storage in fuel cells and petroleum refinement. Even though all of the particles were synthesized in the same batch, they displayed important differences in their atomic structures which affect their performance.

"Now it is possible to experimentally determine the precise 3D structures of nanomaterials that had only been theoretically speculated. The methodology we developed will contribute to fields where nanomaterials are used, such as fuel cells, hydrogen vehicles, and petrochemical synthesis," says Dr. KIM Byung Hyo, the first author of the study. Notably, this methodology can measure the atomic displacement and strain on the surface atoms of individual nanoparticles. The strain analysis from the 3D reconstruction facilitates characterization of the active sites of nanocatalysts at the atomic scale, which will enable structure-based design to improve the catalytic activities. The methodology can also contribute more generally to the enhancement of nanomaterials' performance.

"We have developed a groundbreaking methodology for determining the structures that govern the physical and chemical properties of nanoparticles at the atomic level in their native environment. The methodology will provide important clues in the synthesis of nanomaterials. The algorithm we introduced is related to new drug development through structure analysis of proteins and big data analysis, so we are expecting further application to new convergence research," notes Director HYEON Taeghwan of the IBS Center for Nanoparticle Research.

Credit: 
Institute for Basic Science

Wits researchers unravel the mystery of non-cotectic magmatic rocks

image: Photomicrographs showing anorthosites with 'correct' and 'wrong' proportions of chromite from the Bushveld Complex, South Africa.

Image: 
Wits University

Researchers at Wits University in Johannesburg, South Africa, have found the answer to an enigma that has had geologists scratching their heads for years.

The question is how certain magmatic rocks, which form through crystallisation in magma chambers in the Earth's crust, defy the norm and contain minerals in random proportions.

Normally, magmatic rocks consist of some fixed proportions of various minerals. Geologists know, for instance, that a certain rock will have 90% of one mineral and 10% of another mineral.

However, there are some magmatic rocks that defy this norm and do not adhere to this general rule of thumb. These rocks, known as non-cotectic rocks, contain minerals in completely random proportions.

One example is chromite-bearing anorthosite from the famous Bushveld Complex in South Africa. These rocks contain 15% to 20% chromite, instead of the roughly 1% that would normally be expected.

"Traditionally, these rocks with a 'wrong' composition were attributed to either mechanical sorting of minerals that crystallised from a single magma or mechanical mixing of minerals formed from two or more different magmas," says Professor Rais Latypov from the Wits School of Geosciences.

Seeing serious problems with both these approaches, Latypov and his colleague Dr Sofya Chistyakova, also from the Wits School of Geosciences, found that there is actually a simple explanation to this question - and it has nothing to do with the mechanical sorting or mixing of minerals to produce these rocks.

Their research, published in the journal Geology, shows that an excess amount of some minerals contained in these rocks may originate in the feeder conduits along which the magmas are travelling from the deep-seated staging chambers towards Earth's surface.

"While travelling up through the feeder channels, the magma gets into contact with cold sidewalls and starts crystallising, thereby producing more of the mineral(s) than what should be expected," says Chistyakova.

The general principle of this approach can be extended to any magmatic rocks with 'wrong' proportions of minerals in both plutonic and volcanic environments of the Earth.

"It is possible that a clue to some other petrological problems of magmatic complexes should be searched for in the feeder conduits rather than in magma chambers themselves. This appealing approach holds great promise for igneous petrologists working with basaltic magma complexes," says Latypov.

Credit: 
University of the Witwatersrand

Lucy had an ape-like brain

image: Brain imprints in fossil skulls of the species Australopithecus afarensis (famous for "Lucy", and the "Dikika child" from Ethiopia pictured here in frontal and lateral view) shed new light on the evolution of brain growth and organization.

Image: 
Philipp Gunz, CC BY-NC-ND 4.0

The species Australopithecus afarensis inhabited East Africa more than three million years ago, and occupies a key position in the hominin family tree, as it is widely accepted to be ancestral to all later hominins, including the human lineage. "Lucy and her kind provide important evidence about early hominin behavior. They walked upright, had brains that were around 20 percent larger than those of chimpanzees, and may have used sharp stone tools," explains senior author Zeresenay Alemseged from the University of Chicago, who directs the Dikika field project in Ethiopia, where the skeleton of an Australopithecus child was found in the year 2000. "Our new results show how their brains developed, and how they were organized," adds Alemseged.

To study brain growth and organization in Australopithecus afarensis, the researchers scanned the fossil cranium of the Dikika child using synchrotron microtomography at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. With the help of this state-of-the-art technology, researchers can reveal the age at death with a precision of a few weeks.

In addition, seven other well-preserved fossil crania from the Ethiopian sites Dikika and Hadar were scanned using high-resolution conventional tomography. Several years of painstaking fossil reconstruction, and counting of dental growth lines, yielded an exceptionally preserved brain imprint of the Dikika child, a precise age at death, new endocranial volume estimates, and previously undetected endocranial features of well-known Australopithecus fossils.

These data shed new light on two questions that have been controversial: Is there evidence for human-like brain reorganization in Australopithecus afarensis? Was the pattern of brain growth in A. afarensis more similar to that of chimpanzees or that of humans?

Extended childhood

Contrary to previous claims, the endocranial imprints of Australopithecus afarensis reveal an ape-like brain organization, and no features derived towards humans. However, a comparison of infant and adult endocranial volumes nevertheless indicates more human-like protracted brain growth in Australopithecus afarensis, likely critical for the evolution of a long period of childhood learning in hominins.

The brains of modern humans are not only much larger than those of our closest living ape relatives, they are also organized differently, and take longer to grow and mature. For example, compared with chimpanzees, modern human infants learn longer at the expense of being entirely dependent on parental care for longer periods of time. Together, these characteristics are important for human cognition and social behavior, but their evolutionary origins remain unclear. Brains do not fossilize, but as the brain grows and expands before and after birth, the tissues surrounding its outer layer leave an imprint in the bony braincase. Based on these endocasts the researchers could measure endocranial volume, and infer key aspects of brain organization from impressions of brain convolutions in the skull.

Differences in brain organization

A key difference between apes and humans involves the organization of the brain's parietal and occipital lobes. "In all ape brains, a well-defined lunate sulcus approximates the anterior boundary of the primary visual cortex of the occipital lobes," explains co-author Dean Falk from Florida State University, a specialist in interpreting endocranial imprints. Some have previously argued that structural changes of the brain resulted in a more backwards (human-like) placement of the lunate sulcus on endocasts of australopiths, and eventually to the disappearance of a clear endocranial impression in humans. Hypothetically, such brain reorganization in australopiths could have been linked to behaviors that were more complex than those of their great ape relatives (e.g., tool manufacture, mentalizing, and vocal communication). Unfortunately, the lunate sulcus typically does not reproduce well on endocasts, so there is unresolved controversy about its position in australopiths.

The exceptionally well preserved endocast of the Dikika child has an unambiguous impression of a lunate sulcus in an ape-like position. Likewise, the computed tomographic scans reveal a previously undetected impression of an ape-like lunate sulcus in a well-known fossil of an adult Australopithecus individual from Hadar (A.L. 162-28). Contrary to previous claims, the researchers did not find evidence for brain reorganization in any Australopithecus afarensis endocast that preserves detailed sulcal impressions.

Virtual dental histology

In infants, synchrotron computed tomographic scans of the dentition make it possible to determine an individual's age at death by counting dental growth lines. Similar to the growth rings of a tree, virtual sections of a tooth reveal incremental growth lines reflecting the body's internal rhythm. Studying the fossilized teeth of the Dikika infant, the team's dental experts Paul Tafforeau (ESRF), Adeline Le Cabec (ESRF/Max Planck Institute for Evolutionary Anthropology), and Tanya Smith (Griffith University) calculated an age at death of 861 days (2.4 years).

"After seven years of work, we finally had all the puzzle pieces to study the evolution of brain growth," says lead author Philipp Gunz: "The age at death of the Dikika child and its endocranial volume, the endocranial volumes of the best-preserved adult Australopithecus afarensis fossils, and comparative data of more than 1600 modern humans and chimpanzees."

Protracted brain growth

The pace of dental development of the Dikika infant was broadly comparable to that of chimpanzees and therefore faster than in modern humans. However, given that the brains of Australopithecus afarensis adults were roughly 20 percent larger than those of chimpanzees, the Dikika child's small endocranial volume suggests a prolonged period of brain development relative to chimpanzees. "Even a conservative comparison of the Dikika infant to small-statured and small-brained adults like Lucy, suggests that brain growth in Australopithecus afarensis was protracted as in humans today," explains Simon Neubauer.

"Our data show that Australopithecus afarensis had an ape-like brain organization, but suggest that these brains developed over a longer period of time than in chimpanzees," concludes Philipp Gunz. Among primates in general, different rates of growth and maturation are associated with different infant-care strategies, suggesting that the extended period of brain growth in Australopithecus afarensis may have been linked to a long dependence on caregivers. Alternatively, slow brain growth could also primarily represent a way to spread the energetic requirements of dependent offspring over many years in environments where food is not abundant. In either case the protracted brain growth in Australopithecus afarensis provided a basis for subsequent evolution of the brain and social behavior in hominins, and was likely critical for the evolution of a long period of childhood learning.

Credit: 
Max Planck Institute for Evolutionary Anthropology

Research identifies regular climbing behavior in a human ancestor

A new study led by the University of Kent has found evidence that human ancestors as recent as two million years ago may have regularly climbed trees.

Walking on two legs has long been a defining feature to differentiate modern humans, as well as extinct species on our lineage (aka hominins), from our closest living ape relatives: chimpanzees, gorillas and orangutans. This new research, based on analysis of fossil leg bones, provides evidence that a hominin species (believed to be either Paranthropus robustus or early Homo) regularly adopted highly flexed hip joints; a posture that in other non-human apes is associated with climbing trees.

These findings came from analysing and comparing the internal bone structures of two fossil leg bones from South Africa, discovered over 60 years ago and believed to be between 1 and 3 million years old. For both fossils, the external shape of the bones was very similar, showing a more human-like than ape-like hip joint and suggesting that both individuals were walking on two legs. The researchers examined the internal bone structure because it remodels during life based on how individuals use their limbs. Unexpectedly, when the team analysed the inside of the spherical head of the femur, it showed that the two individuals were loading their hip joints in different ways.

The research project was led by Dr Leoni Georgiou, Dr Matthew Skinner and Professor Tracy Kivell at the University of Kent's School of Anthropology and Conservation, and included a large international team of biomechanical engineers and palaeontologists. These results demonstrate that novel information about human evolution can be hidden within fossil bones, information that can alter our understanding of when, where and how we became the humans we are today.

Dr Georgiou said: 'It is very exciting to be able to reconstruct the actual behaviour of these individuals who lived millions of years ago and every time we CT scan a new fossil it is a chance to learn something new about our evolutionary history.'

Dr Skinner said: 'It has been challenging to resolve debates regarding the degree to which climbing remained an important behaviour in our past. Evidence has been sparse, controversial and not widely accepted, and as we have shown in this study the external shape of bones can be misleading. Further analysis of the internal structure of other bones of the skeleton may reveal exciting findings about the evolution of other key human behaviours such as stone tool making and tool use. Our research team is now expanding our work to look at hands, feet, knees, shoulders and the spine.'

Credit: 
University of Kent

Argonne and CERN weigh in on the origin of heavy elements

image: A look inside the ISOLDE Solenoidal Spectrometer at CERN.

Image: 
Argonne National Laboratory

A long-held mystery in the field of nuclear physics is why the universe is composed of the specific materials we see around us. In other words, why is it made of “this” stuff and not other stuff?

Specifically of interest are the physical processes responsible for producing heavy elements — like gold, platinum and uranium — that are thought to happen during neutron star mergers and explosive stellar events.

Scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory led an international nuclear physics experiment conducted at CERN, the European Organization for Nuclear Research, that utilizes novel techniques developed at Argonne to study the nature and origin of heavy elements in the universe. The study may provide critical insights into the processes that work together to create the exotic nuclei, and it will inform models of stellar events and the early universe.

“We can’t just go dig up a supernova out of the earth, so we have to create these extreme environments and study the reactions that occur in them.” — Ben Kay, Argonne physicist and lead scientist on the study

The nuclear physicists in the collaboration are the first to observe the neutron-shell structure of a nucleus with fewer protons than lead and more than 126 neutrons — “magic numbers” in the field of nuclear physics.

At these magic numbers, of which 8, 20, 28, 50 and 126 are canonical values, nuclei have enhanced stability, much as the noble gases do with closed electron shells. Nuclei with neutrons above the magic number of 126 are largely unexplored because they are difficult to produce. Knowledge of their behavior is crucial for understanding the rapid neutron-capture process, or r-process, that produces many of the heavy elements in the universe.
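The neutron count follows directly from the mass number, so a short calculation (using the standard atomic numbers of mercury and lead, which are not stated in the article) shows why 207Hg sits in this little-explored region.

# Quick check of where 207Hg sits relative to the magic neutron number 126.
# Z values are the standard atomic numbers of mercury and lead.
MAGIC_NUMBERS = {8, 20, 28, 50, 126}  # canonical values quoted above

def neutron_count(mass_number, protons):
    return mass_number - protons

Z_HG, Z_PB = 80, 82
n_hg207 = neutron_count(207, Z_HG)
print(n_hg207)                        # 127: one neutron beyond the magic 126
print(Z_HG < Z_PB)                    # True: fewer protons than lead
print(n_hg207 - 1 in MAGIC_NUMBERS)   # True: the 206Hg core has N = 126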

The r-process is thought to occur in extreme stellar conditions such as neutron-star mergers or supernovae. These neutron-rich environments are where nuclei can rapidly grow, capturing neutrons to produce new and heavier elements before they have a chance to decay.

This experiment focused on the mercury isotope 207Hg. The study of 207Hg could shed light on the properties of its close neighbors, nuclei directly involved in key aspects of the r-process.

“One of the biggest questions of this century has been how the elements formed at the beginning of the universe,” said Argonne physicist Ben Kay, the lead scientist on the study. “It’s difficult to research because we can’t just go dig up a supernova out of the earth, so we have to create these extreme environments and study the reactions that occur in them.”

To study the structure of 207Hg, the researchers first used the HIE-ISOLDE facility at CERN in Geneva, Switzerland. A high-energy beam of protons was fired at a molten lead target, with the resulting collisions producing hundreds of exotic and radioactive isotopes.

They then separated 206Hg nuclei from the other fragments and used CERN's HIE-ISOLDE accelerator to create a beam of the nuclei with the highest energy ever achieved at that facility. The beam was then focused onto a deuterium target inside the new ISOLDE Solenoidal Spectrometer (ISS).

“No other facility can make mercury beams of this mass and accelerate them to these energies,” said Kay. “This, coupled with the outstanding resolving power of the ISS, allowed us to observe the spectrum of excited states in 207Hg for the first time.”

The ISS is a newly-developed magnetic spectrometer that the nuclear physicists used to detect instances of 206Hg nuclei capturing a neutron and becoming 207Hg. The spectrometer’s solenoidal magnet is a recycled 4-Tesla superconducting MRI magnet from a hospital in Australia. It was moved to CERN and installed at ISOLDE, thanks to a UK-led collaboration between University of Liverpool, University of Manchester, Daresbury Laboratory and collaborators from KU Leuven in Belgium.

Deuterium, a rare heavy isotope of hydrogen, consists of a proton and neutron. When 206Hg captures a neutron from the deuterium target, the proton recoils. The protons emitted during these reactions travel to the detector in the ISS, and their energy and position yield key information on the structure of the nucleus and how it is bound together. These properties have a significant impact on the r-process, and the results can inform important calculations in models of nuclear astrophysics.

The ISS uses a pioneering concept suggested by Argonne distinguished fellow John Schiffer, first realized at the lab as the helical orbital spectrometer HELIOS, the instrument that inspired the development of the ISS. HELIOS has allowed the exploration of nuclear properties that were once impossible to study, and such measurements have been carried out at Argonne since 2008. CERN's ISOLDE facility can produce beams of nuclei that complement those that can be made at Argonne.

For the past century, nuclear physicists have been able to gather information about nuclei from the study of collisions where light ion beams hit heavy targets. However, when heavy beams hit light targets, the physics of the collision becomes distorted and more difficult to parse. Argonne’s HELIOS concept was the solution to removing this distortion.

“When you’ve got a cannonball of a beam hitting a fragile target, the kinematics change, and the resulting spectra are compressed,” said Kay. “But John Schiffer realized that when the collision occurs inside a magnet, the emitted protons travel in a spiral pattern towards the detector, and by a mathematical ‘trick’, this unfolds the kinematic compression, resulting in an uncompressed spectrum that reveals the underlying nuclear structure.”
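The "mathematical trick" rests on a property of uniform solenoidal fields: a charged particle returns to the beam axis after one fixed cyclotron period regardless of its energy, so the position at which a proton hits the on-axis detector maps cleanly onto its energy. A back-of-the-envelope estimate of that period for the 4-Tesla magnet quoted above, using standard physical constants (the actual ISS analysis is more involved), is sketched below.

# Back-of-the-envelope cyclotron period of a proton in the quoted 4 T solenoid.
# In a uniform field this period is independent of the proton's energy, which is
# what lets position along the axis be converted into energy without distortion.
import math

M_PROTON = 1.6726e-27   # proton mass, kg
Q_PROTON = 1.6022e-19   # elementary charge, C
B_FIELD = 4.0           # field of the recycled MRI magnet, T

period = 2 * math.pi * M_PROTON / (Q_PROTON * B_FIELD)   # non-relativistic
print(f"cyclotron period ~ {period * 1e9:.1f} ns")        # ~16.4 ns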

The first analyses of the data from the CERN experiment confirm the theoretical predictions of current nuclear models, and the team plans to study other nuclei in the region of 207Hg using these new capabilities, giving deeper insights into the unknown regions of nuclear physics and the r-process.

Credit: 
DOE/Argonne National Laboratory

X-ray observations of Milky Way's halo rule out models of dark matter decay

An unidentified X-ray signature recently observed in nearby galaxies and galaxy clusters is not due to decay of dark matter, researchers report. The findings rule out previously proposed interpretations of dark matter particle physics. Dark matter (DM) constitutes more than 80% of the matter in the Universe and its gravitational pull is responsible for binding galaxies and galaxy clusters together. Despite its cosmological abundance and the well-established astrophysical evidence of its existence, little about the mysterious material is known, including which subatomic particles make up DM. Some models of potential DM particles predict that they might slowly decay into ordinary matter. If so, the process of dark matter decay would produce faint photon emissions detectable by X-ray telescopes. Recent X-ray observations of nearby galaxy clusters have detected an unidentified X-ray emission line at 3.5 kiloelectronvolts (keV), which has been interpreted by some as a signature of dark matter decay - particularly a hypothetical dark matter particle known as the sterile neutrino. If this is correct, DM surrounding our Galaxy should decay and produce a similar X-ray emission line, spread faintly across the entire night sky. Christopher Dessert and colleagues searched for the 3.5 keV signal within the ambient halo of the Milky Way using data from the European Space Agency's XMM-Newton space telescope. Dessert et al. analyzed blank-sky observations (parts of the sky away from large X-ray emitting regions) with a total exposure time of roughly a year, finding no evidence for the predicted 3.5 keV line. According to the authors, the findings rule out the predicted signal strength by over an order of magnitude.

Credit: 
American Association for the Advancement of Science (AAAS)

Discovering the diet of the fossil Theropithecus oswaldi found in Cueva Victoria in Spain

image: Cueva Victoria has yielded fossil remains of about a hundred vertebrate species and is one of the few early Pleistocene sites in Europe with remains of human species.

Image: 
UNIVERSITY OF BARCELONA

A study published in the Journal of Human Evolution reveals for the first time the diet of the fossil baboon Theropithecus oswaldi found in Cueva Victoria in Cartagena (Murcia, Spain), the only European site with remains of this primate, whose origins date back four million years to eastern Africa.

The new study analyses for the first time the diet of these fossil remains through the analysis of buccal dental microwear. According to its conclusions, the eating pattern of this monkey, the most abundant primate in the African Pleistocene fossil record, would have differed from that of the baboon Theropithecus gelada, the phylogenetically closest living species, which today inhabits the Semien Mountains of northern Ethiopia and usually eats grasses and stalks.

The study, led by the lecturers Laura Martínez and Alejandro Pérez-Pérez of the Faculty of Biology of the University of Barcelona (UB), includes experts from the Faculty of Earth Sciences and the Faculty of Psychology of the UB, as well as members of the Autonomous University of Barcelona, the University of Alicante, the Museum of Orce Prehistory and Palaeontology (Granada) and George Washington University (United States).

Cueva Victoria: the long journey of the African baboon Theropithecus oswaldi

The genus Theropithecus spread across the African continent, from the east towards the north and south, across what is now the Sahara Desert. Its evolutionary lineage, also present in parts of Europe and Asia, disappeared about 500,000 years ago. Today it is represented only by the species Theropithecus gelada, a baboon that eats only plants and shows an ecological profile more similar to that of grazing herbivores than of other primates.

In 1990, an excavation campaign led by the palaeontologist Josep Gibert found the first fossil remain of Theropithecus oswaldi at the site, a tooth (Journal of Human Evolution, 1995). The cave, an old manganese mine, has yielded fossil remains of about a hundred vertebrate species and is one of the few early Pleistocene sites in Europe with remains of human species. Outside the African continent, the fossil record of this baboon is scarce; the only other remains have been found in Ubeidiya (Israel) and Minzapur (India).

The new fossil remains of T. oswaldi, which date to between 900,000 and 850,000 years ago, were recovered by a team led by the lecturers Carles Ferràndez-Cañadell and Lluís Gibert, from the Department of Mineralogy, Petrology and Applied Geology of the Faculty of Earth Sciences of the UB. The presence of this African monkey in the south-east of the Iberian Peninsula strengthens the hypothesis that animals dispersed from Africa to Europe across the Strait of Gibraltar during the Pleistocene.

What was the fossil baboon diet like in the south of the Iberian Peninsula?

The analysis of the buccal dental microwear marks produced by food intake reveals that the T. oswaldi specimens in Cueva Victoria "would have a more abrasive diet compared to the current T. gelada, and more similar to the diet of other primates such as mangabeys (Cercocebus sp.) and mandrills (Mandrillus sphinx), which eat fruits and seeds in forested and semi-open ecosystems", notes Laura Martínez, lecturer at the Department of Evolutionary Biology, Ecology and Environmental Sciences of the Faculty of Biology and first author of the study.

Other recent studies based on observations of T. gelada in the Guassa area of Ethiopia describe a more diverse diet, including rhizomes and tubers during the most unfavourable season. "The difference between T. oswaldi and T. gelada shows that the specialization observed in the current baboon could be a derived specialization which did not exist in the fossil members of its lineage," the researcher continues. "This could respond to a regression of its ecological niche as an adaptation to anthropically altered ecosystems or as a result of climate change."

The study, published in the Journal of Human Evolution, analyses the dental and cranial adaptations of primates of the tribe Papionini as an analogue model for the evolution of the hominin lineage, with which they shared a common geographical space at similar dates. The new study on dental microwear was supported by the Spanish Ministry for Research, Development and Innovation, the Catalan Government and La Caixa Foundation.

Credit: 
University of Barcelona

Reducing reliance on nitrogen fertilizers with biological nitrogen fixation

image: Three-day-old seedlings of Setaria viridis A10.1 were inoculated with either Herbaspirillum seropedicae SmR1 (fix+) or SmR54 (fix-), while the control (CTRL) plants were uninoculated. The plants grew for 2 weeks after inoculation under greenhouse conditions. The roots and leaves were harvested. Roots from plants that were inoculated with SmR1 or SmR54 were analyzed by fluorescence microscopy.

Image: 
Beverly J. Agtuca, Sylwia A. Stopka, Thalita R. Tuleski, Fernanda P. do Amaral, Sterling Evans, Yang Liu, Dong Xu, Rose Adele Monteiro, David W. Koppenaal, Ljiljana Paša-Tolić, Christopher R. Anderton, Akos Vertes, and Gary Stacey

Crop yields have increased substantially over the past decades, occurring alongside the increasing use of nitrogen fertilizer. While nitrogen fertilizer benefits crop growth, it has negative effects on the environment and climate, as it requires a great amount of energy to produce. Many scientists are seeking ways to develop more sustainable practices that maintain high crop yields with reduced inputs.

"A more sustainable way to provide nitrogen to crops would be through the use of biological nitrogen fixation, a practice well developed for leguminous crops," says plant pathologist Gary Stacey of the University of Missouri. "A variety of nitrogen fixing bacteria are common in the rhizosphere of most plants. However, such plant growth promoting bacteria (PGPB) have seen only limited use as inoculants in agriculture."

Stacey and his colleagues believe that this limited use is due to the general problems associated with the use of biologicals for crop production and variable efficacy upon application. They conducted research to gain a greater understanding of the metabolic response of the plant host in order to reduce the variability seen in the response of crops to PGPB.

"One challenge with our research is that, while PGPB can colonize roots to high levels, the sites of colonization can be highly localized," said Stacey. "Hence, isolating whole roots results in a considerable dilution of any signal due to the great majority of the root cells not in contact with the bacteria."

To overcome this challenge, Stacey and his team utilized laser ablation electrospray ionization mass spectrometry (LAESI-MS), which allowed them to sample only those sites infected by the bacteria, which they could localize owing to the expression of green fluorescent protein.

Their results showed that bacterial colonization causes significant shifts in plant metabolism: some metabolites were significantly more abundant in inoculated plants, while others, including metabolites indicative of nitrogen fixation, were reduced in roots that were uninoculated or inoculated with a bacterial strain unable to fix nitrogen.

"Interestingly, compounds, involved in indole-alkaloid biosynthesis were more abundant in the roots colonized by the fix- strain, perhaps reflecting a plant defense response," said Stacey. "Ultimately, through such research, we hope to define the molecular mechanisms by which PGPB stimulate plant growth so as to devise effective and consistent inoculation protocols to improve crop performance."

Stacey's lab has long been interested in biological nitrogen fixation and plant-microbe interactions in general. Since the discovery of biological nitrogen fixation (BNF), the lab has had a goal to convey the benefits of BNF to non-leguminous crops such as maize. PGPB have this ability in nature but this has not been adequately captured for practical agricultural production.

"We believe that, in contrast to other better studied interactions, such as rhizobium-legume, this is due to a general lack of information about the molecular mechanisms by which PGPB stimulate plant growth. Hence, we have undertaken in our lab projects that seek to provide this information in the belief that such information will increase the efficacy of PGPG inoculants with the net effect to increase their use for crop production."

Stacey and his team were most surprised to find that they did not see a significant impact on phytohormone production that correlated tightly with the ability of PGPB to enhance plant growth. This suggests that PGPB impact plant metabolism to a greater extent than previously realized, pointing perhaps to more complex explanations for how these bacteria impact plant growth.

Credit: 
American Phytopathological Society

The growth of an organism rides on a pattern of waves

When an egg cell of almost any sexually reproducing species is fertilized, it sets off a series of waves that ripple across the egg's surface. These waves are produced by billions of activated proteins that surge through the egg's membrane like streams of tiny burrowing sentinels, signaling the egg to start dividing, folding, and dividing again, to form the first cellular seeds of an organism.

Now MIT scientists have taken a detailed look at the pattern of these waves, produced on the surface of starfish eggs. These eggs are large and therefore easy to observe, and scientists consider starfish eggs to be representative of the eggs of many other animal species.

In each egg, the team introduced a protein to mimic the onset of fertilization, and recorded the pattern of waves that rippled across their surfaces in response. They observed that each wave emerged in a spiral pattern, and that multiple spirals whirled across an egg's surface at a time. Some spirals spontaneously appeared and swirled away in opposite directions, while others collided head-on and immediately disappeared.

The behavior of these swirling waves, the researchers realized, is similar to the waves generated in other, seemingly unrelated systems, such as the vortices in quantum fluids, the circulations in the atmosphere and oceans, and the electrical signals that propagate through the heart and brain.

"Not much was known about the dynamics of these surface waves in eggs, and after we started analyzing and modeling these waves, we found these same patterns show up in all these other systems," says physicist Nikta Fakhri, the Thomas D. and Virginia W. Cabot Assistant Professor at MIT. "It's a manifestation of this very universal wave pattern."

"It opens a completely new perspective," adds Jörn Dunkel, associate professor of mathematics at MIT. "You can borrow a lot of techniques people have developed to study similar patterns in other systems, to learn something about biology."

Fakhri and Dunkel have published their results today in the journal Nature Physics. Their co-authors are Tzer Han Tan, Jinghui Liu, Pearson Miller, and Melis Tekant of MIT.

Finding one's center

Previous studies have shown that the fertilization of an egg immediately activates Rho-GTP, a protein within the egg which normally floats around in the cell's cytoplasm in an inactive state. Once activated, billions of the protein rise up out of the cytoplasm's morass to attach to the egg's membrane, snaking along the wall in waves.

"Imagine if you have a very dirty aquarium, and once a fish swims close to the glass, you can see it," Dunkel explains. "In a similar way, the proteins are somewhere inside the cell, and when they become activated, they attach to the membrane, and you start to see them move."

Fakhri says the waves of proteins moving across the egg's membrane serve, in part, to organize cell division around the cell's core.

"The egg is a huge cell, and these proteins have to work together to find its center, so that the cell knows where to divide and fold, many times over, to form an organism," Fakhri says. "Without these proteins making waves, there would be no cell division."

In their study, the team focused on the active form of Rho-GTP and the pattern of waves produced on an egg's surface when they altered the protein's concentration.

For their experiments, they obtained about 10 eggs from the ovaries of starfish through a minimally invasive surgical procedure. They introduced a hormone to stimulate maturation, and also injected fluorescent markers to attach to any active forms of Rho-GTP that rose up in response. They then observed each egg through a confocal microscope and watched as billions of the proteins activated and rippled across the egg's surface in response to varying concentrations of the artificial hormonal protein.

"In this way, we created a kaleidoscope of different patterns and looked at their resulting dynamics," Fakhri says.

Hurricane track

The researchers first assembled black-and-white videos of each egg, showing the bright waves that traveled over its surface. The brighter a region in a wave, the higher the concentration of Rho-GTP in that particular region. For each video, they compared the brightness, or concentration of protein from pixel to pixel, and used these comparisons to generate an animation of the same wave patterns.
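The pixel-to-pixel comparison described above is essentially frame differencing on the fluorescence movies; a minimal sketch of the idea is shown below. The array names and the simple normalisation are illustrative stand-ins, not the authors' pipeline.

# Minimal sketch of the frame-to-frame brightness comparison described above.
# `movie` stands in for a stack of fluorescence frames (time, height, width).
import numpy as np

def wave_fronts(movie):
    """Return per-pixel intensity changes between consecutive frames.

    Positive values mark pixels where Rho-GTP fluorescence is rising, so the
    difference stack highlights the advancing wave fronts.
    """
    movie = movie.astype(float)
    # Normalise each frame to reduce global intensity drift (e.g. photobleaching).
    movie /= movie.mean(axis=(1, 2), keepdims=True)
    return np.diff(movie, axis=0)

# Toy usage with random data standing in for a recorded movie.
toy_movie = np.random.rand(100, 256, 256)
fronts = wave_fronts(toy_movie)
print(fronts.shape)  # (99, 256, 256)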

From their videos, the team observed that waves seemed to oscillate outward as tiny, hurricane-like spirals. The researchers traced the origin of each wave to the core of each spiral, which they refer to as a "topological defect." Out of curiosity, they tracked the movement of these defects themselves. They did some statistical analysis to determine how fast certain defects moved across an egg's surface, and how often, and in what configurations the spirals popped up, collided, and disappeared.

In a surprising twist, they found that their statistical results, and the behavior of waves in an egg's surface, were the same as the behavior of waves in other larger and seemingly unrelated systems.

"When you look at the statistics of these defects, it's essentially the same as vortices in a fluid, or waves in the brain, or systems on a larger scale," Dunkel says. "It's the same universal phenomenon, just scaled down to the level of a cell."

The researchers are particularly interested in the waves' similarity to ideas in quantum computing. Just as the pattern of waves in an egg conveys specific signals, in this case of cell division, quantum computing is a field that aims to manipulate atoms in a fluid, in precise patterns, in order to translate information and perform calculations.

"Perhaps now we can borrow ideas from quantum fluids, to build minicomputers from biological cells," Fakhri says. "We expect some differences, but we will try to explore [biological signaling waves] further as a tool for computation."

Credit: 
Massachusetts Institute of Technology

Solar system acquired current configuration not long after its formation

The hypothesis that the Solar System was born from a gigantic cloud of gas and dust was first floated in the second half of the eighteenth century. It was proposed by German philosopher Immanuel Kant and developed by French mathematician Pierre-Simon de Laplace. It is now a consensus among astronomers. Thanks to the enormous amount of observational data, theoretical input and computational resources now available, it has been continually refined, but this is not a linear process.

Nor is it without controversies. Until recently the Solar System was thought to have acquired its present features as a result of a period of turbulence that occurred some 700 million years after its formation.

However, some of the latest research suggests it took shape in the more remote past, at some stage during the first 100 million years and very probably between 10 million and 60 million years after its formation.

A study conducted by three Brazilian researchers offers robust evidence of this earlier structuring. Reported in an article published in the journal Icarus, the study was supported by the São Paulo Research Foundation - FAPESP. The authors are all affiliated with São Paulo State University's Engineering School (FEG-UNESP) in Guaratinguetá (Brazil).

The lead author is Rafael Ribeiro de Sousa. The other two authors are André Izidoro Ferreira da Costa and Ernesto Vieira Neto, principal investigator for the study.

"The large amount of data acquired from detailed observation of the Solar System enables us to define with precision the trajectories of the many bodies that orbit the Sun," Ribeiro told. "This orbital structure enables us to write the history of the formation of the Solar System.

"Emerging from the gas and dust cloud that surrounded our star some 4.6 billion years ago, the giant planets formed in orbits closer to each other and also closer to the Sun. The orbits were also more co-planar and more circular than they are now, and more interconnected in resonant dynamic systems. These stable systems are the most likely outcome of the gravitational dynamics of planet formation from gaseous protoplanetary disks."

Izidoro offered more details. "The four giant planets - Jupiter, Saturn, Uranus and Neptune - emerged from the gas and dust cloud in more compact orbits," he said. "Their motions were strongly synchronous owing to resonant chains, with Jupiter completing three revolutions around the Sun while Saturn completed two. All the planets were involved in this synchronicity produced by the dynamics of the primordial gas disk and the gravitational dynamics of the planets."
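The 3:2 synchronicity described in the quote fixes the ratio of the two orbital periods; the arithmetic is simple but worth making explicit. The present-day periods shown for contrast are rounded textbook values, not figures from the study.

# The primordial 3:2 resonance: Jupiter completes 3 orbits while Saturn completes 2,
# so Saturn's orbital period was 1.5 times Jupiter's.
jupiter_orbits, saturn_orbits = 3, 2
resonant_ratio = jupiter_orbits / saturn_orbits
print(resonant_ratio)        # 1.5 in the primordial resonant configuration

# Rounded present-day periods (in years) give a ratio near 2.5, showing how far
# the two planets have since moved from that original resonance.
print(29.4 / 11.9)           # ~2.47 today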

However, throughout the formation region of the outer Solar System, which includes the zone located beyond the current orbits of Uranus and Neptune, the Solar System had a large population of planetesimals, small bodies of rock and ice considered the building blocks of planets and forerunners of asteroids, comets and satellites.

The outer planetesimal disk began disturbing the system's gravitational balance. The resonances were disrupted after the gas phase, and the system entered a period of chaos in which the giant planets interacted violently and ejected matter into space.

"Pluto and its icy neighbors were pushed into the Kuiper Belt, where they're located now, and the entire group of planets migrated to orbits more distant from the Sun," Ribeiro said.

The Kuiper Belt, whose existence was proposed in 1951 by Dutch astronomer Gerard Kuiper and later confirmed by astronomical observations, is a toroidal (doughnut-shaped) structure made up of thousands of small bodies orbiting the Sun.

The diversity of their orbits is not seen in any other part of the Solar System. The Kuiper Belt's inner edge begins at the orbit of Neptune about 30 astronomical units (AUs) from the Sun. The outer edge is about 50 AUs from the Sun. One AU is approximately equal to the average distance from Earth to the Sun.
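Converting those belt edges into kilometres, using the standard value of one astronomical unit (about 149.6 million km), gives a sense of the scale involved.

# Rough scale of the Kuiper Belt edges quoted above, converted from AU to km.
AU_KM = 1.496e8                     # one astronomical unit in km (standard value)
inner_edge_au, outer_edge_au = 30, 50

print(f"inner edge ~ {inner_edge_au * AU_KM:.2e} km")   # ~4.5e9 km
print(f"outer edge ~ {outer_edge_au * AU_KM:.2e} km")   # ~7.5e9 km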

Returning to the disruption of synchronicity and the onset of the chaotic stage, the question is when this happened - very early in the life of the Solar System, when it was 100 million years old or less, or much later, probably about 700 million years after the planets formed?

"Until recently the late instability hypothesis predominated," Ribeiro said. "Dating of the Moon rocks brought back by the Apollo astronauts suggested they were created by asteroids and comets crashing into the lunar surface at the same time.

"This cataclysm is known as the 'late heavy bombardment' of the Moon. If it happened on the Moon, it presumably also happened on Earth and the Solar System's other terrestrial planets. Because a great deal of matter in the form of asteroids and comets was projected in all directions in the Solar System during the period of planetary instability, it was deduced from the Moon rocks that this chaotic period occurred late, but in recent years the idea of a 'late bombardment' of the Moon has fallen out of favor."

According to Ribeiro, if the late chaotic catastrophe had occurred it would have destroyed Earth and the other terrestrial planets, or at least caused disturbances that would have placed them in totally different orbits from those we observe now.

Furthermore, the Moon rocks brought back by the Apollo astronauts were found to have been produced by a single impact. If they had originated in late giant planet instability, there would be evidence of several different impacts, given the scattering of the planetesimals by the giant planets.

"The starting-point for our study was the idea that the instability should be dated dynamically. The instability can only have happened later if there was a relatively large distance between the inner edge of the disk of planetesimals and Neptune's orbit when the gas was exhausted. This relatively large distance proved unsustainable in our simulation," Ribeiro said.

The argument is based on a simple premise: the shorter the distance between Neptune and the planetesimal disk, the greater the gravitational influence, and hence the earlier the period of instability. Conversely, later instability requires a larger distance.

"What we did was sculpt the primordial planetesimal disk for the first time. To do so we had to go back to the formation of the ice giants Uranus and Neptune. Computer simulations based on a model constructed by Professor Izidoro [Ferreira da Costa] in 2015 showed that the formation of Uranus and Neptune may have originated in planetary embryos with several Earth masses. Massive collisions of these super-Earths would explain, for example, why Uranus spins on its side," Ribeiro said, referring to Uranus's "tilt", with north and south poles located on its sides rather than top and bottom.

Previous studies had pointed to the importance of the distance between Neptune's orbit and the inner boundary of the planetesimal disk, but they used a model in which the four giant planets were already formed.

"The novelty of this latest study is that the model doesn't begin with completely formed planets. Instead, Uranus and Neptune are still in the growth stage, and the growth driver is two or three collisions involving objects with up to five Earth masses," Izidoro said.

"Imagine a situation in which Jupiter and Saturn are formed but we have five to ten super-Earths instead of Uranus and Neptune. The super-Earths are forced by the gas to synchronize with Jupiter and Saturn, but being numerous their synchronicity fluctuates and they end up colliding. The collisions reduce their number, making synchronicity possible. Eventually Uranus and Neptune are left.

"While the two ice giants were forming in the gas, the planetesimal disk was being consumed. Part of the matter was accreted to Uranus and Neptune, and part was propelled to the outskirts of the Solar System. The growth of Uranus and Neptune therefore defined the position of the inner boundary of the planetesimal disk. What was left of the disk is now the Kuiper Belt. The Kuiper Belt is basically a relic of the primordial planetesimal disk, which was once far more massive."

The proposed model is consistent with the giant planets' current orbits and with the structure observed in the Kuiper Belt. It is also consistent with the motion of the Trojans, a large group of asteroids that share Jupiter's orbit and were presumably captured during the disruption of synchronicity.

According to a paper published by Izidoro in 2017 (read more at agencia.fapesp.br/26583), Jupiter and Saturn were still in formation, with their growth contributing to displacement of the asteroid belt. The latest paper is a kind of continuation, starting from a stage in which Jupiter and Saturn were fully formed but still synchronized, and describing the evolution of the Solar System from there on.

"Gravitational interaction between the giant planets and the planetesimal disk produced disturbances in the gas disk that spread in the form of waves. The waves produced compact and synchronous planetary systems. When the gas ran out, interaction between the planets and planetesimal disk disrupted the synchronicity and gave rise to the chaotic phase. Taking all this into account, we discovered that the conditions simply didn't exist for the distance between Neptune's orbit and the inner boundary of the planetesimal disk to become large enough to sustain the late instability hypothesis. This is the main contribution of our study, which shows that the instability occurred in the first hundred million years and may have occurred, for example, before the formation of Earth and the Moon," Ribeiro said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Researchers detail how antineutrino detectors could aid nuclear nonproliferation

image: Patrick Huber, a professor in the Virginia Tech Department of Physics.

Image: 
Virginia Tech

Patrick Huber, a professor in the Virginia Tech Department of Physics, has co-authored an article that describes the potential uses and limitations of antineutrino detectors for nuclear security applications related to reactor, spent fuel, and explosion monitoring.

The article appears in the latest issue of Reviews of Modern Physics. In the paper, the scientists review the current and projected readiness of various antineutrino-based monitoring technologies. Huber's co-authors include Adam Bernstein and Nathaniel Bowden, physicists at Lawrence Livermore National Laboratory (LLNL); Bethany Goldblum of the University of California, Berkeley; Igor Jovanovic of the University of Michigan; and John Mattingly of North Carolina State University.

In the paper, Huber and his co-authors argue that a tiny particle could offer help for a big problem - the threat of nuclear proliferation. "For more than six decades, scientists have been developing instruments for fundamental physics that can detect antineutrinos, particles that have no electric charge, almost no mass and easily pass through matter," the team said. "Antineutrinos are emitted in vast quantities by nuclear reactors, and since the 1970s, scientists have considered turning antineutrino detection into a tool for nuclear security."

With advances by scientists at LLNL and other institutions, researchers are moving closer to deploying technology to remotely monitor these subatomic particles from nuclear power plants at long distances. Such a breakthrough would allow them to warn international authorities about the illicit production of plutonium, a key material for nuclear weapons. It also could help with verification of existing and planned treaties that seek to limit nuclear weapons materials production worldwide.

Antineutrinos, the antimatter counterpart to neutrinos, are produced in nuclear power plants when the fissile materials of uranium and plutonium break apart, creating fission products that emit antineutrinos in the process.

"At close range from a reactor, antineutrinos allow the measurement of plutonium content and the production rate," said Huber, director of the Center for Neutrino Physics at Virginia Tech and a member of the Virginia Tech College of Science faculty. "This capability would provide high-level assurances of treaty compliance while being less intrusive to the facility."

The study was initiated as part of an ongoing research effort led by LLNL and supported by the National Nuclear Security Administration's Office of Defense Nuclear Nonproliferation Research and Development. Huber and team contend that advances in applied antineutrino physics have the potential to strengthen the existing Treaty on the Nonproliferation of Nuclear Weapons, which provides a framework for facilitating the peaceful use of nuclear technology while reducing nuclear weapons proliferation risks through safeguards, monitoring, and verification.

In their paper, the researchers see potential for three applications of antineutrino technology -- near-field nuclear reactor monitoring, far-field monitoring, and monitoring spent nuclear fuel. They conclude that antineutrino technology stationed within about 100 meters of a nuclear reactor could ensure that nations are not making and diverting weapons-usable material under the cover of civilian energy production. By measuring the quantity of antineutrinos produced during a set period, it is possible to approximately quantify the amount of plutonium or uranium in a reactor.
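The link between antineutrino counts and fissile material rests on the fact that each fission releases a roughly fixed amount of energy and a handful of antineutrinos. A back-of-the-envelope estimate of a reactor's emission rate, using standard rule-of-thumb figures of roughly 200 MeV and about six antineutrinos per fission (values not taken from the paper), is sketched below.

# Back-of-the-envelope antineutrino emission rate of a power reactor.
# Rule-of-thumb figures: ~200 MeV of thermal energy and ~6 antineutrinos are
# released per fission (via beta decays of the fission products).
MEV_TO_JOULE = 1.602e-13
ENERGY_PER_FISSION_J = 200 * MEV_TO_JOULE   # ~3.2e-11 J
ANTINEUTRINOS_PER_FISSION = 6

def antineutrino_rate(thermal_power_watts):
    fissions_per_second = thermal_power_watts / ENERGY_PER_FISSION_J
    return fissions_per_second * ANTINEUTRINOS_PER_FISSION

# A typical large power reactor runs at a few gigawatts of thermal power.
print(f"{antineutrino_rate(3e9):.1e} antineutrinos per second")  # ~5.6e20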

In the area of far-field monitoring, the researchers said that technology capable of detecting or excluding nuclear reactor activity at ranges of up to 120 miles is possible. A third application for antineutrino technology to detect diversion of material could be to monitor the spent fuel that has been used to operate nuclear reactors.

Several of the article's authors are involved in efforts to advance antineutrino detection technology.

Credit: 
Virginia Tech

Long-term analysis shows GM cotton no match for insects in India

image: An Indian farmer applies pesticide to his cotton field.

Image: 
Glenn Davis Stone/Washington University

Genetically modified (GM) Bt cotton produces its own insecticide. The seeds were introduced in India in 2002 and today account for 90% of all cotton planting in the country. Bt cotton is now the most widely planted GM crop on small farms in the developing world.

In India, Bt cotton is the most widely planted cotton crop by acreage, and it is hugely controversial. Supporters long touted increased yields and reduced pesticide use to justify its uptake. But that argument does not hold up under the first long-term study of Bt cotton impacts in India. The analysis, co-authored by a Washington University in St. Louis anthropologist, appears in the journal Nature Plants.

Bt cotton is explicitly credited with tripling cotton production during 2002-2014. But the largest production gains came prior to widespread seed adoption and must be viewed in line with changes in fertilization practices and other pest population dynamics, according to Glenn Davis Stone, professor of sociocultural anthropology and environmental studies, both in Arts & Sciences.

"Since Bt cotton first appeared in India there has been a stream of contradictory reports that it has been an unmitigated disaster -- or a triumph," Stone said, noting the characteristic deep divide in conversation about GM crops. "But the dynamic environment in Indian cotton fields turns out to be completely incompatible with these sorts of simplistic claims."

Many economists and other observers based their assessments on much shorter time frames than Stone's new study, which spans 20 years.

"There are two particularly devastating caterpillar pests for cotton in India, and, from the beginning, Bt cotton did control one of them: the (misnamed) American bollworm," Stone continued. "It initially controlled the other one, too -- the pink bollworm -- but that pest quickly developed resistance and now it is a worse problem than ever.

"Bt plants were highly vulnerable to other insect pests that proliferated as more and more farmers adopted the crop. Farmers are now spending much more on insecticides than before they had ever heard of Bt cotton. And the situation is worsening."

Stone, an internationally recognized expert on the human side of global agricultural trends, has published extensively on GM crops in the developing world. His previous work has been funded by the Templeton Foundation and the National Science Foundation.

To prepare this new analysis, Stone partnered with entomologist K.R. Kranthi, the former director of India's Central Institute for Cotton Research. Kranthi is now the head of a technical division at the Washington-based International Cotton Advisory Committee.

"Yields in all crops jumped in 2003, but the increase was especially large in cotton," Stone said. "But Bt cotton had virtually no effect on the rise in cotton yields because it accounted for less than 5% of India's cotton crop at the time."

Instead, huge increases in insecticides and fertilizers may have been the most significant changes.

"Now farmers in India are spending more on seeds, more on fertilizer and more on insecticides," Stone said. "Our conclusion is that Bt cotton's primary impact on agriculture will be its role in making farming more capital-intensive -- rather than any enduring agronomic benefits."

Credit: 
Washington University in St. Louis

Molds damage the lung's protective barrier to spur future asthma attacks

image: A model of how fungal Alp1 can damage the lung's protective barrier and generate an overreaction that becomes asthma.

Image: 
Bruce Klein

MADISON, Wis. - University of Wisconsin-Madison researchers have identified a new way that common Aspergillus molds can induce asthma, by first attacking the protective tissue barrier deep in the lungs.

In both mice and humans, an especially strong response to this initial damage was associated with developing an overreaction to future mold exposure and the constricted airways characteristic of asthma.

The work provides a new avenue of research for understanding and potentially preventing the development of asthma, which affects 25 million Americans. Mold sensitivities account for a quarter to half of asthma responses, so preventing the body from establishing allergic reactions to mold could significantly reduce the burden of the disease.

UW-Madison Professor of Pediatrics, Medicine, and Medical Microbiology and Immunology Bruce Klein and postdoctoral researcher Darin Wiesner published their findings March 3 in the journal Cell Host & Microbe. They collaborated with researchers at the University of Chicago, University of Minnesota and Harvard Medical School to complete the work.

"Aspergillus is ubiquitous, it's everywhere, and we're inhaling spores with every breath we take," says Klein. The team set out to understand how these otherwise harmless molds sensitize some individuals to develop a strong, asthmatic response to their spores.

The mold's digestive enzymes were a natural target. Molds secrete these enzymes to digest proteins in their environment as they feed on decaying matter. One such enzyme, a protease called Alp1, is a known lung allergen and is secreted in large amounts by Aspergillus molds. But how Alp1 induces asthma has been a mystery for years.

Wiesner investigated whether Alp1 could trigger a series of well-known allergic response pathways in the body. But he could not find any evidence that Aspergillus Alp1 activated these allergic responses, which are often primed to respond to unique signatures of damaging microorganisms, such as pathogens.

"This idea that these ubiquitous fungi that aren't primary pathogens could have evolved highly specific components just didn't seem to make sense," says Wiesner. "So it seemed more reasonable that these proteases one inhales into the lungs just cause damage. And the first thing that they interact with when they enter the lungs of both humans and mice are the epithelial cells."

So Wiesner went looking for which of the 10 types of epithelial cells that make up the lung surface responded most strongly to Alp1 damage. He zeroed in on those known as club cells. Club cells reside mostly in the bronchioles, the smallest airway passages near where gases are exchanged with the blood. Club cells are known for trying to scrub pollutants from the lungs, so a role in responding to environmental assaults like molds made sense.

Like all lung cells, club cells bind themselves tightly to their neighbors to form a barrier between the lungs and the rest of the body. Those connections are made of proteins, which Alp1 is designed to attack and digest. Wiesner found that exposing mice to Alp1 caused the lung barrier to become leaky, evidence that Alp1 disrupted these cell junctions.

In looking for how the body sensed this damaged barrier, Klein's team turned to the Childhood Origins of Asthma study, led by UW-Madison scientists, which followed hundreds of children for years to identify the genetic and environmental causes of asthma. They found that a mutation near a gene known as TRPV4, which increases the amount of TRPV4 protein produced by the body, was associated with mold-sensitive asthma in children.

Mice also make TRPV4. When Wiesner deleted the gene in mouse club cells, the animals were much less sensitive to Alp1. When he induced club cells to produce more TRPV4, the mice were hypersensitive to the mold enzyme.

TRPV4 senses physical changes in a cell and unleashes a wave of calcium, which is then sensed by other cellular components. Calcium is a common cellular signal, but until now there has been little evidence of calcium playing a role in producing asthma.

Klein's team now believes that Alp1 attacks the joints between lung cells, which jostles the cells. TRPV4 senses that motion and signals for help repairing the damage to the lung's important barrier. In mice or humans with extra TRPV4, that response is strong enough to elicit an overreaction from the body. That excessive response primes the lungs to respond too strongly the next time they encounter Alp1. The resulting inflammation is felt as asthma.

This study is the first to implicate the TRPV4-calcium pathway in the development of asthma, which could open productive new lines of investigation into calming and preventing the asthmatic response to molds. Drugs exist that can block this calcium pathway, but targeting them to the right cells at the right time to prevent the lungs from overreacting to harmless molds would require much more work.

"Previous drug trials have had disappointing results, but those have really been sledgehammer approaches where they just administer calcium channel blockers that would block every calcium channel in the airway," says Wiesner. "Here, we suggest that targeting those drugs to a specific cell could give the specificity needed to just target the detrimental overresponse that leads to asthma."

Credit: 
University of Wisconsin-Madison

Astronomers use slime mold model to reveal dark threads of the cosmic web

image: This reconstruction of the cosmic web using 37,662 galaxies from the Sloan Digital Sky Survey (SDSS) was generated by the Monte Carlo Physarum Machine, an algorithm based on the growth patterns of a slime mold. Top: Large-scale visualization of the emergent structure identified by the slime mold algorithm. This intricate filamentary network is reconstructed given only the SDSS galaxy coordinates, redshifts, and masses. Bottom: Three individual regions showing the underlying SDSS galaxies on the left and the superimposed filament density field on the right.

Image: 
Burchett et al., ApJL, 2020

A computational approach inspired by the growth patterns of a bright yellow slime mold has enabled a team of astronomers and computer scientists at UC Santa Cruz to trace the filaments of the cosmic web that connects galaxies throughout the universe.

Their results, published March 10 in Astrophysical Journal Letters, provide the first conclusive association between the diffuse gas in the space between galaxies and the large-scale structure of the cosmic web predicted by cosmological theory.

According to the prevailing theory, as the universe evolved after the big bang, matter became distributed in a web-like network of interconnected filaments separated by huge voids. Luminous galaxies full of stars and planets formed at the intersections and densest regions of the filaments where matter is most concentrated. The filaments of diffuse hydrogen gas extending between the galaxies are largely invisible, although astronomers have managed to glimpse parts of them.

None of which seems to have anything to do with a lowly slime mold called Physarum polycephalum, typically found growing on decaying logs and leaf litter on the forest floor and sometimes forming spongy yellow masses on lawns. But Physarum has a long history of surprising scientists with its ability to create optimal distribution networks and solve computationally difficult spatial organization problems. In one famous experiment, a slime mold replicated the layout of Japan's rail system by connecting food sources arranged to represent the cities around Tokyo.

Joe Burchett, a postdoctoral researcher in astronomy and astrophysics at UC Santa Cruz, had been looking for a way to visualize the cosmic web on a large scale, but he was skeptical when Oskar Elek, a postdoctoral researcher in computational media, suggested using a Physarum-based algorithm. After all, completely different forces shape the cosmic web and the growth of a slime mold.

But Elek, who has always been fascinated by patterns in nature, had been impressed by the Physarum "biofabrications" of Berlin-based artist Sage Jenson. Starting with the 2-dimensional Physarum model Jenson used (originally developed in 2010 by Jeff Jones), Elek and a friend (programmer Jan Ivanecky) extended it to three dimensions and made additional modifications to create a new algorithm they called the Monte Carlo Physarum Machine.
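
For readers curious what such a model looks like in practice, the following is a minimal two-dimensional sketch in the spirit of the Jones-style Physarum simulation described above: virtual agents sense a deposited trail field ahead of them, steer toward the strongest signal, move, and deposit more trail, which is then diffused and decayed. It is an illustrative toy with arbitrary parameter values, not the Monte Carlo Physarum Machine itself.

import numpy as np

# Minimal 2D Physarum-style simulation (illustrative toy, not the
# Monte Carlo Physarum Machine used in the study).

GRID = 256            # size of the square trail field
N_AGENTS = 20000
SENSE_DIST = 5.0      # how far ahead agents sample the field
SENSE_ANGLE = 0.4     # radians between the three sensors
TURN_ANGLE = 0.3
STEP = 1.0
DEPOSIT = 1.0
DECAY = 0.95

rng = np.random.default_rng(0)
pos = rng.uniform(0, GRID, size=(N_AGENTS, 2))
heading = rng.uniform(0, 2 * np.pi, size=N_AGENTS)
trail = np.zeros((GRID, GRID))

def sample(field, p):
    """Read the field at (wrapped) integer coordinates of points p."""
    ij = np.mod(p.astype(int), GRID)
    return field[ij[:, 0], ij[:, 1]]

for _ in range(200):
    # 1. Sense: sample the trail at three points ahead (left, center, right).
    offsets = np.stack([heading - SENSE_ANGLE, heading, heading + SENSE_ANGLE])
    sensed = np.stack([
        sample(trail, pos + SENSE_DIST * np.stack([np.cos(a), np.sin(a)], axis=1))
        for a in offsets
    ])
    # 2. Steer toward the strongest of the three samples.
    choice = np.argmax(sensed, axis=0)          # 0 = left, 1 = straight, 2 = right
    heading += (choice - 1) * TURN_ANGLE
    # 3. Move and wrap around the periodic grid.
    pos += STEP * np.stack([np.cos(heading), np.sin(heading)], axis=1)
    pos = np.mod(pos, GRID)
    # 4. Deposit trail, then diffuse (cheap 4-neighbor blur) and decay it.
    ij = pos.astype(int)
    np.add.at(trail, (ij[:, 0], ij[:, 1]), DEPOSIT)
    trail = DECAY * 0.25 * (np.roll(trail, 1, 0) + np.roll(trail, -1, 0) +
                            np.roll(trail, 1, 1) + np.roll(trail, -1, 1))

print("trail field statistics:", trail.min(), trail.max())

Run for a few hundred steps, the agents self-organize into a branching network of dense trails; that qualitative behavior is what the team generalized to three dimensions and adapted to sparse galaxy data.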

Burchett gave Elek a dataset of 37,000 galaxies from the Sloan Digital Sky Survey (SDSS), and when they applied the new algorithm to it, the result was a pretty convincing representation of the cosmic web.

"That was kind of a Eureka moment, and I became convinced that the slime mold model was the way forward for us," Burchett said. "It's somewhat coincidental that it works, but not entirely. A slime mold creates an optimized transport network, finding the most efficient pathways to connect food sources. In the cosmic web, the growth of structure produces networks that are also, in a sense, optimal. The underlying processes are different, but they produce mathematical structures that are analogous."

Elek also noted that "the model we developed is several layers of abstraction away from its original inspiration."

Of course, a strong visual resemblance of the model results to the expected structure of the cosmic web doesn't prove anything. The researchers performed a variety of tests to validate the model as they continued to refine it.

Until now, the best representations of the cosmic web have emerged from computer simulations of the evolution of structure in the universe, showing the distribution of dark matter on large scales, including the massive dark matter halos in which galaxies form and the filaments that connect them. Dark matter is invisible, but it makes up about 85 percent of the matter in the universe, and gravity causes ordinary matter to follow the distribution of dark matter.

Burchett's team used data from the Bolshoi-Planck cosmological simulation--developed by Joel Primack, professor emeritus of physics at UC Santa Cruz, and others--to test the Monte Carlo Physarum Machine. After extracting a catalog of dark matter halos from the simulation, they ran the algorithm to reconstruct the web of filaments connecting them. When they compared the outcome of the algorithm to the original simulation, they found a tight correlation. The slime mold model essentially replicated the web of filaments in the dark matter simulation, and the researchers were able to use the simulation to fine-tune the parameters of their model.
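
Schematically, that comparison amounts to evaluating the two density fields on a common grid and correlating them; the short sketch below uses placeholder arrays (random data, not the Bolshoi-Planck catalog) just to make the logic concrete.

import numpy as np

# Schematic comparison of two density fields on the same grid
# (placeholder arrays stand in for the slime-mold reconstruction and
# the dark matter field from the cosmological simulation).
recon_density = np.random.lognormal(size=(64, 64, 64))   # placeholder
sim_density = recon_density * np.random.lognormal(sigma=0.2, size=(64, 64, 64))

# Densities span orders of magnitude, so compare them in log space.
log_recon = np.log10(recon_density).ravel()
log_sim = np.log10(sim_density).ravel()
r = np.corrcoef(log_recon, log_sim)[0, 1]
print(f"Pearson correlation of log densities: {r:.3f}")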

"Starting with 450,000 dark matter halos, we can get an almost perfect fit to the density fields in the cosmological simulation," Elek said.

Burchett also performed what he called a "sanity check," comparing the observed properties of the SDSS galaxies with the gas densities in the intergalactic medium predicted by the slime mold model. Star formation activity in a galaxy should correlate with the density of its galactic environment, and Burchett was relieved to see the expected correlations.

Now the team had a predicted structure for the cosmic web connecting the 37,000 SDSS galaxies, which they could test against astronomical observations. For this, they used data from the Hubble Space Telescope's Cosmic Origins Spectrograph. Intergalactic gas leaves a distinctive absorption signature in the spectrum of light that passes through it, and the sight-lines of hundreds of distant quasars pierce the volume of space occupied by the SDSS galaxies.

"We knew where the filaments of the cosmic web should be thanks to the slime mold, so we could go to the archived Hubble spectra for the quasars that probe that space and look for the signatures of the gas," Burchett explained. "Wherever we saw a filament in our model, the Hubble spectra showed a gas signal, and the signal got stronger toward the middle of filaments where the gas should be denser."

In the densest regions, however, the signal dropped off. This too matched expectations, he said, because heating of the gas in those regions ionizes the hydrogen, stripping off electrons and eliminating the absorption signature.
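
The logic of that check can likewise be sketched in a few lines: sample the model's filament density at each quasar sightline, bin the sightlines by that density, and look at the average absorption strength per bin. The arrays below are hypothetical placeholders standing in for the actual SDSS and Hubble measurements.

import numpy as np

# Sketch of the density-vs-absorption test (hypothetical placeholder data):
# each quasar sightline gets a filament density from the model and an
# absorption strength measured from its Hubble spectrum.
model_density = np.random.lognormal(size=500)                    # placeholder
absorption = 0.5 * np.log10(model_density) + np.random.normal(0, 0.3, 500)

bins = np.quantile(model_density, [0, 0.25, 0.5, 0.75, 1.0])
which = np.digitize(model_density, bins[1:-1])                   # 4 density bins
for b in range(4):
    mean_abs = absorption[which == b].mean()
    print(f"density bin {b}: mean absorption strength = {mean_abs:.2f}")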

"For the first time now, we can quantify the density of the intergalactic medium from the remote outskirts of cosmic web filaments to the hot, dense interiors of galaxy clusters," Burchett said. "These results not only confirm the structure of the cosmic web predicted by cosmological models, they also give us a way to improve our understanding of galaxy evolution by connecting it with the gas reservoirs out of which galaxies form."

Burchett and Elek met through coauthor Angus Forbes, an associate professor of computational media and director of the UCSC Creative Coding lab in the Baskin School of Engineering. Burchett and Forbes had begun collaborating after meeting at an open mic night for musicians in Santa Cruz, focusing initially on a data visualization app, which they published last year.

Forbes also introduced Elek to the work of Sage Jenson, not because he thought it would apply to Burchett's cosmic web project, but because "he knew I was a nature pattern freak," Elek said.

Coauthor J. Xavier Prochaska, a professor of astronomy and astrophysics at UCSC who has done pioneering work using quasars to probe the structure of the intergalactic medium, said, "This creative technique and its unanticipated success highlight the value of interdisciplinary collaborations, where completely different perspectives and expertise are brought to bear on scientific problems."

Forbes' Creative Coding lab combines approaches from media arts, design, and computer science. "I think there can be real opportunities when you integrate the arts into scientific research," Forbes said. "Creative approaches to modeling and visualizing data can lead to new perspectives that help us make sense of complex systems."

Credit: 
University of California - Santa Cruz

'Strange' glimpse into neutron stars and symmetry violation

image: Inner vertex components of the STAR detector at the Relativistic Heavy Ion Collider (righthand view) allow scientists to trace tracks from triplets of decay particles picked up in the detector's outer regions (left) to their origin in a rare "antihypertriton" particle that decays just outside the collision zone. Measurements of the momentum and known mass of the decay products (a pi+ meson, antiproton, and antideuteron) can then be used to calculate the mass and binding energy of the parent particle. Doing the same for the hypertriton (which decays into different "daughter" particles) allows precision comparisons of these matter and antimatter varieties.

Image: 
Brookhaven National Laboratory

UPTON, NY--New results from precision particle detectors at the Relativistic Heavy Ion Collider (RHIC) offer a fresh glimpse of the particle interactions that take place in the cores of neutron stars and give nuclear physicists a new way to search for violations of fundamental symmetries in the universe. The results, just published in Nature Physics, could only be obtained at a powerful ion collider such as RHIC, a U.S. Department of Energy (DOE) Office of Science user facility for nuclear physics research at DOE's Brookhaven National Laboratory.

The precision measurements reveal that the binding energy holding together the components of the simplest "strange-matter" nucleus, known as a "hypertriton," is greater than obtained by previous, less-precise experiments. The new value could have important astrophysical implications for understanding the properties of neutron stars, where the presence of particles containing so-called "strange" quarks is predicted to be common.

The second measurement was a search for a difference between the mass of the hypertriton and its antimatter counterpart, the antihypertriton (the first nucleus containing an antistrange quark, discovered at RHIC in 2010). Physicists have never found a mass difference between matter-antimatter partners, so seeing one would be a big discovery. It would be evidence of "CPT" violation--a simultaneous violation of three fundamental symmetries in nature pertaining to the reversal of charge, parity (mirror symmetry), and time.

"Physicists have seen parity violation, and violation of CP together (each earning a Nobel Prize for Brookhaven Lab[--), but never CPT," said Brookhaven physicist Zhangbu Xu, co-spokesperson of RHIC's STAR experiment, where the hypertriton research was done.

But no one has looked for CPT violation in the hypertriton and antihypertriton, he said, "because no one else could yet."

The heaviest nucleus previously subjected to a CPT test was helium-3: the ALICE collaboration at Europe's Large Hadron Collider (LHC) measured the mass difference between ordinary helium-3 and antihelium-3. The result, showing no significant difference, was published in Nature Physics in 2015.

Spoiler alert: The STAR results also reveal no significant mass difference between the matter-antimatter partners explored at RHIC, so there's still no evidence of CPT violation. But the fact that STAR physicists could even make the measurements is a testament to the remarkable capabilities of their detector.

Strange matter

The simplest normal-matter nuclei contain just protons and neutrons, with each of those particles made of ordinary "up" and "down" quarks. In hypertritons, one neutron is replaced by a particle called a lambda, which contains one strange quark along with the ordinary up and down varieties.

Such strange matter replacements are common in the ultra-dense conditions created in RHIC's collisions--and are also likely in the cores of neutron stars where a single teaspoon of matter would weigh more than 1 billion tons. That's because the high density makes it less costly energy-wise to make strange quarks than the ordinary up and down varieties.
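
That figure is easy to check. Assuming a core density of a few times 10^17 kg per cubic meter (a typical quoted value, not one given in this article) and a teaspoon of about 5 milliliters,

m \approx \rho V \approx \left(4\times 10^{17}\ \mathrm{kg\,m^{-3}}\right)\left(5\times 10^{-6}\ \mathrm{m^{3}}\right) = 2\times 10^{12}\ \mathrm{kg} \approx 2\times 10^{9}\ \text{metric tons},

comfortably more than a billion tons.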

For that reason, RHIC collisions give nuclear physicists a way to peer into the subatomic interactions within distant stellar objects without ever leaving Earth. And because RHIC collisions create hypertritons and antihypertritons in nearly equal amounts, they offer a way to search for CPT violation as well.

But finding those rare particles among the thousands that stream from each RHIC particle smashup--with collisions happening thousands of times each second--is a daunting task. Add to the challenge the fact that these unstable particles decay almost as soon as they form--within centimeters of the center of the four-meter-wide STAR detector.

Precision detection

Fortunately, detector components added to STAR for tracking different kinds of particles made the search a relative cinch. These components, called the "Heavy-Flavor Tracker," are located very close to the STAR detector's center. They were developed and built by a team of STAR collaborators led by scientists and engineers at DOE's Lawrence Berkeley National Laboratory (Berkeley Lab). These inner components allow scientists to match up tracks created by decay products of each hypertriton and antihypertriton with their point of origin just outside the collision zone.

"What we look for are the 'daughter' particles--the decay products that strike detector components at the outer edges of STAR," said Berkeley Lab physicist Xin Dong. Identifying tracks of pairs or triplets of daughter particles that originate from a single point just outside the primary collision zone allows the scientists to pick these signals out from the sea of other particles streaming from each RHIC collision.

"Then we calculate the momentum of each daughter particle from one decay (based on how much they bend in STAR's magnetic field), and from that we can reconstruct their masses and the mass of the parent hypertriton or antihypertriton particle before it decayed," explained Declan Keane of Kent State University (KSU). Telling the hypertriton and antihypertriton apart is easy because they decay into different daughters, he added.

"Keane's team, including Irakli Chakeberia, has specialized in tracking these particles through the detectors to 'connect the dots,'" Xu said. "They also provided much needed visualization of the events."

As noted, compiling data from many collisions revealed no mass difference between the matter and antimatter hypernuclei, so there's no evidence of CPT violation in these results.

But when STAR physicists looked at their results for the binding energy of the hypertriton, it turned out to be larger than previous measurements from the 1970s had found.

The STAR physicists derived the binding energy by subtracting their value for the hypertriton mass from the combined known masses of its building-block particles: a deuteron (a bound state of a proton and a neutron) and one lambda.
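
Written as an equation, the lambda separation (binding) energy obtained this way is

B_\Lambda = \left( m_d + m_\Lambda - m_{{}^{3}_{\Lambda}\mathrm{H}} \right) c^{2},

where m_d and m_\Lambda are the well-known deuteron and lambda masses and m_{{}^{3}_{\Lambda}\mathrm{H}} is the hypertriton mass measured by STAR; a positive B_\Lambda means the bound system is lighter than its separated parts, as Chen explains next.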

"The hypertriton weighs less than the sum of its parts because some of that mass is converted into the energy that is binding the three nucleons together," said Fudan University STAR collaborator Jinhui Chen, whose PhD student, Peng Liu, analyzed the large datasets to arrive at these results. "This binding energy is really a measure of the strength of these interactions, so our new measurement could have important implications for understanding the 'equation of state' of neutron stars," he added.

For example, in model calculations, the mass and structure of a neutron star depends on the strength of these interactions. "There's great interest in understanding how these interactions--a form of the strong force--are different between ordinary nucleons and strange nucleons containing up, down, and strange quarks," Chen said. "Because these hypernuclei contain a single lambda, this is one of the best ways to make comparisons with theoretical predictions. It reduces the problem to its simplest form."

Credit: 
DOE/Brookhaven National Laboratory