Heavens

Illuminating cell surface receptors

image: Site-specific incorporation of two noncanonical amino acids for two-color bioorthogonal labeling and crosslinking of proteins on live mammalian cells

Image: 
Simon Elsässer

Human cells sense and communicate with their environment via receptors on their surface. Information about the environment is relayed to the inside of the cell through dynamic changes in the arrangement or conformation of these receptors. To gain a better understanding of these dynamics, researchers have developed a variety of imaging methods that allow them to observe receptors in real time.

Birthe Meineke and Johannes Heimgärtner have jointly developed a method in Simon Elsässer's lab at Science For Life Laboratory Stockholm to label and image receptors on live cells with two different colors.

Unlike existing live-cell microscopy methods, which typically rely on sizeable fluorescent proteins fused to the receptor, here only two amino acids in the receptor of interest are exchanged for noncanonical amino acids. These two synthetic building blocks act as chemical handles, each reacting with a different fluorescent dye via so-called bioorthogonal chemistry. The method allows two differently colored dyes to be installed at almost any position of a receptor on live cells. Hence, fluorescent assays can be developed that not only read out the localization of the receptor, but also report on conformational changes involved in sensing the environment and transducing extracellular signals to the cells' interior.

Many receptors, such as the large family of G-protein coupled receptors, are important drug targets. Mechanistic insight into the action of drugs on those receptors has so far predominantly been collected on purified receptors in an in vitro environment. The new method, successfully demonstrated by the team on a G-protein coupled receptor and a Notch receptor, expands the possibilities to study natural behavior and pharmacology of receptors in their native setting in the membrane of live cells.

Credit: 
Science For Life Laboratory

Scientists provide new explanation for the far side of the Moon's strange asymmetry

image: The composition of the Moon's near side is oddly different from that of its far side, and scientists think they finally understand why.

Image: 
NASA/NOAA

The Earth‐Moon system’s history remains mysterious. Scientists believe the two formed when a Mars‐sized body collided with the proto‐Earth. Earth ended up being the larger daughter of this collision and retained enough heat to become tectonically active. The Moon, being smaller, likely cooled down faster and geologically ‘froze’. The apparent early dynamism of the Moon challenges this idea.

New data suggest this is because radioactive elements were distributed uniquely after the catastrophic Moon‐forming collision. Earth's Moon, together with the Sun, is a dominant object in our sky and offers many observable features which keep scientists busy trying to explain how our planet and the Solar System formed. Most planets in our solar system have satellites. For example, Mars has two moons, Jupiter has 79 and Neptune has 14. Some moons are icy, some are rocky, some are still geologically active and some relatively inactive. How planets got their satellites and why they have the properties they do are questions which could shed light on many aspects of the evolution of the early Solar System.

The Moon is a relatively cold rocky body, with a limited amount of water and little tectonic processing. Scientists presently believe the Earth‐Moon system formed when a Mars‐sized body dubbed Theia – who in Greek mythology was the mother of Selene, the goddess of the Moon – catastrophically collided with the proto‐Earth, causing the components of both bodies to mix.

The debris of this collision is thought to have separated fairly rapidly, perhaps over a few million years, to form the Earth and Moon. The Earth ended up being larger and evolved in a sweet spot, its size being just right for it to become a dynamic planet with an atmosphere and oceans. Earth's Moon ended up smaller and did not have sufficient mass to host these characteristics. Thus, retaining volatile substances like water or the gases that form our atmosphere, and retaining sufficient internal heat to maintain long‐term planetary volcanism and tectonics, depended on exactly how the Earth‐Moon forming collision occurred. Decades of observations have since demonstrated that lunar history was much more dynamic than expected, with volcanic and magnetic activity occurring as recently as 1 billion years ago, far later than predicted.

A clue as to why the near and far sides of the Moon are so different comes from the strong asymmetry observable in its surface features. On the Moon's perpetually Earth‐facing near side, on any given night, or day, one can observe dark and light patches with the naked eye. Early astronomers named these dark regions 'maria', Latin for 'seas', thinking they were bodies of water by analogy with the Earth. Using telescopes, scientists were able to figure out over a century ago that these were not in fact seas, but more likely craters or volcanic features.

Back then, most scientists assumed the far side of the Moon, which they would never have been able to see, was more or less like the near side.

However, because the Moon is relatively close to the Earth, only about 380,000 km away, the Moon was the first Solar System body humans were able to explore, first using non‐crewed spacecraft and then ‘in person’. In the late 1950s and early 1960s, non‐crewed space probes launched by the USSR returned the first images of the far side of the Moon, and scientists were surprised to find that the two sides were very different. The far side had almost no maria. Only 1% of the far side was covered with maria compared with ~31% for the near side. Scientists were puzzled, but they suspected this asymmetry was offering clues as to how the Moon formed.

In the late 1960s and early 1970s, NASA’s Apollo missions landed six spacecraft on the Moon, and astronauts brought back 382 kg of Moon rocks to try to understand the origin of the Moon using chemical analysis. Having samples in hand, scientists quickly figured out the relative darkness of these patches was due to their geological composition and they were, in fact, attributable to volcanism. They also identified a new type of rock signature they named KREEP – short for rock enriched in potassium (chemical symbol K), rare‐earth elements (REE, which include cerium, dysprosium, erbium, europium, and other elements which are rare on Earth) and phosphorus (chemical symbol P) – which was associated with the maria. But why volcanism and this KREEP signature should be distributed so unevenly between the near and far sides of the Moon again presented a puzzle.

Now, using a combination of observation, laboratory experiments and computer modelling, scientists from the Earth‐Life Science Institute at Tokyo Institute of Technology, the University of Florida, the Carnegie Institution for Science, Towson University, NASA Johnson Space Center and the University of New Mexico have brought some new clues as to how the Moon gained its near‐ and far‐side asymmetry. These clues are linked to an important property of KREEP.

Potassium (K), thorium (Th) and uranium (U) are, importantly for this story, radioactively unstable elements. This means that they occur in a variety of atomic configurations that have variable numbers of neutrons. These variable composition atoms are known as ‘isotopes’, some of which are unstable and fall apart to yield other elements, producing heat.

The heat from the radioactive decay of these elements can help melt the rocks they are contained in, which may partly explain their co‐localisation.

This study shows that, in addition to enhanced heating, the inclusion of a KREEP component in rocks also lowers their melting temperature, compounding the volcanic activity expected from radiogenic decay alone. Because most of these lava flows were emplaced early in lunar history, this study also adds constraints about the timing of the Moon's evolution and the order in which various processes occurred on the Moon.

This work required collaboration among scientists working on theory and experiment. After conducting high temperature melting experiments of rocks with various KREEP components, the team analysed the implications this would have on the timing and volume of volcanic activity at the lunar surface, providing important insight about the early stages of evolution of the Earth‐Moon system.

ELSI co‐author Matthieu Laneuville comments, ‘Because of the relative lack of erosion processes, the Moon’s surface records geological events from the Solar System’s early history. In particular, regions on the Moon’s near side have concentrations of radioactive elements like U and Th unlike anywhere else on the Moon. Understanding the origin of these local U and Th enrichments can help explain the early stages of the Moon’s formation and, as a consequence, conditions on the early Earth.’

The results from this study suggest that the Moon’s KREEP‐enriched maria have influenced lunar evolution since the Moon formed. Laneuville thinks evidence for these kinds of non‐symmetric, self‐amplifying processes might be found in other moons in our Solar System, and may be ubiquitous on rocky bodies throughout the Universe.

Credit: 
Tokyo Institute of Technology

Quantum rings in the hold of laser light

image: Ultracold atoms caught in an optical trap form surprisingly complex structures. Depending on the mutual interactions between particles with opposite spins, phases with various properties can be created locally.

Image: 
Source: IFJ PAN

Ultracold atoms trapped in appropriately prepared optical traps can arrange themselves in surprisingly complex, hitherto unobserved structures, according to scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences in Cracow. In line with their most recent predictions, matter in optical lattices should form extended, inhomogeneous quantum rings in a controlled manner.

An optical lattice is a structure built of light, i.e. electromagnetic waves. Lasers play a key role in the construction of such lattices. Each laser generates an electromagnetic wave with strictly defined, constant parameters that can be modified almost arbitrarily. When the laser beams are matched properly, it is possible to create a lattice with well-known properties. Where the waves overlap, minima of the potential are formed, and their arrangement enables the simulation of systems and models well known from solid-state physics. The advantage of systems prepared in this way is that the positions of these minima can be modified relatively easily, which in practice means that various types of lattices can be prepared.
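
To make the idea of overlapping waves forming a lattice of potential minima concrete, here is a minimal one-dimensional sketch (an illustrative textbook example with assumed parameters, not the IFJ PAN calculation): two counter-propagating beams form a standing wave whose dipole potential has minima, the lattice "sites", spaced half a wavelength apart.

```python
import numpy as np

# Illustrative 1D optical-lattice potential (assumed parameters, arbitrary units):
# two counter-propagating beams of wavelength lam interfere into a standing wave
# whose dipole potential is V(x) = -V0 * cos^2(k*x); cold atoms collect in its minima.
lam = 1.064e-6                 # assumed laser wavelength in metres
k = 2.0 * np.pi / lam          # wavenumber of each beam
V0 = 1.0                       # lattice depth, arbitrary units

x = np.linspace(0.0, 3.0 * lam, 3001)
V = -V0 * np.cos(k * x) ** 2   # standing-wave (lattice) potential

# Locate the interior potential minima numerically and check that the lattice
# sites are spaced lam/2 apart:
is_min = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
sites = x[1:-1][is_min]
print("site spacing in units of lam/2:", np.diff(sites) / (lam / 2.0))
```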

"If we introduce appropriately selected atoms into an area of space that has been prepared in this way, they will congregate in the locations of potential minima. However, there is an important condition: the atoms must be cooled to ultra-low temperatures. Only then will their energy be small enough not to break out of the subtle prepared trap," explains Dr. Andrzej Ptok from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow.

Structures formed by atoms (or groups of atoms) trapped in the optical lattice resemble crystals. Depending on the configuration of the laser beams, they can be one-, two- or three-dimensional. Unlike crystals, they are defect-free. What's more, while in crystals the possibility of modifying the structure of the lattice is negligible, optical lattices are quite easy to configure: all that's needed is to change the properties of the laser light or the angles at which the beams cross. These features make optical lattices popular as quantum simulators. They can be used to reproduce various spatial configurations of atoms or groups of atoms, including even those that do not exist in nature.

In their research, the scientists from the IFJ PAN worked with atoms trapped in optical lattices. Groups of fermions, i.e. atoms with spin 1/2 (spin is a quantum property describing the rotation of particles), were placed in the lattice sites. In each site, a certain number of atoms had their spin oriented in one direction (up) and the rest in the opposite direction (down). Tuning the interaction between atoms so that it becomes attractive leads to the creation of atom pairs, which correspond to the Cooper pairs in superconductors: pairs of electrons with opposite spins occupying the same lattice site.

"The parameters of the optical lattice can be used to influence the interaction between atoms of different spin trapped in individual sites. Moreover, in such way a state can be prepared, which mimic applied external magnetic fields on the system. It is given by control the proportions between the numbers of atoms of different spin," says Dr. Konrad J. Kapcia from IFJ PAN and notes that systems prepared in this way can reproduce the effects of relatively large magnetic fields without needing to use these fields. "This is possible because we know how a given magnetic field would impact into the difference between numbers of particles with opposite spins," explains researchers.

According to the predictions of the Cracow-based physicists, an interesting phase separation should take place in systems prepared in this manner. As a result, matter trapped in an optical lattice should spontaneously form a core-shell structure: a core of paired atoms in one phase surrounded by a shell of paired atoms in the second phase.

"The whole situation can be represented by a tasty example. Imagine a plate of rice with a thick sauce. By proper preparation of the plate, we can affect the relative position between the rice and the sauce. For example, we can prepare system in such way, that the rice will be in the center, while the sauce forms a ring around it. From the same ingredients we can also construct the reverse system: in the middle of the plate there will be the sauce surrounded by a ring of the rice. In our case, the plate is the optical trap with atoms and their pairs, and the rice and sauce are the two phases, grouping different types of atom pairs", Dr. Ptok describes.

The work of the physicists from IFJ PAN, published in Scientific Reports, is of a theoretical nature. Due to their simplicity, however, the described systems of ultracold atoms in optical traps can be quickly verified in laboratory experiments. Physicists from the IFJ PAN predicted that ultracold atoms trapped in optical lattices can form quantum rings with an inhomogeneous structure.

The Henryk Niewodniczanski Institute of Nuclear Physics (IFJ PAN) is currently the largest research institute of the Polish Academy of Sciences. The broad range of studies and activities of IFJ PAN includes basic and applied research, ranging from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of methods of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. The average yearly output of IFJ PAN encompasses more than 600 scientific papers in journals listed in the Journal Citation Reports published by Clarivate Analytics. Part of the Institute is the Cyclotron Centre Bronowice (CCB), an infrastructure unique in Central Europe that serves as a clinical and research centre in the area of medical and nuclear physics. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium: "Matter-Energy-Future", which possessed the status of a Leading National Research Centre (KNOW) in physics for the years 2012-2017. In 2017 the European Commission granted the Institute the HR Excellence in Research award. The Institute holds an A+ category (the leading level in Poland) in the field of sciences and engineering.

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Stunning new Hubble images reveal stars gone haywire

image: These two new images from the Hubble Space Telescope depict two nearby young planetary nebulae, NGC 6302, dubbed the Butterfly Nebula, and NGC 7027, which resembles a jewel bug. Both are among the dustiest planetary nebulae known and both contain unusually large masses of gas.

Image: 
NASA, ESA, and J. Kastner (RIT)

The NASA/ESA Hubble Space Telescope demonstrates its full range of imaging capabilities with two new images of planetary nebulae. The images depict two nearby young planetary nebulae, NGC 6302, dubbed the Butterfly Nebula, and NGC 7027, which resembles a jewel bug. Both are among the dustiest planetary nebulae known and both contain unusually large masses of gas, which made them an interesting pair for study in parallel by a team of researchers.

As nuclear fusion engines, most stars live placid lives for hundreds of millions to billions of years. But near the end of their lives they can turn into crazy whirligigs, puffing off shells and jets of hot gas. Astronomers have used Hubble to dissect such crazy fireworks happening in these two planetary nebulae. The researchers have found unprecedented levels of complexity and rapid changes in the jets and gas bubbles blasting off of the stars at the center of each nebula. Hubble is now allowing the researchers to converge on an understanding of the mechanisms underlying this chaos.

The Hubble Space Telescope has imaged these objects before, but not for many years and never before with the Wide Field Camera 3 instrument across its full wavelength range -- making observations in near-ultraviolet to near-infrared light. "These new multi-wavelength Hubble observations provide the most comprehensive view to date of both of these spectacular nebulae," said Joel Kastner of the Rochester Institute of Technology, Rochester, New York, leader of the new study. "As I was downloading the resulting images, I felt like a kid in a candy store."

The new Hubble images reveal in vivid detail how both nebulae are splitting themselves apart on extremely short timescales -- allowing astronomers to see changes over the past couple of decades. In particular, Hubble's broad multi-wavelength views of each nebula are helping the researchers to trace the histories of shock waves in them. Such shocks are typically generated when fresh, fast stellar winds slam into and sweep up more slowly expanding gas and dust ejected by the star in its recent past, generating bubble-like cavities with well-defined walls.

Researchers suspect that at the heart of each nebula were two stars orbiting around each other. Evidence for such a central "dynamic duo" comes from the bizarre shapes of these nebulas. Each has a pinched, dusty waist and polar lobes or outflows, as well as other, more complex symmetrical patterns.

A leading theory for the generation of such structures in planetary nebulae is that the mass-losing star is one of two stars in a binary system. The two stars orbit one another closely enough that they eventually interact, producing a gas disc around one or both stars. The disc then launches jets that inflate polar-directed lobes of outflowing gas.

Another, related, popular hypothesis is that the smaller star of the pair may merge with its bloated, more rapidly evolving stellar companion. This very short-lived "common envelope" binary star configuration can also generate wobbling jets, forming the trademark bipolar outflows commonly seen in planetary nebulae. However, the suspect companion stars in these planetary nebulae have not been directly observed. Researchers suggest this may be because these companions are next to, or have already been swallowed by, far larger and brighter red giant stars.

NGC 6302, commonly known as the Butterfly Nebula, exhibits a distinct S-shaped pattern seen in reddish-orange in the image. Imagine a lawn sprinkler spinning wildly, throwing out two S-shaped streams. In this case it is not water in the air, but gas blown out at high speed by a star. And the "S" only appears when captured by the Hubble camera filter that records near-infrared emission from singly ionised iron atoms. This iron emission is indicative of energetic collisions between both slow and fast winds, which is most commonly observed in active galactic nuclei and supernova remnants.

"This is very rarely seen in planetary nebulae," explained team member Bruce Balick of the University of Washington in Seattle. "Importantly, the iron emission image shows that fast, off-axis winds penetrate far into the nebula like tsunamis, obliterating former clumps in their paths and leaving only long tails of debris."

The accompanying image of NGC 7027, which resembles a jewel bug, indicates that it had been slowly puffing away its mass in quiet, spherically symmetric or perhaps spiral patterns for centuries -- until relatively recently. "Something recently went haywire at the very centre, producing a new cloverleaf pattern, with bullets of material shooting out in specific directions," Kastner explained.

Credit: 
ESA/Hubble Information Centre

Observation of Excess Events in the XENON1T Dark Matter Experiment

image: The bottom of the XENON1T time projection chamber from below. The back ends of the photomultiplier tubes recording the scintillation light from events inside the chamber are clearly visible in their PTFE holding structure, as are the copper rings in the cylinder walls that shape the drift field which guides the ionisation signal electrons to the top of the chamber.

Image: 
XENON collaboration

Scientists from the international XENON collaboration, an international experimental group including the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), University of Tokyo; the Institute for Cosmic Ray Research (ICRR), University of Tokyo; the Institute for Space-Earth Environmental Research (ISEE), Nagoya University; the Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya University; and the Graduate School of Science, Kobe University, announced today that data from their XENON1T detector, the world's most sensitive dark matter experiment, show a surprising excess of events. The scientists do not claim to have found dark matter. Instead, they say they have observed an unexpected rate of events, the source of which is not yet fully understood. The signature of the excess is similar to what might result from a tiny residual amount of tritium (a hydrogen atom with one proton and two neutrons), but could also be a sign of something more exciting--such as the existence of a new particle known as the solar axion or the indication of previously unknown properties of neutrinos.

XENON1T was operated deep underground at the INFN Laboratori Nazionali del Gran Sasso in Italy, from 2016 to 2018. It was primarily designed to detect dark matter, which makes up 85% of the matter in the universe. So far, scientists have only observed indirect evidence of dark matter, and a definitive, direct detection is yet to be made. So-called WIMPs (Weakly Interacting Massive Particles) are among the theoretically preferred candidates, and XENON1T has thus far set the best limit on their interaction probability over a wide range of WIMP masses. In addition to WIMP dark matter, XENON1T was also sensitive to different types of new particles and interactions that could explain other open questions in physics. Last year, using the same detector, these scientists published in Nature the observation of the rarest nuclear decay ever directly measured.

The XENON1T detector was filled with 3.2 tonnes of ultra-pure liquefied xenon, 2.0 t of which served as a target for particle interactions. When a particle crosses the target, it can generate tiny signals of light and free electrons from a xenon atom. Most of these interactions occur from particles that are known to exist. Scientists therefore carefully estimated the number of background events in XENON1T. When data of XENON1T were compared to known backgrounds, a surprising excess of 53 events over the expected 232 events was observed.

This raises the exciting question: where is this excess coming from?

One explanation could be a new, previously unconsidered source of background, caused by the presence of tiny amounts of tritium in the XENON1T detector. Tritium, a radioactive isotope of hydrogen, spontaneously decays by emitting an electron with an energy similar to what was observed. Only a few tritium atoms for every 10²⁵ (10,000,000,000,000,000,000,000,000!) xenon atoms would be needed to explain the excess. Currently, there are no independent measurements that can confirm or disprove the presence of tritium at that level in the detector, so a definitive answer to this explanation is not yet possible.

More excitingly, another explanation could be the existence of a new particle. In fact, the excess observed has an energy spectrum similar to that expected from axions produced in the Sun. Axions are hypothetical particles that were proposed to preserve a time-reversal symmetry of the nuclear force, and the Sun may be a strong source of them. While these solar axions are not dark matter candidates, their detection would mark the first observation of a well-motivated but never observed class of new particles, with a large impact on our understanding of fundamental physics, but also on astrophysical phenomena. Moreover, axions produced in the early universe could also be the source of dark matter.

Alternatively, the excess could also be due to neutrinos, trillions of which pass through your body, unhindered, every second. One explanation could be that the magnetic moment (a property of all particles) of neutrinos is larger than its value in the Standard Model of elementary particles. This would be a strong hint to some other new physics needed to explain it.

Of the three explanations considered by the XENON collaboration, the observed excess is most consistent with a solar axion signal. In statistical terms, the solar axion hypothesis has a significance of 3.5 sigma, meaning that there is about a 2/10,000 chance that the observed excess is due to a random fluctuation rather than a signal. While this significance is fairly high, it is not large enough to conclude that axions exist. The significance of both the tritium and neutrino magnetic moment hypotheses corresponds to 3.2 sigma, meaning that they are also consistent with the data.
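
The quoted numbers can be sanity-checked with a short calculation (a simplified counting argument using only the figures in this release; the collaboration's actual significances come from a detailed fit to the measured energy spectrum):

```python
from math import erf, sqrt

excess = 53.0          # excess events reported over background
background = 232.0     # expected background events

# Naive significance of a counting excess, treating the background as Poisson:
print(f"naive excess significance: {excess / sqrt(background):.1f} sigma")  # ~3.5

# One-sided probability that a pure background fluctuation reaches n sigma:
def tail_probability(n_sigma: float) -> float:
    return 0.5 * (1.0 - erf(n_sigma / sqrt(2.0)))

print(f"chance of a 3.5-sigma fluke: {tail_probability(3.5):.1e}")  # ~2.3e-4, about 2 in 10,000
print(f"chance of a 3.2-sigma fluke: {tail_probability(3.2):.1e}")  # ~6.9e-4
```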

XENON1T is now being upgraded to its next phase, XENONnT, with an active xenon mass three times larger and a background that is expected to be lower than that of XENON1T. With better data from XENONnT, the XENON collaboration is confident it will soon find out whether this excess is a mere statistical fluke, a background contaminant, or something far more exciting: a new particle or interaction that goes beyond known physics.

Credit: 
Kavli Institute for the Physics and Mathematics of the Universe

Crop residue decisions affect soil life

image: Controlled burns can be used to remove crop residue from farm fields between growing seasons.

Image: 
Rachel Schutte

In some ways, farming is like cooking. Cooking would be much easier if we could leave the kitchen after eating and not come back until we make the next meal. But someone needs to put away the leftovers, do the dishes, and clean up the table.

Similarly, there's work to do in farm fields after harvest and before planting the next spring.

After harvest in the fall, farmers take the harvested crops to market or store them on their farm. They don't take the whole plant from the field, though.

The leftover parts of the plant, like the stalk and leaves from corn, remain in the field. This debris is called crop residue.

Using no-till and prescribed fire management are two potential ways to manage crop residue. Both practices help keep organic matter and nitrogen in the soil. However, research was needed to understand how these two practices can affect long-term soil health.

Lisa Fultz and her team want to help farmers determine the best way to manage their residue between growing seasons. To do this, her team decided to learn more about how no-till and prescribed fire management affect nutrients and microbes in the soil. Fultz is a researcher at Louisiana State University AgCenter.

No-till is a practice where farmers plant directly into the crop debris from the previous year. Prescribed fires are used to purposely burn off the previous crop debris with controlled fire. "Both of these practices have minimal physical disturbance to the soil," says Fultz.

Both of these practices also come with drawbacks. No-till can cause poor conditions for crop growth like low spring temperatures and increased moisture, which promotes disease. Prescribed fire can leave bare soil vulnerable to erosion.

The team focused the research on wheat and soybean rotations and continuous corn production systems. "These are common practices not only in the mid-south, but across many areas of the world," explains Fultz.

"Wheat and corn production leave behind residue," she says. "Common practices, like conventional tillage, are highly disruptive. The need to identify viable conservation practices is growing in importance."

Crop residue and its degradation by soil microbes is an important part of the carbon cycle. Plants store carbon during the growing season, then microbes use the plant residue for food. The carbon then gets stored in the soil in a chemically stable form.

"Fresh, green material in no-till fields is easy to breakdown and provides rich nutrients for soil microbes," says Fultz. "Ash from burned residue is more chemically stable, but it doesn't provide a nutrient source for microbes."

The team found that impacts from crop management practices, like crop rotation or fertilization, outweighed the influence of prescribed fire for residue management. Researchers found some decreases in microbial activity after yearly prescribed burns.

Findings show prescribed fire had some possible short-term benefits for soil nutrient availability, but timing is crucial. Prescribed burning of wheat residue provided an increase of nitrogen for about 7 days. These benefits should be weighed against other possible impacts, like carbon dioxide production and crop yield.

"We still need to learn the long-term influence of prescribed fire on the soil biological community," says Fultz. "While short-term impacts were measured, the long-term influence on soil nutrients, biological cycles and soil health is not known."

No two farm management systems are the same, and their success is defined by the user. Scientists continue to examine possible scenarios to provide accurate and sustainable recommendations to farmers.

"I have always been interested in soil conservation and the potential it has to impact many facets of life," says Fultz. "By improving soil health, we can improve air and water quality, store carbon, and provide stable resources for food production."

Credit: 
American Society of Agronomy

Melting a crystal topologically

image: Upper panel: spin configuration of a skyrmion. Lower panel: Voronoi tessellation of representative skyrmion lattice configurations in the solid, the hexatic and the liquid phases respectively.

Image: 
Huang Ping (Xi'an Jiaotong University)

The introduction of topology, a branch of mathematics focusing on the properties of "knots", into physics has inspired revolutionary concepts such as topological phases of matter and topological phase transitions, which resulted in the Nobel Prize in Physics in 2016.

Magnetic skyrmions, spin "nano-tornados" named after particle physicist Tony Skyrme, have a unique topology (winding configuration) and have attracted sharply increasing attention in the last decade, due both to their importance in fundamental physics and to their promising applications in next-generation magnetic storage. These nano-tornados, also known as quasi-particles (in contrast with real-matter particles like atoms and electrons), can form crystalline structures; that is, they arrange themselves in a periodic and symmetric manner, the same way as atoms in a quartz crystal.

From everyday experience, we know that a crystalline solid, such as ice, can melt upon heating. One may also have noted that such melting transitions happen in a single step, i.e. from the solid state directly to the liquid state. In the framework of topological phase transitions in a very thin crystal, however, melting may take place in two steps, via a topological phase called the hexatic phase. What does such a topological phase look like, and how does this melting process happen?

Now, EPFL physicists have found a way to visualize the whole melting process, as reported recently in Nature Nanotechnology. Researchers from the Laboratory for Quantum Magnetism (LQM), the Laboratory for Ultrafast Microscopy and Electron Scattering (LUMES), the Centre Interdisciplinaire de Microscopie Électronique (CIME) and the Crystal Growth Facility have demonstrated that the skyrmion crystals in the compound Cu2OSeO3 can be melted in two steps by varying the magnetic field, with each step associated with a specific type of topological defect.

The researchers used a state-of-the-art technique called Lorentz Transmission Electron Microscopy (LTEM), which can image magnetic textures at nanometre resolution, to visualize skyrmions embedded in a very thin slab of Cu2OSeO3 crystal at -250 degrees Celsius. They recorded large sets of images and videos while varying the magnetic field. Through comprehensive quantitative analysis, two novel phases, the skyrmion hexatic phase and the skyrmion liquid phase, were demonstrated. New phases of matter often bring opportunities for novel functionalities, and this work, by clearly visualizing them, paves the way for further R&D.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Which factors control the height of mountains?

image: The height of the Andes, like the height of other mountain ranges on Earth, is determined by tectonic forces.

Image: 
NASA; Astronaut photograph ISS059-E-517

Which forces and mechanisms determine the height of mountains? A group of researchers from Münster and Potsdam has now found a surprising answer: It is not erosion and weathering of rocks that determine the upper limit of mountain massifs, but rather an equilibrium of forces in the Earth's crust. This is a fundamentally new and important finding for the earth sciences. The researchers report on it in the scientific journal Nature.

The highest mountain ranges on Earth - such as the Himalayas or the Andes - arise along convergent plate boundaries. At such plate boundaries two tectonic plates move toward each other, and one of the plates is forced beneath the other into the Earth's mantle. During this process of subduction, strong earthquakes repeatedly occur on the plate interface, and over millions of years mountain ranges are built at the edges of the continents.

Whether the height of mountain ranges is mainly determined by tectonic processes in the Earth's interior or by erosional processes sculpturing the Earth's surface has long been debated in geosciences.

A new study led by Armin Dielforder of the GFZ German Research Centre for Geosciences now shows that erosion by rivers and glaciers has no significant influence on the height of mountain ranges. Together with scientists from the GFZ and the University of Münster (Germany), he resolved the longstanding debate by analysing the strength of various plate boundaries and calculating the forces acting along the plate interfaces.

The researchers arrived at this surprising result by calculating the forces along different plate boundaries on the Earth. They used data that provide information about the strength of plate boundaries. These data are derived, for example, from heat flow measurements in the subsurface. The heat flow at convergent plate boundaries is in turn influenced by the frictional energy at the interfaces of the continental plates.

One can imagine the formation of mountains using a tablecloth. If you place both hands under the cloth on the table top and push, the cloth folds and at the same time slides a little over the backs of your hands. The emerging folds would correspond, for instance, to the Andes, and the sliding over the backs of the hands to the friction in the subsurface. Depending on the characteristics of the rock, stresses also build up at depth; these are released in severe earthquakes, especially in subduction zones.

The researchers collected worldwide data from the literature on friction in the subsurface of mountain ranges of different heights (Himalayas, Andes, Sumatra, Japan) and calculated the resulting stress and thus the forces that lead to the uplift of the respective mountains. In this way they showed that in active mountains the force on the plate boundary and the forces resulting from the weight and height of the mountains are in balance.
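
As a rough illustration of the kind of balance being described (a toy order-of-magnitude estimate with assumed values, not the authors' calculation), one can compare the extra depth-integrated pressure exerted by an isostatically supported mountain range with the shear force transmitted along the plate interface:

```python
# Toy force-balance estimate per metre of mountain belt (all values assumed,
# for illustration only; the study itself uses data-derived plate strengths).
g = 9.81          # gravity, m/s^2
rho_c = 2800.0    # crustal density, kg/m^3
rho_m = 3300.0    # mantle density, kg/m^3
h = 4000.0        # mean elevation of the range, m (Andes-like)
T0 = 35000.0      # reference crustal thickness of the lowland, m

# Airy isostasy: thickness of the crustal root supporting elevation h
root = rho_c * h / (rho_m - rho_c)

# Difference in depth-integrated lithostatic stress between the mountain column
# and the lowland column -- the outward "push" exerted by the range (N per metre):
F_mountain = rho_c * g * h * (T0 + (h + root) / 2.0)

# Shear force transmitted along the plate interface (N per metre):
tau = 15e6        # mean shear stress on the interface, Pa
width = 250e3     # down-dip width of the interface, m
F_boundary = tau * width

print(f"push from topography : {F_mountain:.2e} N/m")
print(f"plate-boundary shear : {F_boundary:.2e} N/m")
# Both come out at a few 10^12 N/m, illustrating how the two can balance.
```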

Such a balance of forces exists in all the mountain ranges studied, although they are located in different climatic zones with widely varying erosion rates. This result shows that mountain ranges are able to react to processes on the Earth's surface and to grow with rapid erosion in such a way that the balance of forces and the height of the mountain range are maintained. This fundamentally new finding opens up numerous opportunities to study the long-term development and growth of mountains in greater detail.

Credit: 
GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre

Nutraceuticals for promoting longevity

Aging is considered to be synonymous with the appearance of major diseases and an overall decline in physical and mental performance. This mini-review summarizes the main findings on nutraceuticals that are believed to slow aging processes by delaying and even preventing the development of multiple chronic diseases. These nutraceuticals may help improve productivity and quality of life in the elderly.

Researchers from the Migal-Galilee Research Institute (Israel), the University of Ljubljana (Slovenia) and the University of Belgrade (Serbia) have contributed their review after conducting a survey of the published literature on nutraceuticals. Caloric restriction without malnutrition is well established as the most robust environmental manipulation for extending lifespan, and a variety of nutraceuticals were reported to mimic its effects by modulating the activity of insulin-like growth factor 1 receptor signaling and sirtuin activity, consequently promoting longevity.

The review, published in Current Nutraceuticals, offers a special focus on the nutraceuticals that impact insulin-like growth factor 1 receptor signaling and sirtuin activity in mediating longevity and healthspan.

Keywords: Nutraceuticals, longevity, caloric restriction, insulin-like growth factor 1 receptor (IGF1R), silent mating type information regulation 2 homology 1 (SIRT1)

For further information, please visit: bit.ly/NutraceuticalsforPromotingLongevity

Credit: 
Bentham Science Publishers

Shock waves created in the lab mimic supernova particle accelerators

image: To study the powerful shock waves in supernova remnants, Frederico Fiuza and colleagues created similar plasma shock waves in the lab. Here, computer simulations reveal the turbulent structure of the magnetic field in two shock waves moving away from each other.

Image: 
Courtesy Frederico Fiuza/SLAC National Accelerator Laboratory

When stars explode as supernovas, they produce shock waves in the plasma surrounding them. So powerful are these shock waves, they can act as particle accelerators that blast streams of particles, called cosmic rays, out into the universe at nearly the speed of light. Yet how exactly they do that has remained something of a mystery.

Now, scientists have devised a new way to study the inner workings of astrophysical shock waves by creating a scaled-down version of the shock in the lab. They found that astrophysical shocks develop turbulence at very small scales - scales that can't be seen by astronomical observations - that helps kick electrons toward the shock wave before they're boosted up to their final, incredible speeds.

"These are fascinating systems, but because they are so far away it's hard to study them," said Frederico Fiuza, a senior staff scientist at the Department of Energy's SLAC National Accelerator Laboratory, who led the new study. "We are not trying to make supernova remnants in the lab, but we can learn more about the physics of astrophysical shocks there and validate models."

The injection problem

Astrophysical shock waves around supernovas are not unlike the shockwaves and sonic booms that form in front of supersonic jets. The difference is that when a star blows up, it forms what physicists call a collisionless shock in the surrounding gas of ions and free electrons, or plasma. Rather than running into each other as air molecules would, individual electrons and ions are forced this way and that by intense electromagnetic fields within the plasma. In the process, researchers have worked out, supernova remnant shocks produce strong electromagnetic fields that bounce charged particles across the shock multiple times and accelerate them to extreme speeds.

Yet there's a problem. The particles already have to be moving pretty fast to be able to cross the shock in the first place, and no one's sure what gets the particles up to speed. The obvious way to address that issue, known as the injection problem, would be to study supernovas and see what the plasmas surrounding them are up to. But with even the closest supernovas thousands of light years away, it's impossible to simply point a telescope at them and get enough detail to understand what's going on.

Fortunately, Fiuza, his postdoctoral fellow Anna Grassi and colleagues had another idea: They'd try to mimic the shock wave conditions of supernova remnants in the lab, something Grassi's computer models indicated could be feasible.

Most significantly, the team would need to create a fast, diffuse shock wave that could imitate supernova remnant shocks. They would also need to show that the density and temperature of the plasma increased in ways consistent with models of those shocks - and, of course, they wanted to understand if the shock wave would shoot out electrons at very high speeds.

Igniting a shock wave

To achieve something like that, the team went to the National Ignition Facility, a DOE user facility at Lawrence Livermore National Laboratory. There, the researchers shot some of the world's most powerful lasers at a pair of carbon sheets, creating a pair of plasma flows headed straight into each other. When the flows met, optical and X-ray observations revealed all the features the team were looking for, meaning they had produced in the lab a shock wave in conditions similar to a supernova remnant shock.

Most importantly, they found that when the shock was formed it was indeed capable of accelerating electrons to nearly the speed of light. They observed maximum electron velocities that were consistent with the acceleration they expected based on the measured shock properties. However, the microscopic details of how these electrons reached these high speeds remained unclear.

Fortunately, the models could help reveal some of the fine points, having first been benchmarked against experimental data. "We can't see the details of how particles get their energy even in the experiments, let alone in astrophysical observations, and this is where the simulations really come into play," Grassi said.

Indeed, the computer model revealed what may be a solution to the electron injection problem. Turbulent electromagnetic fields within the shock wave itself appear to be able to boost electron speeds up to the point where the particles can escape the shock wave and cross back again to gain even more speed, Fiuza said. In fact, the mechanism that gets particles going fast enough to cross the shock wave seems to be fairly similar to what happens when the shock wave gets particles up to astronomical speeds, just on a smaller scale.

Toward the future

Questions remain, however, and in future experiments the researchers will do detailed measurements of the X-rays emitted by the electrons the moment they are accelerated to investigate how electron energies vary with distance from the shock wave. That, Fiuza said, will further constrain their computer simulations and help them develop even better models. And perhaps most significantly, they will also look at protons, not just electrons, fired off by the shock wave, data which the team hopes will reveal more about the inner workings of these astrophysical particle accelerators.

More generally, the findings could help researchers go beyond the limitations of astronomical observations or spacecraft-based observations of the much tamer shocks in our solar system. "This work opens up a new way to study the physics of supernova remnant shocks in the lab," Fiuza said.

Credit: 
DOE/SLAC National Accelerator Laboratory

Small see-through container improves plant micrografting

image: Each unit in the device is designed to control and support plant growth, cutting, and grafting.

Image: 
The Plant Journal

A type of plant grafting needing a tremendous amount of precision and skill has now been made faster and easier thanks to a simple transparent container. Researchers at Nagoya University have developed a micrografting device that guides seedling growth and facilitates the grafting of the embryonic shoots of one plant onto the tiny stalks of another. The new device shows potential for facilitating research into plant signalling. The details were published in The Plant Journal. The concept can be expanded to crop grafting to develop more resilient crop varieties. The system has already been developed and applied in tomato grafting by GRA&GREEN Inc., a start-up venture company from Nagoya University.

Plant grafting is a centuries-old technique that involves attaching the upper part of one plant, called the scion, onto the lower part of another, called the rootstock. In more recent years, some plant experts have started using micrografting: transferring a very tiny part of a newly forming shoot onto the rootstock of a very young plant. Plant experts use this technique because it facilitates studies on the signalling system that controls plant growth and development. The problem is that this technique needs personnel who can skilfully and precisely do this under a microscope.

"We developed a silicon-based chip to improve the ease-of-use, efficiency, and success rate of micrografting, even for untrained users," says Nagoya University bioscientist Michitaka Notaguchi.

The device, made from a silicone elastomer called polydimethylsiloxane, is 3.6 mm wide by 17 mm long and contains four identical units. Each unit is formed of a small pocket in which the plant seed is placed, a lower pathway for root growth, and an upper pathway that guides shoot growth. A horizontal slot passes through the top parts of the four units. It is used to insert a tiny blade that cuts off the scions of the developing plants. Scions from other plants are then placed into the upper pathways for grafting onto the rootstocks.

The device makes grafting much easier by facilitating the precise and delicate contact between the new scions and the original rootstocks. The researchers achieved a 48-88% grafting success rate using the device. Grafting was most successful when performed at 27°C and when the agar medium used to support seed sprouting contained 0.5% sucrose.

They then used the device to understand how the compound nicotianamine transports iron across different parts of a plant. In separate devices, they grew normal Arabidopsis plants, which are commonly used in plant research, and a mutant Arabidopsis that lacks nicotianamine. They then grafted normal Arabidopsis scions onto mutant rootstocks and vice versa. Their tests showed nicotianamine originating from the shoot or root can move to other areas and mobilize iron necessary for plant growth and development.

The team next aims to further develop their device to make it suitable for grafting other types of plants with different seed sizes. "Developing supportive devices for grafting experiments will help activate research in this area," says Notaguchi.

Credit: 
Nagoya University

Discovery of ancient super-eruptions indicates the Yellowstone hotspot may be waning

image: Fountain Paint Pot, Yellowstone National Park.

Image: 
Courtesy National Park Service.

Boulder, Colo., USA: Throughout Earth's long history, volcanic super-eruptions have been some of the most extreme events ever to affect our planet's rugged surface. Surprisingly, even though these explosions eject enormous volumes of material--at least 1,000 times more than the 1980 eruption of Mount St. Helens--and have the potential to alter the planet's climate, relatively few have been documented in the geologic record.

Now, in a study published in Geology, researchers have announced the discovery of two newly identified super-eruptions associated with the Yellowstone hotspot track, including what they believe was the volcanic province's largest and most cataclysmic event. The results indicate the hotspot, which today fuels the famous geysers, mudpots, and fumaroles in Yellowstone National Park, may be waning in intensity.

The team used a combination of techniques, including bulk chemistry, magnetic data, and radio-isotopic dates, to correlate volcanic deposits scattered across tens of thousands of square kilometers. "We discovered that deposits previously believed to belong to multiple, smaller eruptions were in fact colossal sheets of volcanic material from two previously unknown super-eruptions at about 9.0 and 8.7 million years ago," says Thomas Knott, a volcanologist at the University of Leicester and the paper's lead author.

"The younger of the two, the Grey's Landing super-eruption, is now the largest recorded event of the entire Snake-River-Yellowstone volcanic province," says Knott. Based on the most recent collations of super-eruption sizes, he adds, "It is one of the top five eruptions of all time."

The team, which also includes researchers from the British Geological Survey and the University of California, Santa Cruz, estimates the Grey's Landing super-eruption was 30% larger than the previous record-holder (the well-known Huckleberry Ridge Tuff) and had devastating local and global effects. "The Grey's Landing eruption enamelled an area the size of New Jersey in searing-hot volcanic glass that instantly sterilized the land surface," says Knott. Anything located within this region, he says, would have been buried and most likely vaporized during the eruption. "Particulates would have choked the stratosphere," adds Knott, "raining fine ash over the entire United States and gradually encompassing the globe."

Both of the newly discovered super-eruptions occurred during the Miocene, the interval of geologic time spanning 23-5.3 million years ago. "These two new eruptions bring the total number of recorded Miocene super-eruptions at the Yellowstone-Snake River volcanic province to six," says Knott. This means that the recurrence rate of Yellowstone hotspot super-eruptions during the Miocene was, on average, once every 500,000 years.

By comparison, Knott says, two super-eruptions have--so far--taken place in what is now Yellowstone National Park during the past three million years. "It therefore seems that the Yellowstone hotspot has experienced a three-fold decrease in its capacity to produce super-eruption events," says Knott. "This is a very significant decline."

These findings, says Knott, have little bearing on assessing the risk of another super-eruption occurring today in Yellowstone. "We have demonstrated that the recurrence rate of Yellowstone super-eruptions appears to be once every 1.5 million years," he says. "The last super-eruption there was 630,000 years ago, suggesting we may have up to 900,000 years before another eruption of this scale occurs." But this estimate, Knott hastens to add, is far from exact, and he emphasizes that continuous monitoring in the region, which is being conducted by the U.S. Geological Survey, "is a must" and that warnings of any uptick in activity would be issued well in advance.
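
The recurrence-rate arithmetic quoted above is straightforward to reproduce (using only the figures stated in this release; the result is a nominal average, not a prediction):

```python
# Recurrence-rate arithmetic from the figures quoted in the release.
miocene_interval = 500_000        # yr between super-eruptions in the Miocene (stated)

recent_eruptions = 2              # super-eruptions at Yellowstone in the last 3 Myr
recent_window = 3_000_000         # yr
recent_interval = recent_window // recent_eruptions
print("recent recurrence interval:", recent_interval, "yr")               # 1,500,000

print("decline factor:", recent_interval / miocene_interval)              # 3.0 ("three-fold")

last_eruption = 630_000           # yr ago
print("nominal time remaining:", recent_interval - last_eruption, "yr")   # 870,000 (~"up to 900,000")
```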

This study, which builds on decades of contributions by many other researchers, grew out of a larger project investigating the productivity of major continental volcanic provinces. Those with super-eruptions are the result of colossal degrees of crustal melting over prolonged periods of time, says Knott, and therefore have a profound impact on the structure and composition of Earth's crust in the regions where they occur.

Because studying these provinces is vital to understanding their role in shaping our planet's crustal processes, Knott hopes this research foreshadows even more revelations. "We hope the methods and findings we present in our paper will enable the discovery of more new super-eruption records around the globe," he says.

Credit: 
Geological Society of America

Electrogenetic device offers on-demand release of cellular insulin

Advancing the use of electrogenetics for remote-controlled medical intervention, researchers report a new device that, in mouse models of type 1 diabetes, wirelessly coaxed bioengineered cells to release insulin, stabilizing the animals' blood glucose levels within minutes. The approach, which uses external electric fields to trigger on-demand insulin release, opens the door for precisely controlled diabetes therapies. Similar to optogenetics, which uses precise wavelengths of light as a means to control cell function remotely, electrogenetics uses electrical stimulation to directly influence the expression of voltage-dependent receptors in electrosensitive designer cells. Current remote-controlled electrogenetic medical devices rely on sophisticated bioelectronic interfaces that use direct electronic input to control cellular behavior but require electrical conduction between device electrodes and bioengineered cells, limiting their potential. Krzysztof Krawczyk and colleagues developed a bioelectronic interface that uses electric fields to control cell function in vivo via a wearable device. Leveraging a voltage-gated calcium channel, Krawczyk et al. achieved a high degree of control over electrostimulation-driven insulin production and secretion in engineered human pancreatic β cells. Using these cells, the authors designed a cofactor-free, subcutaneously implantable device that can be wirelessly triggered to release stored insulin rapidly and on demand. The modified β cells within the device could be reused for several weeks and were capable of rapidly restoring normal glycemic levels in mice. "Electrogenetics represents the next tool in an expanding toolbox for engineering remote solutions for human therapeutics," write Matthew Brier and Jonathan Dordick in a related Perspective, which discusses this and other approaches to the remote activation of cellular signaling.

Credit: 
American Association for the Advancement of Science (AAAS)

A single proton can make a heck of a difference

image: The SHARAQ spectrometer in the RI Beam Factory at RIKEN was used in the experiment.

Image: 
RIKEN

Scientists from the RIKEN Nishina Center for Accelerator-Based Science and collaborators have shown that knocking out a single proton from a fluorine nucleus--transforming it into a neutron-rich isotope of oxygen--can have a major effect on the state of the nucleus. This work could help to explain a phenomenon known as the oxygen neutron dripline anomaly.

The neutron dripline is the point beyond which adding a single neutron to a nucleus causes it to immediately drip a neutron back out, and this sets a limit on how neutron-rich a nucleus can be. This is important for understanding neutron-rich environments such as supernovae and neutron stars, since nuclei at the dripline will often undergo beta decay, in which a neutron is converted into a proton, driving the nucleus up the periodic table.

What was poorly understood is why the dripline for oxygen, with 8 protons, lies at 16 neutrons, while that of fluorine, with just one extra proton, lies at 22 neutrons, a much larger number. To try to understand why, the research group used the RI Beam Factory, operated by RIKEN and the University of Tokyo, to create an exotic nucleus, fluorine 25, which has 9 protons and 16 neutrons. The 16 neutrons and 8 of the protons form a complete shell, making the core a "doubly magic" nucleus that is especially stable, while the one extra proton--known as a "valence proton"--exists outside that core. The beam was then collided with a target to knock out the proton, leaving oxygen 24, and the SHARAQ spectrometer was used to analyze the resulting nucleus.
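In terms of simple nucleon bookkeeping, the knockout step described above can be written schematically as (notation only; the specific target and reaction mechanism are not detailed here):

\[
^{25}\mathrm{F}\,(Z=9,\ N=16)\;\xrightarrow{\ \text{proton knockout}\ }\;^{24}\mathrm{O}\,(Z=8,\ N=16)\;+\;p,
\]

with both Z = 8 and N = 16 corresponding to closed shells, which is why the oxygen 24 core is called doubly magic.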

The researchers analyzed what is known as the "spectroscopic factor," which is used to gauge the effects of interactions among nucleons in a nucleus on individual particles.
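As a rough guide (a schematic textbook definition, not necessarily the exact convention used in the paper), the spectroscopic factor for proton removal can be written as an overlap between the parent nucleus and a specific state of the residual core:

\[
S \;=\; \bigl|\,\langle\, ^{24}\mathrm{O}\,(\text{final state})\,\big|\;a_{p}\;\big|\, ^{25}\mathrm{F}\,(\text{ground state})\,\rangle\,\bigr|^{2},
\]

where a_p removes a proton from a given single-particle orbit. A value close to 1 indicates a nearly pure core-plus-valence-proton configuration, while smaller values signal that correlations have fragmented the single-particle strength.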

Conventional wisdom would suggest that knocking out the proton should leave the core--oxygen 24--in its lowest-energy state, the ground state. However, the experiment found that this was not the case: the oxygen 24 core inside the fluorine isotope mostly existed in excited states, quite unlike a free oxygen 24 nucleus.

According to Tsz Leung Tang, the main author of the study, published in Physical Review Letters, "This is quite an exciting result, and it tells us that the addition of a single valence proton to a nucleus core--a doubly magic one in this case--can have a significant effect on the state of the core. Calculations showed that known interactions, including tensor force effects, were insufficient to explain this result. We plan to conduct further experiments to determine the mechanism responsible for the extension of the dripline in fluorine."

Credit: 
RIKEN

Convenient location of a near-threshold proton-emitting resonance in 11B

image: The β-delayed proton emission of 11Be. The neutron halo ground state of 11Be undergoes beta decay to an excited state of 11B, which lies just above the proton-decay threshold. This state subsequently decays to 10Be by emitting a proton. (Source: IFJ PAN)

Image: 
IFJ PAN

Polish scientists working in Poland, France and the USA have explained the mysterious β-delayed proton decay of the neutron-halo ground state of 11Be. Studies within the SMEC model suggest the existence of a collective resonance carrying many characteristics of a nearby proton-decay channel, which explains this puzzling decay. It was argued that the appearance of such near-threshold resonant states is a generic phenomenon in any open quantum system in which bound and unbound states strongly mix.

Nuclear clustering is one of the most puzzling phenomena in subatomic physics. Numerous examples of such structures include the ground state of the 11Li nucleus, with its halo of two neutrons, and the famous Hoyle resonance in 12C, which plays a vital role in the synthesis of heavier elements in stars. Narrow resonances near a threshold are fundamental in astrophysical conditions, in which most reactions occur at very low energies. For these states, particle emission channels can effectively compete with other types of decay, such as photon emission. The widespread presence of narrow resonances near particle emission thresholds suggests that this is a universal phenomenon in open quantum systems in which bound and unbound states strongly mix, resulting in the appearance of a collective state with the features of a nearby decay channel.

In a recently published paper (Physical Review Letters 124, 042504 (2020)), physicists from the IFJ PAN in Krakow (Poland), GANIL in Caen (France) and the FRIB Facility (USA) provided an explanation for proton emission delayed by β- decay from the weakly bound ground state of the 11Be nucleus. In the first stage of this enigmatic two-stage process, the halo neutron in the ground state of 11Be decays into a proton, an electron and an antineutrino, transforming the 11Be ground state into a resonance in 11B. In the second stage, this resonance emits a proton (see attached diagram), leaving 10Be. The possibility of such a halo decay process in 11Be has been explained by the existence of a resonance in 11B with total angular momentum and parity 1/2+, which resembles in many respects a nearby proton emission channel. The proximity of the proton and tritium emission thresholds in 11B suggests that this resonance may also contain an admixture of a tritium cluster configuration.
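Schematically, the two-stage process described above reads (standard notation for β- decay followed by proton emission):

\[
^{11}\mathrm{Be}\,(\text{g.s.})\;\xrightarrow{\ \beta^{-}\ }\;^{11}\mathrm{B}^{*}\,(1/2^{+})\;+\;e^{-}\;+\;\bar{\nu}_{e},
\qquad
^{11}\mathrm{B}^{*}\;\longrightarrow\;^{10}\mathrm{Be}\;+\;p.
\]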

"The study was carried out based on the shell model embedded in the continuum (SMEC). The measure of state collectivization near the threshold for particle emissions (nucleon, deuteron, α particle, etc.) is the correlation energy, which is calculated for each eigenstate of the SMEC. Competing effects determine the excitation energy at maximum collectivization: coupling to decay channels and the Coulomb and centrifugal barriers. For higher angular momentum values (L>1) and/or for coupling to the charged particle emission channel, the correlation energy extremum is above the threshold energy of this channel," explains Prof. Jacek Okolowicz from the Institute of Nuclear Physics of the Polish Academy of Sciences.

In the latest experimental work of the group at Michigan State University, proton emission was observed from a state in 11B with total angular momentum and parity 1/2+ or 3/2+, an energy of 11.425(20) MeV and a width of 12(5) keV, populated in the β- decay of the 11Be ground state. The resonance in 11B proposed in this experiment lies 197(20) keV above the threshold for proton emission and 29(20) keV below the threshold for neutron emission.
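For orientation, simple arithmetic on the quoted values locates the two thresholds in 11B implied by this measurement (derived from the numbers above, not independently tabulated here):

\[
S_{p}\!\left(^{11}\mathrm{B}\right)\;\approx\;11.425 - 0.197\;=\;11.228\ \mathrm{MeV},
\qquad
S_{n}\!\left(^{11}\mathrm{B}\right)\;\approx\;11.425 + 0.029\;=\;11.454\ \mathrm{MeV},
\]

so the proposed resonance sits in a window of only about 0.23 MeV between the proton and neutron emission thresholds.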

Theoretical studies using the SMEC model include an effective nucleon-nucleon interaction in the discrete states of the shell model and a Wigner-Bartlett interaction describing the coupling between nucleons in discrete bound states and continuum states. The calculations were made for the Jπ = 1/2+ and 3/2+ states in 11B to determine the most likely angular momentum of the proposed resonance. The shell-model states are mixed via coupling with proton and neutron reaction channels. Collectivization of the wave function was found only for the third excited 1/2+ state, for which the maximum correlation energy lies 142 keV above the proton emission threshold. Hence, it was concluded that the resonance in 11B mediating the decay of the ground state of 11Be must have total angular momentum and parity Jπ = 1/2+.

The narrow 5/2+ resonance at 11.600(20) MeV, which lies slightly above the neutron emission threshold and decays by emitting a neutron or an α particle, has a significant effect on the 10B neutron capture cross-section. The unusually large cross-section suggests that the 5/2+ resonance wave function is strongly modified by coupling to the nearby neutron emission channel. Indeed, in the SMEC model calculations there is a sixth 5/2+ state near the neutron emission threshold, which couples strongly in the L=2 partial wave to the channel [10B(3+) + n]5/2+. The theoretically determined maximal collectivization for this state is 113 keV above the neutron emission threshold, close to the experimental energy of the 5/2+ state.
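Why a state just above threshold boosts the capture rate can be illustrated with the generic single-level Breit-Wigner form (a textbook expression used here for illustration, not the SMEC result itself):

\[
\sigma(E)\;\propto\;\frac{1}{E}\,\frac{\Gamma_{n}(E)\,\Gamma_{\mathrm{out}}}{\left(E-E_{r}\right)^{2}+\Gamma_{\mathrm{tot}}^{2}/4},
\]

where E_r is the resonance energy measured from the [10B + n] threshold; a narrow resonance with E_r close to zero strongly enhances the cross-section at the low neutron energies at which capture on 10B takes place.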

"We investigated the puzzling case of β-p+ decay of 11Be with a neutron halo. Analysis carried out within the SMEC model confirms the existence of collective resonance in 11B near the proton emission threshold and favors assignment of Jπ = 1/2+ quantum numbers. The wave function of this resonance resembles a nearby proton emission channel. It means that in this process β- decay can be interpreted as quasi-free decay of a neutron from the 11Be halo to resonance in 11B, in which a single proton is coupled with the 10Be core. The similarity of Jπ = 1/2+ resonance to the channel [10Be + p] also explains the large spectroscopic factor for proton decay and the very small partial width of the α decay of this state. However, the properties of the nearby Jπ = 3/2+ state, which mainly decays by the emission of the α particle, can be explained by the fourth 3/2+ state of the SMEC model. This state poorly couples to the emission channels of one neutron or proton. Above the neutron emission threshold [10B + n] is a 5/2+ resonance, which is crucial for 10B neutron capture. The wave function of the sixth 5/2+ state of the SMEC model shows very strong collectivization near the threshold of neutron emission, which is the explanation of the huge observed cross-section for neutron capture by 10B," says Prof. Okolowicz.

The reason for the emergence of a collective proton (neutron) resonance around the proton (neutron) emission threshold is the L=0 (L=2) coupling with the continuum of proton (neutron) scattering states. In this regard, the 11B case follows other splendid examples of threshold states in 12C, 11Li or 15F. In the future, experimental studies of the 10Be(p,p)10Be reaction will be needed to understand the nature of the proton resonance at 11.425 MeV. To better understand the nature of the neutron reaction channel and the neighboring neutron resonances, 10B(d,p)11B reactions will need to be examined. In addition, an extensive experimental and theoretical analysis will be required to determine the branching ratio for the β-p+ channel, since the currently suggested experimental value is a factor of 2 larger than the SMEC model prediction. Future theoretical studies should also explain the effect of the L=0 virtual neutron state on the reaction channel [10B + n].

The Henryk Niewodniczanski Institute of Nuclear Physics (IFJ PAN) is currently the largest research institute of the Polish Academy of Sciences. The broad range of studies and activities of IFJ PAN includes basic and applied research, ranging from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of methods of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. On average, IFJ PAN publishes more than 600 scientific papers each year in journals covered by the Journal Citation Reports published by Clarivate Analytics. Part of the Institute is the Cyclotron Centre Bronowice (CCB), an infrastructure unique in Central Europe that serves as a clinical and research centre in the area of medical and nuclear physics. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium "Matter-Energy-Future", which held the status of a Leading National Research Centre (KNOW) in physics for the years 2012-2017. In 2017 the European Commission granted the Institute the HR Excellence in Research award. The Institute holds the A+ category (the leading level in Poland) in the field of sciences and engineering.

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences