Heavens

UBCO researcher uses geology to help astronomers find habitable planets

image: UBCO's Brendan Dyck is using his expertise in the geology of planet formation to help identify other planets that might support life.

Image: 
NASA/Goddard Space Flight Center.

Astronomers have so far identified more than 4,000 confirmed exoplanets -- planets orbiting stars other than the sun -- but only a fraction have the potential to sustain life.

Now, new research from UBC's Okanagan campus is using the geology of early planet formation to help identify those that may be capable of supporting life.

"The discovery of any planet is pretty exciting, but almost everyone wants to know if there are smaller Earth-like planets with iron cores," says Dr. Brendan Dyck, assistant professor of geology in the Irving K. Barber Faculty of Science and lead author on the study.

"We typically hope to find these planets in the so-called 'goldilocks' or habitable zone, where they are the right distance from their stars to support liquid water on their surfaces."

Dr. Dyck says that while locating planets in the habitable zone is a great way to sort through the thousands of candidate planets, it's not quite enough to say whether that planet is truly habitable.

"Just because a rocky planet can have liquid water doesn't mean it does," he explains. "Take a look right in our own solar system. Mars is also within the habitable zone and although it once supported liquid water, it has long since dried up."

That, according to Dr. Dyck, is where geology and the formation of these rocky planets may play a key role in narrowing down the search. His research was recently published in the Astrophysical Journal Letters.

"Our findings show that if we know the amount of iron present in a planet's mantle, we can predict how thick its crust will be and, in turn, whether liquid water and an atmosphere may be present," he says. "It's a more precise way of identifying potential new Earth-like worlds than relying on their position in the habitable zone alone."

Dr. Dyck explains that within any given planetary system, the smaller rocky planets all have one thing in common -- they all have the same proportion of iron as the star they orbit. What differentiates them, he says, is how much of that iron is contained in the mantle versus the core.

"As the planet forms, those with a larger core will form thinner crusts, whereas those with smaller cores form thicker iron-rich crusts like Mars."

The thickness of the planetary crust will then dictate whether the planet can support plate tectonics and how much water and atmosphere may be present, key ingredients for life as we know it.

"While a planet's orbit may lie within the habitable zone, its early formation history might ultimately render it inhabitable," says Dr. Dyck. "The good news is that with a foundation in geology, we can work out whether a planet will support surface water before planning future space missions."

Later this year, in a joint project with NASA, the Canadian Space Agency and the European Space Agency, the James Webb Space Telescope (JWST) will launch. Dr. Dyck describes this as the golden opportunity to put his findings to good use.

"One of the goals of the JWST is to investigate the chemical properties of extra-solar planetary systems," says Dr. Dyck. "It will be able to measure the amount of iron present in these alien worlds and give us a good idea of what their surfaces may look like and may even offer a hint as to whether they're home to life."

"We're on the brink of making huge strides in better understanding the countless planets around us and in discovering how unique the Earth may or may not be. It may still be some time before we know whether any of these strange new worlds contain new life or even new civilizations, but it's an exciting time to be part of that exploration."

Credit: 
University of British Columbia Okanagan campus

Articles for Geosphere posted online in April

Boulder, Colo., USA: GSA's dynamic online journal, Geosphere, posts articles online regularly. Locations and topics studied this month include the Central Anatolian Plateau; the Southern Rocky Mountain Volcanic Field; petrogenesis in the Grand Canyon; and the evolution of the Portland and Tualatin forearc basins, Oregon.

A physical and chemical sedimentary record of Laramide tectonic shifts in the Cretaceous-Paleogene San Juan Basin, New Mexico, USA

Kevin M. Hobbs; Peter J. Fawcett

Abstract: Fluvial siliciclastic rocks bracketing the Cretaceous-Paleogene (K-Pg) boundary in the San Juan Basin, New Mexico (USA), provide records of regional fluvial and tectonic evolution during the Laramide orogeny. Petrographic analyses of sandstones from the Upper Cretaceous Fruitland Formation and Kirtland Formation and the Paleocene Ojo Alamo Sandstone and Nacimiento Formation show that the rivers depositing these sediments were sourced in areas where unroofing of crystalline basement rocks took place, introducing an increasing proportion of immature detrital grains into the fluvial system through time. After the Cretaceous-Paleogene boundary, rivers deposited an increasing amount of microcline and orthoclase feldspar relative to plagioclase feldspar, suggesting a growing source in unique crystalline basement rocks. Geochemical analyses show significant differences between Al- and K-poor Upper Cretaceous sandstones and Al- and K-rich lower Paleocene sandstones in the San Juan Basin. The high proportion of sand-sized material in the Ojo Alamo Sandstone suggests that it was deposited in a basin with a low ratio of sediment supply to accommodation. However, magnetostratigraphic age constraints suggest it had a relatively high sedimentation and/or subsidence rate of as much as 0.38 m/k.y. The sediment supply must have been high in order to deposit a basin-wide coarse sand-dominated package, suggesting rapid creation of topographic relief in the San Juan uplift, the proposed source area of the Ojo Alamo fluvial system. The observed sedimentary architecture and age constraints of the Ojo Alamo Sandstone, including kilometers-wide sand bodies and limited overbank mudstones throughout most of the outcrop area, are difficult to reconcile with accepted models of aggradation and avulsion in large fluvial systems, but available age and lithologic data make difficult a complete understanding of Paleocene San Juan Basin fluvial systems and basin evolution. Here, we present new lithologic, petrographic, and thickness data from San Juan Basin K-Pg fluvial siliciclastic units and interpretations of their origins.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02324.1/596304/A-physical-and-chemical-sedimentary-record-of

The spatial and temporal evolution of the Portland and Tualatin forearc basins, Oregon, USA

Darby P. Scanlon; John Bershaw; Ray E. Wells; Ashley R. Streig

Abstract: The Portland and Tualatin basins are part of the Salish-Puget-Willamette Lowland, a 900-km-long, forearc depression lying between the volcanic arc and the Coast Ranges of the Cascadia convergent margin. Such inland seaways are characteristic of warm, young slab subduction. We analyzed the basins to better understand their evolution and relation to Coast Range history and to provide an improved tectonic framework for the Portland metropolitan area. We model three key horizons in the basins: (1) the top of the Columbia River Basalt Group (CRBG), (2) the bottom of the CRBG, and (3) the top of Eocene basement. Isochore maps constrain basin depocenters during (1) Pleistocene to mid-Miocene time (0-15 Ma), (2) CRBG (15.5-16.5 Ma), and (3) early Miocene to late Eocene (ca. 17-35 Ma) time. Results show that the Portland and Tualatin basins have distinct mid-Miocene to Quaternary depocenters but were one continuous basin from the Eocene until mid-Miocene time. A NW-striking gravity low coincident with the NW-striking, fault-bounded Portland Hills anticline is interpreted as an older graben coincident with observed thickening of CRBG flows and underlying sedimentary rocks. Neogene transpression in the forearc structurally inverted the Sylvan-Oatfield and Portland Hills normal faults as high-angle dextral-reverse faults, separating the Portland and Tualatin basins. An eastward shift of the forearc basin depocenter and ten-fold decrease in accommodation space provide temporal constraints on the emergence of the Coast Range to the west. Clockwise rotation and northward transport of the forearc is deforming the basins and producing local earthquakes beneath the metropolitan area.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02298.1/596305/The-spatial-and-temporal-evolution-of-the-Portland

Petrogenesis of the 91-Mile peridotite in the Grand Canyon: Ophiolite or deep-arc fragment?

S.J. Seaman; M.L. Williams; K.E. Karlstrom; P.C. Low

Abstract: Recognition of fundamental tectonic boundaries has been extremely difficult in the (>1000-km-wide) Proterozoic accretionary orogen of southwestern North America, where the main rock types are similar over large areas, and where the region has experienced multiple postaccretionary deformation events. Discrete ultramafic bodies are present in a number of areas that may mark important boundaries, especially if they can be shown to represent tectonic fragments of ophiolite complexes. However, most ultramafic bodies are small and intensely altered, precluding petrogenetic analysis. The 91-Mile peridotite in the Grand Canyon is the largest and best preserved ultramafic body known in the southwest United States. It presents a special opportunity for tectonic analysis that may illuminate the significance of ultramafic rocks in other parts of the orogen. The 91-Mile peridotite exhibits spectacular cumulate layering. Contacts with the surrounding Vishnu Schist are interpreted to be tectonic, except along one margin, where intrusive relations have been interpreted. Assemblages include olivine, clinopyroxene, orthopyroxene, magnetite, and phlogopite, with very rare plagioclase. Textures suggest that phlogopite is the result of late intercumulus crystallization. Whole-rock compositions and especially mineral modes and compositions support derivation from an arc-related mafic magma. K-enriched subduction-related fluid in the mantle wedge is interpreted to have given rise to a K-rich, hydrous, high-pressure partial melt that produced early magnetite, Al-rich diopside, and primary phlogopite. The modes of silicate minerals, all with high Mg#, the sequence of crystallization, and the lack of early plagioclase are all consistent with crystallization at relatively high pressures. Thus, the 91-Mile peridotite body is not an ophiolite fragment that represents the closure of a former ocean basin. It does, however, mark a significant tectonic boundary where lower-crustal arc cumulates have been juxtaposed against middle-crustal schists and granitoids.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02302.1/596306/Petrogenesis-of-the-91-Mile-peridotite-in-the

Secular variations of magma source compositions in the North Patagonian batholith from the Jurassic to Tertiary: Was mélange melting involved?

Antonio Castro; Carmen Rodriguez; Carlos Fernández; Eugenio Aragón; Manuel Francisco Pereira ...

Abstract: This study of Sr-Nd initial isotopic ratios of plutons from the North Patagonian batholith (Argentina and Chile) revealed that a secular evolution spanning 180 m.y., from the Jurassic to Neogene, can be established in terms of magma sources, which in turn are correlated with changes in the tectonic regime. The provenance and composition of end-member components in the source of magmas are represented by the Sr-Nd initial isotopic ratios (87Sr/86Sr and 143Nd/144Nd) of the plutonic rocks. Our results support the interpretation that source composition was determined by incorporation of varied crustal materials and trench sediments via subduction erosion and sediment subduction into a subduction channel mélange. Subsequent melting of subducted mélanges at mantle depths and eventual reaction with the ultramafic mantle are proposed as the main causes of batholith magma generation, which was favored during periods of fast convergence and high obliquity between the involved plates. We propose that a parental diorite (= andesite) precursor arrived at the lower arc crust, where it underwent fractionation to yield the silicic melts (granodiorites and granites) that formed the batholiths. The diorite precursor could have been in turn fractionated from a more mafic melt of basaltic andesite composition, which was formed within the mantle by complete reaction of the bulk mélanges and the peridotite. Our proposal follows model predictions on the formation of mélange diapirs that carry fertile subducted materials into hot regions of the suprasubduction mantle wedge, where mafic parental magmas of batholiths originate. This model not only accounts for the secular geochemical variations of Andean batholiths, but it also avoids a fundamental paradox of the classical basalt model: the absence of ultramafic cumulates in the lower arc crust and in the continental crust in general.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02338.1/596307/Secular-variations-of-magma-source-compositions-in

Fast Pliocene integration of the Central Anatolian Plateau drainage: Evidence, processes, and driving forces

Gilles Y. Brocard; Maud J.M. Meijers; Michael A. Cosca; Tristan Salles; Jane Willenbring ...

Abstract: Continental sedimentation was widespread across the Central Anatolian Plateau in Miocene-Pliocene time, during the early stages of plateau uplift. Today, however, most sediment produced on the plateau is dispersed by a well-integrated drainage and released into surrounding marine depocenters. Residual long-term (10^6-10^7 yr) sediment storage on the plateau is now restricted to a few closed catchments. Lacustrine sedimentation was widespread in the Miocene-Pliocene depocenters. Today, it is also restricted to the residual closed catchments. The present-day association of closed catchments, long-term sediment storage, and lacustrine sedimentation suggests that the Miocene-Pliocene sedimentation also occurred in closed catchments. The termination of sedimentation across the plateau would therefore mark the opening of these closed catchments, their integration, and the formation of the present-day drainage. By combining newly dated volcanic markers with previously dated sedimentary sequences, we show that this drainage integration occurred remarkably rapidly, within 1.5 m.y., at the turn of the Pliocene. The evolution of stream incision documented by these markers and newly obtained 10Be erosion rates allow us to discriminate the respective contributions of three potential processes to drainage integration, namely, the capture of closed catchments by rivers draining the outer slopes of the plateau, the overflow of closed lakes, and the avulsion of closed catchments. Along the southern plateau margin, rivers draining the southern slope of the Central Anatolian Plateau expanded into the plateau interior; however, only a small amount of drainage integration was achieved by this process. Instead, avulsion and/or overflow between closed catchments achieved most of the integration, and these top-down processes left a distinctive sedimentary signal in the form of terminal lacustrine limestone sequences. In the absence of substantial regional climate wetting during the early Pliocene, we propose that two major tectonic events triggered drainage integration, separately or in tandem: the uplift of the Central Anatolian Plateau and the tectonic completion of the Anatolian microplate. Higher surface uplift of the eastern Central Anatolian Plateau relative to the western Central Anatolian Plateau promoted more positive water balances in the eastern catchments, higher water discharge, and larger sediment fluxes. Overflow/avulsion in some of the eastern catchments triggered a chain of avulsions and/or overflows, sparking sweeping integration across the plateau. Around 5 Ma, the inception of the full escape of the Anatolian microplate led to the disruption of the plateau surface by normal and strike-slip faults. Fault scarps partitioned large catchments fed by widely averaged sediment and water influxes into smaller catchments with more contrasted water balances and sediment fluxes. The evolution of the Central Anatolian Plateau shows that top-down processes of integration can outcompete erosion of outer plateau slopes to reintegrate plateau interior drainages, and this is overlooked in current models, in which drainage evolution is dominated by bottom-up integration. Top-down integration has the advantage that it can be driven by more subtle changes in climatic and tectonic boundary conditions than bottom-up integration.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02247.1/595917/Fast-Pliocene-integration-of-the-Central-Anatolian

Postcaldera intrusive magmatism at the Platoro caldera complex, Southern Rocky Mountain volcanic field, Colorado, USA

Amy K. Gilmer; Ren A. Thompson; Peter W. Lipman; Jorge A. Vazquez; A. Kate Souders

Abstract: The Oligocene Platoro caldera complex of the San Juan volcanic locus in Colorado (USA) features numerous exposed plutons both within the caldera and outside its margins, enabling investigation of the timing and evolution of postcaldera magmatism. Intrusion whole-rock geochemistry and phenocryst and/or mineral trace element compositions coupled with new zircon U-Pb geochronology and zircon in situ Lu-Hf isotopes document distinct pulses of magma from beneath the caldera complex. Fourteen intrusions, the Chiquito Peak Tuff, and the dacite of Fisher Gulch were dated, showing intrusive magmatism began after the 28.8 Ma eruption of the Chiquito Peak Tuff and continued to 24 Ma. Additionally, magmatic-hydrothermal mineralization is associated with the intrusive magmatism within and around the margins of the Platoro caldera complex. After caldera collapse, three plutons were emplaced within the subsided block between ca. 28.8 and 28.6 Ma. These have broadly similar modal mineralogy and whole-rock geochemistry. Despite close temporal relations between the tuff and the intrusions, mineral textures and compositions indicate that the larger two intracaldera intrusions are discrete later pulses of magma. Intrusions outside the caldera are younger, ca. 28-26.3 Ma, and smaller in exposed area. They contain abundant glomerocrysts and show evidence of open-system processes such as magma mixing and crystal entrainment. The protracted magmatic history at the Platoro caldera complex documents the diversity of the multiple discrete magma pulses needed to generate large composite volcanic fields.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02242.1/595918/Postcaldera-intrusive-magmatism-at-the-Platoro

GEOSPHERE articles are available at https://geosphere.geoscienceworld.org/content/early/recent. Representatives of the media may obtain complimentary copies of GEOSPHERE articles by contacting Kea Giles at the address above. Please discuss articles of interest with the authors before publishing stories on their work, and please refer to GEOSPHERE in articles published. Non-media requests for articles may be directed to GSA Sales and Service, gsaservice@geosociety.org.

https://www.geosociety.org/

Credit: 
Geological Society of America

Latest observations by MUSER help clarify solar eruptions

image: MUSER on the grassland in Inner Mongolia, China

Image: 
NAOC

Prof. YAN Yihua and his research team from the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) recently released detailed results of observations by the new generation solar radio telescope--Mingantu Spectral Radio Heliograph (MUSER)--from 2014 to 2019.

The study was published in Frontiers in Astronomy and Space Sciences on March 29. It may help us better understand the basic nature of solar eruptions.

Solar radio bursts are associated with different types of powerful eruptions like solar flares, coronal mass ejections, and various thermal and nonthermal processes. They are prompt indicators of disastrous space weather events.

Solar radio observations, especially at centimeter and decimeter wavelengths, play an important role in revealing the key physics behind primary energy release, particle acceleration and transportation. They also help identify crucial precursors of solar storms.

As the most powerful solar radio telescope in the world today, MUSER consists of 100 antennas spread over three spiral-shaped arms with a maximum baseline length of 3 km on the grassland in Inner Mongolia.

Its configuration is optimized to meet the needs of observing the full solar disk over an ultrawide frequency range of 0.4-15 gigahertz. Its images offer a temporal resolution of 25-200 milliseconds, spatial resolution of 1.3-51.6 arcseconds, spectral resolution of 25 megahertz and a high dynamic range of 25 decibels.

MUSER provides a unique, powerful tool for measuring solar magnetic fields and tracing the dynamic evolution of energetic electrons in a wide frequency range, which will help scientists better understand the origin of various solar activities and the basic drivers of space weather.

From MUSER, scientists can capture the most sensitive radio signals of even very small solar eruptive events. The observations also yield images of solar magnetic fields from the solar chromosphere up to the higher corona.

"MUSER, with its extension to metric and decametric wavelengths, will further play the role of new generation radio heliograph. It will become the leading solar-dedicated radio facility in the world for solar physics and space weather studies," said Prof. YAN, chief scientist of Solar Physics at NAOC and the first author of the study.

Credit: 
Chinese Academy of Sciences Headquarters

How long is a day on Venus? Scientists crack mysteries of our closest neighbor

image: Fundamentals such as how many hours are in a Venusian day provide critical data for understanding the divergent histories of Venus and Earth, UCLA researchers say.

Image: 
NASA/JPL-Caltech

Venus is an enigma. It's the planet next door and yet reveals little about itself. An opaque blanket of clouds smothers a harsh landscape pelted by acid rain and baked at temperatures that can liquefy lead.

Now, new observations from the safety of Earth are lifting the veil on some of Venus' most basic properties. By repeatedly bouncing radar off the planet's surface over the last 15 years, a UCLA-led team has pinned down the precise length of a day on Venus, the tilt of its axis and the size of its core. The findings are published today in the journal Nature Astronomy.

"Venus is our sister planet, and yet these fundamental properties have remained unknown," said Jean-Luc Margot, a UCLA professor of Earth, planetary and space sciences who led the research.

Earth and Venus have a lot in common: Both rocky planets have nearly the same size, mass and density. And yet they evolved along wildly different paths. Fundamentals such as how many hours are in a Venusian day provide critical data for understanding the divergent histories of these neighboring worlds.

Changes in Venus' spin and orientation reveal how mass is spread out within. Knowledge of its internal structure, in turn, fuels insight into the planet's formation, its volcanic history and how time has altered the surface. Plus, without precise data on how the planet moves, any future landing attempts could be off by as much as 30 kilometers.

"Without these measurements," said Margot, "we're essentially flying blind."

The new radar measurements show that an average day on Venus lasts 243.0226 Earth days -- roughly two-thirds of an Earth year. What's more, the rotation rate of Venus is always changing: A value measured at one time will be a bit larger or smaller than a previous value. The team estimated the length of a day from each of the individual measurements, and they observed differences of at least 20 minutes.
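For readers who want to check the numbers, here is a minimal Python sketch. It only reuses the figures quoted above; the 365.25-day Earth year and the conversion of the 20-minute scatter into a fraction of the rotation period are our own illustrative assumptions, not values from the study.

```python
# Minimal sketch: sanity-check the figures quoted in the article.
# The 365.25-day Earth year is an assumed value, not taken from the study.

VENUS_ROTATION_EARTH_DAYS = 243.0226   # average day length reported by the team
EARTH_YEAR_DAYS = 365.25               # assumed Julian year

fraction_of_year = VENUS_ROTATION_EARTH_DAYS / EARTH_YEAR_DAYS
print(f"Venus day = {fraction_of_year:.3f} Earth years")   # ~0.665, roughly two-thirds

# The individual measurements scattered by at least 20 minutes around that average.
scatter_minutes = 20
scatter_fraction = scatter_minutes / (VENUS_ROTATION_EARTH_DAYS * 24 * 60)
print(f"20-minute scatter = {scatter_fraction:.1e} of a rotation")  # ~5.7e-5
```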

"That probably explains why previous estimates didn't agree with one another," Margot said.

Venus' heavy atmosphere is likely to blame for the variation. As it sloshes around the planet, it exchanges a lot of momentum with the solid ground, speeding up and slowing down its rotation. This happens on Earth too, but the exchange adds or subtracts just one millisecond from each day. The effect is much more dramatic on Venus because the atmosphere is roughly 93 times as massive as Earth's, and so it has a lot more momentum to trade.

The UCLA-led team also reports that Venus tips to one side by precisely 2.6392 degrees (Earth is tilted by about 23 degrees), an improvement on the precision of previous estimates by a factor of 10. The repeated radar measurements further revealed the glacial rate at which the orientation of Venus' spin axis changes, much like a spinning child's top. On Earth, this "precession" takes about 26,000 years to cycle around once. Venus needs a little longer: about 29,000 years.

With these exacting measurements of how Venus spins, the team calculated that the planet's core is about 3,500 kilometers across -- quite similar to Earth's -- though they cannot yet deduce whether it's liquid or solid.

Venus as a giant disco ball

On 21 separate occasions from 2006 to 2020, Margot and his colleagues aimed radio waves at Venus from the 70-meter-wide Goldstone antenna in California's Mojave Desert. Several minutes later, those radio waves bounced off Venus and came back to Earth. The radio echo was picked up at Goldstone and at the Green Bank Observatory in West Virginia.

"We use Venus as a giant disco ball," said Margot, with the radio dish acting like a flashlight and the planet's landscape like millions of tiny reflectors. "We illuminate it with an extremely powerful flashlight -- about 100,000 times brighter than your typical flashlight. And if we track the reflections from the disco ball, we can infer properties about the spin [state]."

The complex reflections erratically brighten and dim the return signal, which sweeps across Earth. The Goldstone antenna sees the echo first, then Green Bank sees it roughly 20 seconds later. The exact delay between receipt at the two facilities provides a snapshot of how quickly Venus is spinning, while the particular window of time in which the echoes are most similar reveals the planet's tilt.
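As a rough illustration of the timing principle just described, the sketch below divides an assumed Goldstone-to-Green Bank separation of about 3,200 km (our round number, not a figure from the study, and ignoring the projection geometry the real analysis must account for) by the roughly 20-second lag to get the speed at which the radar speckle pattern sweeps across Earth.

```python
# Toy illustration of the two-station timing principle, not the team's actual analysis.
# The 3,200 km Goldstone-to-Green Bank separation is an assumed round number, and using
# the straight-line distance ignores the changing projection geometry of the baseline.

BASELINE_KM = 3_200     # assumed antenna separation
LAG_SECONDS = 20        # approximate delay quoted in the article

sweep_speed_km_s = BASELINE_KM / LAG_SECONDS
print(f"Echo pattern sweeps across Earth at roughly {sweep_speed_km_s:.0f} km/s")  # ~160 km/s

# A faster-spinning Venus would sweep the pattern across Earth more quickly and shrink
# the lag, so measuring the lag precisely constrains the spin rate.
```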

The observations required exquisite timing to ensure that Venus and Earth were properly positioned. And both observatories had to be working perfectly -- which wasn't always the case. "We found that it's actually challenging to get everything to work just right in a 30-second period," Margot said. "Most of the time, we get some data. But it's unusual that we get all the data that we're hoping to get."

Despite the challenges, the team is forging ahead and has turned its sights on Jupiter's moons Europa and Ganymede. Many researchers strongly suspect that Europa, in particular, hides a liquid water ocean beneath a thick shell of ice. Ground-based radar measurements could fortify the case for an ocean and reveal the thickness of the ice shell.

And the team will continue bouncing radar off of Venus. With each radio echo, the veil over Venus lifts a little bit more, bringing our sister planet into ever sharper view.

Credit: 
University of California - Los Angeles

Dead lithium: The culprit of low Coulombic efficiency in Li metal batteries

video: During the first stripping process (Movie S1), the overall outline of the Li deposit remains almost the same as its morphology after the first plating.

Image: 
Journal of Energy Chemistry

Meeting carbon-neutral and net-zero emissions targets requires the development and utilization of renewable energy. High-energy-density energy storage systems are critical technologies for integrating that renewable energy.

Li metal is highly recognized as a promising alternative anode for next-generation rechargeable batteries due to its high theoretical capacity of 3860 mAh g-1 and ultralow electrode potential of -3.04 V compared to the standard hydrogen electrode.
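As a quick check, the quoted theoretical capacity follows directly from Faraday's constant and lithium's molar mass; the short sketch below uses standard values for those constants, which are not stated in the article.

```python
# Sketch: derive the ~3,860 mAh/g theoretical capacity of Li metal.
# Faraday's constant and lithium's molar mass are standard values (assumed here).

FARADAY_C_PER_MOL = 96_485   # C per mole of electrons
MOLAR_MASS_LI = 6.94         # g/mol
ELECTRONS_PER_LI = 1         # Li -> Li+ + e-

# 1 mAh = 3.6 C, so capacity [mAh/g] = n * F / (3.6 * M)
capacity_mAh_per_g = ELECTRONS_PER_LI * FARADAY_C_PER_MOL / (3.6 * MOLAR_MASS_LI)
print(f"Theoretical capacity of Li metal: {capacity_mAh_per_g:.0f} mAh/g")  # ~3,861
```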

However, the main issue with Li metal batteries (LMBs) is their low Coulombic efficiency (CE), which limits cycle life. The low CE in LMBs occurs because active Li turns into inactive Li, comprising Li components in the solid-electrolyte interphase (SEI) and SEI-wrapped metallic Li (dead Li0). Dead Li0 is the main form of inactive Li responsible for the low CE. Therefore, determining how dead Li0 forms and evolves is essential to fundamentally enhance the CE for longer-lifespan LMBs.

Recently, a group led by Prof. Qiang Zhang from Tsinghua University reported new insights into dead Li0 during LMB stripping. Dead Li0 forms directly during the stripping process because part of the metallic Li cannot immediately convert into Li+ and instead becomes wrapped by the insulating SEI. The stripping process involves the following stages: electron transfer in the solid phase, conversion of Li atoms to Li+, and diffusion of Li+ through the SEI.

They systematically investigated the formation and evolution of dead Li0 during the stripping process, from electron transfer to the oxidation of Li0 into Li+ and the diffusion of Li+ through the SEI. These processes were regulated by adjusting the contact sites of the electron channels, the rate of conversion from Li0 to Li+, and the SEI structure and components. Design principles for achieving less dead Li0 and a higher CE are proposed and demonstrated as a proof of concept in LMBs.

"This work describes the comprehensive understanding of dead Li0 formation, providing guidance to reduce dead Li0 for developing future LMBs with higher CE," said Prof. Zhang.

The results were published in Journal of Energy Chemistry.

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Novel late-stage colorectal cancer treatment proves effective in preclinical models

MINNEAPOLIS/ST. PAUL (04/28/2021) -- In a recent discovery, University of Minnesota Medical School researchers uncovered a new way to potentially target and treat late-stage colorectal cancer - a disease that kills more than 50,000 people each year in the United States. The team identified a novel mechanism by which colorectal cancer cells evade an anti-tumor immune response, which helped them develop an exosome-based therapeutic strategy to potentially treat the disease.

"Late-stage colorectal cancer patients face enormous challenges with current treatment options. Most of the time, the patient's immune system cannot efficiently fight against tumors, even with the help of the FDA-approved cancer immunotherapies," said Subree Subramanian, PhD, an associate professor in the U of M Medical School's Department of Surgery, and a senior author of the study.

Subramanian partnered with Xianda Zhao, MD, PhD, a postdoctoral fellow in his laboratory, and the duo set out to investigate how colorectal cancer becomes resistant to available immunotherapies. Their findings, recently published in Gastroenterology, include:

- Colorectal cancer cells secrete exosomes that carry immunosuppressive microRNAs (miR-424) that actually prevent T cell and dendritic cell function because they block key proteins (CD28 and CD80) on these immune cell types, respectively. In the absence of these proteins, the T cells, which would normally kill the cancer cells, become ineffective and are eliminated from tumors, allowing tumors to grow.

- By blocking these immunosuppressive microRNAs in cancer cells, the team observed an enhanced anti-tumor immune response and discovered that cancer cell-secreted exosomes also contain tumor-specific antigens that can stimulate the tumor-specific T cell response.

- The researchers tested tumor-secreted exosomes without immunosuppressive microRNAs, in combination with immune checkpoint inhibitors, as a novel combination therapy in preclinical models with advanced-stage colorectal cancer, which proved effective.

"Our studies indicate that disrupting specific immunosuppressive factors in tumor cells helps unleash the immune system to effectively control tumor growth and metastasis in preclinical models with late-stage colorectal cancer," said Subramanian, who is also a member of the Masonic Cancer Center. "Eliminating the immune suppressive effects of those exosomes is now the focus of a new treatment option for patients with this deadly disease."

The intellectual property behind the modified exosome technology has been protected with assistance from the U of M Technology Commercialization. The team is currently developing clinical-grade exosomes that can be tested in clinical trials for patients with colorectal cancer.

Credit: 
University of Minnesota Medical School

Mapping the electronic states in an exotic superconductor

image: (Left) Through neutron scattering experiments, scientists observed distinct patterns of magnetic correlations in superconducting ("single-stripe magnetism") and nonsuperconducting ("double-stripe magnetism") samples of a compound containing iron (Fe), tellurium (Te), and selenium (Se). (Right) A material phase diagram showing where the superconducting state (SC), nonsuperconducting state (NSC), and topological superconducting state (SC + TSS) appear as a function of Fe and Te concentrations. The starred A refers to the nonsuperconducting sample and the starred B to the superconducting sample. Overlaid on the phase diagram are photoemission spectra showing the emergence (left) and absence (right) of the topological state. Topological superconductivity is an electronic state that could be harnessed for more robust quantum computing.

Image: 
Brookhaven National Laboratory

UPTON, NY--Scientists characterized how the electronic states in a compound containing iron, tellurium, and selenium depend on local chemical concentrations. They discovered that superconductivity (conducting electricity without resistance), along with distinct magnetic correlations, appears when the local concentration of iron is sufficiently low; a coexisting electronic state existing only at the surface (topological surface state) arises when the concentration of tellurium is sufficiently high. Reported in Nature Materials, their findings point to the composition range necessary for topological superconductivity. Topological superconductivity could enable more robust quantum computing, which promises to deliver exponential increases in processing power.

"Quantum computing is still in its infancy, and one of the key challenges is reducing the error rate of the computations," said first author Yangmu Li, a postdoc in the Neutron Scattering Group of the Condensed Matter Physics and Materials Science (CMPMS) Division at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory. "Errors arise as qubits, or quantum information bits, interact with their environment. However, unlike trapped ions or solid-state qubits such as point defects in diamond, topological superconducting qubits are intrinsically protected from part of the noise. Therefore, they could support computation less prone to errors. The question is, where can we find topological superconductivity?

In this study, the scientists narrowed the search within one compound known to host topological surface states and belonging to the family of iron-based superconductors. In this compound, topological and superconducting states are not distributed uniformly across the surface. Understanding what's behind these variations in electronic states and how to control them is key to enabling practical applications like topologically protected quantum computing.

From previous research, the team knew modifying the amount of iron could switch the material from a superconducting to nonsuperconducting state. For this study, physicist Gendu Gu of the CMPMS Division grew two types of large single crystals, one with slightly more iron relative to the other. The sample with the higher iron content is nonsuperconducting; the other sample is superconducting.

To understand whether the arrangement of electrons in the bulk of the material varied between the superconducting and nonsuperconducting samples, the team turned to spin-polarized neutron scattering. The Spallation Neutron Source (SNS), located at DOE's Oak Ridge National Laboratory, is home to a one-of-a-kind instrument for performing this technique.

"Neutron scattering can tell us the magnetic moments, or spins, of electrons and the atomic structure of a material," explained corresponding author, Igor Zaliznyak, a physicist in the CMPMS Division Neutron Scattering Group who led the Brookhaven team that helped design and install the instrument with collaborators at Oak Ridge. "In order to single out the magnetic properties of electrons, we polarize the neutrons using a mirror that reflects only one specific spin direction."

To their surprise, the scientists observed drastically different patterns of electron magnetic moments in the two samples. Therefore, the slight alteration in the amount of iron caused a change in electronic state.

"After seeing this dramatic change, we figured we should look at the distribution of electronic states as a function of local chemical composition," said Zaliznyak.

At Brookhaven's Center for Functional Nanomaterials (CFN), Li, with support from CFN staff members Fernando Camino and Gwen Wright, determined the chemical composition across representative smaller pieces of both sample types through energy-dispersive x-ray spectroscopy. In this technique, a sample is bombarded with electrons, and the emitted x-rays characteristic of different elements are detected. They also measured the local electrical resistance--which indicates how coherently electrons can transport charge--with microscale electrical probes. For each crystal, Li defined a small square grid (100 by 100 microns). In total, the team mapped the local composition and resistance at more than 2,000 different locations.

"Through the experiments at the CFN, we characterized the chemistry and overall conduction properties of the electrons," said Zaliznyak. "But we also need to characterize the microscopic electronic properties, or how electrons propagate in the material, whether in the bulk or on the surface. Superconductivity induced in electrons propagating on the surface can host topological objects called Majorana modes, which are in theory one of the best ways to perform quantum computations. Information on bulk and surface electronic properties can be obtained through photoemission spectroscopy."

For the photoemission spectroscopy experiments, Zaliznyak and Li reached out to Peter Johnson, leader of the CMPMS Division Electron Spectroscopy Group, and Nader Zaki, a scientific associate in Johnson's group. By measuring the energy and momentum of electrons ejected from the samples (using the same spatial grid) in response to light, they quantified the strengths of the electronic states propagating on the surface, in the bulk, and forming the superconducting state. They quantitatively fit the photoemission spectra to a model that characterizes the strengths of these states.

Then, the team mapped the electronic state strengths as a function of local composition, essentially building a phase diagram.
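The sketch below is a generic illustration of that last step, not the team's actual pipeline: it bins hypothetical per-spot measurements (local Fe and Te fractions plus a measured state strength) onto a composition grid and averages within each bin, which is essentially how roughly 2,000 point measurements become a composition-resolved map. All array names and bin choices here are hypothetical.

```python
# Generic illustration (not the team's actual pipeline): aggregating per-spot
# measurements into a composition-resolved map. All arrays and bins are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_spots = 2000                               # order of the number of mapped locations
fe_frac = rng.uniform(0.0, 0.15, n_spots)    # hypothetical local excess-Fe fraction
te_frac = rng.uniform(0.5, 1.0, n_spots)     # hypothetical local Te fraction
strength = rng.uniform(0.0, 1.0, n_spots)    # hypothetical electronic-state strength

# Bin the spots on a (Fe, Te) composition grid and average the measured strength
# in each bin -- the essence of turning point measurements into a phase diagram.
fe_bins = np.linspace(0.0, 0.15, 16)
te_bins = np.linspace(0.5, 1.0, 16)
sums, _, _ = np.histogram2d(fe_frac, te_frac, bins=[fe_bins, te_bins], weights=strength)
counts, _, _ = np.histogram2d(fe_frac, te_frac, bins=[fe_bins, te_bins])
phase_map = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)

print(phase_map.shape)   # (15, 15) grid of average state strength vs composition
```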

"This phase diagram includes the superconducting and topological phase transitions and points to where we could find a useful chemical composition for quantum computation materials," said Li. "For certain compositions, no coherent electronic states exist to develop topological superconductivity. In previous studies, people thought instrument failure or measurement error were why they weren't seeing features of topological superconductivity. Here we show that it's due to the electronic states themselves."

"When the material is close to the transition between the topological and nontopological state, you can expect fluctuations," added Zaliznyak. "For topology to arise, the electronic states need to be well-developed and coherent. So, from a technological perspective, we need to synthesize materials away from the transition line."

Next, the scientists will expand the phase diagram to explore the compositional range in the topological direction, focusing on samples with less selenium and more tellurium. They are also considering applying neutron scattering to understand an unexpected energy gap (an energy range where no electrons are allowed) opening in the topological surface state of the same compound. Johnson's group recently discovered this gap and hypothesized it was caused by surface magnetism.

Credit: 
DOE/Brookhaven National Laboratory

Astronomers detect first ever hydroxyl molecule signature in an exoplanet atmosphere

image: Artist's impression of an ultra-hot Jupiter exoplanet, WASP-33b.

Image: 
Astrobiology Center

An international collaboration of astronomers led by a researcher from the Astrobiology Center and Queen's University Belfast, and including researchers from Trinity College Dublin, has detected a new chemical signature in the atmosphere of an extrasolar planet (a planet that orbits a star other than our Sun).

The hydroxyl radical (OH) was found on the dayside of the exoplanet WASP-33b. This planet is a so-called 'ultra-hot Jupiter', a gas-giant planet orbiting its host star much closer than Mercury orbits the Sun and therefore reaching atmospheric temperatures of more than 2,500° C (hot enough to melt most metals).

The lead researcher based at the Astrobiology Center and Queen's University Belfast, Dr Stevanus Nugroho, said: "This is the first direct evidence of OH in the atmosphere of a planet beyond the Solar System. It shows not only that astronomers can detect this molecule in exoplanet atmospheres, but also that they can begin to understand the detailed chemistry of this planetary population."

In the Earth's atmosphere, OH is mainly produced by the reaction of water vapour with atomic oxygen. It is a so-called 'atmospheric detergent' and plays a crucial role in the Earth's atmosphere to purge pollutant gasses that can be dangerous to life (e.g., methane, carbon monoxide).

In a much hotter and bigger planet like WASP-33b (where astronomers have previously detected signs of iron and titanium oxide gas), OH plays a key role in determining the chemistry of the atmosphere through interactions with water vapour and carbon monoxide. Most of the OH in the atmosphere of WASP-33b is thought to have been produced by the destruction of water vapour due to the extremely high temperature.

"We see only a tentative and weak signal from water vapour in our data, which would support the idea that water is being destroyed to form hydroxyl in this extreme environment," explained Dr Ernst de Mooij from Queen's University Belfast, a co-author on this study.

To make this discovery, the team used the InfraRed Doppler (IRD) instrument at the 8.2-meter diameter Subaru Telescope located in the summit area of Maunakea in Hawai`i (about 4,200 m above sea level). This new instrument can detect atoms and molecules through their 'spectral fingerprints,' unique sets of dark absorption features superimposed on the rainbow of colours (or spectrum) that are emitted by stars and planets.

As the planet orbits its host star, its velocity relative to the Earth changes with time. Just like the siren of an ambulance or the roar of a racing car's engine changes pitch while speeding past us, the frequencies of light (e.g., colour) of these spectral fingerprints change with the velocity of the planet. This allows us to separate the planet's signal from its bright host star, which normally overwhelms such observations, despite modern telescopes being nowhere near powerful enough to take direct images of such 'hot Jupiter' exoplanets.
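The sketch below puts a rough number on the Doppler effect described in that paragraph. The relation delta_lambda / lambda = v / c is standard; the 200 km/s orbital velocity and the 1,500 nm wavelength are illustrative assumptions for a close-in hot Jupiter observed in the near-infrared, not values quoted in the study.

```python
# Sketch of the Doppler relation described above: delta_lambda / lambda = v / c.
# The 200 km/s velocity and 1,500 nm wavelength are illustrative assumptions.

C_KM_S = 299_792.458      # speed of light
V_PLANET_KM_S = 200.0     # assumed orbital velocity of a close-in hot Jupiter
WAVELENGTH_NM = 1_500.0   # assumed near-infrared wavelength

shift_nm = WAVELENGTH_NM * V_PLANET_KM_S / C_KM_S
print(f"Fractional shift: {V_PLANET_KM_S / C_KM_S:.2e}")           # ~6.7e-4
print(f"Line shift at {WAVELENGTH_NM:.0f} nm: {shift_nm:.2f} nm")  # ~1.0 nm

# Because the planet's velocity changes along its orbit while the star's barely does,
# the planet's spectral fingerprint slides in wavelength and can be separated from
# the much brighter stellar spectrum.
```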

Dr Neale Gibson, Assistant Professor at Trinity College Dublin and co-author of this work, said: "The science of extrasolar planets is relatively new, and a key goal of modern astronomy is to explore these planets' atmospheres in detail and eventually to search for 'Earth-like' exoplanets - planets like our own. Every new atmospheric species discovered further improves our understanding of exoplanets and the techniques required to study their atmospheres, and takes us closer to this goal."

By taking advantage of the unique capabilities of IRD, the astronomers were able to detect the tiny signal from hydroxyl in the planet's atmosphere. "IRD is the best instrument to study the atmosphere of an exoplanet in the infrared," adds Professor Motohide Tamura, one of the principal investigators of IRD, Director of the Astrobiology Center, and co-author of this work.

"These techniques for atmospheric characterisation of exoplanets are still only applicable to very hot planets, but we would like to further develop instruments and techniques that enable us to apply these methods to cooler planets, and ultimately, to a second Earth," says Dr Hajime Kawahara, assistant professor at the University of Tokyo and co-author of this work.

Professor Chris Watson, from Queen's University Belfast, a co-author on this study, continues: "While WASP-33b may be a giant planet, these observations are the testbed for the next-generation facilities like the Thirty Meter Telescope and the European Extremely Large Telescope in searching for biosignatures on smaller and potentially rocky worlds, which might provide hints to one of the oldest questions of humankind: 'Are we alone?'"

Credit: 
Trinity College Dublin

Seismicity on Mars full of surprises, in first continuous year of data

The SEIS seismometer package from the Mars InSight lander has collected its first continuous Martian year of data, revealing some surprises among the more than 500 marsquakes detected so far.

At the Seismological Society of America (SSA)'s 2021 Annual Meeting, Savas Ceylan of ETH Zürich discussed some of the findings from The Marsquake Service, the part of the InSight ground team that detects marsquakes and curates the planet's seismicity catalog.

Marsquakes differ from earthquakes in a number of ways, Ceylan explained. To begin with, they are much smaller than earthquakes, with the largest event recorded at teleseismic distances around magnitude 3.6. SEIS is able to detect these small events because the background seismic noise on Mars can be much lower than on Earth, without the constant tremor produced by ocean waves.

"For much of a Martian year, from around sunset until early hours, the Martian atmosphere becomes very quiet, so there is no local noise either," he said. "Additionally, our sensors are optimized and shielded for operating under severe Martian conditions, such as extremely low temperatures and the extreme diurnal temperature fluctuations on the red planet."

Marsquakes also come in two distinct varieties: low-frequency events with seismic waves propagating at various depths in the planet's mantle, and high-frequency events with waves that appear to propagate through the crust. "In terms of how the seismic energy decays over time, the low-frequency events appear to be more like earthquakes" in which the shaking dies away relatively quickly, Ceylan said, "while the high-frequency events are resembling moonquakes" in persisting for longer periods.

The vast majority of the events are high-frequency and occur at hundreds of kilometers of distance from the lander. "It is not quite clear to us how these events could be confined to only high frequency energy while they occur at such large distances," he said. "On top of that, the frequency of those events seems to vary over the Martian year, which is a pattern that we do not know at all from Earth."

Only a handful of marsquakes have clear seismic phase arrivals--the order in which the different types of seismic waves arrive at a location--which allows researchers to calculate the direction and distance the waves come from. All these marsquakes originate from a sunken area of the surface called Cerberus Fossae, about 1800 kilometers away from the InSight Lander.

Cerberus Fossae is one of the youngest geological structures on Mars, and may have formed from extensional faulting or subsidence due to dike emplacement. Recent studies suggest an extensional mechanism may be the source of the Cerberus Fossae quakes, Ceylan noted, "however, we have a long way in front of us to be able to explain the main tectonic mechanisms behind these quakes."

The biggest challenge for The Marsquake Service and InSight science team has been "adapting to unexpected signals in the data from a new planet," Ceylan said.

Although there were significant efforts to shield SEIS from non-seismic noise by covering it and placing it directly on the Martian surface, its data are still contaminated by weather and lander noise.

"We needed to understand the noise on Mars from scratch, discover how our seismometers behave, how the atmosphere of Mars affects seismic recordings, and find alternative methods to interpret the data properly," said Ceylan.

It took the Service a while to be "confident in identifying the different event types," he added, "discriminating these weak signals from the rich and varied background noise, and being able to characterize these novel signals in a systematic manner to provide a self-consistent catalog."

The InSight seismicity catalog and data are released to the public via IPG Paris, IRIS, and PDS on a three-month schedule, with a three-month data delay.

Credit: 
Seismological Society of America

Mars' changing habitability recorded by ancient dune fields in Gale crater

Understanding whether Mars was once able to support life has been a major driving force for Mars research over the past 50 years. To decipher the planet's ancient climate and habitability, researchers look to the rock record - a physical record of ancient surface processes which reflect the environment and the prevailing climate at the time the rocks were deposited.

In a new paper published in JGR: Planets, researchers on the NASA-JPL Mars Science Laboratory mission used the Curiosity rover to add another piece to the puzzle of Mars' ancient past by investigating a unit of rocks within Gale crater.

They found evidence of an ancient dune field preserved as a layer of rocks in Gale crater, which overlies rock layers that were deposited in a large lake. The rock remnants of the dune field are known today as the Stimson formation.

The findings help scientists understand surface and atmospheric processes - such as the direction the wind blew sand to form dunes - and potentially how Mars' climate evolved from an environment that potentially harboured microbial life, to an uninhabitable one.

By looking at the preserved rock layers through images collected by the Curiosity rover, the researchers reconstructed the shape, migration direction and size of the large dunes, also known as draas, that occupied that part of the crater.

The models of ancient dunes, created by Imperial researchers, show that dunes were nestled next to the central peak of Gale crater - known as Mount Sharp - on a wind-eroded surface at a five-degree angle. The research also found that the dunes were compound dunes - large dunes which hosted their own sets of smaller dunes that travelled in different directions to the main dune.

Lead author Dr Steven Banham of Imperial's Department of Earth Science and Engineering said: "As the wind blows, it transports sand grains of a certain size, and organises them into piles of sand we recognise as sand dunes. These landforms are common on Earth in sandy deserts, such as the Sahara, the Namibian dune field, and the Arabian deserts. The strength of the wind and its uniformity of direction control the shape and size of the dune, and evidence of this can be preserved in the rock record.

"If there is an excess of sediment transported into a region, dunes can climb as they migrate and partially bury adjacent dunes. These buried layers contain a feature called 'cross-bedding', which can give an indication of the size of the dunes, and the direction which they were migrating. By investigating these cross beds, we were able to determine these strata were deposited by specific dunes that form when competing winds transport sediment in two different directions.

"It's amazing that from looking at Martian rocks we can determine that two competing winds drove these large dunes across the plains of Gale crater three and a half billion years ago. This is some of the first evidence we have of variable wind directions - be they seasonal or otherwise."

The lower part of Mount Sharp is composed of ancient lakebed sediments. These sediments accumulated on the lakebed when the crater flooded, shortly after its formation 3.8 billion years ago. Curiosity has spent much of the last nine years investigating these rocks for signs of habitability.

Dr Banham added: "More than 3.5 billion years ago this lake dried out, and the lake bottom sediments were exhumed and eroded to form the mountain at the centre of the crater - the present-day Mount Sharp. The flanks of the mountain are where we have found evidence that an ancient dune field formed after the lake, indicating an extremely arid climate."

However, the new findings suggest that the ancient dune field might have been less nurturing of life than previously thought. Dr Banham said: "The vast expanse of the dune field wouldn't have been a particularly hospitable place for microbes to live, and the record left behind would rarely preserve evidence of life, if there was any.

"This desert sand represents a snapshot of time within Gale crater, and we know that the dune field was preceded by lakes - yet we don't know what overlies the desert sandstones further up Mount Sharp. It could be more layers deposited in arid conditions, or it could be deposits associated with more humid climates. We will have to wait and see."

Rovers on Mars are allowing researchers to explore the planet in detail like never before. Dr Banham added: "Although geologists have been reading rocks on Earth for 200 years, it's only in the last decade or so that we've been able to read Martian rocks with the same level of detail as we do on Earth."

The researchers continue to examine rocks found by Curiosity and are now focusing on the wind patterns recorded by dunes further up Mount Sharp. Dr Banham said: "We're interested to see how the dunes reflect the wider climate of Mars, its changing seasons, and longer-term changes in wind direction. Ultimately, this all relates to the major driving question: to discover whether life ever arose on Mars."

Credit: 
Imperial College London

How is a molecular machine assembled?

The study was published online on 12 April 2021 in the journal Nature Plants by a team from Ruhr-Universität Bochum (RUB), the Max Planck Institutes of Biochemistry and Biophysics, the Center for Synthetic Microbiology (SYNMIKRO) and the Chemistry Department at Philipps-Universität Marburg, the University of Illinois Urbana-Champaign, USA, and Université Paris-Saclay, France.

Catalyst of life

Photosystem II (PS II) is of fundamental importance for life, as it is able to catalyse the splitting of water. The oxygen released in this reaction allows us to breathe. In addition, PS II converts light energy in such a way that atmospheric CO2 can be used to synthesise organic molecules. PS II thus represents the molecular beginning of all food chains. Its structure and function have already been researched in detail, but little has been known so far about the molecular processes that lead to the orderly assembly of the complex.

Assembly production

PS II consists of more than 100 individual parts that have to come together in a well-orchestrated process in order to ultimately create a fully functional machine. Helper proteins, so-called assembly factors, which are responsible for the sub-steps, play a crucial role in this process. "Picture them as robots on an assembly line, for example making a car," explains Professor Marc Nowaczyk from the RUB Chair for Plant Biochemistry. "Each robot adds a part or assembles prefabricated modules to end up with a perfect machine."

In figuring out how this is done, the difficulty lay in isolating an intermediate product, including its molecular helpers, because such transition states are very unstable compared to the finished product and are only present in very small quantities. Only by using tricks, such as removing a part of the assembly-line production, was it possible to isolate an intermediate stage with the associated helper proteins for the first time.

Cold insights: cryo-electron microscopy

Thanks to cryo-electron microscopy, sensitive protein structures - including PS II transition states and even the smallest virus particles - can be imaged. The data, published in Nature Plants, show the molecular structure of a PS II transition complex with as many as three helper proteins. "During the construction of the PS II structural model, it turned out that one of these helper proteins causes previously unknown structural changes that we eventually linked to a novel protective mechanism," explains Dr. Till Rudack from the Centre for Protein Diagnostics (ProDi).

During this assembly step, PS II is only partially active: light-induced processes can already take place, but water splitting is not yet activated. This, as it turned out, leads to the formation of aggressive oxygen species that can damage the unfinished complex. However, the binding of the helper protein and the associated structural change at PS II prevent the formation of the harmful molecules and, consequently, protect the complex in its vulnerable phase. Another helper protein in turn prepares the activation of the water-splitting mechanism. "As soon as we succeed in identifying any further intermediate stages of this activation, this could be the key to a profound understanding of molecular light-driven water splitting. As a result, we could advance the development of synthetic catalysts for the energy conversion of sunlight into organic substances," conclude the authors.

Credit: 
Ruhr-University Bochum

Study reveals the workings of nature's own earthquake blocker

image: Researchers gathering sediment near the Alpine Fault in New Zealand to study the history of the area's earthquakes.

Image: 
Jamie Howarth/Victoria University of Wellington

A new study finds a naturally occurring “earthquake gate” that decides which earthquakes are allowed to grow into magnitude 8 or greater.

Sometimes, the “gate” stops earthquakes in the magnitude 7 range, while ones that pass through the gate grow to magnitude 8 or greater, releasing over 32 times as much energy as a magnitude 7.
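The jump from magnitude 7 to magnitude 8 matters so much because the magnitude scale is logarithmic in energy: each whole step releases roughly 10^1.5, or about 32 times, more energy. A minimal sketch of that arithmetic, using the standard Gutenberg-Richter energy relation (illustrative only, not part of the study's analysis):

```python
# Illustrative arithmetic: radiated seismic energy versus moment magnitude,
# using the standard Gutenberg-Richter relation log10(E) = 1.5*M + 4.8 (E in joules).
def seismic_energy_joules(magnitude: float) -> float:
    return 10 ** (1.5 * magnitude + 4.8)

ratio = seismic_energy_joules(8.0) / seismic_energy_joules(7.0)
print(f"A magnitude 8 releases ~{ratio:.0f} times the energy of a magnitude 7")  # ~32
```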

“An earthquake gate is like someone directing traffic at a one-lane construction zone. Sometimes you pull up and get a green ‘go’ sign, other times you have a red ‘stop’ sign until conditions change,” said UC Riverside geologist Nicolas Barth.

Researchers learned about this gate while studying New Zealand’s Alpine Fault, which they determined has about a 75 percent chance of producing a damaging earthquake within the next 50 years. The modeling also suggests this next earthquake has an 82 percent chance of rupturing through the gate and being magnitude 8 or greater. These insights are now published in the journal Nature Geoscience.

Barth was part of an international research team including scientists from Victoria University of Wellington, GNS Science, the University of Otago, and the US Geological Survey.

Their work combined two approaches to studying earthquakes: evidence of past earthquakes collected by geologists and computer simulations run by geophysicists. Only by using both jointly were the researchers able to get new insight into the expected behavior of future earthquakes on the Alpine Fault.

“Big earthquakes cause serious shaking and landslides that carry debris down rivers and into lakes,” said lead author Jamie Howarth, Victoria University of Wellington geologist. “We can drill several meters through the lake sediments and recognize distinct patterns that indicate an earthquake shook the region nearby. By dating the sediments, we can precisely determine when the earthquake occurred.”

Sedimentary records collected at six sites along the Alpine Fault identified the extent of the last 20 significant earthquakes over the past 4,000 years, making it one of the most detailed earthquake records of its kind in the world.

The completeness of this earthquake record offered a rare opportunity for the researchers to compare their data against a 100,000-year record of computer-generated earthquakes. The research team used an earthquake simulation code developed by James Dieterich, distinguished professor emeritus at UC Riverside.

Only the model with the fault geometry matching the Alpine Fault was able to reproduce the earthquake data. “The simulations show that a smaller magnitude 6 to 7 earthquake at the earthquake gate can change the stress and break the streak of larger earthquakes,” Barth said. “We know the last three ruptures passed through the earthquake gate. In our best-fit model the next earthquake will also pass 82% of the time.”
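As a rough illustration of how such a figure can be read off a long synthetic catalog (the toy catalog below is invented for illustration and is not the Dieterich simulator output used in the study), one simply counts how often simulated ruptures that reach the gate continue through it:

```python
# Toy example only: estimating a "pass the gate" probability from a synthetic
# earthquake catalog. The events and rates here are invented placeholders.
import random

random.seed(42)
# Each synthetic event records whether the rupture reached the gate section
# and, if so, whether it continued through to become a larger earthquake.
events = []
for _ in range(5000):
    reached = random.random() < 0.5
    passed = reached and random.random() < 0.8   # placeholder pass rate
    events.append((reached, passed))

reached_gate = [e for e in events if e[0]]
p_pass = sum(1 for e in reached_gate if e[1]) / len(reached_gate)
print(f"Fraction of gate-reaching ruptures that pass through: {p_pass:.0%}")
```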

Looking beyond New Zealand, earthquake gates are an important area of active research in California. The Southern California Earthquake Center, a consortium of over 100 institutions of which UCR is a core member, has made earthquake gates a research priority. In particular, researchers are targeting the Cajon Pass region near San Bernardino, where the interaction of the San Andreas and San Jacinto faults may cause earthquake gate behavior that could regulate the size of the next damaging earthquake there.

“We are starting to get to the point where our data and models are detailed enough that we can begin forecasting earthquake patterns. Not just how likely an earthquake is, but how big and how widespread it may be, which will help us better prepare,” Barth said.

Journal

Nature Geoscience

DOI

10.1038/s41561-021-00721-4

Credit: 
University of California - Riverside

Study sheds light on stellar origin of 60Fe

image: 60Fe yield in an 18 solar mass star. Blue lines (LMP) show calculations based on the previous decay rate; red lines (present work) show calculations based on the new measurement.

Image: 
Physical Review Letters

Researchers from the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences and their collaborators have recently made great progress in the study of the stellar beta-decay rate of 59Fe, which constitutes an important step towards understanding 60Fe nucleosynthesis in massive stars. The results were published in Physical Review Letters on April 12.

Radioactive nuclide 60Fe plays an essential role in nuclear astrophysical studies. It is synthesized in massive stars by successive neutron captures on a stable nucleus of 58Fe and, during the late stages of stellar evolution, ejected into space via a core-collapse supernova.

The characteristic gamma lines associated with the decay of 60Fe have been detected by space-based gamma-ray detectors. By comparing the 60Fe gamma-ray flux to that from 26Al, which has a similar origin to 60Fe, researchers should be able to obtain important information on nucleosynthesis and stellar models. However, the observed 26Al/60Fe gamma-ray flux ratio does not match theoretical predictions, owing to uncertainties in both stellar models and nuclear data inputs.

The stellar beta-decay rate of 59Fe is among the greatest uncertainties in nuclear data inputs. During the nucleosynthesis of 60Fe in massive stars, 59Fe can either capture a neutron to produce 60Fe or beta decay to 59Co. Therefore, the stellar beta-decay rate of 59Fe is critical to the yield of 60Fe.
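That competition can be pictured as a simple branching: of the 59Fe produced, the fraction that goes on to become 60Fe is the neutron-capture rate divided by the sum of the capture and beta-decay rates. A small sketch with placeholder rates (illustrative values only, not measured ones):

```python
# Branching sketch: 59Fe either captures a neutron (rate lam_capture) to form 60Fe
# or beta-decays (rate lam_beta) to 59Co. Rates below are placeholders in arbitrary units.
def fraction_to_60fe(lam_capture: float, lam_beta: float) -> float:
    return lam_capture / (lam_capture + lam_beta)

# A faster stellar beta-decay rate diverts more 59Fe away from 60Fe production.
for lam_beta in (0.5, 1.0, 2.0):
    print(f"beta rate {lam_beta}: fraction to 60Fe = {fraction_to_60fe(1.0, lam_beta):.2f}")
```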

Although the decay rate of 59Fe has been accurately measured in laboratories, its decay rate may be significantly enhanced in stellar environments due to contributions from its excited states. However, direct measurement of the beta-decay rate from excited states is very challenging since one has to create a high-temperature environment as in stars to keep the 59Fe nuclei in their excited states.

To address this problem, researchers at IMP proposed a new method for measuring the stellar beta-decay rate of 59Fe. "The nuclear charge-exchange reaction is an indirect measurement alternative, which provides key nuclear structure information that can determine those decay rates," said GAO Bingshui, a researcher at IMP.

The researchers carried out their experiment at the Coupled Cyclotron Facility at Michigan State University. In the experiment, a secondary triton beam produced by the cyclotrons was used to bombard a 59Co target. Then the reaction products, 3He particles and gamma rays, were detected by the S800 spectrometer and GRETINA gamma-ray detection array. Using this information, the beta-decay rates from the 59Fe excited states were determined. This measurement thus eliminated one of the major nuclear uncertainties in predicting the yield of 60Fe.

By comparing stellar model calculations using the new decay-rate data with previous calculations, the researchers found that, for an 18 solar mass star, the yield of 60Fe is 40% lower when the new data are used. The result reduces the tension between theoretical predictions and observations of the 26Al/60Fe ratio.

"It is an important step towards understanding 60Fe nucleosynthesis in massive stars and it will provide a more solid basis for future astrophysical simulations," said LI Kuoang, the collaborator of Gao.

Credit: 
Chinese Academy of Sciences Headquarters

Elusive particle may point to undiscovered physics

ITHACA, N.Y. - The muon is a tiny particle, but it has the giant potential to upend our understanding of the subatomic world and reveal an undiscovered type of fundamental physics.

That possibility is looking more and more likely, according to the initial results of an international collaboration - hosted by the U.S. Department of Energy's Fermi National Accelerator Laboratory - that involved key contributions by a Cornell team led by Lawrence Gibbons, professor of physics in the College of Arts and Sciences.

The collaboration, which brought together 200 scientists from 35 institutions in seven countries, set out to confirm the findings of a 1998 experiment that startled physicists by indicating that the muon's magnetic field deviates significantly from the prediction of the Standard Model, which is used to explain the laws that govern fundamental particles.

image: Digitizer modules undergo testing in the lab of Lawrence Gibbons, professor of physics, before being shipped to the Fermi National Accelerator Laboratory. Twenty-eight crates of these modules were installed around the muon g-2 ring.

"The question was, what's going on? Was the experiment wrong? Or is the theory incomplete?" Gibbons said. "And if the theory is incomplete, then confirming what's going on becomes the first terrestrial evidence of a totally new kind of fundamental particle or force that we don't know about. It would be the first experiment on Earth that is sort of the equivalent of the discovery of dark matter in space."

On April 7, the team confirmed that the original findings were correct, which means there must be more to the physics surrounding the muon than previously known.

Muons are like electrons but are more than 200 times more massive. Both are essentially tiny magnets with their own magnetic field. Muons are far more unstable, though, and decay in a few millionths of a second. They are also notoriously difficult to observe at the quantum mechanical level because the vacuum in which they exist is not a big empty cavity, but rather a bubbling, frothing, dynamic environment.

"It's your cappuccino foam version of the vacuum, where there's virtual particles winking in and out of existence all the time," Gibbons said. "And that turns out to affect the strength of the magnetic field of a muon."

To figure out why, researchers at Brookhaven National Laboratory set out 20 years ago to measure the absolute strength of the muon's magnetic field. They did this by firing a beam of muons into a 14-meter-diameter magnetic ring at nearly the speed of light while a series of detectors captured data. The scientists discovered a major discrepancy in the muon's magnetic field: it was more than 3.5 standard deviations away from the prediction of the Standard Model made by theoretical physicists.
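The "3.5 standard deviations" language is simply a way of comparing a measurement and a prediction relative to their combined uncertainties. A minimal sketch of that calculation, with made-up numbers rather than the Brookhaven values:

```python
# Illustrative only: expressing a measurement-vs-prediction gap in standard deviations.
# The numbers below are placeholders, not the published g-2 values.
measured, sigma_measured = 100.55, 0.12      # arbitrary units
predicted, sigma_predicted = 100.00, 0.10

combined = (sigma_measured**2 + sigma_predicted**2) ** 0.5
print(f"discrepancy = {abs(measured - predicted) / combined:.1f} sigma")   # ~3.5
```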

A plan was eventually hatched to repeat the Brookhaven experiment with higher precision. In 2013, the Brookhaven magnetic ring was transported to the Fermilab facility in Batavia, Illinois, where it was coupled with an even stronger particle accelerator that could produce more than 20 times as many muons. In 2018, the first of several experimental runs was launched.

This muon g-2 experiment - "g" refers to the value of the magnet's strength caused by its intrinsic spin, which is slightly larger than two - was successful thanks to a system of detectors developed through a joint partnership between Cornell and the University of Washington.

The University of Washington group built a set of 24 calorimeters out of lead fluoride crystals and silicon photomultipliers that measure a blue light, known as Cherenkov radiation, that results when the positrons from muon decay strike the crystals. By measuring the time and amount of light for each of about 8 billion positrons, the researchers can pinpoint the muon's precession rate, which is the frequency of its rotational wobble. The rate is directly related to the value of g-2.
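The precession rate is extracted from how the positron counts oscillate in time, the well-known "wiggle plot": above an energy threshold, the count rate follows a decaying exponential modulated by a cosine at the anomalous precession frequency. A self-contained sketch of that kind of fit, with round illustrative numbers rather than the experiment's actual data or analysis code:

```python
# Sketch of the "wiggle plot" fit: positron counts vs time follow
# N(t) = N0 * exp(-t/tau) * (1 + A*cos(omega_a*t + phi)).
# Numbers are round illustrative values, not the experiment's.
import numpy as np
from scipy.optimize import curve_fit

def wiggle(t, n0, tau, amp, omega_a, phi):
    return n0 * np.exp(-t / tau) * (1 + amp * np.cos(omega_a * t + phi))

rng = np.random.default_rng(0)
t = np.linspace(0, 200, 4000)                        # microseconds
true_params = (1e4, 64.0, 0.35, 1.44, 0.0)           # tau in us, omega_a in rad/us
counts = rng.poisson(wiggle(t, *true_params))        # toy "data" with counting noise

# A reasonable initial guess is assumed; real analyses are far more involved.
popt, _ = curve_fit(wiggle, t, counts, p0=(9e3, 60.0, 0.3, 1.44, 0.1))
print(f"fitted omega_a = {popt[3]:.4f} rad/us (true {true_params[3]})")
```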

The Cornell team built the digitizers that could look at the electronic signal coming out of the detectors and create a digitized version of the waveform that could be analyzed offline. The researchers were supported in the effort by the Laboratory for Elementary-Particle Physics (LEPP), and their digitizers incorporated $200,000 worth of specialized analog-to-digital converter chips donated by Texas Instruments.

Gibbons' group also built one of the pair of reconstruction packages that helped their collaborators parse and analyze the collected data. In obtaining the most precise measurements, they were assisted by David Rubin, the Boyce D. McDaniel Emeritus Professor of Physics (A&S), who helped correct for the spread of muon momenta in the stored beam and for the small vertical motion of the beam as it speeds around the magnetic ring. Two other Cornell faculty members, Toichiro "Tom" Kinoshita, professor emeritus of physics, and G. Peter Lepage, the Goldwin Smith Professor of Physics, both in A&S, contributed to the Standard Model prediction of g-2, against which the project compared its results.

As a fitting final touch, Gibbons chose to make the digitizer faceplate Cornell red.

With so much subatomic information to be sifted through, six different groups worked to separately confirm the muon's precession frequency. Gibbons helped design blinding software that would ensure the groups made their calculations independently.
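A toy version of the blinding idea (not the collaboration's actual software) is to add a secret, reproducible offset to the quantity each group reports, and remove it only at the joint unblinding:

```python
# Toy blinding sketch: each analysis group quotes a frequency shifted by a
# secret offset derived from a passphrase held elsewhere. Values are placeholders.
import hashlib

def blind(value: float, secret: str, window: float = 1.0):
    """Add a reproducible pseudo-random offset in [-window, +window) to 'value'."""
    digest = int(hashlib.sha256(secret.encode()).hexdigest(), 16)
    offset = (digest % 10_000) / 10_000 * 2 * window - window
    return value + offset, offset

blinded_value, offset = blind(229.0, secret="sealed-envelope-passphrase")
# Groups compare blinded results; the offset is subtracted only at unblinding.
print(f"blinded value: {blinded_value:.3f} (true value recovered at unblinding)")
```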

Then the time came to compare results.

"I have to say, it was nerve-racking. You go into the room, and there's all these points scattered all over the place from all the offsets, and you have to decide, OK, are we going to compare results now? And will they agree?" Gibbons said. "We were trying to measure something to 500 parts per billion. The range that we had was plus or minus 25 parts per million on the frequencies that we're trying to measure. There was a huge sigh of relief when we found everything agreed beautifully."

And when all the international collaborators came together online for the final unblinding of the magnetic field measurement and checked it against the original Brookhaven result?

"Oh man. It was like hats flying in the air," Gibbons said. "It was a combination of elation and relief."

The results from this first experimental run represent only 6% of the data the researchers hope to eventually collect. Additional analysis has already begun on a second and third run, which will generate three to four times as much data. It will be 10 years before all the analysis is complete.

"We landed right on top of this result that really could indicate that there's something totally new going on. We really want to push the uncertainty, the precision, to make the strongest possible statement that we can experimentally," said Gibbons, who began work on the project in 2011. "We may be onto something really profound, something we don't understand. And we still have to figure out what it is."

Credit: 
Cornell University

First results from Fermilab's Muon g-2 experiment strengthen evidence of new physics

image: David Flay examines the Muon g-2 plunging probe installation.

Image: 
Fermilab/DOE

AMHERST, Mass. - The long-awaited first results from the Muon g-2 experiment at the U.S. Department of Energy's Fermi National Accelerator Laboratory show fundamental particles called muons behaving in a way that is not predicted by scientists' best theory, the Standard Model of particle physics. This landmark result, made with unprecedented precision and to which UMass Amherst's David Kawall's research group made key contributions, confirms a discrepancy that has been gnawing at researchers for decades.

"Today is an extraordinary day, long awaited not only by us but by the whole international physics community," said Graziano Venanzoni, co-spokesperson of the Muon g-2 experiment and physicist at the Italian National Institute for Nuclear Physics. "A large amount of credit goes to our young researchers who, with their talent, ideas and enthusiasm, have allowed us to achieve this incredible result."

"It's fantastically interesting to work on," says Kawall, a professor in UMass's physics department. "Everything matters. Every little detail matters, and all future theories of physics will have to be compatible with this result."

A muon is about 200 times as massive as its cousin, the electron. Muons occur naturally when cosmic rays strike Earth's atmosphere, and particle accelerators at Fermilab can produce them in large numbers. Like electrons, muons act as if they have a tiny internal magnet. In a strong magnetic field, the direction of the muon's magnet precesses, or wobbles, much like the axis of a spinning top or gyroscope. The strength of the internal magnet determines the rate at which the muon precesses in an external magnetic field and is described by a number that physicists call the g-factor. This number can be calculated with ultra-high precision.
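As a rough numerical illustration of the relationship just described (standard physical constants, with a field strength chosen purely for illustration rather than taken from the experiment), the precession frequency scales directly with g:

```python
# Rough illustration: a magnetic moment with g-factor g precesses in a field B at
# omega = g * e * B / (2 * m). Constants are standard; B is illustrative only.
import math

e = 1.602176634e-19          # elementary charge, C
m_mu = 1.883531627e-28       # muon mass, kg
g = 2.00233                  # close to the known muon g-factor
B = 1.45                     # tesla, an illustrative storage-ring-scale field

omega = g * e * B / (2 * m_mu)              # precession angular frequency, rad/s
a_mu = (g - 2) / 2                          # the "anomaly" the experiment targets
print(f"precession frequency ~ {omega / (2 * math.pi) / 1e6:.1f} MHz, a_mu ~ {a_mu:.6f}")
```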

Kawall's group, which included postdocs David Flay and Jimin George, graduate student David Kessler, and undergrad Alysea Kim, worked on measuring the strength of the magnetic field through which the muons passed, as well as preparing the magnet itself, a feat requiring almost unimaginable precision. "One of the innovations we were responsible for," says Kawall, "was developing a system involving 8,000 sheets of laser-cut iron foil to make the magnetic field as homogenous as possible. With our system, we were able to achieve results nearly three times better than the previous experiment." The team also spent years developing special calibration probes of incredible fidelity, accurate down to 15 parts per billion.

As the muons circulate in the Muon g-2 magnet, they also interact with a quantum foam of subatomic particles popping in and out of existence. Interactions with these short-lived particles affect the value of the g-factor, causing the muons' precession to speed up or slow down very slightly. The Standard Model predicts this so-called anomalous magnetic moment extremely precisely. But if the quantum foam contains additional forces or particles not accounted for by the Standard Model, that would tweak the muon g-factor further.

"This quantity we measure reflects the interactions of the muon with everything else in the universe. But when the theorists calculate the same quantity, using all of the known forces and particles in the Standard Model, we don't get the same answer," said Renee Fatemi, a physicist at the University of Kentucky and the simulations manager for the Muon g-2 experiment. "This is strong evidence that the muon is sensitive to something that is not in our best theory."

With more than 200 scientists from 35 institutions in seven countries, the Muon g-2 collaboration has now finished analyzing the motion of more than 8 billion muons from that first run.

"So far we have analyzed less than 6% of the data that the experiment will eventually collect. Although these first results are telling us that there is an intriguing difference with the Standard Model, we will learn much more in the next couple of years," says Fermilab scientist Chris Polly.

"Pinning down the subtle behavior of muons is a remarkable achievement that will guide the search for physics beyond the Standard Model for years to come," said Fermilab Deputy Director of Research Joe Lykken. "This is an exciting time for particle physics research, and Fermilab is at the forefront."

Credit: 
University of Massachusetts Amherst