Heavens

Scientists discover new exoplanet with an atmosphere ripe for study

image: An artist's rendering of TOI-1231 b, a Neptune-like planet about 90 light years away from Earth.

Image: 
NASA/JPL-Caltech

An international group of collaborators, including scientists from NASA's Jet Propulsion Laboratory and The University of New Mexico, has discovered a new, temperate sub-Neptune-sized exoplanet on a 24-day orbit around a nearby M dwarf star. The discovery offers exciting research opportunities thanks to the planet's substantial atmosphere, its small host star, and how fast the system is moving away from Earth.

The research, titled TOI-1231 b: A Temperate, Neptune-Sized Planet Transiting the Nearby M3 Dwarf NLTT 24399, will be published in a future issue of The Astronomical Journal. The exoplanet, TOI-1231 b, was detected using photometric data from the Transiting Exoplanet Survey Satellite (TESS) and followed up with observations using the Planet Finder Spectrograph (PFS) on the Magellan Clay telescope at Las Campanas Observatory in Chile. The PFS is a sophisticated instrument that detects exoplanets through their gravitational influence on their host stars. As the planets orbit their hosts, the measured stellar velocities vary periodically, revealing the planetary presence and information about their mass and orbit.

The observing strategy adopted by NASA's TESS, which divides each hemisphere into 13 sectors that are surveyed for roughly 28 days, is producing the most comprehensive all-sky search for transiting planets. This approach has already proven its capability to detect both large and small planets around stars ranging from sun-like down to low-mass M dwarfs. M dwarf stars, also known as red dwarfs, are the most common type of star in the Milky Way, making up some 70 percent of all stars in the galaxy.

M dwarfs are smaller than the sun, possess a fraction of its mass, and have low luminosity. Because an M dwarf is smaller, when a planet of a given size transits the star, the fraction of light blocked by the planet is larger, making the transit more easily detectable. Imagine an Earth-like planet passing in front of a star the size of the sun: it will block out only a tiny bit of light. But if it passes in front of a star that is much smaller, the proportion of light blocked will be larger. In a sense, the planet casts a larger shadow on the face of the star, making planets around M dwarfs easier to detect and easier to study.
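To put rough numbers on that intuition: the fraction of starlight blocked during a transit scales as the square of the planet-to-star radius ratio. The back-of-the-envelope sketch below uses illustrative round radii, not values from the study:

```python
# Back-of-the-envelope transit depths: an Earth-sized planet in front of a
# Sun-like star versus a mid-M dwarf. Radii are illustrative round numbers.
R_EARTH_KM = 6_371
R_SUN_KM = 696_000
R_M_DWARF_KM = 0.4 * R_SUN_KM  # a typical mid-M dwarf is a few tenths of a solar radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of starlight blocked during transit: (Rp / Rs)^2."""
    return (r_planet_km / r_star_km) ** 2

print(f"Sun-like host: {transit_depth(R_EARTH_KM, R_SUN_KM):.4%} of the light blocked")
print(f"M dwarf host:  {transit_depth(R_EARTH_KM, R_M_DWARF_KM):.4%} of the light blocked")
```

The same-sized planet produces a transit signal several times deeper around the smaller star, which is why M dwarfs are such attractive transit-search targets.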

Although it enables the detection of exoplanets across the sky, TESS's survey strategy also produces significant observational biases based on orbital period. Exoplanets must transit their host stars at least twice within TESS's observing span to be detected with the correct period by the Science Processing Operations Center (SPOC) pipeline and the Quick Look Pipeline (QLP), which search the 2-minute and 30-minute cadence TESS data, respectively. Because 74 percent of TESS's total sky coverage is observed for only 28 days, the majority of TESS exoplanets detected have periods of less than 14 days. TOI-1231 b's 24-day period therefore makes its discovery even more valuable.
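The period bias follows directly from the two-transit requirement. As a rough illustration, assuming a continuous 28-day window and ignoring data gaps and partial transits:

```python
# Rough illustration of TESS's single-sector period bias: a planet must
# transit at least twice within the ~28-day observing window for the
# pipelines to recover its period, so periods above ~14 days are usually
# missed unless the target sits in a longer-observed overlap region.
OBSERVING_SPAN_DAYS = 28.0

def min_transits_in_window(period_days: float, span_days: float = OBSERVING_SPAN_DAYS) -> int:
    """Worst-case number of transits in a continuous window of length span_days."""
    return int(span_days // period_days)

for period in (3.0, 10.0, 14.0, 24.0):  # 24 days is roughly TOI-1231 b's period
    n = min_transits_in_window(period)
    status = "period recoverable" if n >= 2 else "needs longer coverage"
    print(f"P = {period:4.1f} d -> at least {n} transit(s) in one sector ({status})")
```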

NASA JPL scientist Jennifer Burt, the lead author of the paper, along with her collaborators including Diana Dragomir, an assistant professor in UNM's Department of Physics and Astronomy, measured both the radius and mass of the planet.

"Working with a group of excellent astronomers spread across the globe, we were able to assemble the data necessary to characterize the host star and measure both the radius and mass of the planet," said Burt. "Those values in turn allowed us to calculate the planet's bulk density and hypothesize about what the planet is made out of. TOI-1231 b is pretty similar in size and density to Neptune, so we think it has a similarly large, gaseous atmosphere."

"Another advantage of exoplanets orbiting M dwarf hosts is that we can measure their masses easier because the ratio of the planet mass to the stellar mass is also larger. When the star is smaller and less massive, it makes detection methods work better because the planet suddenly plays a bigger role as it stands out more easily in relation to the star," explained Dragomir. "Like the shadow cast on the star. The smaller the star, the less massive the star, the more the effect of the planet can be detected.

"Even though TOI 1231b is eight times closer to its star than the Earth is to the Sun, its temperature is similar to that of Earth, thanks to its cooler and less bright host star," says Dragomir. "However, the planet itself is actually larger than earth and a little bit smaller than Neptune - we could call it a sub-Neptune."

Burt and Dragomir, who initiated this research while they were Fellows at MIT's Kavli Institute, worked with scientists specializing in observing and characterizing the atmospheres of small planets to figure out which current and future space-based missions might be able to peer into TOI-1231 b's outer layers and reveal exactly what kinds of gases are swirling around the planet. With a temperature of around 330 Kelvin, or roughly 134 degrees Fahrenheit, TOI-1231 b is one of the coolest small exoplanets accessible for atmospheric studies discovered thus far.

Past research suggests planets this cool may have clouds high in their atmospheres, which makes it hard to determine what types of gases surround them. But new observations of another small, cool planet called K2-18 b broke this trend and showed evidence of water in its atmosphere, surprising many astronomers.

"TOI-1231 b is one of the only other planets we know of in a similar size and temperature range, so future observations of this new planet will let us determine just how common (or rare) it is for water clouds to form around these temperate worlds," said Burt.

Additionally, the host star's high near-infrared (NIR) brightness makes the system an exciting target for future missions with the Hubble Space Telescope (HST) and the James Webb Space Telescope (JWST). The first set of these observations, led by one of the paper's co-authors, should take place later this month using Hubble.

"The low density of TOI 1231b indicates that it is surrounded by a substantial atmosphere rather than being a rocky planet. But the composition and extent of this atmosphere are unknown!" said Dragomir. "TOI1231b could have a large hydrogen or hydrogen-helium atmosphere, or a denser water vapor atmosphere. Each of these would point to a different origin, allowing astronomers to understand whether and how planets form differently around M dwarfs when compared to the planets around our Sun, for example. Our upcoming HST observations will begin to answer these questions, and JWST promises an even more thorough look into the planet's atmosphere."

Another way to study the planet's atmosphere is to investigate whether gas is being blown away, by looking for evidence of atoms like hydrogen and helium surrounding the planet as it transits across the face of its host star. Generally, hydrogen atoms are almost impossible to detect because their presence is masked by interstellar gas. But this planet-star system offers a unique opportunity to apply this method because of how fast it's moving away from the Earth.

"One of the most intriguing results of the last two decades of exoplanet science is that, thus far, none of the new planetary systems we've discovered look anything like our own solar system," said Burt. "They're full of planets between the size of Earth and Neptune on orbits much shorter than Mercury's, so we don't have any local examples to compare them to. This new planet we've discovered is still weird - but it's one step closer to being somewhat like our neighborhood planets. Compared to most transiting planets detected thus far, which often have scorching temperatures in the many hundreds or thousands of degrees, TOI-1231 b is positively frigid."

In closing, Dragomir reflects that "this planet joins the ranks of just two or three other nearby small exoplanets that will be scrutinized at every chance we get, using a wide range of telescopes, for years to come, so keep an eye out for new TOI-1231 b developments!"

Credit: 
University of New Mexico

Which way does the solar wind blow?

image: (Top panel, from left to right) July 12, 2012 coronal mass ejection seen in STEREO B Cor2, SOHO C2, and STEREO A Cor2 coronagraphs, respectively. (Bottom panel) The same images overlapped with the model results.

Image: 
Talwinder Singh, Mehmet S. Yalim, Nikolai V. Pogorelov, and Nat Gopalswamy

The surface of the sun churns with energy and frequently ejects masses of highly-magnetized plasma towards Earth. Sometimes these ejections are strong enough to crash through the magnetosphere -- the natural magnetic shield that protects the Earth -- damaging satellites or electrical grids. Such space weather events can be catastrophic.

Astronomers have studied the sun's activity for centuries with greater and greater understanding. Today, computers are central to the quest to understand the sun's behavior and its role in space weather events.

The bipartisan PROSWIFT (Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow) Act [https://www.govinfo.gov/content/pkg/BILLS-116s881enr/pdf/BILLS-116s881enr.pdf], passed into law in October 2020, is formalizing the need to develop better space weather forecasting tools.

"Space weather requires a real-time product so we can predict impacts before an event, not just afterward," explained Nikolai Pogorelov, distinguished professor of Space Science at The University of Alabama in Huntsville, who has been using computers to study space weather for decades. "This subject - related to national space programs, environmental, and other issues - was recently escalated to a higher level."

To many, space weather may seem like a distant concern, but like a pandemic -- something we knew was possible and catastrophic -- we may not realize its dangers until it's too late.

"We don't think about it, but electrical communication, GPS, and everyday gadgets can be effected by extreme space weather effects," Pogorelov said.

Furthermore, the U.S. is planning missions to other planets and the moon. All will require very accurate predictions of space weather - for the design of spacecraft and to alert astronauts to extreme events.

With funding from the National Science Foundation (NSF) and NASA, Pogorelov leads a team working to improve the state-of-the-art in space weather forecasting.

"This research, blending intricate science, advanced computing and exciting observations, will advance our understanding of how the Sun drives space weather and its effects on Earth," said Mangala Sharma, Program Director for Space Weather in the Division of Atmospheric and Geospace Sciences at NSF. "The work will help scientists predict space weather events and build our nation's resilience against these potential natural hazards."

The multi-institutional effort involves the Goddard and Marshall Space Flight Centers, Lawrence Berkeley National Laboratory, and two private companies, Predictive Science Inc. and Space Systems Research Corporation.

Pogorelov uses the Frontera supercomputer at the Texas Advanced Computing Center (TACC) -- the ninth fastest in the world -- as well as high performance systems at NASA and the San Diego Supercomputer Center, to improve the models and methods at the heart of space weather forecasting.

Turbulence plays a key role in the dynamics of the solar wind and coronal mass ejections. This complex phenomenon has many facets, including the role of shock-turbulence interaction and ion acceleration.

"Solar plasma is not in thermal equilibrium. This creates interesting features," Pogorelov said.

Writing in the Astrophysical Journal [https://iopscience.iop.org/article/10.3847/1538-4357/abe62c/meta] in April 2021, Pogorelov, along with Michael Gedalin (Ben Gurion University of the Negev, Israel), and Vadim Roytershteyn (Space Science Institute) described the role of backstreaming pickup ions in the acceleration of charged particles in the universe. Backstreaming ions, either of interstellar or local origin, are picked up by the magnetized solar wind plasma and move radially outwards from the Sun.

"Some non-thermal particles can be further accelerated to create solar energetic particles that are particularly important for space weather conditions on Earth and for people in space," he said.

Pogorelov performed simulations on Frontera to better understand this phenomenon and compare it with observations from Voyager 1 and 2, the spacecraft that explored the outer reaches of the heliosphere and are now providing unique data from the local interstellar medium.

One of the major focuses of space weather prediction is correctly forecasting the arrival of coronal mass ejections -- the release of plasma and accompanying magnetic field from the solar corona -- and determining the direction of the magnetic field they carry with them. Pogorelov's team's study of backstreaming ions helps to do so, as does work published in the Astrophysical Journal in 2020 that used a flux rope-based magnetohydrodynamic model to predict the arrival time at Earth and the magnetic field configuration of the July 12, 2012 coronal mass ejection. (Magnetohydrodynamics refers to the magnetic properties and behavior of electrically conducting fluids such as plasma, which play a key role in the dynamics of space weather.)

"Fifteen years ago, we didn't know that much about the interstellar medium or solar wind properties," Pogorelov said. "We have so many observations available today, which allow us to validate our codes and make them much more reliable."

Pogorelov is a co-investigator on an on-board component of the Parker Solar Probe called SWEAP (Solar Wind Electrons, Protons, and Alphas instrument) [http://sweap.cfa.harvard.edu/]. With each orbit, the probe approaches the sun, providing new information about the characteristics of the solar wind.

"Soon it will penetrate beyond the critical sphere where the solar wind becomes superfast magnetosonic, and we'll have information on the physics of solar wind acceleration and transport that we never had before," he said.

As the probe and other new observational tools become available, Pogorelov anticipates a wealth of new data that can inform and drive the development of new models relevant to space weather forecasting. For that reason, alongside his basic research, Pogorelov is developing a software framework that is flexible, useable by different research groups around the world, and can integrate new observational data.

"No doubt, in years to come, the quality of data from the photosphere and solar corona will be improved dramatically, both because of new data available and new, more sophisticated ways to work with data," he said. "We're trying to build software in a way that if a user comes up with better boundary conditions from new science missions, it will be easier for them to integrate that information."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Light-shrinking material lets ordinary microscope see in super resolution

image: This light-shrinking material turns a conventional light microscope into a super-resolution microscope.

Image: 
Junxiang Zhao

Electrical engineers at the University of California San Diego developed a technology that improves the resolution of an ordinary light microscope so that it can be used to directly observe finer structures and details in living cells.

The technology turns a conventional light microscope into what's called a super-resolution microscope. It involves a specially engineered material that shortens the wavelength of light as it illuminates the sample--this shrunken light is what essentially enables the microscope to image in higher resolution.

"This material converts low resolution light to high resolution light," said Zhaowei Liu, a professor of electrical and computer engineering at UC San Diego. "It's very simple and easy to use. Just place a sample on the material, then put the whole thing under a normal microscope--no fancy modification needed."

The work, which was published in Nature Communications, overcomes a big limitation of conventional light microscopes: low resolution. Light microscopes are useful for imaging live cells, but they cannot resolve anything much finer. Conventional light microscopes have a resolution limit of 200 nanometers, meaning that any objects closer together than this distance will not be observed as separate objects. And while there are more powerful tools out there, such as electron microscopes, which have the resolution to see subcellular structures, they cannot be used to image living cells because the samples need to be placed inside a vacuum chamber.

"The major challenge is finding one technology that has very high resolution and is also safe for live cells," said Liu.

The technology that Liu's team developed combines both features. With it, a conventional light microscope can be used to image live subcellular structures with a resolution of up to 40 nanometers.

The technology consists of a microscope slide that's coated with a type of light-shrinking material called a hyperbolic metamaterial. It is made up of alternating nanometer-thick layers of silver and silica glass. As light passes through, its wavelengths shorten and scatter to generate a series of random, high-resolution speckled patterns. When a sample is mounted on the slide, it gets illuminated in different ways by this series of speckled light patterns. This creates a series of low resolution images, which are all captured and then pieced together by a reconstruction algorithm to produce a high resolution image.
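A minimal sketch of the forward model behind this kind of speckle-illumination imaging follows; the image size, blur width and pattern statistics are invented for illustration, and the paper's actual reconstruction algorithm is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of the forward model in speckle-illumination imaging:
# each raw frame is the sample multiplied by one random illumination
# pattern, then blurred by the microscope's diffraction-limited PSF.
# All sizes and parameters here are illustrative, not from the paper.
rng = np.random.default_rng(0)
N = 256                      # image size in pixels (illustrative)
PSF_SIGMA = 4.0              # stand-in for the diffraction-limited blur
N_FRAMES = 100               # number of speckle-illuminated raw frames

sample = np.zeros((N, N))
sample[100:102, :] = 1.0     # two thin "filaments" closer than the blur width
sample[106:108, :] = 1.0

frames = []
for _ in range(N_FRAMES):
    speckle = gaussian_filter(rng.random((N, N)), 1.0)               # random illumination pattern
    frames.append(gaussian_filter(sample * speckle, PSF_SIGMA))      # low-res raw frame

# A full reconstruction algorithm (not shown) would combine these frames;
# even the simple variance across frames retains illumination-encoded
# high-frequency information that the plain average (equivalent to a
# conventional wide-field image) lacks.
widefield = np.mean(frames, axis=0)
variance_image = np.var(frames, axis=0)
print(widefield.shape, variance_image.shape)
```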

The researchers tested their technology with a commercial inverted microscope. They were able to image fine features, such as actin filaments, in fluorescently labeled Cos-7 cells--features that are not clearly discernible using just the microscope itself. The technology also enabled the researchers to clearly distinguish tiny fluorescent beads and quantum dots that were spaced 40 to 80 nanometers apart.

The super resolution technology has great potential for high speed operation, the researchers said. Their goal is to incorporate high speed, super resolution and low phototoxicity in one system for live cell imaging.

Liu's team is now expanding the technology to do high resolution imaging in three-dimensional space. This current paper shows that the technology can produce high resolution images in a two-dimensional plane. Liu's team previously published a paper showing that this technology is also capable of imaging with ultra-high axial resolution (about 2 nanometers). They are now working on combining the two together.

Credit: 
University of California - San Diego

Parasites may accumulate in spleens of asymptomatic individuals infected with malaria

Malaria, a disease caused mainly by the parasites Plasmodium falciparum and Plasmodium vivax (P. vivax), is associated with more than 400,000 deaths each year. Previously, the spleen was assumed mostly to play a role in parasite destruction, as it eliminates malaria parasites after antimalarial treatment. A study published in the open access journal PLOS Medicine by Steven Kho and Nicholas Anstey of the Menzies School of Health Research, Australia, and international colleagues suggests that in chronic P. vivax infections, malaria parasites survive and replicate via a previously undetected lifecycle within the spleen.

A large biomass of intact asexual-stage malaria parasites accumulates in the spleens of asymptomatic human subjects infected with Plasmodium vivax (P. vivax). However, the mechanisms underlying this accumulation are unknown. To better understand it, the researchers examined spleen tissue from twenty-two individuals naturally exposed to P. vivax and P. falciparum who underwent splenectomy in Papua, Indonesia between 2015 and 2017. The authors then analysed the density of infection, parasites and immature red blood cells, as well as their distribution throughout the spleen.

The researchers found that the human spleen is a reservoir for immature red blood cells that are targeted by P. vivax for invasion, and that the examined spleens contained a substantial hidden biomass of malaria parasites, with densities hundreds to thousands of times higher than in circulating peripheral blood, suggesting an undetectable endosplenic lifecycle in asymptomatic P. vivax infections. The study had several limitations, such as the small sample size and asymptomatic status of all individuals included in the study. Future research should include acute, symptomatic malaria cases.

According to the authors, "Our findings provide a major contribution to the understanding of malaria biology and pathology and provide insight into P. vivax specific adaptations that have evolved to maximise survival and replication in the spleen".

Credit: 
PLOS

Scientific software - Quality not always good

image: Specialized software is used in almost all scientific fields, but its quality is not always good. (Photo: Markus Breig, KIT)

Image: 
Photo: Markus Breig, KIT

Computational tools are indispensable in almost all scientific disciplines. Especially in cases where large amounts of research data are generated and need to be processed quickly, reliable, carefully developed software is crucial for analyzing and correctly interpreting such data. Nevertheless, scientific software can have quality deficiencies. To evaluate software quality in an automated way, computer scientists at Karlsruhe Institute of Technology (KIT) and the Heidelberg Institute for Theoretical Studies (HITS) have designed the SoftWipe tool.

"Adherence to coding standards is rarely considered in scientific software, although it can even lead to incorrect scientific results," says Professor Alexandros Stamatakis, who works both at HITS and at the Institute of Theoretical Informatics (ITI) of KIT. The open-source SoftWipe software tool provides a fast, reliable, and cost-effective approach to addressing this problem by automatically assessing adherence to software development standards. Besides designing the above-mentioned tool, the computer scientists benchmarked 48 scientific software tools from different research areas, to assess to which degree they met coding standards.

"SoftWipe can also be used in the review process of scientific software and support the software selection process," adds Adrian Zapletal. The Master's student and his fellow student Dimitri Höhler have substantially contributed to the development of SoftWipe. To select assessment criteria, they relied on existing standards that are used in safety-critical environments, such as at NASA or CERN.

"Our research revealed enormous discrepancies in software quality," says co-author Professor Carsten Sinz of ITI. Many programs, such as covid-sim, which is used in the UK for mathematical modeling of the COVID-19 disease, had a very low quality score and thus performed poorly in the ranking. The researchers recommend using programs such as SoftWipe by default in the selection and review process of software for scientific purposes.

How Does SoftWipe Work?

SoftWipe is a pipeline written in Python 3 that uses several static and dynamic code analyzers (most of them freely available) to assess the quality of software written in C/C++. In this process, SoftWipe compiles the software and then executes it so that programming errors can be detected during execution. Based on the output of the code analysis tools used, SoftWipe calculates an overall quality score between 0 (poor) and 10 (excellent).
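As a hypothetical illustration of the kind of aggregation such a pipeline performs (the analyzer names, issue rates and scoring rules below are invented and are not SoftWipe's actual implementation):

```python
# Hypothetical sketch of score aggregation in a SoftWipe-like tool: each
# analyzer's raw output is normalized to a 0-10 sub-score and the sub-scores
# are averaged. The analyzers, rates and mapping below are illustrative only.
from dataclasses import dataclass

@dataclass
class AnalyzerResult:
    name: str
    issues_per_kloc: float   # warnings/errors per 1,000 lines of code
    worst_case: float        # issue rate that maps to a sub-score of 0

    def sub_score(self) -> float:
        """Linearly map the issue rate onto a 0 (poor) .. 10 (excellent) scale."""
        penalty = min(self.issues_per_kloc / self.worst_case, 1.0)
        return 10.0 * (1.0 - penalty)

def overall_score(results: list[AnalyzerResult]) -> float:
    return sum(r.sub_score() for r in results) / len(results)

results = [
    AnalyzerResult("compiler warnings",   issues_per_kloc=12.0, worst_case=50.0),
    AnalyzerResult("static analyzer",     issues_per_kloc=4.0,  worst_case=20.0),
    AnalyzerResult("sanitizer (runtime)", issues_per_kloc=0.5,  worst_case=5.0),
]
print(f"overall quality score: {overall_score(results):.1f} / 10")
```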

Credit: 
Karlsruher Institut für Technologie (KIT)

An inconstant Hubble constant? U-M research suggests fix to cosmological cornerstone

More than 90 years ago, astronomer Edwin Hubble observed the first hint of the rate at which the universe expands, called the Hubble constant.

Almost immediately, astronomers began arguing about the actual value of this constant, and over time, realized that there was a discrepancy in this number between early universe observations and late universe observations.

Early in the universe's existence, light moved through a plasma--there were no stars yet--and from oscillations in that plasma, similar to sound waves, scientists deduced that the Hubble constant was about 67. This means the universe expands about 67 kilometers per second faster for every 3.26 million light-years of distance.

But this observation differs when scientists look at the universe's later life, after stars were born and galaxies formed. The gravity of these objects causes what's called gravitational lensing, which distorts light between a distant source and its observer.

Other phenomena in this late universe include extreme explosions and events related to the end of a star's life. Based on these later life observations, scientists calculated a different value, around 74. This discrepancy is called the Hubble tension.

Now, an international team including a University of Michigan physicist has analyzed a database of more than 1,000 supernova explosions, supporting the idea that the Hubble constant might not actually be constant.

Instead, it may change based on the expansion of the universe, growing as the universe expands. This explanation likely requires new physics to explain the increasing rate of expansion, such as a modified version of Einstein's gravity.

The team's results are published in the Astrophysical Journal.

"The point is that there seems to be a tension between the larger values for late universe observations and lower values for early universe observation," said Enrico Rinaldi, a research fellow in the U-M Department of Physics. "The question we asked in this paper is: What if the Hubble constant is not constant? What if it actually changes?"

The researchers used a dataset of supernovae--spectacular explosions that mark the final stage of a star's life. When they shine, they emit a specific type of light. Specifically, the researchers were looking at Type Ia supernovae.

These supernovae were used to discover that the expansion of the universe is accelerating, Rinaldi said, and they are known as "standard candles," like a series of lighthouses fitted with the same lightbulb. If scientists know their luminosity, they can calculate their distance by observing their apparent brightness in the sky.

Next, the astronomers use what's called the "redshift" to calculate how the universe's rate of expansion might have increased over time. Redshift is the name of the phenomenon that occurs when light stretches as the universe expands.

The essence of Hubble's original observation is that the farther a light source is from the observer, the more its wavelength is stretched--as if you tacked a Slinky to a wall and walked away from it, holding one end in your hands. Redshift and distance are related.

In their analysis, the researchers separated these supernovae based on intervals of redshift. They placed the stars at one interval of distance into one "bin," then an equal number of stars at the next interval of distance into another bin, and so on. The closer a bin is to Earth, the younger its stars are.

In Rinaldi's team's study, each bin of stars has a fixed reference value of redshift. By comparing the bins, the researchers can extract a value of the Hubble constant for each one.
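A toy sketch of the binning idea follows; the synthetic distances, velocities and bins below are made up purely to illustrate the procedure and are not the paper's dataset or fitting pipeline:

```python
import numpy as np

# Toy illustration of extracting a Hubble constant per distance/redshift bin.
# The "data" are synthetic: distances in megaparsecs and recession velocities
# in km/s generated with a deliberately distance-dependent H0, just to show
# how a per-bin fit would reveal such a trend.
rng = np.random.default_rng(1)
dist_mpc = rng.uniform(30, 600, size=1000)
true_h0 = 67.0 + 0.012 * dist_mpc                     # invented trend, for illustration only
vel_kms = true_h0 * dist_mpc + rng.normal(0, 300, size=dist_mpc.size)

# Equal-count bins by distance (a stand-in for the paper's redshift bins).
order = np.argsort(dist_mpc)
for chunk in np.array_split(order, 5):
    d, v = dist_mpc[chunk], vel_kms[chunk]
    h0_fit = np.sum(d * v) / np.sum(d * d)            # least-squares slope through origin: v = H0 * d
    print(f"bin centered at {d.mean():6.1f} Mpc -> H0 ~ {h0_fit:5.1f} km/s/Mpc")
```

If the constant were truly constant, every bin would return the same slope to within the noise; a systematic drift across bins is the kind of signature the team reports.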

"If it's a constant, then it should not be different when we extract it from bins of different distances. But our main result is that it actually changes with distance," Rinaldi said. "The tension of the Hubble constant can be explained by some intrinsic dependence of this constant on the distance of the objects that you use."

Additionally, the researchers found that allowing the Hubble constant to change with redshift lets them smoothly "connect" the value of the constant from the early-universe probes to the value from the late-universe probes, Rinaldi said.

"The extracted parameters are still compatible with the standard cosmological understanding that we have," he said. "But this time they just shift a little bit as we change the distance, and this small shift is enough to explain why we have this tension."

The researchers say there are several possible explanations for this apparent change in the Hubble constant--one being the possibility of observational biases in the data sample. To help correct for potential biases, astronomers are using Hyper Suprime-Cam on the Subaru Telescope to observe fainter supernovae over a wide area. Data from this instrument will increase the sample of observed supernovae from remote regions and reduce the uncertainty in the data.

Credit: 
University of Michigan

Stanford study reveals new biomolecule

image: Glycans are a common carbohydrate found on cell surfaces that are known to modify lipids (fats) and proteins in a process called glycosylation. Now there's evidence that some living things use RNA as a third scaffold for glycosylation.

Image: 
Ryan Flynn

Stanford researchers have discovered a new kind of biomolecule that could play a significant role in the biology of all living things.

The novel biomolecule, dubbed glycoRNA, is a small ribbon of ribonucleic acid (RNA) with sugar molecules, called glycans, dangling from it. Up until now, the only kinds of similarly sugar-decorated biomolecules known to science were fats (lipids) and proteins. These glycolipids and glycoproteins appear ubiquitously in and on animal, plant and microbial cells, contributing to a wide range of processes essential for life.

The newfound glycoRNAs, neither rare nor furtive, were hiding in plain sight simply because no one thought to look for them - understandably so, given that their existence flies in the face of well-established cellular biology.

A study in the journal Cell, published May 17, describes the findings.

"This is a stunning discovery of an entirely new class of biomolecules," said Carolyn Bertozzi, the Anne T. and Robert M. Bass professor at Stanford's School of Humanities and Sciences, the Baker Family Director of Stanford Chemistry, Engineering and Medicine for Human Health and the study's senior author. "It's really a bombshell because the discovery suggests that there are biomolecular pathways in the cell that are completely unknown to us."

"What's more," Bertozzi added, "some of the RNAs modified by glycans to form glycoRNA have a sordid history of association with autoimmune diseases."

Bertozzi gives credit for the discovery to the study's lead author Ryan Flynn, who worked for months in her lab as a postdoctoral fellow chasing down glycoRNA, based mostly on a hunch.

"I came into Carolyn's lab asking, 'what if glycans can bind to RNA?', which turned out to be something that hadn't been explored before," said Flynn, now an assistant professor at Boston Children's Hospital in the Department of Stem Cell and Regenerative Biology. "I just like wondering and asking questions and it was immensely gratifying to arrive at this unexpected answer."

Approaching research with an open mind

Over the course of her trailblazing career, Bertozzi has brought the once-fringe field of glycobiology into the mainstream. For the past 25 years, her work has helped biologists appreciate how glycans, the long-overlooked sugar structures that stud our cells, are every bit as important as proteins and nucleic acids such as RNA and DNA.

Flynn admits he knew little about glycans when he joined Bertozzi's lab. His area of expertise is RNA, which was the focus of his medical degree and PhD. Flynn earned those degrees under the mentorship of Howard Chang, the Virginia and D. K. Ludwig Professor of Cancer Research and Professor of Genetics at Stanford.

"Ryan is RNA, I'm glycans," said Bertozzi. "We have completely different backgrounds."

The fields of RNA and glycan research are traditionally distinct because the biomolecules form and operate in different cellular places. Most types of RNA reside in a cell's nucleus and in the cytosol, where the genome is kept and protein synthesis occurs, respectively. Glycans, in contrast, originate in membrane-bound subcellular structures and are thus separated from the spaces that RNAs occupy. Glycoproteins and glycolipids localize to the cell's surface, acting as binding sites for extracellular molecules and communicating with other cells. (The glycolipids that define our blood types are one example.)

"RNA and glycans live in two separate worlds if you believe the textbooks," said Bertozzi.

An odd bit of outlying biology had initially piqued Flynn's interest and got him wondering if those worlds might in fact overlap. He had taken note in the scientific literature of an enzyme, little-studied in the RNA field, that glycosylates (adds glycans to) certain proteins and can also bind to RNAs. Based on this enzyme's mutual affinity for proteins and RNA, Flynn decided to see if there was a more direct connection between RNA and glycans.

"When Ryan started exploring a possible connection between glycosylation and RNA, I thought the chances of finding anything were very low," said Bertozzi. "But I figured it doesn't hurt to snoop around."

On the hunt

Flynn wielded an array of techniques in his search for hypothetical glycoRNAs. Among the most effective was bioorthogonal chemistry, originally pioneered by Bertozzi to enable studies of living cells without disturbing naturally occurring processes. One common method involves attaching an unobtrusive "reporter" chemical to a biomolecule so that it emits light when engaging in certain reactions.

Flynn outfitted many different glycans with these reporter "lightbulbs" to see which biomolecules the sugars bound to and where the sugar-bonded biomolecules ended up in and on cells. Relying on his experience preparing and working with RNA, Flynn looked beyond the protein- and lipid-containing compartments inside cells that had been probed up to that point.

"Ryan is the first person we know of that actually looked at glycans and RNA in this way," said Bertozzi.

After many frustrating months of negative and confusing results, Flynn reassessed his data. He noticed that one labeled sugar, incorporated into a precursor molecule for sialic acid, kept popping up.

"Once I saw that signal, I felt like something was actually there," Flynn said.

"It was really important that Ryan did not come at this topic with preconceived notions and unconscious bias," Bertozzi said. "His mind was open to possibilities that violate what we think we know about biology."

Life's origins and operations

After documenting the presence of the apparently novel glycoRNA in human cells, Flynn and colleagues searched for it in other cells. They found glycoRNAs in every cell type they tested - human, mouse, hamster and zebrafish.

The presence of glycoRNAs in different organisms suggests they perform fundamentally important functions. Furthermore, the RNAs are structurally similar in creatures that evolutionarily diverged hundreds of millions to billions of years ago. This suggests glycoRNAs could have ancient origins and may have had some role in the emergence of life on Earth, explained Bertozzi.

The function of glycoRNAs is not yet known, but it merits further study as they may be linked to autoimmune diseases that cause the body to attack its own tissues and cells, Flynn explained. For example, the immune systems of people suffering from lupus are known to target several of the specific RNAs that can compose glycoRNAs.

"When you find something brand-new like these glycoRNAs, they're so many questions to ask," Flynn said.

Credit: 
Stanford University

New research shows: Antoni van Leeuwenhoek led rivals astray

image: Microscope lenses reconstructed according to the method of Robert Hooke, which Antoni van Leeuwenhoek also used for his highly magnifying microscopes.

Image: 
Rijksmuseum Boerhaave/TU Delft

A microscope used by Antoni van Leeuwenhoek to conduct pioneering research contains a surprisingly ordinary lens, as new research by Rijksmuseum Boerhaave Leiden and TU Delft shows. It is a remarkable finding, because Van Leeuwenhoek (1632-1723) led other scientists to believe that his instruments were exceptional. Consequently, there has been speculation about his method for making lenses for more than three centuries. The results of this study were published in Science Advances on May 14.

Previous research carried out in 2018 already indicated that some of Van Leeuwenhoek's microscopes contained common ground lenses. Researchers have now examined a particularly highly magnifying specimen, from the collection of the University Museum Utrecht. Although it did contain a different type of lens, the great surprise was that the lens-making method used was a common one.

Pioneering but secretive

With his microscopes, Antoni van Leeuwenhoek saw a whole new world full of minute life which nobody had ever suspected could exist. He was the first to observe unicellular organisms, which is why he is called the father of microbiology. The detail of his observations was unprecedented and was only superseded over a century after his death.

His contemporaries were very curious about the lenses with which Van Leeuwenhoek managed to achieve such astounding feats. Van Leeuwenhoek, however, was very secretive about them, suggesting he had found a new way of making lenses. It now proves to have been an empty boast, at least as far as the Utrecht lens is concerned. This became clear when the researchers from Rijksmuseum Boerhaave Leiden and TU Delft subjected the Utrecht microscope to neutron tomography, which enabled them to examine the lens without opening the valuable microscope and destroying it in the process. The instrument was placed in a neutron beam at the Reactor Institute Delft, yielding a three-dimensional image of the lens.

Small globule

This lens turned out to be a small globule, and its appearance was consistent with a known production method used in Van Leeuwenhoek's time. The lens was very probably made by holding a thin glass rod in the fire, so that the end curled up into a small ball, which was then broken off the glass rod.

This method was described in 1678 by another influential microscopist, the Englishman Robert Hooke, and it inspired other scientists to do the same. Van Leeuwenhoek, too, may have taken his lead from Hooke. The new discovery is ironic, because it was in fact Hooke who was very curious to learn more about Van Leeuwenhoek's 'secret' method.

The new study shows that Van Leeuwenhoek obtained extraordinary results with strikingly ordinary lens production methods.

Credit: 
Delft University of Technology

Is the past (and future) there when nobody looks?

image: An observer (Wigner's friend) performs a quantum measurement on a spin system. Later, Wigner measures the friend and spin in an entangled basis. As a consequence of this measurement, not only does the friend not reliably remember his past observed outcome, but he cannot even quantify this ignorance with a reasonably behaved probability distribution.

Image: 
© Aloop, IQOQI-Wien, Österreichische Akademie der Wissenschaften

In 1961, the Nobel prize winning theoretical physicist Eugene Wigner proposed what is now known as the Wigner's friend thought experiment as an extension of the notorious Schroedinger's cat experiment. In the latter, a cat is trapped in a box with poison that will be released if a radioactive atom decays. Governed by quantum mechanical laws, the radioactive atom is in a superposition between decaying and not decaying, which also means that the cat is in a superposition between life and death. What does the cat experience when it is in the superposition? Wigner sharpened the question by pushing quantum theory to its conceptual limits. He investigated what happens when an observer also has quantum properties.

In the thought experiment an observer, usually called Wigner's friend, performs a quantum measurement and perceives an outcome. From the point of view of another observer, called Wigner, the measurement process of the friend can be described as a quantum superposition. The fact that quantum theory sets no validity limits for its application leads to a clear tension between the perception of the friend, who sees a specific single result, and the description of Wigner, who observes the friend in a superposition of different perceptions. This thought experiment thus raises the question: What does it mean for an observer in a quantum superposition to observe the result of a measurement? Can an observer always trust what they see and use this data to make predictions about future measurements?

In their recent paper published in Communications Physics, a team of researchers led by Caslav Brukner, from the University of Vienna, the Institute of Quantum Optics and Quantum Information (IQOQI-Vienna) of the Austrian Academy of Sciences and the Perimeter Institute for Theoretical Physics, investigate the limits that the Wigner's friend thought experiment imposes on an observer's ability to predict their own future observations. To this end, the authors identify a number of assumptions, all traditionally considered to be at the core of quantum formalism. These allow an observer in standard experimental situations to predict the probabilities for future outcomes on the basis of their past experiences. The assumptions constrain the probabilities to obey quantum mechanical laws. However, the researchers prove that these assumptions cannot all be satisfied for Wigner's friend in the thought experiment. This work raises important questions about the "persistent reality" of the friend's perceptions. Indeed, the authors show that in a Wigner's friend scenario, it is impossible to consider the friend's perceptions to be coexistent at different points in time. This makes it questionable whether a quantum observer can, in general, consider their own past or future experiences to be as real as their present ones. Philippe Allard Guérin, the lead author of the study, says, "Our work shows that at least one of three key assumptions of quantum mechanics must be violated; which one depends on your preferred interpretation of quantum mechanics."

Credit: 
University of Vienna

Can the diffraction limit be overcome in a linear imaging system?

image: Tunable large-spatial-frequency-shift microscopy chip. The illumination period and direction can be modulated by selecting different input ports. The bottom right shows the spatial frequency range detected in tunable large-spatial-frequency-shift microscopy.

Image: 
©Science China Press

Compared with superresolution microscopy based on squeezing the point spread function in the spatial domain, superresolution microscopy that broadens the detection range in the spatial frequency domain through the spatial-frequency-shift (SFS) effect shows intriguing advantages, including a large field of view, high speed, and good modularity, owing to its wide-field image acquisition and its universal implementation, which does not require special fluorophore labeling.

To enable spatial-frequency-shift microscopy with subwavelength superresolution, it is essential to illuminate with a near-field evanescent wave, whose wave vector is larger than that of a far-field propagating wave. Such illumination can be built on integrated photonics, paving the way for compact nanoscopy on a thin chip. In recent years, the research group led by Dr. Xu Liu and Dr. Qing Yang from Zhejiang University pioneered the use of evanescent-wave illumination with a large wave vector and long propagation distance for superresolution imaging, and achieved chip-based, label-free spatial-frequency-shift superresolution microscopy with a field of view one to two orders of magnitude larger than that of other wide-field label-free superresolution techniques.

However, the lack of an approach to actively tune the spatial frequency shift over a large range has hampered progress toward deep-subwavelength resolution with evanescent illumination of ultra-large wave vectors. When the wave vector of the illumination exceeds twice the cutoff frequency of the objective, a missing band appears between the shifted components and the zeroth-order component in the spatial frequency domain, which causes severe artifacts and distortion in the image. This restricts the resolution to below three times the Abbe diffraction limit of the detection system, as shown in Fig. 1.
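In symbols, writing k_ill for the illumination wave vector and k_NA for the cutoff frequency of the detection objective (a restatement of the argument above in standard notation, not taken from the paper):

```latex
% Missing-band condition in spatial-frequency-shift (SFS) imaging (1D sketch).
% k_ill: illumination wave vector; k_NA: cutoff frequency of the objective.
\[
  \text{baseband: } [0,\, k_\mathrm{NA}], \qquad
  \text{shifted band: } [\,k_\mathrm{ill}-k_\mathrm{NA},\; k_\mathrm{ill}+k_\mathrm{NA}\,].
\]
% A gap between the two bands opens when
\[
  k_\mathrm{ill}-k_\mathrm{NA} > k_\mathrm{NA}
  \;\Longleftrightarrow\;
  k_\mathrm{ill} > 2\,k_\mathrm{NA},
\]
% so a gap-free spectrum requires k_ill <= 2 k_NA and hence
\[
  k_\mathrm{max} = k_\mathrm{ill}+k_\mathrm{NA} \le 3\,k_\mathrm{NA},
\]
% i.e. at most a threefold improvement over the Abbe limit of the detection
% system -- unless the shift k_ill can be tuned to fill the gap, as proposed here.
```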

In this month's cover article in SCIENCE CHINA Physics, Mechanics & Astronomy, the researchers propose a broadly tunable large-spatial-frequency-shift effect and a chip-compatible 3D nanoscopy. With their method, the resolution of a linear optical system can be improved freely, without a theoretical limit, by using an illumination chip with a larger effective refractive index. The method is also a significant complement to the Nobel-prize-awarded superresolution techniques, which rely on labeling the sample with fluorophores with nonlinear properties and require point-by-point scanning or the capture of thousands of raw images. At the same time, compared with traditional superresolution microscopy, this method is based on on-chip waveguide illumination and has the advantages of integration, low cost, and high stability. It can be further integrated with microfluidic and optoelectronic functional chips, providing a comprehensive research platform for the study of modern biological problems.

Dr. Qing Yang, one of the corresponding authors, explained: "In order to avoid the missing band in large-spatial-frequency-shift microscopy and achieve deep-subwavelength resolution, we proposed tunable large-spatial-frequency-shift microscopy, along with several approaches to actively tune the spatial-frequency-shift value. Early on, we used multiple-wavelength illumination to tune the spatial-frequency-shift value, but its tuning range was very limited. We had to find other ways that allow the spatial-frequency-shift value to be tuned much more broadly."

This research reports a widely tunable 3D deep-SFS imaging method compatible with photonic chips, which frees the resolution enhancement of SFS imaging from the limitation of the detection aperture, so that it no longer has a theoretical limit. In the lateral dimension, the spatial-frequency-shift tunability is realized by modulating the azimuthal propagation directions of two evanescent waves. The effective wave vector of their superposition can be tuned actively and broadly to enable wide-range, complete detection in the spatial frequency domain, as shown in Fig. 2. In the vertical dimension, a sectional saturation effect produced by intensity modulation is used to tune the vertical spatial frequency spectrum of the evanescent illumination, so as to resolve the vertical spatial distribution within the evanescent penetration depth. Finally, a 3D superresolution image is obtained by multiplying the lateral and vertical distributions together.

The researchers selected GaP as the waveguide material, considering its high refractive index and low optical loss in the visible spectrum. A lateral resolution of λ/9, roughly five times finer than the diffraction-limited resolution, was demonstrated, along with high vertical localization precision. Dr. Xu Liu said: "This proposed 3D large-spatial-frequency-shift tuning method is compatible with waveguide chips and can provide a mass-producible and robust chip module endowing a standard microscope with fast, large-field-of-view, 3D deep-subwavelength resolving capability. With the improvement of micro-nano processing technology and the establishment of photonic integrated chip production lines, on-chip integrated photonic chips can reduce costs and achieve mass production, which would advance the real application of deep-subwavelength superresolution in biomedicine, life science, materials, and other fields."

Credit: 
Science China Press

Shaken, not stirred: Reshuffling skyrmions ultrafast

image: Fig. 1: A single laser pulse of appropriate intensity can create random skyrmion patterns with a density defined by an external magnetic field (thin arrows). This scheme of laser writing of skyrmions may be used as an ultrafast "skyrmion reshuffler" for stochastic computing. The area surrounded by the dashed line marks the field of view of the x-ray microscope used to see the magnetic skyrmions appearing as black dots. The field of view is 1 μm in diameter.

Image: 
MBI

Smaller, faster, more energy-efficient: future requirements for computing and data storage are hard to fulfill, and alternative concepts are continuously being explored. Small magnetic textures, so-called skyrmions, may become an ingredient in novel memory and logic devices. In order to be considered for technological application, however, fast and energy-efficient control of these nanometer-sized skyrmions is required.

Magnetic skyrmions are particle-like magnetization patches that form as very small swirls in an otherwise uniformly magnetized material. In certain ferromagnetic thin films, skyrmions are stable at room temperature, with diameters down to the ten-nanometer range. It is known that skyrmions can be created and moved by short pulses of electric current. Only recently was it discovered that short laser pulses can also create and annihilate skyrmions. In contrast to electric current pulses, laser pulses of sub-picosecond duration can be used, providing a faster and potentially more energy-efficient route to writing and deleting information encoded by skyrmions. This makes laser writing of skyrmions interesting for technological applications, including alternative memory and logic devices.

Scientists of the Max Born Institute, together with colleagues from Helmholtz-Zentrum Berlin, the Massachusetts Institute of Technology and further research institutions, have now investigated in detail how laser-based creation and annihilation of skyrmions can be controlled to promote application of the process in devices. To image the magnetic skyrmions, the team used holography-based x-ray microscopy, which can make the tiny magnetization swirls, with diameters of 100 nanometers and less, visible. Being able to see the skyrmions, they could systematically study how laser pulses of different intensity, applied in the presence of an external magnetic field, create or delete skyrmions. Two types of material systems, designed to host magnetic skyrmions in the first place, were investigated, both consisting of ultrathin multilayer stacks of ferromagnetic and paramagnetic materials.

Not surprisingly, given the thermal nature of the process, the laser intensity has to be right. However, there is a material-dependent window of laser intensities that allows the creation of a new skyrmion pattern completely independent of the previous magnetic state. For lower intensities, an existing pattern remains unaltered or is only slightly modified; for much higher intensities, the multilayer structure is damaged. Remarkably, the number of skyrmions created within the laser spot is not influenced by the laser intensity. Instead, the researchers found that an external magnetic field applied during exposure precisely controls the density of skyrmions created. The strength of the external field therefore provides a knob to tune the number of skyrmions created, and even allows for the annihilation of skyrmions, as the scientists report in the journal Applied Physics Letters.

They demonstrated the controlled creation or annihilation of single skyrmions within the laser spot, as required for applications in data storage, where a single bit could be represented by the presence or absence of a skyrmion. Also of interest for potential device applications, however, is the ability to simultaneously generate a particular density of skyrmions in the area illuminated by a single laser pulse. This process could be used as a "skyrmion reshuffler" in stochastic computing. There, numbers are represented as strings of random bits of "0" and "1", with the probability of encountering "1" encoding the value of the number. Computations can then be carried out via logic operations between individual bits of different input numbers. While clearly a niche approach compared to the prevalent digital logic, stochastic computing has proven promising for particular problems such as image processing. However, completely randomized bit strings are needed as input signals for stochastic computing operations to give correct results. As demonstrated in this work, such randomizing "reshuffling" of skyrmions can be performed optically on a timescale of picoseconds, compatible with state-of-the-art computer clock speeds and much faster than previous concepts based on thermal diffusion, which operate on a timescale of seconds.
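As a generic illustration of this encoding (a textbook-style sketch, not the device reported in the paper): multiplying two encoded numbers requires only a bitwise AND of their bit strings, but the result is correct only if the strings are statistically independent, which is exactly what a reshuffler restores.

```python
import random

# Generic stochastic-computing illustration (not the device in the paper):
# a number p in [0, 1] is encoded as a random bit string whose bits are 1
# with probability p, and multiplication is a bitwise AND of two strings --
# but only if the strings are statistically independent. Decorrelating a
# reused stream is exactly the job of a "reshuffler".
random.seed(0)
N = 100_000

def encode(p: float, n: int = N) -> list[int]:
    return [1 if random.random() < p else 0 for _ in range(n)]

def and_value(s1: list[int], s2: list[int]) -> float:
    return sum(x & y for x, y in zip(s1, s2)) / len(s1)

a, b = 0.6, 0.3
sa, sb = encode(a), encode(b)
print(f"independent streams:  a*b ~ {and_value(sa, sb):.3f} (expected {a*b:.3f})")

# Squaring by reusing the same stream fails: AND of a stream with itself
# just reproduces the stream, so it encodes a rather than a*a.
print(f"reused stream:        a*a ~ {and_value(sa, sa):.3f} (should be {a*a:.3f})")

# Reshuffling one copy restores (approximate) independence and fixes it.
reshuffled = sa.copy()
random.shuffle(reshuffled)
print(f"after reshuffling:    a*a ~ {and_value(sa, reshuffled):.3f} (should be {a*a:.3f})")
```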

Credit: 
Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI)

University of Minnesota Medical School researchers identify target for senolytic drugs

MINNEAPOLIS/ST.PAUL (05/12/2021) -- In a study recently published in Nature, University of Minnesota Medical School researchers found that senescent immune cells are the most dangerous type of senescent cell.

Cells become senescent when they are damaged or stressed in the body, and they accumulate in our organs as we age. Senescent cells drive inflammation and aging as well as most age-related diseases.

The research team -- led by Laura Niedernhofer, MD, PhD, a professor in the Department of Biochemistry, Molecular Biology and Biophysics -- discovered that senescent immune cells drive tissue damage all over the body and shorten lifespan. Therefore, senescent immune cells are detrimental and should be targeted with senolytics.

U of M researchers, including Niedernhofer and collaborators at the Mayo Clinic, previously identified a new class of drugs in 2015 and coined the term senolytics for compounds that selectively remove senescent cells from the body. However, senolytic drugs have to be targeted to a specific cell type, so a single senolytic drug cannot kill both a senescent brain cell and a senescent liver cell.

"Now that we have identified which cell type is most deleterious, this work will steer us towards developing senolytics that target senescent immune cells," said Niedernhofer, who is also the director for the Institute on the Biology of Aging and Metabolism at the U of M Medical School, one of the state-sponsored Medical Discovery Teams. "We also hope that it will help guide discovery of biomarkers in immune cell populations that will help gauge who is at risk of tissue damage and rapid aging, and therefore who is at most need of senolytic therapy."

Credit: 
University of Minnesota Medical School

Hidden within African diamonds, a billion-plus years of deep-earth history

image: A diamond encapsulating tiny bits of fluid from the deep earth, held here by fine tweezers, was part of a study delving into the age and origins of South African stones.

Image: 
Yaakov Weiss

Diamonds are sometimes described as messengers from the deep earth; scientists study them closely for insights into the otherwise inaccessible depths from which they come. But the messages are often hard to read. Now, a team has come up with a way to solve two longstanding puzzles: the ages of individual fluid-bearing diamonds, and the chemistry of their parent material. The research has allowed them to sketch out geologic events going back more than a billion years--a potential breakthrough not only in the study of diamonds, but of planetary evolution.

Gem-quality diamonds are nearly pure lattices of carbon. This elemental purity gives them their luster, but it also means they carry very little information about their ages and origins. However, some lower-grade specimens harbor imperfections in the form of tiny pockets of liquid--remnants of the more complex fluids from which the crystals evolved. By analyzing these fluids, the scientists in the new study worked out when different diamonds formed, and the shifting chemical conditions around them.

"It opens a window--well, let's say, even a door--to some of the really big questions" about the evolution of the deep earth and the continents, said lead author Yaakov Weiss, an adjunct scientist at Columbia University's Lamont-Doherty Earth Observatory, where the analyses were done, and senior lecturer at the Hebrew University of Jerusalem. "This is the first time we can get reliable ages for these fluids." The study was published this week in the journal Nature Communications.

Most diamonds are thought to form some 150 to 200 kilometers under the surface, in relatively cool masses of rock beneath the continents. The process may go back as far as 3.5 billion years, and probably continues today. Occasionally, they are carried upward by powerful, deep-seated volcanic eruptions called kimberlites. (Don't expect to see one erupt today; the youngest known kimberlite deposits are tens of millions of years old.)

Much of what we know about diamonds comes from lab experiments, and studies of other minerals and rocks that come up with the diamonds, or are sometimes even encased within them. The 10 diamonds the team studied came from mines founded by the De Beers company in and around Kimberley, South Africa. "We like the ones that no one else really wants," said Weiss--fibrous, dirty-looking specimens containing solid or liquid impurities that disqualify them as jewelry, but carry potentially valuable chemical information. Up to now, most researchers have concentrated on solid inclusions, such as tiny bits of garnet, to determine the ages of diamonds. But the ages that solid inclusions indicate can be debatable, because the inclusions may or may not have formed at the same time as the diamond itself. Encapsulated fluids, on the other hand, are the real thing, the stuff from which the diamond itself formed.

What Weiss and his colleagues did was find a way to date the fluids. They did this by measuring traces of radioactive thorium and uranium, and their ratios to helium-4, an isotope produced by their radioactive decay. The scientists also figured out the maximum rate at which the nimble little helium atoms can leak out of the diamond--without that data, conclusions about ages based on the abundance of the isotope could be thrown far off. (As it turns out, diamonds are very good at containing helium.)
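For context, this kind of (U-Th)/He dating rests on counting the helium-4 atoms that have accumulated from uranium and thorium decay. A minimal sketch of the standard age relation, not the study's exact treatment (which also folds in the helium-leakage correction described above), is:

\[
{}^{4}\mathrm{He} = 8\,{}^{238}\mathrm{U}\left(e^{\lambda_{238}t}-1\right) + 7\,{}^{235}\mathrm{U}\left(e^{\lambda_{235}t}-1\right) + 6\,{}^{232}\mathrm{Th}\left(e^{\lambda_{232}t}-1\right)
\]

Here the isotope symbols stand for the present-day numbers of parent atoms, the coefficients 8, 7 and 6 count the alpha particles (helium-4 nuclei) produced along each decay chain, the λ values are the respective decay constants, and the age t is what gets solved for numerically.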

The team identified three distinct periods of diamond formation. These all took place within separate rock masses that eventually coalesced into present-day Africa. The oldest took place between 2.6 billion and 700 million years ago. Fluid inclusions from that time show a distinct composition, extremely rich in carbonate minerals. The period also coincided with the buildup of great mountain ranges on the surface, apparently from the collisions and squishing together of the rocks. These collisions may have had something to do with production of the carbonate-rich fluids below, although exactly how remains unclear, the researchers say.

The next diamond-formation phase spanned a possible time frame of 550 million to 300 million years ago, as the proto-African continent continued to rearrange itself. At this time, the liquid inclusions show, the fluids were high in silica minerals, indicating a shift in subterranean conditions. The period also coincided with another major mountain-building episode.

The most recent known phase took place between 130 million years and 85 million years ago. Again, the fluid composition switched: Now, it was high in saline compounds containing sodium and potassium. This suggests that the carbon from which these diamonds formed did not come directly from the deep earth, but rather from an ocean floor that was dragged under a continental mass by subduction. This idea, that some diamonds' carbon may be recycled from the surface, was once considered improbable, but recent research by Weiss and others has increased its currency.

One intriguing find: At least one diamond encapsulated fluid from both the oldest and youngest eras. This shows that new layers can be added to old crystals, allowing individual diamonds to evolve over vast periods of time.

It was at the end of this most recent period, when Africa had largely assumed its current shape, that a great bloom of kimberlite eruptions carried all the diamonds the team studied to the surface. The solidified remains of these eruptions were discovered in the 1870s, and became the famous De Beers mines. Exactly what caused them to erupt is still part of the puzzle.

The tiny diamond-encased droplets provide a rare way to link events that took place long ago on the surface with what was going on at the same time far below, say the scientists. "What is fascinating is, you can constrain all these different episodes from the fluids," said Cornelia Class, a geochemist at Lamont-Doherty and coauthor of the paper. "Southern Africa is one of the best-studied places in the world, but we've very rarely been able to see beyond the indirect indications of what happened there in the past."

When asked whether the findings could help geologists find new diamond deposits, Weiss just laughed. "Probably not," he said. But, he said, the method could be applied to other diamond-producing areas of the world, including Australia, Brazil, and northern Canada and Russia, to disentangle the deep histories of those regions, and develop new insights into how continents evolve.

"These are really big questions, and it's going to take people a long time to get at them," he said. "I will go to pension, and still not have finished that walk. But at least this gives us some new ideas about how to find out how things work."

Credit: 
Columbia Climate School

Flash flood risk may triple across third pole due to global warming

image: Glacial lake in the Himalayan region

Image: 
LI Heng

An international team led by researchers from the Xinjiang Institute of Ecology and Geography (XIEG) of the Chinese Academy of Sciences and the University of Geneva has found that flash floods may triple across the Earth's "Third Pole" in response to ongoing climate change.

Their findings were published in Nature Climate Change on May 6.

The Hindu Kush-Himalaya, the Tibetan Plateau and surrounding mountain ranges are widely known as the "Third Pole" of the Earth. This region contains the largest number of glaciers outside the polar regions.

Due to global warming, the widespread and accelerated melting of glaciers over most of the Third Pole is causing rapid expansion and formation of glacial lakes. When water is suddenly released from these lakes through dam failure or overtopping, glacial lake outburst floods occur, posing a severe threat to downstream communities.

Despite the severe threat these extreme events pose for sustainable mountain development across the Third Pole, scientists are uncertain where and when such events are likely to occur.

In this study, the researchers focused on the threat from new lakes forming in front of rapidly retreating glaciers. They used satellite imagery and topographic modeling to establish the risk associated with about 7,000 glacial lakes now located across the Third Pole.

They found that roughly one in six of these glacial lakes (1,203 in total) poses a high to very high risk to downstream communities, most notably in the eastern and central Himalayan regions of China, India, Nepal, and Bhutan.

The researchers also systematically investigated past outburst flood events, both to look for patterns and to validate their approach against known cases. "We found that these approaches allowed us to accurately classify 96% of glacial lakes known to have produced floods in the past as high or very high risk. We can then apply them to future scenarios," said ZHENG Guoxiong from XIEG, one of the co-first authors of the study.
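As a rough illustration of that validation step, and using entirely made-up lake records rather than the study's data or risk model, the check amounts to asking what fraction of lakes with documented past outburst floods land in the top two risk categories:

# Illustrative sketch only: hypothetical records, not the study's actual data or model.
# Each record: (lake_id, assigned_risk_category, produced_an_outburst_flood_in_the_past)
lakes = [
    ("lake_001", "very high", True),
    ("lake_002", "high", True),
    ("lake_003", "moderate", False),
    ("lake_004", "high", False),
    ("lake_005", "low", True),  # a known flood lake that this toy classification misses
]

# Lakes with a documented outburst flood, and the subset flagged high or very high.
known_flood_lakes = [lake for lake in lakes if lake[2]]
correctly_flagged = [lake for lake in known_flood_lakes if lake[1] in ("high", "very high")]

# Fraction of known flood-producing lakes that the classification catches.
hit_rate = len(correctly_flagged) / len(known_flood_lakes)
print(f"Known flood lakes classified high/very high: {hit_rate:.0%}")

A real analysis would derive each lake's risk category from satellite imagery and topographic modeling across the roughly 7,000 lakes, as the study does, rather than assign it by hand.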

Under the highest emission scenario (sometimes referred to as the "business-as-usual" scenario), the study shows that much of the Third Pole may approach peak risk for glacial lake flooding by the end of the 21st century--or even by the middle of the century in some regions.

In addition to the larger potential flood volumes that will result as more than 13,000 lakes expand in the coming years, over time the lakes will also grow closer to steep, unstable mountain slopes, from which rock and ice may crash into the water and provoke small tsunamis.

If global warming continues on its current path, the number of lakes classified as high or very high risk will increase from 1,203 to 2,963, with new risk hotspots emerging in the western Himalaya, Karakorum and parts of Central Asia. These regions have experienced glacial lake outburst floods before, but those floods have tended to be repetitive and linked to advancing glaciers.

The mountain ranges of the Third Pole span 11 nations, giving rise to potential transboundary natural disasters. The study shows that the number of future potential transboundary glacial flood sources could roughly double to a total of 902 lakes, with 402 of these lakes in the high and very high risk categories.

"Such disasters are sudden and highly destructive. Regular monitoring and assessment as well as early warning systems are important to prevent these floods," said Prof. BAO Anming from XIEG, a corresponding author of the study. "We hope this study will motivate relevant nations and the international community to work together to prevent future flood disasters in the Third Pole".

Credit: 
Chinese Academy of Sciences Headquarters