Heavens

New measurement of universe's expansion rate is 'stuck in the middle'

video: Imagine you have a crowd of people, you sort them by size, and you measure the height of the tallest individual. If you assume that the tallest person in any crowd will have approximately the same height as the one you measured, then by comparing how tall the tallest individual in each crowd appears, you can determine the distance to that crowd. In this analogy, crowds are galaxies and the tallest person is the tip of the red giant branch.

Image: 
Carnegie Institution for Science

Pasadena, CA--A team of collaborators from Carnegie and the University of Chicago used red giant stars that were observed by the Hubble Space Telescope to make an entirely new measurement of how fast the universe is expanding, throwing their hats into the ring of a hotly contested debate. Their result--which falls squarely between the two previous, competing values--is published in The Astrophysical Journal.

Nearly a century ago, Carnegie astronomer Edwin Hubble discovered that the universe has been growing continuously since it exploded into being during the Big Bang. But precisely how fast it's expanding--a value termed the Hubble constant in his honor--has remained stubbornly elusive.

The Hubble constant has helped scientists sketch out the universe's history and structure, and an accurate measurement of it might reveal any flaws in the prevailing model.

"The Hubble constant is the cosmological parameter that sets the absolute scale, size and age of the universe; it is one of the most direct ways we have of quantifying how the universe evolves," said lead author Wendy Freedman of the University of Chicago, who began this work at Carnegie.

Until now, there have been two primary tools used to measure the universe's rate of expansion. Unfortunately, their results don't agree, and the tension between the two numbers has persisted even as each side makes increasingly precise readings. However, it is possible that the difference between the two values is due to systematic inaccuracies in one or both methods, which spurred the research team to develop their new technique.

One method, pioneered at Carnegie, uses stars called Cepheids, which pulsate at regular intervals. Because the rate at which they pulse is known to be related to their intrinsic brightness, astronomers can use their luminosities and the period between pulses to measure their distances from Earth.

"From afar two bells may well appear to be the same, but listening to their tones can reveal that one is actually much larger and more distant, and the other is smaller and closer," explained Carnegie's Barry Madore, one of the paper's co-authors. "Likewise, comparing how bright distant Cepheids appear to be against the brightness of nearby Cepheids enables us to determine how far away each of the stars' host galaxies are from Earth."

When a celestial object's distance is known, a measurement of the speed at which it is moving away from us reveals the universe's rate of expansion. The ratio of these two figures--the velocity divided by the distance--is the Hubble constant.
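The arithmetic behind that ratio can be sketched in a few lines. This is only a toy illustration with hypothetical numbers, not the team's actual analysis:

```python
# Toy sketch: the Hubble constant is a recession velocity divided by a
# distance. The galaxy below is hypothetical, chosen to give a round answer.

def hubble_constant(velocity_km_s, distance_mpc):
    """Return H0 in km/s/Mpc from a recession velocity and a distance."""
    return velocity_km_s / distance_mpc

# A hypothetical galaxy receding at 2100 km/s at a distance of 30 Mpc:
h0 = hubble_constant(2100.0, 30.0)
print(h0)  # 70.0 km/s/Mpc
```

In practice many such galaxies are combined, since any single measurement carries distance and velocity uncertainties.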

The second method uses the afterglow left over from the Big Bang. Called the cosmic microwave background, this radiation is the oldest light we can see. Patterns of compression in the thick, soupy plasma of which the baby universe was comprised can still be seen and mapped as slight temperature variations. These ripples, documenting the universe's first few moments, can be run forward in time through a model and used to predict the present-day Hubble constant.

The former technique says the expansion rate of the universe is 74.0 kilometers per second per megaparsec; the latter says it's 67.4. If the discrepancy is real, it could herald new physics.

Enter the third option.

The Carnegie-Chicago Hubble Program, led by Freedman and including Carnegie astronomers Madore, Christopher Burns, Mark Phillips, Jeff Rich, and Mark Seibert--as well as Carnegie-Princeton fellow Rachael Beaton--developed a new way to calculate the Hubble constant.

Their technique is based on a very luminous class of stars called red giants. At a certain point in their lifecycles, the helium in these stars is ignited, and their structures are rearranged by this new source of energy in their cores.

"Just as the cry of a loon is instantly recognizable among bird calls, the peak brightness of a red giant in this state is easily differentiated," Madore explained. "This makes them excellent standard candles."

The team made use of the Hubble Space Telescope's sensitive cameras to search for red giants in nearby galaxies.

"Think of it as scanning a crowd to identify the tallest person--that's like the brightest red giant experiencing a helium flash," said Burns. "If you lived in a world where you knew that the tallest person in any room would be that exact same height--as we assume that the brightest red giant's peak brightness is the same--you could use that information to tell you how far away the tallest person is from you in any given crowd."

Once the distances to these newly found red giants are known, the Hubble constant can be calculated with the help of another standard candle--type Ia supernovae--to diminish the uncertainty caused by the red giants' relative proximity to us and extend our reach out into the more-distant Hubble flow.
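The standard-candle step above can be sketched with the astronomer's distance modulus: if the tip of the red giant branch (TRGB) has a known absolute magnitude M, an observed apparent magnitude m gives the distance. The value of M used below is a representative assumption for illustration, not the paper's calibration:

```python
# Sketch of a standard-candle distance via the distance modulus:
#   m - M = 5 * log10(d_parsecs / 10)
# M_TRGB below is an illustrative I-band value, not the team's calibration.

M_TRGB = -4.0  # assumed absolute magnitude of the TRGB (illustrative)

def distance_mpc(apparent_mag, absolute_mag=M_TRGB):
    """Distance in megaparsecs implied by the distance modulus."""
    d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsecs / 1e6

# A TRGB star observed at apparent magnitude 26.0 lies at about 10 Mpc:
print(round(distance_mpc(26.0), 1))  # 10.0
```

Each 5-magnitude difference between apparent and absolute brightness corresponds to a factor of 100 in distance, which is why a precise tip magnitude translates directly into a precise distance.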

According to the red giant method, the universe's expansion rate is 69.8 kilometers per second per megaparsec--falling provocatively between the two previously determined numbers.

"We're like that old song, 'Stuck in the Middle with You,'" joked Madore. "Is there a crisis in cosmology? We'd hoped to be a tiebreaker, but for now the answer is: not so fast. The question of whether the standard model of the universe is complete or not remains to be answered."

Credit: 
Carnegie Institution for Science

Smoke from Canadian fires drifts into United States

image: On July 7, 2019, the Suomi NPP satellite using its VIIRS (Visible Infrared Imaging Radiometer Suite) instrument captured this image of the fires ongoing in the region. Also visible in this image is the massive amount of smoke pouring off these fires and moving southeast into other parts of Ontario, over the Great Lakes, and into Michigan.

Image: 
NASA

Canada has been battling a very active and destructive fire season on multiple fronts this year. A warming climate, very dry environment, and more extreme weather including severe thunderstorms have led to massive wildfires throughout the country. Two of the provinces hardest hit by these fires are Ontario and Manitoba. On July 7, 2019, the Suomi NPP satellite using its VIIRS (Visible Infrared Imaging Radiometer Suite) instrument captured this image of the fires ongoing in the region. Also visible in this image is the massive amount of smoke pouring off these fires and moving southeast into other parts of Ontario, over the Great Lakes, and into Michigan. Small amounts are also seen drifting into Minnesota. The smoke will continue to drift southward as it's carried on the jet stream. Actively burning areas, detected by VIIRS's thermal bands, are outlined in red. Suomi NPP is managed by NASA and NOAA.

NASA's Earth Observing System Data and Information System (EOSDIS) Worldview application provides the capability to interactively browse over 700 global, full-resolution satellite imagery layers and then download the underlying data. Many of the available imagery layers are updated within three hours of observation, essentially showing the entire Earth as it looks "right now." Image Courtesy: NASA Worldview, Earth Observing System Data and Information System (EOSDIS). Caption: Lynn Jenner

Credit: 
NASA/Goddard Space Flight Center

New method may resolve difficulty in measuring universe's expansion

Astronomers using National Science Foundation (NSF) radio telescopes have demonstrated how a combination of gravitational-wave and radio observations, along with theoretical modeling, can turn the mergers of pairs of neutron stars into a "cosmic ruler" capable of measuring the expansion of the Universe and resolving an outstanding question over its rate.

The astronomers used the NSF's Very Long Baseline Array (VLBA), the Karl G. Jansky Very Large Array (VLA) and the Robert C. Byrd Green Bank Telescope (GBT) to study the aftermath of the collision of two neutron stars that produced gravitational waves detected in 2017. This event offered a new way to measure the expansion rate of the Universe, known by scientists as the Hubble Constant. The expansion rate of the Universe can be used to determine its size and age, as well as serve as an essential tool for interpreting observations of objects elsewhere in the Universe.

Two leading methods of determining the Hubble Constant use the characteristics of the Cosmic Microwave Background, the leftover radiation from the Big Bang, or a specific type of supernova explosions, called Type Ia, in the distant Universe. However, these two methods give different results.

"The neutron star merger gives us a new way of measuring the Hubble Constant, and hopefully of resolving the problem," said Kunal Mooley, of the National Radio Astronomy Observatory (NRAO) and Caltech.

The technique is similar to that using supernova explosions. Type Ia supernova explosions are thought to all have an intrinsic brightness that can be calculated based on the speed at which they brighten and then fade away. Measuring the brightness as seen from Earth then tells the distance to the supernova explosion. Measuring the Doppler shift of the light from the supernova's host galaxy indicates the speed at which the galaxy is receding from Earth. The speed divided by the distance yields the Hubble Constant. To get an accurate figure, many such measurements must be made at different distances.
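The chain of reasoning in that paragraph, from standardized brightness to distance to expansion rate, can be sketched numerically. All numbers below are hypothetical, chosen only to show the inverse-square and velocity/distance steps:

```python
import math

# Hedged sketch of the supernova route to H0 (illustrative numbers, not
# real measurements). A standardized luminosity means observed flux falls
# as 1/d^2, so comparing a supernova's flux to that of a calibrator at a
# known distance gives its distance; the host galaxy's redshift gives speed.

C_KM_S = 299_792.458  # speed of light in km/s

def distance_from_flux(d_calibrator_mpc, flux_calibrator, flux_target):
    """Inverse-square law: a source fainter by factor f is sqrt(f) farther."""
    return d_calibrator_mpc * math.sqrt(flux_calibrator / flux_target)

def hubble_constant(redshift, distance_mpc):
    """H0 = v / d with v = c * z (a small-redshift approximation)."""
    return C_KM_S * redshift / distance_mpc

# Hypothetical: a supernova 10,000x fainter than a calibrator at 0.5 Mpc
d = distance_from_flux(0.5, 1.0, 1e-4)   # -> 50 Mpc
h0 = hubble_constant(0.0117, d)          # -> ~70 km/s/Mpc
print(round(d, 1), round(h0, 1))
```

The final sentence of the paragraph is the key caveat: one such ratio is noisy, so many supernovae at different distances are averaged.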

When two massive neutron stars collide, they produce an explosion and a burst of gravitational waves. The shape of the gravitational-wave signal tells scientists how "bright" that burst of gravitational waves was. Measuring the "brightness," or intensity of the gravitational waves as received at Earth can yield the distance.

"This is a completely independent means of measurement that we hope can clarify what the true value of the Hubble Constant is," Mooley said.

However, there's a twist. The intensity of the gravitational waves varies with their orientation with respect to the orbital plane of the two neutron stars. The gravitational waves are stronger in the direction perpendicular to the orbital plane, and weaker if the orbital plane is edge-on as seen from Earth.
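The orientation effect described above creates a degeneracy: a weak signal can mean either a distant face-on binary or a nearer edge-on one. A minimal sketch, using the standard leading-order scalings of the two gravitational-wave polarizations with inclination and distance (the intrinsic amplitude here is an arbitrary illustrative unit):

```python
import math

# Illustrative distance-inclination degeneracy. For a binary at
# inclination i and distance d, the polarization amplitudes scale as
#   h_plus  ~ A * (1 + cos(i)**2) / 2 / d
#   h_cross ~ A * cos(i) / d
# so amplitude alone cannot separate "far and face-on" from "near and
# edge-on" -- which is why measuring the jet orientation helps.

def polarization_amplitudes(intrinsic_amp, inclination_rad, distance):
    ci = math.cos(inclination_rad)
    h_plus = intrinsic_amp * (1 + ci ** 2) / 2 / distance
    h_cross = intrinsic_amp * ci / distance
    return h_plus, h_cross

# Face-on (i = 0) at distance 2 vs. edge-on (i = 90 deg) at distance 1:
print(polarization_amplitudes(1.0, 0.0, 2.0))          # (0.5, 0.5)
print(polarization_amplitudes(1.0, math.pi / 2, 1.0))  # (0.5, ~0.0)
```

The plus-polarization amplitude is identical in the two cases, illustrating the ambiguity the radio observations were designed to break.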

"In order to use the gravitational waves to measure the distance, we needed to know that orientation," said Adam Deller, of Swinburne University of Technology in Australia.

Over a period of months, the astronomers used the radio telescopes to measure the movement of a superfast jet of material ejected from the explosion. "We used these measurements along with detailed hydrodynamical simulations to determine the orientation angle, thus allowing use of the gravitational waves to determine the distance," said Ehud Nakar from Tel Aviv University.

This single measurement, of an event some 130 million light-years from Earth, is not yet sufficient to resolve the uncertainty, the scientists said, but the technique now can be applied to future neutron-star mergers detected with gravitational waves.

"We think that 15 more such events that can be observed both with gravitational waves and in great detail with radio telescopes, may be able to solve the problem," said Kenta Hotokezaka, of Princeton University. "This would be an important advance in our understanding of one of the most important aspects of the Universe," he added.
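The intuition behind "15 more such events" is statistical: if each merger gives an independent estimate of the Hubble Constant, the combined uncertainty shrinks roughly as one over the square root of the number of events. A back-of-envelope sketch with an assumed (not quoted) single-event uncertainty:

```python
import math

# Assumption for illustration: independent events, so the combined error
# on H0 scales as sigma / sqrt(N). The 7 km/s/Mpc figure is hypothetical.

def combined_uncertainty(single_event_sigma, n_events):
    return single_event_sigma / math.sqrt(n_events)

# If one standard-siren event constrained H0 to ~7 km/s/Mpc, 15 events would
# tighten that to under 2 km/s/Mpc -- enough to discriminate 67.4 from 74.0:
print(round(combined_uncertainty(7.0, 15), 2))
```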

Credit: 
National Radio Astronomy Observatory

Massive stars grow same way as light stars, just bigger

image: Artist's impression of the gaseous disk and envelope around the massive protostar G353.273+0.641.

Image: 
National Astronomical Observatory of Japan

Astronomers obtained the first detailed face-on view of a gaseous disk feeding the growth of a massive baby star. They found that it shares many common features with lighter baby stars. This implies that the process of star formation is the same, regardless of the final mass of the resulting star. This finding paves the way for a more complete understanding of star formation.

A protostar, a baby star still in the process of forming, is fed by a surrounding disk of gas falling towards the center. The details of the process, such as why stars form with a wide range of masses, are still unclear. Low mass stars are being formed in the vicinity of the Solar System, allowing astronomers to see the process up-close. On the other hand, massive protostars are rare, and even the nearest are located quite far away from us.

Kazuhito Motogi, an assistant professor at Yamaguchi University, Japan, and his team used the Atacama Large Millimeter/submillimeter Array (ALMA) to observe a massive protostar called G353.273+0.641 (hereafter G353). Located 5500 light-years away in the constellation Scorpius, G353 has a mass 10 times larger than the Sun, and is still growing. It is a unique target among massive protostars because we can see its gaseous disk from straight above. ALMA has revealed detailed views of several other massive infant stars, however, most of them are in edge-on configurations, making it difficult to see the inner regions of the disks.

ALMA observations captured a rotating disk around G353 with a radius eight times larger than the orbit of Neptune. This sounds huge, but it is one of the smallest disks yet found around a massive protostar. ALMA also found that the disk is surrounded by an envelope of gas three times larger than the disk.

"We measured the gas infall rate from the outer envelope to the inner disk," says Motogi. "This helps us to estimate the age of the baby star. Surprisingly it is only 3000 years old, the youngest among known massive protostars. We are witnessing the earliest phase of the growth of a giant star."

Interestingly, the disk is not uniform; the south-eastern side of the disk is brighter than other parts. This is the first time astronomers have seen an asymmetric disk around a massive protostar. The team also found that the disk is highly unstable and on the verge of fragmenting. The uneven disk might be caused by this instability. These features are often seen around smaller protostars, suggesting that the essential physical processes are the same in low-mass and high-mass star formation.

"Previous studies had implied that the formation process might be different for stars of different masses," says Motogi. "Our new observations show the similarity. This is an important step to understand how massive protostars gain mass from the surroundings."

Credit: 
National Institutes of Natural Sciences

Satellite data reveals largest-ever macroalgae bloom

Scientists have used satellite observations to identify the largest bloom of macroalgae in the world, the Great Atlantic Sargassum Belt - a heavy mass of brown algae stretching from West Africa to the Gulf of Mexico. According to the results, the expansion of the Great Atlantic Sargassum Belt (GASB) could signify a seaweedy new norm for the tropical Atlantic and Caribbean Sea, driven by factors including deforestation and fertilizer use.

Some species of Sargassum - a genus of seaweed - live on the ocean's surface and grow into floating, island-like masses that attract many species of fish, birds and turtles. However, the amount and spatial extent of Sargassum has increased substantially over the last decade; entire flotillas of wayward seaweed mats have increasingly begun to wash ashore, inundating Atlantic and Caribbean beaches and resulting in significant environmental and economic problems. Largely due to a lack of large-scale Sargassum data, little is known about the cause of these Sargassum expansions, particularly the role of atmospheric, oceanic and/or climatic conditions in driving them.

For the first time, Mengqiu Wang and colleagues address these questions using a 19-year record of measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument to study the GASB, which experienced a major bloom every year from 2011 to 2018, except for 2013. According to Wang et al. - who also analyzed fertilizer consumption patterns in Brazil, Amazon deforestation rates, Amazon River discharge, and more - the GASB annual bloom events since 2011 show connections to two key nutrient inputs: one human-derived, and one natural. In the spring and summer, Amazon River discharge adds nutrients to the ocean, a contribution that may have increased in recent years due to increased deforestation and fertilizer use. In the winter, upwelling off the West African coast delivers nutrients from deep waters to the ocean surface.

As of June 2018, the GASB, impacted by these and other factors, extended 8,850 kilometers and harbored over 20 million tons of Sargassum biomass. The authors suggest that the results indicate a possibly permanent regime shift in Sargassum blooms. In a related Perspective, James Gower and Stephanie King highlight the way satellite sensors are especially well-suited to monitor Sargassum blooms.

Credit: 
American Association for the Advancement of Science (AAAS)

Methane vanishing on Mars: Danish researchers propose new mechanism as an explanation

image: This is a simulation of wind erosion on Mars. The quartz ampoule contains particles of olivine basalt and a Mars-like atmosphere. By shaking the ampoule, the researchers simulate wind-generated saltation, ie. that the wind causes the sand grains to make short jumps over the surface. The friction of the particles creates electrical charges, and the yellow star illustrates that an argon atom has lost an electron. The small electrical charges cause the particles to glow slightly, as illustrated in the four pictures to the right.

Image: 
Mars Simulation Laboratory, Aarhus University

The processes behind the release and consumption of methane on Mars have been discussed since methane was measured for the first time approximately 15 years ago. Now, an interdisciplinary research group from Aarhus University has proposed a previously overlooked physical-chemical process that can explain methane's consumption.

Approximately 15 years ago, methane was detected in Mars's atmosphere for the first time. This aroused great interest, also outside scientific circles, since methane, based on our knowledge of methane on Earth, is considered a bio-signature, i.e. a sign of biological activity and thus life.

In subsequent years, articles alternately reported methane's presence and absence, which led to doubts about the accuracy of the first methane measurements. Recent measurements of methane in Mars's atmosphere have now shown that its dynamics are real, and that the occasional very low concentrations are due not to mis-measurement but to an unresolved mechanism that makes methane disappear from the atmosphere.

Neither the sources of the methane nor the causes of its disappearance have yet been identified. The latter in particular--the rapid disappearance of methane--lacks a plausible mechanistic explanation. The most obvious mechanism, photochemical degradation of methane by UV radiation, cannot account for a disappearance fast enough to explain the observed dynamics.

Erosion and chemistry

Aarhus researchers have just published an article in the journal Icarus in which they propose a new mechanism that can explain the removal of methane on Mars. For years, the multidisciplinary Mars group has investigated the importance of wind-driven erosion of minerals for the formation of reactive surfaces under Mars-like conditions. For this purpose, the research group has developed equipment and methods for simulating erosion on Mars in their "earthly" laboratories.

Based on Mars-analogue minerals such as basalt and plagioclase, the researchers have shown that these solids can be oxidized and gases ionized during the erosion processes. The ionized methane then reacts with the mineral surfaces and bonds to them. The research team has shown that carbon from methane, in the form of a methyl group, binds directly to silicon atoms in plagioclase, which is a dominant component of Mars's surface material.

What the researchers see in the laboratory could also explain the loss of methane on Mars. This mechanism, which is much more effective than photochemical processes, could remove methane from the atmosphere within the observed time and deposit it in the Martian soil.

Affects the possibility of life

The research group has furthermore shown that these mineral surfaces can lead to the formation of reactive chemicals such as hydrogen peroxide and oxygen radicals, which are very toxic to living organisms, including bacteria.

The group's results are important for assessing the possibility of life on or near Mars's surface. In a number of follow-up studies, the researchers will now examine what happens to the bound methane, and whether the erosion process, in addition to ionizing atmospheric gases, also changes or even completely removes more complex organic material, which could either originate on Mars itself or have arrived as part of meteorites.

The results therefore have an impact on our understanding of the preservation of organic material on Mars, and thus on the fundamental question of life on Mars--for example in connection with interpreting the results of the upcoming ExoMars rover, which ESA is expected to land on Mars in 2021.

Credit: 
Aarhus University

Measuring the laws of nature

image: These are experiments at the ILL Grenoble.

Image: 
TU Wien

There are some numerical values that define the basic properties of our universe. They are just as they are, and no one can tell why. These include, for example, the value of the speed of light, the mass of the electron, or the coupling constants that define the strength of the forces of nature.

One of these coupling constants, the "weak axial vector coupling constant" (abbreviated to gA), has now been measured with very high precision. This constant is needed to explain nuclear fusion in the sun, to understand the formation of elements shortly after the Big Bang, and to understand important experiments in particle physics. With the help of sophisticated neutron experiments, the value of the coupling constant gA has now been determined with an accuracy of 0.04%. The result has been published in the journal "Physical Review Letters".

When particles change

There are four fundamental forces in our universe: electromagnetism, strong and weak nuclear force, and gravity. "To calculate these forces, we have to know certain parameters that determine their strength - and especially in the case of weak interaction, this is a complicated matter," says Prof. Hartmut Abele from the Institute of Atomic and Subatomic Physics at TU Wien (Vienna). Weak interaction plays a crucial role when certain particles are transformed into others - for example, when two protons merge into a nucleus in the sun and one of them becomes a neutron. To analyze such processes, the "weak axial vector coupling constant" gA has to be known.

There have been different attempts to measure gA. "For some of them, however, systematic corrections were required. Major disturbing factors can change the result by up to 30%," says Hartmut Abele.

A different measuring principle called "PERKEO" was developed in the 1980s in Heidelberg by Prof. Dirk Dubbers. Hartmut Abele has been involved in the work on the PERKEO detectors for many years; he himself developed "PERKEO 2" as part of his dissertation. He worked together with his former student Prof. Bastian Märkisch from TU Munich and with Torsten Soldner from the Institut Laue-Langevin in Grenoble to significantly improve the measurement. With "PERKEO 3", new measurements have now been carried out in Grenoble, far exceeding all previous experiments in terms of accuracy.

The PERKEO detector analyzes neutrons, which decay into protons and emit a neutrino and an electron. "This electron emission is not perfectly symmetric," explains Hartmut Abele. "On one side, a few more electrons are emitted than on the other - that depends on the spin direction of the neutron." The PERKEO detector uses strong magnetic fields to collect the electrons in both directions and then counts them. From the strength of the asymmetry, i.e. the difference in the number of electrons in the two directions, one can then directly deduce the value of the coupling constant gA.
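The last step, deducing gA from the asymmetry, can be sketched with the standard leading-order relation between the beta asymmetry A and the ratio lambda = gA/gV (small theoretical corrections are neglected here, and the numbers are illustrative, not the experiment's values):

```python
# Hedged sketch of the relation behind the measurement. At leading order,
# the neutron beta-decay asymmetry A is tied to lambda = gA/gV by
#   A = -2 * lambda * (lambda + 1) / (1 + 3 * lambda**2)
# Inverting a measured A on the physical branch (lambda near -1.27)
# recovers lambda, and hence gA once gV is known.

def beta_asymmetry(lam):
    return -2 * lam * (lam + 1) / (1 + 3 * lam ** 2)

def lambda_from_asymmetry(a_measured, lo=-1.5, hi=-1.0, tol=1e-10):
    """Bisection: beta_asymmetry is increasing on [-1.5, -1.0]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_asymmetry(mid) < a_measured:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(beta_asymmetry(-1.27), 4))            # -0.1175
print(round(lambda_from_asymmetry(-0.1175), 3))   # -1.27
```

The steepness of this relation is why a 0.04% result demands such careful control of the electron counting in both directions.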

From the Big Bang to CERN

In many areas of modern physics, it is very important to know the precise value of the coupling constant gA: About one second after the Big Bang, "primordial nucleosynthesis" began - forming the first elements. The ratio of elements created at that time depends (among other things) on gA. These first few seconds of nucleosynthesis determine the chemical composition of the universe today. Also, the big mystery of the relationship between dark matter and ordinary matter is related to this coupling constant. Last, but not least, it is crucial in order to increase the accuracy of large-scale experiments, such as particle collisions at CERN.

Credit: 
Vienna University of Technology

New model suggests lost continents for early Earth

image: These are models for the distribution of crustal thickness in early Earth. The crust in the prevailing paradigm is mostly oceanic, with some thin continental crust. The new model predicts a thicker and greater continental portion that was not preserved.

Image: 
Derrick Hasterok

A new radioactivity model of Earth's ancient rocks calls into question current models for the formation of Earth's continental crust, suggesting continents may have risen out of the sea much earlier than previously thought but were destroyed, leaving little trace.

Scientists at the University of Adelaide have published two studies on a model of rock radioactivity over billions of years, which found that the Earth's continental crust may have been thicker much earlier than current models suggest, with continents possibly present as far back as four billion years ago.

"We use this model to understand the evolving processes from early Earth to the present, and suggest that the survival of the early crust was dependent on the amount of radioactivity in the rocks - not random chance," says Dr Derrick Hasterok, from the University of Adelaide's Department of Earth Sciences and Mawson Centre for Geoscience.

"If our model proves to be correct, it may require revision to many aspects of our understanding of the Earth's chemical and physical evolution, including the rate of growth of the continents and possibly even the onset of plate tectonics."

Dr Hasterok and his PhD student Matthew Gard compiled 75,800 geochemical samples of igneous rocks (such as granite) with estimated ages of formation from around the continents. They estimated radioactivity in these rocks today and constructed a model of average radioactivity from four billion years ago to the present.

"All rocks contain natural radioactivity that produces heat and raises temperatures in the crust when it decays - the more radioactive a rock the more heat it produces," says Dr Hasterok. "Rocks typically associated with the continental crust have higher radioactivity than oceanic rocks. A rock four billion years old would have about four times as much radioactivity when it was created compared with today."
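The "about four times" figure can be reproduced with a back-of-envelope calculation: scale today's heat production back in time using the half-lives of the main heat-producing isotopes. The present-day heat fractions below are representative continental-crust values assumed for illustration, not the paper's data:

```python
# Illustrative back-calculation of crustal heat production. Half-lives are
# standard values (Gyr); the present-day heat fractions are representative
# assumptions, not data from the study.

HALF_LIVES_GYR = {"U-238": 4.468, "U-235": 0.704, "Th-232": 14.05, "K-40": 1.248}
HEAT_FRACTION_TODAY = {"U-238": 0.37, "U-235": 0.04, "Th-232": 0.42, "K-40": 0.17}

def relative_heat(age_gyr):
    """Total radiogenic heat production `age_gyr` ago, relative to today (=1)."""
    return sum(
        frac * 2 ** (age_gyr / HALF_LIVES_GYR[iso])
        for iso, frac in HEAT_FRACTION_TODAY.items()
    )

print(round(relative_heat(4.0), 1))  # ~4.8x with these illustrative fractions
```

Short-lived isotopes such as U-235 and K-40 dominate the difference: each was many times more abundant four billion years ago, which is why an old rock was so much hotter when it formed.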

But the researchers found an unexpected deficit in the level of radioactivity in rocks older than about two billion years. When they corrected for higher heat production, because of the higher radioactivity that would have been present, the deficit disappeared.

"We think there would have been more granite-like - or continental-type - rocks around but because of the higher radioactivity, and therefore higher heat, they either melted or were easily destroyed by tectonic movement. That's why these continental crusts don't show in the geological record.

"Our prevailing models suggest that continents eventually grew out of the oceans as the crust thickened. But we think there may have been a significant amount of, albeit very unstable, continental crust much earlier."

Co-author Professor Martin Hand, also from the University of Adelaide, says the new model could have important implications for monitoring the effects of global warming.

"What this new model allows us to do is help predict rock radioactivity in places like Antarctica, where we have few or no samples because we cannot access them. This could be very important in assessing the stability of ice sheets and the threshold of temperature changes needed for global warming to impact glacial melting," says Martin Hand, Professor of Earth Sciences.

The researchers say the new radioactivity model also may help in the search for hot rocks with geothermal potential and can be used to produce more accurate models of oil maturation in sedimentary basins.

Credit: 
University of Adelaide

Hubble captures cosmic fireworks in ultraviolet

image: Telescopes, including Hubble, have monitored the Eta Carinae star system for more than two decades. It has been prone to violent outbursts, including an episode in the 1840s during which ejected material formed the bipolar bubbles seen here.

Image: 
NASA, ESA, N. Smith (University of Arizona, Tucson), and J. Morse (BoldlyGo Institute, New York)

Hubble offers a special view of the double star system Eta Carinae's expanding gases glowing in red, white, and blue. This is the highest resolution image of Eta Carinae taken by the NASA/ESA Hubble Space Telescope.

Imagine slow-motion fireworks that started exploding nearly two centuries ago and haven't stopped since. This is how you might describe this double star system located 7500 light-years away in the constellation Carina (The Ship's Keel). In 1838, Eta Carinae underwent a cataclysmic outburst called the Great Eruption, escalating until, by April 1844, it was the second-brightest star in the sky. The star has since faded, but this new view from the NASA/ESA Hubble Space Telescope shows that the spectacular display is still ongoing, and reveals details that have never been seen before.

Violent mass ejections are not uncommon in Eta Carinae's history; the system has been blighted by chaotic eruptions, often blasting parts of itself into space. But the Great Eruption was particularly dramatic. The larger of the two stars is a massive, unstable star nearing the end of its life, and what astronomers witnessed over a century and a half ago was, in fact, a stellar near-death experience.

The resulting surge of light was outshone only by Sirius, which is almost one thousand times closer to Earth, and for a time made Eta Carinae an important navigation star for mariners in the southern seas. The eruption stopped just short of destroying the star, and the light intensity gradually subsided. Researchers studying the star today can still see the signature of the Great Eruption on its surroundings; the huge dumbbell shape is formed of dust, gas, and other filaments that were hurled into space in the expulsion. These hot glowing clouds are known as the Homunculus Nebula, and have been a target of Hubble since its launch in 1990.

In fact, the volatile star has been imaged by almost every instrument on Hubble over more than 25 years. Astronomers have observed the cosmic drama play out in ever higher resolution. This latest image was created using Hubble's Wide Field Camera 3 to map warm magnesium gas glowing in ultraviolet light (shown in blue).

Scientists have long known that the outer material thrown off in the 1840s eruption has been heated by shock waves generated when it crashed into material previously ejected from the star. The team who captured this new image were expecting to find light from magnesium coming from the complicated array of filaments seen in the light from glowing nitrogen (shown in red). Instead, a whole new luminous magnesium structure was found in the space between the dusty bipolar bubbles and the outer shock-heated nitrogen-rich filaments.

"We've discovered a large amount of warm gas that was ejected in the Great Eruption but hasn't yet collided with the other material surrounding Eta Carinae," explained Nathan Smith of Steward Observatory at the University of Arizona, lead investigator of the Hubble programme. "Most of the emission is located where we expected to find an empty cavity. This extra material is fast, and it 'ups the ante' in terms of the total energy of an already powerful stellar blast."

This newly revealed data is important for understanding how the eruption began, because it represents the fast and energetic ejection of material that may have been expelled by the star shortly before the expulsion of the rest of the nebula. Astronomers need more observations to measure exactly how fast the material is moving and when it was ejected.

Another striking feature of the image is the streaks visible in the blue region outside the lower-left bubble. These streaks appear where the star's light rays poke through the dust clumps scattered along the bubble's surface. Wherever the ultraviolet light strikes the dense dust, it leaves a long thin shadow that extends beyond the lobe into the surrounding gas. "The pattern of light and shadow is reminiscent of sunbeams that we see in our atmosphere when sunlight streams past the edge of a cloud, though the physical mechanism creating Eta Carinae's light is different," noted team member Jon Morse of BoldlyGo Institute in New York.

This technique of searching in ultraviolet light for warm gas could be used to study other stars and gaseous nebulae, the researchers say.

"We had used Hubble for decades to study Eta Carinae in visible and infrared light, and we thought we had a pretty full account of its ejected debris. But this new ultraviolet-light image looks astonishingly different, revealing gas we did not see in either visible-light or infrared images," Smith said. "We're excited by the prospect that this type of ultraviolet magnesium emission may also expose previously hidden gas in other types of objects that eject material, such as protostars or other dying stars; and only Hubble can take these kinds of pictures."

The causes of Eta Carinae's Great Eruption remain the subject of speculation and debate. A recent theory suggests that Eta Carinae, which may once have weighed as much as 150 Suns, started out as a triple system, and the 1840s mass ejection was triggered when the primary star devoured one of its companions, rocketing more than ten times the mass of our Sun into space. While the exact circumstances of that show-stopping burst of light remain a mystery for now, astronomers are more certain of how this cosmic light show will conclude. Eta Carinae's fireworks display is fated to reach its finale when it explodes as a supernova, greatly surpassing even its last powerful outburst. This may already have happened, but the tsunami of light from such a blinding blast would take 7500 years to reach Earth.

Credit: 
ESA/Hubble Information Centre

SwRI-led team studies binaries to make heads or tails of planet formation

image: An SwRI-led team performed 3D simulations of the streaming instability model of planet formation, where particle clumping triggers gravitational collapse into planetesimals. This snapshot from the simulation shows the vertically integrated density of solids, projected on the protoplanetary disk plane.

Image: 
HST/StSci/SwRI/Simon Porter

SAN ANTONIO -- June 25, 2019 -- A Southwest Research Institute-led team studied the orientation of distant solar system bodies to bolster the "streaming instability" theory of planet formation.

"One of the least understood steps in planet growth is the formation of planetesimals, bodies more than a kilometer across, which are just large enough to be held together by gravity," said SwRI scientist Dr. David Nesvorny, the lead author of the paper "Trans-Neptunian Binaries as Evidence for Planetesimal Formation by the Streaming Instability" published in Nature Astronomy.

During the initial stages of planet growth, dust grains gently collide and chemically stick to produce larger particles. However, as grains grow larger, collisions likely become more violent and destructive. Scientists have struggled to understand how planetary growth passes the 'meter-size barrier.'

The streaming instability theory posits that as large dust grains interact with the gas that orbits young stars, streaming mechanisms cause grains to clump into dense regions and collapse under their own gravity to form planetesimals.

The team studied objects beyond Neptune that orbit each other as binary pairs in the Kuiper Belt. Unlike comets flung by Jupiter or asteroids bombarded by collisions and radiation, the distant Kuiper Belt has not been disturbed much since it formed, so these primordial objects provide hints about the early solar system. If a pair orbits in the same direction as the planets orbit, it's considered heads-up. It's tails-up if it orbits in the opposite direction.

Using the Hubble Space Telescope and the Keck Observatory in Hawaii, the team found that most binaries, about 80%, orbit heads-up, which astronomers call "prograde." This finding contradicted the theory that binaries form when two passing planetesimals are captured into a binary. That theory predicts mostly tails-up or "retrograde" orbits.

To test whether the streaming instability could explain these Kuiper Belt binaries, the team analyzed simulations on large supercomputers. They found that the dense clumps formed by the streaming instability rotated heads-up 80% of the time, in agreement with the Kuiper Belt objects.

"While our simulations can't yet follow the collapse all the way to forming binaries, it appears we are on the right track," said SwRI's Dr. Jacob B. Simon, who coauthored the paper.

"The solar system offers many clues to how planets formed, both around our Sun and distant stars," Nesvorny said. "Although these clues can be difficult to interpret, observers and theorists working together are starting to make heads or tails of them -- and the evidence is mostly heads."

Credit: 
Southwest Research Institute

Tokyo Tech-led study shows how icy outer solar system satellites may have formed

image: The masses of the satellite(s) range from 1/10 to 1/1000 of the corresponding TNOs. For comparison, Earth and Moon are also shown.

Image: 
NASA/APL/SwRI/ESA/STScI

Using sophisticated computer simulations and observations, a team led by researchers from the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology has shown how the so-called trans-Neptunian Objects (or TNOs) may have formed. TNOs, which include the dwarf planet Pluto, are a group of icy and rocky small bodies--smaller than planets but larger than comets--that orbit the Solar System beyond the planet Neptune. TNOs likely formed at the same time as the Solar System, and understanding their origin could provide important clues as to how the entire Solar System originated.

Like many solar system bodies, including the Earth, TNOs often have their own satellites, which likely formed early on from collisions among the building blocks of the Solar System. Understanding the origin of TNOs along with their satellites may help understand the origin and early evolution of the entire Solar System. The properties of TNOs and their satellites--for example, their orbital properties, composition and rotation rates--provide a number of clues for understanding their formation. These properties may reflect their formation and collisional history, which in turn may be related to how the orbits of the giant planets Jupiter, Saturn, Neptune, and Uranus changed over time since the Solar System formed.

The New Horizons spacecraft flew by Pluto, the most famous TNO, in 2015. Since then, Pluto and its satellite Charon have attracted a lot of attention from planetary scientists, and many new small satellites around other large TNOs have been found. In fact, all known TNOs larger than 1000 km in diameter are now known to have satellite systems. Interestingly, the estimated mass ratios of these satellites to their host systems range from 1/10 to 1/1000, encompassing the Moon-to-Earth mass ratio (~1/80). This may be significant because Earth's Moon and Charon are thought to have formed from giant impacts.

To study the formation and evolution of TNO satellite systems, the research team performed more than 400 giant impact simulations and tidal evolution calculations. "This is really hard work," says the study's senior author, Professor Hidenori Genda from the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology. Other Tokyo Tech team members included Sota Arakawa and Ryuki Hyodo.

The Tokyo Tech study found that the size and orbit of the satellite systems of large TNOs are best explained if they formed from impacts of molten progenitors. They also found that sufficiently large TNOs can retain internal heat and remain molten for a few million years, especially if their internal heat source is short-lived radioactive isotopes such as Aluminum-26, which has also been implicated in the internal heating of the parent bodies of meteorites. Since these progenitors would need a high short-lived radionuclide content in order to be molten, these results suggest that TNO-satellite systems formed before the outward migration of the outer planets, including Neptune -- that is, within the first ~700 million years of Solar System history.

Previous planet formation theories had suggested that the growth of TNOs took much longer than the lifetime of short-lived radionuclides, implying that TNOs could not have been molten when they formed. The authors found, however, that rapid TNO formation is consistent with recent planet formation studies, which suggest TNOs formed via accretion of small solids onto pre-existing bodies. Still, other analyses suggest comets formed well after most short-lived radionuclides had decayed, so the authors note that much work remains to be done to produce a unified model for the origin of the Solar System's planetary bodies.

Credit: 
Tokyo Institute of Technology

News from the diamond nursery

image: Professor Horst Marschall in front of one of the high-pressure belt apparatuses in the Institute for Geosciences used to simulate the formation of inclusions in diamonds.

Image: 
Horst Marschall

Diamonds are crystals of carbon that form deep in the Earth's mantle underneath the oldest continents, the cratons. They are transported to Earth's surface in exotic magmas called kimberlites by explosive volcanic eruptions. Previous studies had established that diamonds contain fluid inclusions rich in sodium and potassium, but the origin of these fluids was unknown.

"In order for these inclusions to form, parts of the Earth's oceanic crust and its sediment layer had to be submerged beneath the cratonic continents in what is known as a subduction zone. These zones are located at depths of over 110 kilometres, at a pressure of over four gigapascals, or 40 thousand times the atmospheric pressure," explains Michael Förster, the first author of the study, which was published in the scientific journal Science Advances. The submergence of the Earth's crust has to happen quickly, so that the diamond can form before the sediment starts to melt at temperatures over 800 degrees Celsius and react with the cratonic mantle.

For the high-pressure experiment in the laboratory, scientists from Sydney, Mainz and Frankfurt stacked marine sediment and peridotite (rocks from the Earth's mantle) in four-millimetre capsules and placed them under high pressure and extreme temperatures. At pressures of four to six gigapascals - corresponding to depths of 120 to 180 kilometres - small salt crystals formed from the reaction between the two layers. Their potassium to sodium ratio corresponded exactly to the saline fluid inclusions in diamonds. In experiments with less pressure, corresponding to depths of less than 110 kilometres, these salts were not present. Instead, potassium was absorbed from the recycled sediment by mica.

"Unlike previous models that attributed the source of the salts to seawater, the sediments represent a plausible source of potassium," says the mineralogist Professor Horst Marschall from Goethe University. "The potassium concentration in seawater is too low to explain the saline inclusions in diamonds." Magnesium-rich carbonates, important components of the kimberlites, also came about as a by-product of the reaction.

Credit: 
Goethe University Frankfurt

Real-time analysis of MOF adsorption behavior

image: Schematic illustration of molecules adsorbed in metal-organic frameworks with pores of various structures. In-situ X-ray crystallography was developed to classify each pore structure and to locate the adsorbed molecules, determining the amount adsorbed in each pore.

Image: 
KAIST

Researchers have developed a technology to analyze the adsorption behavior of molecules in each individual pore of a metal-organic framework (MOF). MOFs have large specific surface areas and are effective at separating gases such as carbon dioxide, hydrogen, and methane; the new system allows real-time observation of their adsorption process.

Accurate measurements and assessments of gas adsorption isotherms are important for characterizing porous materials and developing their applications. The existing technology is only able to measure the amount of gas molecules adsorbed to the material, without directly observing the adsorption behavior.
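As a purely illustrative sketch of what an adsorption isotherm describes (the article names no specific model, and all parameters below are hypothetical), the classic single-site Langmuir form relates gas pressure to the amount adsorbed:

```python
# Illustrative only: the article names no isotherm model, but the Langmuir
# form q(P) = q_max * K * P / (1 + K * P) is a common first approximation
# for single-site adsorption in porous materials.

def langmuir_uptake(pressure_bar, q_max, k):
    """Adsorbed amount (same units as q_max) at a given pressure."""
    return q_max * k * pressure_bar / (1.0 + k * pressure_bar)

# Hypothetical MOF parameters: saturation uptake 10 mmol/g, affinity 0.5 /bar.
for p in (0.5, 1.0, 5.0, 20.0):
    print(f"P = {p:5.1f} bar -> uptake {langmuir_uptake(p, 10.0, 0.5):5.2f} mmol/g")
```

A multi-pore MOF deviates from this single smooth curve: each pore type fills at its own characteristic pressure, producing the stepwise adsorption behavior the team was able to resolve pore by pore.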

The research team, led by Professor Jeung Ku Kang from the Graduate School of Energy, Environment, Water and Sustainability (EEWS), developed a real-time gas adsorption crystallography system by integrating an existing X-ray diffraction (XRD) measurement device, which provides structural information, with a gas adsorption measurement device.

Specifically, the system allowed the observation of a mesoporous MOF with multiple pore types rather than a single pore structure. The research team categorized the adsorption behaviors of molecules in the MOF by pore type, followed by observations and measurements, resulting in the identification of a stepwise adsorption process that was previously not possible to analyze.

Further, the team systematically and quantitatively analyzed how the pore structure and the type of adsorption molecule affect the adsorption behavior to suggest what type of MOF structure is appropriate as a storage material for each type of adsorption behavior.

Professor Kang said, "We quantitatively analyzed each pore molecule in real time to identify the effects of chemical and structural properties of pores on adsorption behavior." He continued, "By understanding the real-time adsorption behavior of molecules at the level of the pores that form the material, rather than the whole material, we will be able to apply this technology to develop a new high-capacity storage material."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

UCF is part of NASA Cassini mission that reveals new details about Saturn's rings

ORLANDO, June 17, 2019 - Even though NASA's Cassini spacecraft's mission to Saturn ended in 2017, scientists are still poring over the copious amounts of data it transmitted.

Now, in a new paper that appeared in Science on Friday and includes two University of Central Florida co-authors, researchers are offering glimpses into the nature and composition of the mighty planet's legendary rings by using data from some of the closest observations ever made of the main rings.

The paper is a big picture and detailed look at the planet's rings and includes an analysis of Cassini's "grand finale" observations made before the spacecraft's planned crash into the planet on Sept. 15, 2017.

The study reports the rings, which are composed of icy particles ranging from the size of a marble to the size of a car, have three distinct textures - clumpy, smooth and streaky - and that tiny moons exist within the rings and interact with surrounding particles.

Josh Colwell, a UCF physics professor and study co-author, has been a part of the Cassini mission since some of its earliest planning stages in 1990, including the design and observation planning for the Ultraviolet Imaging Spectrograph, or UVIS, on the multi-instrument spacecraft.

In the paper, Colwell and his former student Richard Jerousek, a researcher with UCF's Florida Space Institute and a study co-author, measured and described the structure of the C Ring, the innermost of Saturn's large main rings, using UVIS data recorded by Cassini.

Using the UVIS's photometer, which measured the brightness of starlight shining through the rings, and by having Cassini take observations from multiple different angles, the researchers were able to create a three-dimensional map of the ring.

They did this by having the photometer focus on a star from a particular angle and then measure the star's brightness as the spacecraft looked at the star through the ring.

Areas where more light passed through indicated areas with less material or gaps in the ring, while areas with less light shining through indicated a denser area where more material was present.

"You can think of it like a friend running through the woods at night with a flashlight pointed at you," Colwell said. "You would see the flashlight flicker because of trees blocking the light."

"So, we did a similar thing with the rings and the flickering of the star tells us something about how many ring particles there are, how big they are and how they are clumped together," Colwell said. "We did many of these observations called stellar occultations."
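The occultation idea boils down to one line of algebra: if a fraction I/I0 of the starlight survives passage through the rings, the ring's normal optical depth follows from the standard slant-path relation tau = -sin(B) * ln(I/I0), where B is the ring-opening angle toward the star. The relation is textbook occultation photometry, and the numbers below are hypothetical, not Cassini values:

```python
import math

# Standard occultation relation (textbook ring photometry, not a formula
# quoted in the article): the transmitted fraction of starlight is
# I/I0 = exp(-tau / sin(B)), with tau the normal optical depth and
# B the ring-opening angle toward the star.

def normal_optical_depth(measured, unocculted, opening_angle_deg):
    """Infer normal optical depth from star brightness seen through the ring."""
    transmission = measured / unocculted
    mu = math.sin(math.radians(opening_angle_deg))
    return -mu * math.log(transmission)

# Hypothetical numbers: a star dimmed to 60% of its free-space brightness,
# viewed 20 degrees above the ring plane.
tau = normal_optical_depth(0.60, 1.0, 20.0)
print(f"normal optical depth tau = {tau:.2f}")
```

Repeating this measurement along many star tracks and viewing angles is what lets the team assemble a three-dimensional map of dense clumps and near-empty gaps.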

Colwell and Jerousek found streaky textures in the ring, which seem to be big holes created by the gravity of large boulders that are significantly larger than most ring particles.

They found that the vertical thickness of the ring at the locations of these holes is only about 20 feet, even though the rings themselves are hundreds of thousands of miles across.

Colwell said it's somewhat odd that there appears to be a larger proportion of the large, boulder-like objects at certain locations in the ring, because they could be made from smaller particles that have run into each other and are sticking together, a process known as accretion. This would be intriguing as the tidal force from Saturn tends to pull objects apart in the rings. The large boulder-like objects also could be fragments of something that's broken apart.

"Both of those possibilities are interesting, and it ties into questions about the early stages of how planets form because the same kinds of processes that form planets could be going on in Saturn's rings today," Colwell said.

Colwell developed the computer model of the mapping procedure to originally examine clumps in the rings, while Jerousek flipped the model to measure the holes.

"From this work we were able to constrain the widths and number of these very narrow gaps or holes as well as the vertical extent of these regions of the rings," Jerousek said.

"These properties are helping us to understand more about the icy boulders that open these gaps and may provide an excellent analog to the early stages of planet formation."

"Understanding these small moonlets and the gaps and textures they create in Saturn's rings provides a snapshot into the early solar system and the conditions in the protoplanetary disk from which planets formed," he said. "Since the details of planet formation are still poorly understood, we're really lucky to have a ring system like Saturn's in our astronomical backyard to help us work out the kinks in our understanding."

Colwell said the end of the mission has been sad, as it has been a part of his life for more than 25 years, and he's made many personal and professional connections as well as had many great experiences. However, he said there is much data still to be analyzed from the mission, and it is something he and his students plan to be working on for years to come.

"We still have so much excellent data, and there's still a tremendous amount to learn," he said.

And while there are no immediate plans to go back to Saturn, mission proposals have been developed. However, Colwell said that considering the expense and time it takes to get to Saturn, it could be decades before there is a return.

That means as of now, Cassini is the only spacecraft to make an extensive visit to Saturn. Previous visits were only the flybys made by Voyager 2 in 1981, Voyager 1 in 1980 and Pioneer 11 in 1979. Cassini was launched from Cape Canaveral Air Force Station on Oct. 15, 1997. It arrived at Saturn on June 30, 2004.

Credit: 
University of Central Florida

X and gamma rays --Even more powerful

An international group of researchers including scientists from Skoltech has invented a new method for the generation of intense X- and gamma-ray radiation based on Nonlinear Compton Scattering. Their results were published in the prestigious journal Physical Review Letters, and the invention is being patented internationally.

The Compton effect is similar to playing tennis, but in an unusual way. An electron plays the role of the racket and a photon plays the role of the ball. A photon reflected from the fast electron racket acquires additional energy. It cannot fly any faster - the speed of light forbids that - but it can easily change its color, i.e. its wavelength. Using this simple game, one can transform the wavelength of the incoming photon from the visible range to X- and gamma-rays. Hard photon sources based on Inverse (linear) Compton Scattering are widely used throughout the world and typically consist of an electron accelerator and a laser system. The main advantage of such sources is the possibility of generating narrow-bandwidth radiation with a wavelength easily tunable by changing the energy of the electrons.
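The "tennis" upshift can be quantified with a standard accelerator-physics estimate (an assumption drawn from textbook inverse Compton scattering, not a figure from the paper): for a head-on collision in the linear regime, the back-scattered photon energy is boosted by roughly 4*gamma^2, where gamma is the electron's Lorentz factor.

```python
# Back-of-envelope inverse Compton upshift. The 4*gamma^2 head-on boost is a
# standard accelerator-physics approximation, not a number from the paper.

ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV

def upscattered_energy_ev(photon_ev, electron_mev):
    """Approximate back-scattered photon energy in eV (linear Compton regime)."""
    gamma = electron_mev / ELECTRON_REST_MEV  # electron Lorentz factor
    return 4.0 * gamma**2 * photon_ev

# Illustrative: a green laser photon (~2.4 eV) bouncing off a 50 MeV electron
# beam comes back as a hard X-ray photon of roughly 90 keV.
e_out = upscattered_energy_ev(2.4, 50.0)
print(f"scattered photon energy = {e_out / 1e3:.0f} keV")
```

Because the boost scales with the square of the electron energy, modest changes to the accelerator tune the output wavelength over a wide range, which is the tunability advantage mentioned above.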

The most straightforward way to increase the number of generated X- and gamma-ray photons is to increase the intensity of the laser system. In other words, the more compactly the laser radiation is packed in space (provided diffraction remains small), the more scattering events between laser photons and electrons there will be.

It is also well known, however, that increasing the power of the laser radiation in Compton scattering leads to considerable spectral broadening. This is due to the light pressure, which slows the electrons down. In other words, our tennis racket, while reflecting myriads of small tennis balls at once, is slowed down; hence, the reflected balls receive less energy. The problem here is that powerful laser radiation is not continuous, but rather comes as pulses in time. The intensity of a powerful laser pulse first slowly grows and then slowly dies out. Consequently, the light pressure is non-uniform, and the slow-down of the electrons differs from moment to moment, leading to different energies of the reflected photons.

The scientific team, including Skoltech Professor Sergey Rykovanov, invented a new method for the generation of intense monoenergetic X- and gamma-ray radiation based on Nonlinear Compton Scattering.

Sergey Rykovanov, a Professor from Skoltech's Center for Computational and Data-Intensive Science and Engineering:

"Such spectral line broadening is parasitic since we want to obtain a narrow bandwidth photon source with a well defined wavelength. Together with Vasily Kharin from Research Institute in Moscow and Daniel Seipt from University of Michigan in USA we invented a very simple method to remove the parasitic Compton line broadening for intense laser pulses and significantly increase the number of generated X and gamma-ray photons. To do this one has to carefully tune the frequency of the laser pulse (in other words to chirp it) so that it corresponds to the laser pulse intensity at each moment of time. For optimal effect, we proposed to use two linearly and oppositely chirped laser pulses propagating with a certain delay to each other. In my opinion, the beauty of our work is in its simplicity. To be entirely honest, we were very surprised how simply and smoothly everything worked out."
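The compensation idea in the quote can be sketched numerically. Assuming the commonly used nonlinear Compton estimate w' = 4*gamma^2*w / (1 + a(t)^2/2), where a(t) is the time-varying normalized laser amplitude, chirping the laser so that its frequency tracks the intensity, w(t) proportional to 1 + a(t)^2/2, cancels the time dependence. The Gaussian envelope and all numbers below are illustrative assumptions, not the paper's parameters:

```python
import math

gamma = 100.0  # electron Lorentz factor (illustrative)
a0 = 1.0       # peak normalized laser amplitude (illustrative)
w0 = 1.0       # laser carrier frequency, arbitrary units

def envelope(t, tau=10.0):
    """Gaussian pulse envelope a(t)."""
    return a0 * math.exp(-t**2 / tau**2)

def scattered(t, chirped):
    """Back-scattered frequency w' = 4 gamma^2 w(t) / (1 + a(t)^2 / 2)."""
    a = envelope(t)
    denom = 1.0 + a**2 / 2.0
    # Chirping: the instantaneous laser frequency tracks the intensity,
    # w(t) = w0 * (1 + a(t)^2 / 2), so the time dependence cancels.
    w = w0 * denom if chirped else w0
    return 4.0 * gamma**2 * w / denom

times = [t - 20 for t in range(41)]
spreads = {}
for chirped in (False, True):
    ws = [scattered(t, chirped) for t in times]
    spreads[chirped] = (max(ws) - min(ws)) / max(ws)
    print(f"chirped={chirped}: relative spectral spread {spreads[chirped]:.3f}")
```

The unchirped pulse smears the scattered frequencies across the pulse duration, while the chirped pulse collapses them onto a single line; this captures the qualitative effect, though the authors' actual scheme uses two oppositely chirped pulses with a controlled delay.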

The new invention can significantly increase the brightness of modern and future synchrotron sources for research in medicine, nuclear physics and material science.

The scientists note that part of the simulations was performed on Skoltech's flagship supercomputer, "Zhores", named after the Nobel laureate Zhores Alferov.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)