
Rare 'superflares' could one day threaten Earth

Astronomers probing the edges of the Milky Way have in recent years observed some of the most brilliant pyrotechnic displays in the galaxy: superflares.

These events occur when stars, for reasons that scientists still don't understand, eject huge bursts of energy that can be seen from hundreds of light years away. Until recently, researchers assumed that such explosions occurred mostly on stars that, unlike Earth's, were young and active.

Now, new research shows with more confidence than ever before that superflares can occur on older, quieter stars like our own--albeit more rarely, or about once every few thousand years.

The results should be a wake-up call for life on our planet, said Yuta Notsu, the lead author of the study and a visiting researcher at CU Boulder.

If a superflare erupted from the sun, he said, Earth would likely sit in the path of a wave of high-energy radiation. Such a blast could disrupt electronics across the globe, causing widespread blackouts and shorting out communication satellites in orbit.

Notsu presented his research at a press briefing at the 234th meeting of the American Astronomical Society in St. Louis.

"Our study shows that superflares are rare events," said Notsu, a researcher in CU Boulder's Laboratory for Atmospheric and Space Physics. "But there is some possibility that we could experience such an event in the next 100 years or so."

Scientists first discovered this phenomenon from an unlikely source: the Kepler Space Telescope. The NASA spacecraft, launched in 2009, seeks out planets circling stars far from Earth. But it also found something odd about those stars themselves. In rare events, the light from distant stars seemed to get suddenly, and momentarily, brighter.

Researchers dubbed those humongous bursts of energy "superflares."

Notsu explained that normal-sized flares are common on the sun. But what the Kepler data was showing seemed to be much bigger, on the order of hundreds to thousands of times more powerful than the largest flare ever recorded with modern instruments on Earth.

And that raised an obvious question: Could a superflare also occur on our own sun?

"When our sun was young, it was very active because it rotated very fast and probably generated more powerful flares," said Notsu, also of the National Solar Observatory in Boulder. "But we didn't know if such large flares occur on the modern sun with very low frequency."

To find out, Notsu and an international team of researchers turned to data from the European Space Agency's Gaia spacecraft and from the Apache Point Observatory in New Mexico. Over a series of studies, the group used those instruments to narrow down a list of superflares that had come from 43 stars that resembled our sun. The researchers then subjected those rare events to a rigorous statistical analysis.

The bottom line: age matters. Based on the team's calculations, younger stars tend to produce the most superflares. But older stars like our sun, now a respectable 4.6 billion years old, aren't off the hook.

"Young stars have superflares once every week or so," Notsu said. "For the sun, it's once every few thousand years on average."

The group published its latest results in May in The Astrophysical Journal.

Notsu can't be sure when the next big solar light show is due to hit Earth. But he said that it's a matter of when, not if. Still, that could give humans time to prepare, protecting electronics on the ground and in orbit from radiation in space.

"If a superflare occurred 1,000 years ago, it was probably no big problem. People may have seen a large aurora," Notsu said. "Now, it's a much bigger problem because of our electronics."

Credit: 
University of Colorado at Boulder

The sun may have a dual personality, simulations suggest

Researchers at CU Boulder have discovered hints that humanity's favorite star may have a dual personality, with intriguing discrepancies in its magnetic fields that could hold clues to the sun's own "internal clock."

Physicists Loren Matilsky and Juri Toomre developed a computer simulation of the sun's interior as a means of capturing the inner roiling turmoil of the star. In the process, the team spotted something unexpected: On rare occasions, the sun's internal dynamics may jolt out of their normal routines and switch to an alternate state--a bit like a superhero trading the cape and cowl for civilian clothes.

While the findings are only preliminary, Matilsky said, they may line up with real observations of the sun dating back to the 19th century.

He added that the existence of such a solar alter-ego could provide physicists with new clues to the processes that govern the sun's internal clock--a cycle in which the sun switches from periods of high activity to low activity about once every 11 years.

"We don't know what is setting the cycle period for the sun or why some cycles are more violent than others," said Matilsky, a graduate student at JILA. "Our ultimate goal is to map what we're seeing in the model to the sun's surface so that we can then make predictions."

He will present the team's findings at a press briefing today at the 234th meeting of the American Astronomical Society in St. Louis.

The study takes a deep look at a phenomenon that scientists call the solar "dynamo," essentially a concentration of the star's magnetic energy. This dynamo is formed by the spinning and twisting of the hot gases inside the sun and can have big impacts--an especially active solar dynamo can generate large numbers of sunspots and solar flares, or globs of energy that blast out from the surface.

But that dynamo isn't easy to study, Matilsky said. That's because it mainly forms and evolves within the sun's interior, far out of range of most scientific instruments.

"We can't dive into the interior, which makes the sun's internal magnetism a few steps removed from real observations," he said.

To get around that limitation, many solar physicists use massive supercomputers to try to recreate what's occurring inside the sun.

Matilsky and Toomre's simulation examines activity in the outer third of that interior, which Matilsky likens to "a spherical pot of boiling water."

And, he said, this model delivered some interesting results. When the researchers ran their simulation, they first found that the solar dynamo formed to the north and south of the sun's equator. Following a regular cycle, that dynamo moved toward the equator and stopped, then reset in close agreement with actual observations of the sun.

But that regular churn wasn't the whole picture. Roughly twice every 100 years, the simulated sun did something different.

In those strange cases, the solar dynamo didn't follow that same cycle but, instead, clustered in one hemisphere over the other.

"That additional dynamo cycle would kind of wander," Matilsky said. "It would stay in one hemisphere over a few cycles, then move into the other one. Eventually, the solar dynamo would return to its original state."

That pattern could be a fluke of the model, Matilsky said, but it might also point to real, and previously unknown, behavior of the solar dynamo. He added that astronomers have, on rare occasions, seen sunspots congregating in one hemisphere of the sun more than the other, an observation that matches the CU Boulder team's findings.

Matilsky said that the group will need to develop its model further to see if the dual dynamo pans out. But he said that the team's results could, one day, help to explain the cause of the peaks and dips in the sun's activity--patterns that have huge implications for climate and technological societies on Earth.

"It gives us clues to how the sun might shut off its dynamo and turn itself back on again," he said.

Credit: 
University of Colorado at Boulder

Genetics influence how protective childhood vaccines are for individual infants

A genome-wide search in thousands of children in the UK and Netherlands has revealed genetic variants associated with differing levels of protective antibodies produced after routine childhood immunizations. The findings, appearing June 11 in the journal Cell Reports, may inform the development of new vaccine strategies and could lead to personalized vaccination schedules to maximize vaccine effectiveness.

"This study is the first to use a genome-wide genotyping approach, assessing several million genetic variants, to investigate the genetic determinants of immune responses to three routine childhood vaccines," says Daniel O'Connor of the University of Oxford, who is co-first author on the paper along with Eileen Png of the Genome Institute of Singapore. "While this study is a good start, it also clearly demonstrates that more work is needed to fully describe the complex genetics involved in vaccine responses, and to achieve this aim we will need to study many more individuals."

Vaccines have revolutionized public health, preventing millions of deaths each year, particularly in childhood. The maintenance of antibody levels in the blood is essential for continued vaccine-induced protection against pathogens. Yet there is considerable variability in the magnitude and persistence of vaccine-induced immunity. Moreover, antibody levels rapidly wane following immunization with certain vaccines in early infancy, so boosters are required to sustain protection.

"Evoking robust and sustained vaccine-induced immunity from early life is a crucial component of global health initiatives to combat the burden of infectious disease," O'Connor says. "The mechanisms underlying the persistence of antibody is of major interest, since effectiveness and acceptability of vaccines would be improved if protection were sustained after infant immunization without the need for repeated boosting through childhood."

Vaccine responses and the persistence of immunity are determined by various factors, including age, sex, ethnicity, microbiota, nutritional status, and infectious diseases. Twin studies have also shown vaccine-induced immunity to be highly heritable, and recent studies have started to unpick the genetic components underlying this complex trait.

To explore genetic factors that determine the persistence of immunity, O'Connor and colleagues carried out a genome-wide association study of 3,602 children in the UK and Netherlands. The researchers focused on three routine childhood vaccines that protect against life-threatening bacterial infections: capsular group C meningococcal (MenC), Haemophilus influenzae type b (Hib), and tetanus toxoid (TT) vaccines. They tested approximately 6.7 million genetic variants that each affect a single DNA building block, known as single nucleotide polymorphisms (SNPs), for association with vaccine-induced antibody levels in the blood.
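
At its core, an analysis of this kind regresses the trait on each SNP in turn and keeps the variants that clear a stringent significance threshold. Here is a minimal sketch on made-up data; the toy sizes, the additive model, and the conventional 5e-8 threshold are assumptions for illustration, and the study's actual pipeline and covariates are not described here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_individuals, n_snps = 1000, 500  # toy sizes; the study had 3,602 children and ~6.7M SNPs
genotypes = rng.integers(0, 3, size=(n_snps, n_individuals))  # allele dosages 0/1/2
antibody = rng.normal(size=n_individuals)  # hypothetical vaccine-induced antibody levels

GENOME_WIDE_SIG = 5e-8  # conventional GWAS genome-wide significance threshold

hits = []
for snp_idx, dosage in enumerate(genotypes):
    # Additive model: regress antibody level on allele dosage for this SNP.
    result = stats.linregress(dosage, antibody)
    if result.pvalue < GENOME_WIDE_SIG:
        hits.append((snp_idx, result.slope, result.pvalue))

# With purely random data we expect no hits; real signals would survive this filter.
print(f"{len(hits)} of {n_snps} SNPs reach genome-wide significance")
```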

The researchers identified two genetic loci associated with the persistence of vaccine-induced immunity following childhood immunization. The persistence of MenC immunity is associated with SNPs in a genomic region containing a family of signal-regulatory proteins, which are involved in immunological signaling. Meanwhile, the persistence of TT-specific immunity is associated with SNPs in the human leukocyte antigen (HLA) locus. HLA molecules present peptides to T cells, which in turn induce B cells to produce antibodies.

These variants likely account for only a small portion of the genetic determinants of persistence of vaccine-induced immunity. Moreover, it is unclear whether the findings apply to other ethnic populations besides Caucasians from the UK and Netherlands. But according to the authors, neonatal screening approaches could soon incorporate genetic risk factors that predict the persistence of immunity, paving the way for personalized vaccine regimens.

"We are now carrying out in-depth investigations into the biology of the genetic variants we described in this study," O'Connor says. "We also planned further research, in larger cohorts of children and other populations that benefit from vaccination, to further our understanding of how our genetic makeup shapes vaccine responses."

Credit: 
Cell Press

New energy-efficient algorithm keeps UAV swarms helping longer

image: A new energy-efficient data routing algorithm could keep unmanned aerial vehicle swarms flying longer, reports an international team of researchers this month in the journal Chaos. UAV swarms are cooperative, intercommunicating groups of UAVs used for a wide and growing variety of civilian and military applications. In disaster response, UAV swarms linked to one or more local base stations act as eyes in the sky, providing first responders with crucial damage and survivor information. This image shows UAV-aided medical assistance system architecture including base stations and a UAV swarm, with UAVs closest to the base stations acting as relay nodes for otherwise out-of-range UAVs.

Image: 
Wuhui Chen

WASHINGTON, D.C., June 11, 2019 -- A new energy-efficient data routing algorithm could keep unmanned aerial vehicle swarms flying -- and helping -- longer, reports an international team of researchers this month in the journal Chaos, from AIP Publishing.

UAV swarms are cooperative, intercommunicating groups of UAVs used for a wide and growing variety of civilian and military applications. In disaster response, particularly when local communications infrastructure is destroyed, UAV swarms linked to one or more local base stations act as eyes in the sky, providing first responders with crucial damage and survivor information.

"The battery capacity of UAVs is a critical shortcoming that limits their usage in extended search and rescue missions," said co-author Wuhui Chen, a researcher at China's Sun Yat-Sen University.

Much of a UAV's energy use can be related to high bandwidth and long transmission times -- think of the drain on the battery of your phone in such cases. To address this, Chen and colleagues have developed a UAV swarm data routing algorithm that uses the strength of the group to maximize real-time transmission rates and minimize individual UAV battery challenges.

Their new hybrid computational approach combines linear programming and a genetic algorithm to create a "multi-hop" data routing algorithm. A genetic algorithm solves chaotic optimization problems using an analogue of natural selection, the process that drives biological evolution.

In real time, the new adaptive LP-based genetic algorithm (ALPBGA) identifies the lowest communications energy route within a swarm and simultaneously balances out individual UAV power use, for example, by determining which UAV will beam information to a base station.
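
The paper's exact ALPBGA formulation isn't reproduced in this article, but the genetic-algorithm ingredient can be sketched. In the toy version below, each candidate routing is a chromosome (every UAV either transmits straight to the base station or through one relay), and the fitness jointly penalizes total radio energy and battery imbalance; the two-hop topology, the distance-squared energy model, and all GA parameters are illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(42)

# Toy swarm: N UAV positions (km) scattered around a base station at the origin.
N = 12
positions = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(N)]
BASE = (0.0, 0.0)

def dist2(a, b):
    # Transmission energy modeled as proportional to distance squared (free-space proxy).
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def fitness(routes):
    """Lower is better: total radio energy plus a battery-imbalance penalty.

    routes[i] == -1 -> UAV i transmits straight to the base station;
    routes[i] == j  -> UAV i relays through UAV j, which forwards to the base.
    """
    energy = [0.0] * N
    for i, hop in enumerate(routes):
        if hop == -1:
            energy[i] += dist2(positions[i], BASE)
        else:
            energy[i] += dist2(positions[i], positions[hop])
            energy[hop] += dist2(positions[hop], BASE)  # forwarding cost lands on the relay
    mean = sum(energy) / N
    imbalance = sum((e - mean) ** 2 for e in energy) / N
    return sum(energy) + 0.5 * imbalance  # weight trades total energy against fairness

def random_routes():
    return [random.choice([-1] + [j for j in range(N) if j != i]) for i in range(N)]

# Plain generational GA: elitism, selection from the fitter half,
# one-point crossover, occasional point mutation.
pop = [random_routes() for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    next_pop = pop[:10]                              # keep the 10 best unchanged
    while len(next_pop) < 60:
        p1, p2 = random.sample(pop[:30], 2)          # parents from the fitter half
        cut = random.randrange(1, N)
        child = p1[:cut] + p2[cut:]                  # one-point crossover
        if random.random() < 0.2:                    # point mutation
            i = random.randrange(N)
            child[i] = random.choice([-1] + [j for j in range(N) if j != i])
        next_pop.append(child)
    pop = next_pop

best = min(pop, key=fitness)
print("best routing:", best, "  fitness:", round(fitness(best), 2))
```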

"By balancing power consumption among the UAVs, we significantly enhance the ability of the whole system," said Patrick Hung, a co-author at the University of Ontario Institute of Technology in Canada. "Our simulations show that our approach can outperform the existing state of the art methods."

These computer simulations show that, especially as swarm size increases from 10 to hundreds of UAVs, ALPBGA reduces the number of UAVs that stop communicating by 30% to 75% compared to existing leading UAV swarm communication algorithms.

"We believe the results of our research will inspire others to design more energy-efficient UAV communication systems," said Chen, who plans to extend the ALPBGA research to optimize it within the context of different swarm flying trajectories.

Credit: 
American Institute of Physics

Holistic view of planning energy self-sufficient communities

image: Researchers are using advanced modeling and visualization tools to help create the planned net-zero energy district, Peña Station NEXT, near the Denver International Airport in Denver, Colorado.

Image: 
National Renewable Energy Laboratory

WASHINGTON, D.C., June 11, 2019 - Sustainable communities supplied by local renewable energy production are beginning to be established in the U.S. By using energy-efficient buildings and distributing generation sources, such as solar panels, across the buildings in these districts, the communities manage to produce enough energy for their local needs -- achieving a yearly net zero energy (NZE) balance.

However, this yearly average glosses over the local energy fluctuations that can challenge the supporting power grid. For example, a spike in energy production on a sunny day could cause voltage violations that exceed the standards household appliances are designed to tolerate.

To enable expansion of NZE districts to create more sustainable urban areas, researchers have integrated power grid considerations into the model of a newly planned NZE district and examined energy fluctuations at 15-minute intervals. The analysis and recommendations are presented in the Journal of Renewable and Sustainable Energy, from AIP Publishing.

"We had a unique partnership between electric utility suppliers, building developers, landowners and other stakeholders, which enabled us to look at this from a holistic perspective and come up with a solution that is better for everyone at a lower total cost," said Bri-Mathias Hodge, who led the study at the National Renewable Energy Laboratory in Colorado.

Centralized planning provides a means to proactively ensure a reliable service of electricity to the districts. One way to achieve central management could be through utility ownership or operation of distributed devices, such as solar power systems and batteries.

To model this new centralized framework, the team combined power distribution software and energy efficiency modeling. They applied their integrated model to the 100-building solar-powered residential district planned in Denver, Colorado -- Peña Station NEXT.

Using much more detailed solar and building data, with higher temporal and geographic resolution than previously considered, the team was able to evaluate an exhaustive set of potential scenarios, finding the best way to achieve NZE at the lowest cost and with the least impact on reliability.

"The surprising thing was the magnitude of difference, from the power system perspective, between balancing NZE on the annual compared to the 15-minute basis," said Hodge, who explained that alternative energy sources would be essential to pick up the daily slack in energy supply when the sun is low in the evenings, as well as seasonal fluctuations.

Another particularly important aspect for maintaining power grid operation and preventing costly rebuilds was the application of advanced solar power inverter control schemes to maintain local power balance.

Other NZE district planners are looking to apply the centralized planning tool. To speed up the process, Hodge's team is automating the application of their model, as well as integrating other factors, such as electric car charging.

Credit: 
American Institute of Physics

Eating more vitamin K found to help, not harm, patients on warfarin

Baltimore (June 11, 2019) - When prescribed the anticoagulant drug warfarin, many patients are told to limit foods rich in vitamin K, such as green vegetables. The results of a new clinical trial call that advice into question and suggest patients on warfarin actually benefit from increasing their vitamin K intake--as long as they keep their intake levels consistent.

Warfarin is widely used to prevent the dangerous blood clots that cause heart attacks and strokes. The drug's dosage must be carefully calibrated to balance the risk of clots against the risk of uncontrolled bleeding. Because warfarin counteracts the activity of vitamin K in the blood, large swings in vitamin K intake can disrupt this balance.

The current recommendation to keep daily vitamin K intakes consistent often translates into patients limiting vitamin K intake. According to the new trial, patients would be better advised to increase the amount of vitamin K in their diet.

"I think all warfarin-treated patients would benefit from increasing their daily vitamin K intake, said lead study author Guylaine Ferland, professor of nutrition at Université de Montréal and scientist at the Montreal Heart Institute Research Centre. "That said, given the direct interaction between dietary vitamin K and the action of the drug, it is important that (higher) daily vitamin K intakes be as consistent as possible."

Ferland will present the research at Nutrition 2019, the American Society for Nutrition annual meeting, held June 8-11, 2019 in Baltimore.

"Our hope is that healthcare professionals will stop advising warfarin-treated patients to avoid green vegetables," said Ferland. She explained that eating plenty of green vegetables and other nutritious vitamin-K rich foods can help stabilize anticoagulation therapy and offers many other health benefits.

The study is the first randomized controlled trial to test how patients on warfarin respond to a dietary intervention aimed at systematically increasing vitamin K intake. Researchers enrolled 46 patients with a history of anticoagulation instability. Half attended dietary counseling sessions and cooking lessons that provided general nutrition information, while half attended counseling sessions and cooking lessons focused on increasing intake of green vegetables and vitamin K-rich oils and herbs.

After six months, 50 percent of those counseled to increase their vitamin K intake were maintaining stable anticoagulation levels, compared with just 20 percent of those who received the general nutritional counseling, a significant improvement. The results suggest patients taking warfarin would benefit from consuming foods that provide a minimum of 90 micrograms of vitamin K per day for women and 120 micrograms per day for men, Ferland said.

Guylaine Ferland will present this research on Tuesday, June 11, from 11:15 - 11:30 a.m. in the Baltimore Convention Center, Room 319/320. Contact the media team for more information or to obtain a free press pass to attend the meeting.

Credit: 
American Society for Nutrition

Too many businesses failing to properly integrate AI into processes, not reaping benefits

Businesses actively embracing artificial intelligence and striving to bring technological advancements into their operations are reaping dividends not seen by companies that fail to properly adapt and adopt.

While most business and technology leaders are optimistic about the value-creating potential of AI in their enterprise - Enterprise Cognitive Computing (ECC) - the actual rate of adoption is low, and benefits have proved elusive for a majority of organisations.

A study involving Lancaster University Management School's Centre for Technological Futures and MIT Sloan School's Center for Information Systems Research, published in MIT Sloan Management Review, examined adoption of ECC in 150 organisations from various industries across Europe, North America, Asia and Australia, to understand why.

Companies that are able to generate value from ECC do so by building a number of organisational capabilities. They develop skills for data science and algorithmic expertise, shape their business and the roles of staff to accommodate and integrate ECC initiatives, and account for the need for human judgement and digital inquisitiveness in order to see benefits. Such businesses have strong domain expertise and a good operating IT infrastructure.

They apply these capabilities to a number of practices across the organisation, including co-creation involving people from across the business through the lifecycle of ECC applications, and developing use cases around pressing and meaningful business problems. They have strategies for managing and training AI algorithms within the ECC applications, and - importantly - they both create a positive buzz about ECC and at the same time have realistic and clear-eyed expectations of the benefits they can expect.

Professor Monideepa Tarafdar, Professor of Information Systems and Co-Director of the Centre for Technological Futures at Lancaster University, who co-authored the study, said: "Bringing AI successfully into a business has many positive effects. It can free employees to perform tasks that require adaptability and creativity found in human input, enhance operations, and augment employees' skills.

"But one of our studies showed half of companies have no ECC in place, and only half of those who have believe it to have produced measurable value. This suggests that generating value from such AI is not easy if organizations do not develop the needed capabilities and practices.

"Companies that are serious about AI applications spend the money to hire the right staff and develop the business practices that ensure ECC can improve their business operations, rather than spending money and harnessing massive amounts of data with no obvious benefits."

She added: "Having the proper capabilities in place enables employees to execute the new practices, and the practices in turn strengthen the capabilities of the ECC programmes. Such a virtuous cycle can lead to dramatic improvements in operational and financial performance, and customer satisfaction."

Credit: 
Lancaster University

Veteran-directed care program is effective

A new study led by Boston University School of Public Health (BUSPH) and Veterans Affairs Boston Healthcare System researchers finds that a program that gives veterans flexible budgets for at-home caregivers is at least as effective as other veteran purchased-care services. Published in the June issue of Health Affairs, the study shows that, although the average enrollee in the Veterans Health Administration (VHA)'s Veteran-Directed Care (VDC) program has more complex health burdens than veterans in other purchased-care programs, enrollees in both groups had similar hospitalization and cost trajectories.

VDC provides monthly budgets and counseling to veterans who need significant assistance with daily living, allowing them to hire personal care workers or family and friends as paid caregivers to help them continue living at home. The VDC program launched in 2009, and has proven very popular with veterans and their family caregivers, but is not yet operational nationwide.

"Given the popularity of this program, and our findings that enrollees have similar outcomes as enrollees in other programs, further expansion of Veteran-Directed Care may be justified," says Dr. Melissa Garrido, associate director of the Partnered Evidence-Based Policy Resource Center (PEPReC) at the VA Boston Healthcare System and research associate professor of health law, policy & management at BUSPH.

The researchers used VHA data from fiscal year 2017 to identify 965 VDC enrollees, 21,117 veterans receiving other purchased-care services at VHA medical centers that offered VDC, and 15,325 veterans receiving other purchased-care services at VHA medical centers that did not yet offer VDC but were interested in implementing the program. The researchers then looked at VHA hospitalizations and related costs for all of these veterans in 2016 and in 2018. When the researchers controlled for demographics, levels of care needed, duration of care, the biases of the data, and other factors, they found similar changes in hospitalization rates and costs from before and after enrollment in VDC or another program.

Credit: 
Boston University School of Medicine

Tube anemone has the largest animal mitochondrial genome ever sequenced

image: Discovery by Brazilian and US researchers could change the classification of two species, which appear more akin to jellyfish than was thought.

Image: 
Sérgio Stampar

The tube anemone Isarachnanthus nocturnus is only 15 cm long but has the largest mitochondrial genome of any animal sequenced to date, with 80,923 base pairs. The human mitochondrial genome (mitogenome), for example, comprises 16,569 base pairs.

Tube anemones (Ceriantharia) are the focus of an article recently published in Scientific Reports describing the findings of a study led by Sérgio Nascimento Stampar (https://bv.fapesp.br/en/pesquisador/171695/sergio-nascimento-stampar/), a professor in São Paulo State University's School of Sciences and Letters (FCL-UNESP) at Assis in Brazil.

The study was supported by FAPESP via a regular grant for the project "Evolution and diversity of Ceriantharia (Cnidaria)" and via its program São Paulo Researchers in International Collaboration (SPRINT) under a cooperation agreement with the University of North Carolina (UNC) at Charlotte in the US.

The mitogenome is simpler than the nuclear genome, which in the case of I. nocturnus has not yet been sequenced, Stampar explained. The human nuclear genome comprises some 3 billion base pairs, for example.

Another discovery reported in the article is that I. nocturnus and Pachycerianthus magnus (another species studied by Stampar's group, with 77,828 base pairs) have linear genomes like those of medusae (Medusozoa), whereas other species in their class (Anthozoa), and indeed most animals, have circular genomes.

I. nocturnus is found in the Atlantic from the coast of Patagonia in Argentina as far north as the East Coast of the US. P. magnus lives in the marine environment around the island of Taiwan in Asia. Both inhabit waters at most 15 m deep.

"I. nocturnus's mitogenome is almost five times the size of the human mitogenome," Stampar said. "We tend to think we're molecularly more complex, but actually our genome has been more 'filtered' during our evolution. Keeping this giant genome is probably more costly in terms of energy expenditure."

The shape of the mitogenomes in these two species of tube anemone and the gene sequences they contain were more surprising than their size.

Because they are closely related species, their gene sequences should be similar, but I. nocturnus has five chromosomes while P. magnus has eight, and each has a different composition in terms of genes. This kind of variation had previously been found only in medusozoans, sponges, and some crustaceans.

"Humans and bony fish species are more similar than these two tube anemones in terms of the structure of their mitochondrial DNA," Stampar said.

São Paulo coast and South China Sea

To arrive at these results, the researchers captured specimens in São Sebastião, which lies on the coast of São Paulo State in Brazil, and off Taiwan in the South China Sea. Small pieces of the animals' tentacles were used to sequence their mitogenomes.

The genome sequences for these two species hitherto available in databases were incomplete owing to the difficulty of sequencing them. After completing the study, the researchers deposited the complete genomes in GenBank, a database maintained in the US by the National Center for Biotechnology Information (NCBI) at the National Institutes of Health (NIH).

Another obstacle to sequencing was the difficulty of collecting these animals because of their elusive behavior. In response to any potential threat, a tube anemone hides in the long leathery tube that distinguishes it from true sea anemones, making capture all but impossible.

"You have to dig a hole around it, sometimes as deep as a meter, and stop up the part of the tube buried in sand. All this must be done under water while carrying diving gear. Otherwise, it hides in the buried part of the tube and you simply can't get hold of it," Stampar said.

Thanks to the support of FAPESP's SPRINT program, Stampar and Marymegan Daly, a research colleague at Ohio State University in the US, established a partnership with Adam Reitzel and Jason Macrander at UNC Charlotte. Macrander, then a postdoctoral researcher under Reitzel, is now a professor at Florida Southern College.

Reitzel and Macrander specialize in the use of bioinformatics to filter genomics data and assemble millions of small pieces of mitochondrial DNA into a single sequence. They used this technique to arrive at complete mitochondrial genomes for both species.

"In this technique, you sequence bits of the genome and link them in a circle. The problem is that this only works with circular genomes. Because we couldn't find a piece to close the circle, we realized the genome had to be linear, as it is for Medusozoa," Stampar said.

The discovery opens the way for a possible reclassification of cnidarian species (hydras, medusae, polyps, corals and sea anemones). The tube anemones studied appear to form a separate group from corals and sea anemones and display some similarities to medusae.

However, more data will be needed before a definitive conclusion can be reached. The necessary data could come from the sequencing of these species' nuclear genomes, which Stampar and his group intend to complete by the end of 2019.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Want effective policy? Ask the locals

image: Associate Professors Andrew Chapman (left) and Hidemichi Fujii (middle) and Professor Shunsuke Managi (right) of Kyushu University discuss the research outcomes and applications of key results from their multinational survey including 37 nations and 100,956 respondents toward achieving the Sustainable Development Goals.

Image: 
Andrew Chapman, Kyushu University

As multinational organizations such as the United Nations strive to improve life for people across the globe through initiatives like the Sustainable Development Goals, there is a tendency to look for indicators that can be used across the board to drive policy aimed at achieving these objectives. However, analysis of a survey conducted across 37 nations by researchers at Kyushu University in Japan shows that regional economic, developmental, and cultural factors greatly influence the relationships among self-reported levels of energy affordability, life satisfaction, health, and economic inequality.

"Based on our findings, we can state that there is no 'one-size-fits all' policy approach that will solve health, economic inequality, life satisfaction, or income level issues," says Andrew Chapman, associate professor at Kyushu University's International Institute for Carbon-Neutral Energy Research and first author of the Nature Sustainability paper announcing the new results. "Instead, policy responses need to be tailored to the nations in which they are to be delivered."

To investigate the relationships, the researchers conducted a survey from 2015 to 2017 across 37 countries, yielding 100,956 respondents. While an internet survey was used for a majority of the countries, field agents--who were individually trained by one of the researchers--administered face-to-face surveys in Egypt, Kazakhstan, Mongolia, Myanmar, and Sri Lanka.

Although the researchers found that income level is correlated with a number of factors, high income did not always improve perceived life satisfaction, health, or economic inequality. For example, some low-income nations, even those with limited energy access and lower healthy life expectancies, reported superior life satisfaction and health, strongly indicating that cultural factors influence individual self-reporting.

One general trend was that increased national welfare spending reduced respondents' perceived economic gap with their peers. In addition, although increasing energy access in poorer, marginalized nations is generally likely to lead to an improvement in health, cultural and lifestyle factors also play a strong role.

"On the one hand, less-developed nations are likely to be improved through developmental aid from more-developed nations," says Chapman. "But at the same time, developed nations can learn important lessons from less-developed nations that experience comparatively better levels of perceived life satisfaction despite their lower incomes and worse access to energy."

The researchers believe that the findings of this research will aid in the development of fit-for-purpose energy affordability, health, and economic policies to improve the lives of those affected and to achieve the Sustainable Development Goals.

"The big take away is that policy that seeks to improve energy affordability needs to be culturally aware with respect to each nation's residents, specifically when dealing with overlapping issues such as energy access and limited government expenditure on health or welfare," comments Chapman.

In the future, the researchers plan to assess the non-income-based factors leading to superior levels of self-reported life satisfaction and health in some less-developed nations.

Credit: 
Kyushu University

Almost 400 medical practices found ineffective in analysis of 3,000 studies

Scientists have identified nearly 400 established medical practices that have been found to be ineffective by clinical studies published across three top medical journals.

Writing in the open-access journal eLife, the team hope their findings will encourage the de-adoption of these practices, also known as medical reversals, ultimately making patient care more efficient and cost effective.

Medical reversals are practices that have been found to be no better than prior or lesser standards of care, through randomised controlled trials (RCTs: studies that aim to reduce certain types of bias when testing new treatments). But it can be difficult to identify these practices. For example, Cochrane Reviews provide high-quality evidence on medical practices, but only one practice is covered in each review and many have not been reviewed in this way. Additionally, the Choosing Wisely initiative in the US aims to maintain a list of low-value medical practices, but it relies on medical organisations to report them.

"We wanted to build on these and other efforts to provide a larger and more comprehensive list for clinicians and researchers to guide practice as they care for patients more effectively and economically," says lead author Diana Herrera-Perez, Research Assistant at the Knight Cancer Institute at Oregon Health & Science University (OHSU), US.

To do this, Herrera-Perez and her team conducted a search of RCTs published over 15 years in three leading general medical journals: the Journal of the American Medical Association, the Lancet and the New England Journal of Medicine.

Their analysis revealed 396 medical reversals from 3,000 articles. Of these, most were conducted on people in high-income countries (92%), likely because the majority of randomised trials are performed in this setting. Meanwhile, 8% were done in low or middle-income countries, including China, India, Malaysia and Ethiopia.

Cardiovascular disease was the most commonly represented medical category among the reversals (20%), followed by public health/preventive medicine (12%) and critical care (11%). In terms of the type of intervention, medication was the most common (33%), followed by a procedure (20%) and vitamins and/or supplements (13%).

"There are a number of lessons that we can take away from our set of results, including the importance of conducting RCTs for both novel and established practices," explains senior author Vinay Prasad, Associate Professor at the OHSU Knight Cancer Institute. "Once an ineffective practice is established, it may be difficult to convince practitioners to abandon its use. By aiming to test novel treatments rigorously before they become widespread, we can reduce the number of reversals in practice and prevent unnecessary harm to patients.

"We hope our broad results may serve as a starting point for researchers, policy makers and payers who wish to have a list of practices that likely offer no net benefit to use in future work."

Prasad adds that some limitations need to be taken into account with the results, including the fact that only three general medical journals were studied. This means the findings may not be broadly generalisable to all journals or fields. Additionally, other researchers may categorise results differently, depending on their expertise. To help overcome this issue, the team invited physicians from a range of backgrounds to review and comment on the practices identified as reversals.

"Taken together, we hope our findings will help push medical professionals to evaluate their own practices critically and demand high-quality research before adopting a new practice in future, especially for those that are more expensive and/or aggressive than the current standard of care," concludes co-lead author Alyson Haslam, PhD, also at the OHSU Knight Cancer Institute.

Credit: 
eLife

Electric vehicles would be a breath of fresh air for Houston

image: Cornell researchers say replacing at least 35 percent of Houston's gasoline cars and diesel trucks with electric vehicles by 2040 will reduce pollution and improve air quality by 50 percent.

Image: 
Cornell University

ITHACA, N.Y. - Cornell University researchers are expressing hope for the future of Houston's breathable air, despite the city's poor rankings in the American Lung Association's 2019 "State of the Air" report.

The report, released in April, ranked Houston ninth nationally for worst ozone pollution and 17th for particle pollution.

Researchers say replacing at least 35 percent of Houston's gasoline cars and diesel trucks with electric vehicles by 2040 will reduce pollution and improve air quality by 50 percent.

"The built environment plays a significant role in affecting our daily life and health," said H. Oliver Gao, professor of civil and environmental engineering and senior author of "Potential Impacts of Electric Vehicles on Air Quality and Health Endpoints in the Greater Houston Area in 2040," published in Atmospheric Environment.

"While transportation provides us with mobility, it impacts our environment and our public health," said Gao, who directs Cornell's Center for Transportation, Environment and Community Health and is a fellow at Cornell's Atkinson Center for a Sustainable Future. "We are enjoying this mobility at a very high cost."

Shuai Pan, postdoctoral associate in civil and environmental engineering, along with Gao and a team of chemists and engineers, modeled four scenarios using various levels of electric car adoption to see how Houston's air quality and public health likely would respond two decades from now.

"The population in 2040 Houston will see a huge increase, but we can apply new technology to reduce emissions, improve air quality and think about health," said Pan, who earned his Ph.D. in atmospheric science from the University of Houston in 2017.

In their exhaust, gasoline and diesel vehicles emit nitrogen oxides and volatile organic compounds - pollutants that react in the presence of sunlight to form ozone and detrimental fine particulates, both of which are known to harm human health.

If left unchecked, current ozone and particulate-matter levels would result in 122 more premature deaths annually throughout greater Houston by 2040. With moderate or aggressive electrification for cars and trucks, the numbers reflect air-quality improvement, with prevented premature deaths at 114 and 188, respectively.

In the case of complete turnover to electric vehicles, the number of prevented premature deaths per year around Houston shoots to 246.

"Mayors or policymakers - who care about the environment, the economy and public health - must advocate for electrification," Gao said. "The knowledge is there, but we need mayors and city planners to be creative and innovative to design policies that would help the electrification of the transportation sector."

Credit: 
Cornell University

Uncovering the hidden history of a giant asteroid

image: This is an artist's concept of a massive 'hit-and-run' collision striking the asteroid Vesta.

Image: 
Makiko Haba

A massive 'hit-and-run' collision profoundly impacted the evolutionary history of Vesta, the brightest asteroid visible from Earth. This finding, by a team of researchers from Tokyo Institute of Technology, Japan's National Institute of Polar Research and ETH Zurich, Switzerland, deepens our understanding of protoplanet formation more than 4.5 billion years ago, in the early infancy of the Solar System.

In a remarkable feat of astronomical detective work, scientists have determined the precise timing of a large-scale collision on Vesta that helps explain the asteroid's lopsided shape. Their study, published in Nature Geoscience, pinpoints the collision to 4,525.4 million years ago.

Vesta, the second largest body in the asteroid belt, is of immense interest to scientists investigating the origin and formation of planets. Unlike most asteroids, it has kept its original, differentiated structure, meaning it has a crust, mantle and metallic core, much like Earth.

Most of what we know about the asteroid has so far come from howardite-eucrite-diogenite (HED) meteorites, following studies in the 1970s that first proposed Vesta as the parent body of these meteorites. In recent years, NASA's Dawn mission, which orbited Vesta in 2011-2012, reinforced the idea that HED meteorites originate from Vesta and provided more insights into the asteroid's composition and structure. Careful mapping of Vesta's geology revealed an unusually thick crust at the asteroid's south pole.

The new study provides a confident framework for understanding Vesta's geological timeline, including the massive collision that caused the formation of the thick crust.

Key to uncovering this timeline was examining a rare mineral called zircon found in mesosiderites (stony-iron meteorites that are similar to HED meteorites in terms of texture and composition). Based on a strong premise that both types of meteorites came from the same parent body, Vesta, the team focused on dating zircon from mesosiderites with unprecedented precision.

Makiko Haba of Tokyo Institute of Technology (Tokyo Tech), a specialist in geochemical and chronological studies of meteorites, and Akira Yamaguchi of Japan's National Institute of Polar Research (NIPR) were involved in sample preparation -- a major challenge, Haba explains, as fewer than ten zircon grains have been reported over the past few decades. "We developed how to find zircon in mesosiderites and eventually prepared enough grains for this study," she says.

Joining forces with co-authors at ETH Zurich who developed a technique to measure the age of the samples using uranium-lead dating, the team pooled their expertise to propose a new evolutionary model for Vesta. "This work could not be achieved without collaboration between Tokyo Tech, NIPR, and ETH Zurich," Haba points out.
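
Uranium-lead dating rests on the radioactive decay law: the radiogenic lead that has accumulated in a zircon fixes its crystallization age. As a simplified illustration (the real analysis inverts the coupled 238U and 235U systems with full error propagation, none of which is attempted here), the basic 206Pb/238U age equation can be solved directly:

```python
import math

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year (Jaffey et al. value)

def u_pb_age(pb206_u238_ratio: float) -> float:
    """Age in years from a measured radiogenic 206Pb/238U ratio,
    via the decay law N_Pb / N_U = exp(lambda * t) - 1."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238

# What ratio corresponds to the reported collision age of ~4,525.4 million years?
t = 4.5254e9
ratio = math.exp(LAMBDA_238 * t) - 1.0
print(f"expected 206Pb/238U ratio: {ratio:.4f}")         # ~1.018
print(f"recovered age: {u_pb_age(ratio) / 1e6:,.1f} Myr")  # ~4,525.4
```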

The team highlights two significant time-points: initial crust formation 4,558.5 ± 2.1 million years ago and metal-silicate mixing by the hit-and-run collision at 4,525.39 ± 0.85 million years ago. This collision, which struck Vesta's northern hemisphere, likely caused the thick crust observed by the Dawn mission and supports the view that Vesta is the parent body of mesosiderites and HED meteorites.

By building on this study, Haba says she plans to examine "more precise conditions, such as temperature and cooling rate during and after the large-scale collision on Vesta based on mesosiderite and HED meteorite measurements."

"I'd like to draw a picture that shows the whole history of Vesta from the cradle to the grave," she says. "Combining such information with an impact simulation study would contribute to a more comprehensive understanding of large-scale collisions on protoplanets."

The dating method could be applied to other meteorites in future. Haba adds: "This is very important for understanding when and how protoplanets formed and grew to become planets like Earth. I'd like to also apply our dating method to samples from future spacecraft missions."

Credit: 
Tokyo Institute of Technology

How bosses react influences whether workers speak up

HOUSTON - (June 11, 2019) - Speaking up in front of a supervisor can be stressful -- but it doesn't have to be, according to new research from a Rice University psychologist. How a leader responds to employee suggestions can impact whether or not the employee opens up in the future.

Danielle King, an assistant professor of psychology at Rice, is the lead author of "Voice Resilience: Fostering Future Voice After Non-Endorsement of Suggestions," which will appear in an upcoming special issue of the Journal of Occupational and Organizational Psychology. The paper explains how leaders can use language that encourages workers to offer more ideas in the future, even if their suggestions are not implemented.

After conducting two studies, King found that people who speak up at work only to have their ideas rejected by supervisors will nonetheless offer more suggestions later if their bosses respond properly.

"Given that many employee ideas for change cannot be endorsed, our results highlight the practical importance of providing sensitive explanations for why employee suggestions cannot be embraced," she said. "Specifically, it is critically important for leaders to exhibit sensitivity in their communication with employees."

The first study, with 197 participants, included a survey asking workers to describe a time when they gave their supervisor a suggestion that was rejected. They also answered questions about the adequacy of their leader's explanation, how the experience made them feel and how likely they were to speak up in the future.

The second study, including 223 students, involved two 30-minute online surveys. In this experimental study, students acted as interns for a marketing firm that was developing advertisements for businesses frequented by other students.

Students who provided suggestions about the marketing materials received one of four responses, all of which indicated their boss didn't agree with their advice. Those four responses covered a range of answers, from sensitive and well-explained to insensitive and poorly explained. The students then had a second chance to offer suggestions on different material.

King, whose future research will explore other forms of resilience at work, hopes this study will encourage more sensitive communication between leaders and employees.

"It would be useful for organizations to offer training and development for leaders on how to let employees down gently while encouraging them to speak up in the future," King said. "As demonstrated in our study, explanation sensitivity led to employees opening up again. In addition, it may be valuable to help employees understand that extenuating circumstances sometimes prevent implementation of potentially good ideas. It also would be useful to provide justification for why complete explanations cannot be revealed for strategic or confidentiality reasons. If such explanations are delivered in a sensitive manner, this should maintain the type of leader-employer relationship that encourages employees to speak up in the future."

Credit: 
Rice University

USPSTF recommendation on screening for HIV infection

Bottom Line: The U.S. Preventive Services Task Force (USPSTF) recommends screening for HIV infection in adolescents and adults ages 15 to 65; in younger adolescents and older adults at increased risk of infection; and in all pregnant people. The USPSTF routinely makes recommendations about the effectiveness of preventive care services, and this statement is an update of its 2013 recommendation. About 15% of people living with HIV are unaware of their infection, and it is estimated that those individuals are responsible for 40% of HIV transmissions in the United States. There were about 38,000 new cases of HIV infection in 2017.

(doi:10.1001/jama.2019.6587)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Note: More information about the U.S. Preventive Services Task Force, its process, and its recommendations can be found on the newsroom page of its website.

Credit: 
JAMA Network