Culture

How common are forced first sexual intercourse experiences among US women?

Bottom Line: This study estimates 1 in 16 U.S. women had an unwanted first sexual intercourse experience that was physically forced or coerced. In an analysis of nationally representative survey data for 13,310 women, 6.5% of the respondents reported a forced first sexual intercourse encounter, which is equivalent to more than 3.3 million women between the ages of 18 and 44. The average age for women at the time of the forced encounter was 15 ½, compared with 17 ½ for those reporting a voluntary first sexual intercourse experience. The assailant in a forced first sexual encounter was on average six years older (27 vs. 21) than the partner in a voluntary first sexual experience. Women with a forced first sexual intercourse experience were more likely to have an unwanted first pregnancy or abortion, as well as other gynecological and general health problems. Limitations of the study include its observational design, which precludes causal inferences, and the authors' inability to account for other potential influencing factors.

Authors: Laura Hawks, M.D., of Cambridge Health Alliance and Harvard Medical School, Boston, and coauthors

(doi:10.1001/jamainternmed.2019.3500)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Nanoparticles used to transport anti-cancer agent to cells

image: Crystalline metal-organic framework.

Image: 
David Fairen-Jimenez

Scientists from the University of Cambridge have developed a platform that uses nanoparticles known as metal-organic frameworks to deliver a promising anti-cancer agent to cells.

Research led by Dr David Fairen-Jimenez, from the Cambridge Department of Chemical Engineering and Biotechnology, indicates metal-organic frameworks (MOFs) could present a viable platform for delivering a potent anti-cancer agent, known as siRNA, to cells.

Small interfering ribonucleic acid (siRNA) has the potential to inhibit overexpressed cancer-causing genes and has become an increasing focus for scientists on the hunt for new cancer treatments.

Fairen-Jimenez's group used computational simulations to find a MOF with the perfect pore size to carry an siRNA molecule, and that would break down once inside a cell, releasing the siRNA to its target. Their results were published today in the Cell Press journal Chem.

Some cancers can occur when specific genes inside cells cause over-production of particular proteins. One way to tackle this is to block the gene expression pathway, limiting the production of these proteins.

SiRNA molecules can do just that - binding to specific messenger RNA molecules and destroying them before they can tell the cell to produce a particular protein. This process is known as 'gene knockdown'. Scientists have begun to focus more on siRNAs as potential cancer therapies in the last decade, as they offer a versatile solution to disease treatment - all you need to know is the sequence of the gene you want to inhibit and you can make the corresponding siRNA that will break it down. Instead of designing, synthesising and testing new drugs - an incredibly costly and lengthy process - you can make a few simple changes to the siRNA molecule and treat an entirely different disease.

One of the problems with using siRNAs to treat disease is that the molecules are very unstable and are often broken down by the cell's natural defence mechanisms before they can reach their targets. SiRNA molecules can be modified to make them more stable, but this compromises their ability to knock down the target genes. It's also difficult to get the molecules into cells - they need to be transported by another vehicle acting as a delivery agent.

The Cambridge researchers have used a special nanoparticle to protect and deliver siRNA to cells, where they show its ability to inhibit a specific target gene.

Fairen-Jimenez leads research into advanced materials, with a particular focus on MOFs: self-assembling 3D compounds made of metallic and organic building blocks connected together.

There are thousands of different types of MOFs that researchers can make - there are currently more than 84,000 MOF structures in the Cambridge Structural Database with 1000 new structures published each month - and their properties can be tuned for specific purposes. By changing different components of the MOF structure, researchers can create MOFs with different pore sizes, stabilities and toxicities, enabling them to design structures that can carry molecules such as siRNAs into cells without harmful side effects.

"With traditional cancer therapy if you're designing new drugs to treat the system, these can have different behaviours, geometries, sizes, and so you'd need a MOF that is optimal for each of these individual drugs," says Fairen-Jimenez. "But for siRNA, once you develop one MOF that is useful, you can in principle use this for a range of different siRNA sequences, treating different diseases."

"People that have done this before have used MOFs that don't have a porosity that's big enough to encapsulate the siRNA, so a lot of it is likely just stuck on the outside," says Michelle Teplensky, former PhD student in Fairen-Jimenez's group, who carried out the research. "We used a MOF that could encapsulate the siRNA and when it's encapsulated you offer more protection. The MOF we chose is made of a zirconium based metal node and we've done a lot of studies that show zirconium is quite inert and it doesn't cause any toxicity issues."

Using a biodegradable MOF for siRNA delivery is important to avoid unwanted build-up of the structures once they've done their job. The MOF that Teplensky and team selected breaks down into harmless components that are easily recycled by the cell without harmful side effects. The large pore size also means the team can load a significant amount of siRNA into a single MOF molecule, keeping the dosage needed to knock down the genes very low.

"One of the benefits of using a MOF with such large pores is that we can get a much more localised, higher dose than other systems would require," says Teplensky. "SiRNA is very powerful, you don't need a huge amount of it to get good functionality. The dose needed is less than 5% of the porosity of the MOF."

A problem with using MOFs or other vehicles to carry small molecules into cells is that they are often stopped by the cells on the way to their target. This process is known as endosomal entrapment and is essentially a defence mechanism against unwanted components entering the cell. Fairen-Jimenez's team added extra components to their MOF to stop them being trapped on their way into the cell, and with this, could ensure the siRNA reached its target.

The team used their system to knock down a gene that produces fluorescent proteins in the cell, so they were able to use microscopy imaging methods to measure how the fluorescence emitted by the proteins compared between cells not treated with the MOF and those that were. The group made use of in-house expertise, collaborating with super-resolution microscopy specialists Professors Clemens Kaminski and Gabi Kaminski-Schierle, who also lead research in the Department of Chemical Engineering and Biotechnology.

Using the MOF platform, the team were consistently able to reduce expression of the target gene by 27%, a level that shows promise for using the technique to knock down cancer genes.
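
As a rough illustration of how such a knockdown figure can be computed from imaging data, the sketch below compares mean fluorescence of treated and untreated cells; the intensity values are invented placeholders, not measurements from the study.

```python
# Minimal sketch (not the authors' analysis pipeline): estimate percent gene
# knockdown from mean fluorescence of treated vs. untreated cells.
# The intensity values below are hypothetical placeholders.
import numpy as np

untreated = np.array([1020.0, 980.0, 1005.0, 990.0])  # mean intensity per image, untreated cells
treated = np.array([730.0, 725.0, 735.0, 728.0])      # cells given the siRNA-loaded MOF

knockdown = 1.0 - treated.mean() / untreated.mean()
print(f"Estimated knockdown of the target gene: {knockdown:.0%}")  # ~27% for these made-up values
```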

Fairen-Jimenez believes they will be able to increase the efficacy of the system and the next steps will be to apply the platform to genes involved in causing so-called hard-to-treat cancers.

"One of the questions we get asked a lot is 'why do you want to use a metal-organic framework for healthcare?', because there are metals involved that might sound harmful to the body," says Fairen-Jimenez. "But we focus on difficult diseases such as hard-to-treat cancers for which there has been no improvement in treatment in the last 20 years. We need to have something that can offer a solution; just extra years of life will be very welcome."

The versatility of the system will enable the team to use the same adapted MOF to deliver different siRNA sequences and target different genes. Because of its large pore size, the MOF also has the potential to deliver multiple drugs at once, opening up the option of combination therapy.

Credit: 
University of Cambridge

Like an instruction manual, the genome groups genes together for convenience

Every cell in a living organism carries a complete copy of DNA, life's instruction book, condensed tightly into chromosomes. Every time the cell needs to perform a function, it opens or closes different regions of the DNA to activate the necessary genes. Like following an instruction manual with consecutive pages, it is easier to activate two genes that lie close together to complete a function.

Until now, we knew little about how the genome of eukaryotic organisms (organisms with complex cells that have a nucleus) organises groups of genes according to their function, i.e. whether related genes are physically near each other or not. Previous studies have examined the link between gene clusters and secondary metabolism, which is responsible for creating penicillin and other compounds with antibiotic properties.

In a study published today in Nature Microbiology, researchers from the Centre for Genomic Regulation (CRG) in Barcelona, led by ICREA Professor Toni Gabaldón, now at the Institute for Research in Biomedicine (IRB Barcelona) and the Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS), shed light on this sorting process in primary metabolism. They chose to study fungi because they have smaller genomes and are easier to sequence than other eukaryote species like plants or animals.

"If genes for a specific biological process are placed near each other in the chromosome, they can co-regulate each other in a more coordinated and effective manner," says Gabaldón.

The scientists developed an algorithm capable of identifying genes located near each other in the genomes of different species according to their evolutionary history, i.e. looking for clusters that are conserved in different species of fungi, independently of their function. They predicted more than 11,000 families of grouped genes across the 300 genomes analysed, and found that a third of the genes were part of a conserved group.
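
The following toy sketch illustrates the kind of check such an algorithm performs - testing whether neighbouring genes in one genome are also neighbours in others - but it is not the CRG group's actual method; the gene names and genome orders are invented.

```python
# Toy sketch (not the authors' algorithm): flag gene pairs whose homologues
# lie next to each other in several genomes, a crude proxy for a conserved
# cluster. Gene names and genome orders below are invented.

genomes = {
    "species_A": ["g1", "g2", "g3", "g7", "g8"],
    "species_B": ["g5", "g1", "g2", "g9", "g3"],
    "species_C": ["g2", "g1", "g4", "g3", "g6"],
}

def neighbour_pairs(order, window=1):
    """Unordered pairs of genes lying within `window` positions of each other."""
    pairs = set()
    for i, gene in enumerate(order):
        for j in range(i + 1, min(i + 1 + window, len(order))):
            pairs.add(frozenset((gene, order[j])))
    return pairs

neighbour_sets = [neighbour_pairs(order) for order in genomes.values()]
all_pairs = set().union(*neighbour_sets)

# Call a pair "conserved" here if it is adjacent in at least two of the genomes.
conserved = sorted(tuple(sorted(p)) for p in all_pairs
                   if sum(p in s for s in neighbour_sets) >= 2)
print(conserved)  # [('g1', 'g2')] for this toy input
```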

"Natural selection means some genes are near each other for functional relevance. The way they're organized isn't random chance - they have been selected because it makes regulating genes easier. We've found that it's pretty common, and that it affects an important proportion of the genome," says Gabaldón. "The selective forces favour the conformations of genes that allow a smaller investment in energy and improved regulatory processes," he adds.

Previous studies of gene groups linked to secondary metabolism observed that they had a switch, a type of transcription factor, to turn them on and off. Other observations also found that these gene groups passed from one species to another as a block, a process known as horizontal transfer, though no one knew why.

The CRG scientists have now provided evidence that horizontal transfer may be less common than previously thought, and that those earlier observations do not represent what the genome does as a whole. They saw that a cluster made up of the same groups of genes appeared independently twice, in two distant, parallel lineages.

Most likely, these groups of genes carry out a specific function. "When you need something at a precise moment, it's when you most need it to be co-regulated. A general function, which is active most of the time, doesn't need such precise regulation", says Marina Marcet-Houben, first author of the study.

"Now we have a list of functions to explore. Some might have a pharmaceutical or industrial use. The gene candidates we have found affect a lot of different species that until now hadn't been found. Many of them are interesting genes and it's likely they code a function with an applied potential," adds Gabaldón.

The study's findings were based on public databases. For this reason, the CRG has also made its results on primary metabolism openly available. "We want to give this knowledge back to the scientific community, since the information and data we've gathered come from the public domain".

Credit: 
Center for Genomic Regulation

Most massive neutron star ever detected, almost too massive to exist

video: Artist's impression and animation of the Shapiro delay. As the neutron star sends a steady pulse towards the Earth, the passage of its companion white dwarf star warps the space surrounding it, creating a subtle delay in the pulse signal.

Image: 
BSaxton, NRAO/AUI/NSF

Neutron stars - the compressed remains of massive stars gone supernova - are the densest "normal" objects in the known universe. (Black holes are technically denser, but far from normal.) Just a single sugar-cube worth of neutron-star material would weigh 100 million tons here on Earth, or about the same as the entire human population. Though astronomers and physicists have studied and marveled at these objects for decades, many mysteries remain about the nature of their interiors: Do crushed neutrons become "superfluid" and flow freely? Do they break down into a soup of subatomic quarks or other exotic particles? What is the tipping point when gravity wins out over matter and forms a black hole?

A team of astronomers using the National Science Foundation's (NSF) Green Bank Telescope (GBT) has brought us closer to finding the answers.

The researchers, members of the NANOGrav Physics Frontiers Center, discovered that a rapidly rotating millisecond pulsar, called J0740+6620, is the most massive neutron star ever measured, packing 2.17 times the mass of our Sun into a sphere only 30 kilometers across. This measurement approaches the limits of how massive and compact a single object can become without crushing itself down into a black hole. Recent work involving gravitational waves observed from colliding neutron stars by LIGO suggests that 2.17 solar masses might be very near that limit.
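
As a rough back-of-envelope check (not a calculation from the paper), the reported mass and diameter imply an average density that puts a sugar-cube of material in the hundreds of millions of tonnes - the same order of magnitude as the figure quoted above.

```python
# Back-of-envelope check (not from the paper): the average density implied by
# 2.17 solar masses in a ~30 km diameter sphere, and the mass of a 1 cm^3
# "sugar cube" of material at that density.
import math

M_SUN = 1.989e30                       # kg
mass = 2.17 * M_SUN                    # kg
radius = 15e3                          # m (half of 30 km)

volume = 4.0 / 3.0 * math.pi * radius**3
density = mass / volume                # kg per m^3

cube_mass_tonnes = density * 1e-6 / 1e3   # 1 cm^3 = 1e-6 m^3; 1 tonne = 1e3 kg
print(f"average density ~ {density:.1e} kg/m^3")
print(f"sugar-cube mass ~ {cube_mass_tonnes:.1e} tonnes")  # a few hundred million tonnes
```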

"Neutron stars are as mysterious as they are fascinating," said Thankful Cromartie, a graduate student at the University of Virginia and Grote Reber pre-doctoral fellow at the National Radio Astronomy Observatory in Charlottesville, Virginia. "These city-sized objects are essentially ginormous atomic nuclei. They are so massive that their interiors take on weird properties. Finding the maximum mass that physics and nature will allow can teach us a great deal about this otherwise inaccessible realm in astrophysics."

Pulsars get their name because of the twin beams of radio waves they emit from their magnetic poles. These beams sweep across space in a lighthouse-like fashion. Some rotate hundreds of times each second. Since pulsars spin with such phenomenal speed and regularity, astronomers can use them as the cosmic equivalent of atomic clocks. Such precise timekeeping helps astronomers study the nature of spacetime, measure the masses of stellar objects, and improve their understanding of general relativity.

In the case of this binary system, which is nearly edge-on in relation to Earth, this cosmic precision provided a pathway for astronomers to calculate the mass of the two stars.

As the ticking pulsar passes behind its white dwarf companion, there is a subtle (on the order of 10 millionths of a second) delay in the arrival time of the signals. This phenomenon is known as "Shapiro Delay." In essence, gravity from the white dwarf star slightly warps the space surrounding it, in accordance with Einstein's general theory of relativity. This warping means the pulses from the rotating neutron star have to travel just a little bit farther as they wend their way around the distortions of spacetime caused by the white dwarf.

Astronomers can use the amount of that delay to calculate the mass of the white dwarf. Once the mass of one of the co-orbiting bodies is known, it is a relatively straightforward process to accurately determine the mass of the other.
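
A minimal sketch of the key ingredient in that calculation, the Shapiro delay formula, is below; the companion mass and orbital inclination used here are illustrative assumptions, not values from the Nature Astronomy paper.

```python
# Illustrative sketch of the Shapiro delay (companion mass and inclination
# are assumptions for illustration, not figures from the paper). Near
# conjunction the extra light-travel time is roughly
#     dt = -(2 * G * M_c / c**3) * ln(1 - s * sin(phi)),
# with M_c the companion mass, s = sin(inclination), phi the orbital phase.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

M_c = 0.26 * M_SUN                  # assumed white-dwarf companion mass
s = math.sin(math.radians(89.0))    # assumed, nearly edge-on orbit

def shapiro_delay(phi_deg):
    phi = math.radians(phi_deg)
    return -2.0 * G * M_c / c**3 * math.log(1.0 - s * math.sin(phi))

# Delay when the pulsar passes almost directly behind the white dwarf:
print(f"{shapiro_delay(89.9) * 1e6:.1f} microseconds")  # tens of microseconds for these assumptions
```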

Cromartie is the principal author on a paper accepted for publication in Nature Astronomy. The GBT observations were part of the research for her doctoral thesis, which proposed observing this system at two special points in the stars' mutual orbit to accurately calculate the mass of the neutron star.

"The orientation of this binary star system created a fantastic cosmic laboratory," said Scott Ransom, an astronomer at NRAO and coauthor on the paper. "Neutron stars have this tipping point where their interior densities get so extreme that the force of gravity overwhelms even the ability of neutrons to resist further collapse. Each "most massive" neutron star we find brings us closer to identifying that tipping point and helping us to understand the physics of matter at these mindboggling densities."

Credit: 
Green Bank Observatory

Sweet success of parasite survival could also be its downfall

University of York scientists are part of an international team which has discovered how a parasite responsible for spreading a serious tropical disease protects itself from starvation once inside its human host.

The findings provide a new understanding of the metabolism of the Leishmania parasite, and this new knowledge could potentially be used in its eradication.

The disease the parasite causes is called leishmaniasis and it is spread by the bite of sand flies. It kills between 20,000 and 40,000 people every year.

In a collaboration between the University of Melbourne and the University of York, researchers found that Leishmania make an unusual carbohydrate reserve, called mannogen, that protects them from fluctuating nutrient levels in the host, enabling their survival.

They then identified a new family of enzymes that use sugars scavenged from the host to make mannogen. University of York researchers determined the 3D structure of these enzymes, which allowed the team to map the evolution of this new enzyme family, whose members acquired the ability to both make and degrade mannogen and thereby regulate the metabolism of these pathogens.

This knowledge is now being used to identify drug molecules that bind and block enzyme activity and may be used to develop new therapies.

Professor Gideon Davies from the University of York's Department of Chemistry said: "Our three-dimensional structural insight provides new opportunities for drug design against this pathogen. We look forward to targeting the disease in future. The team of PhD students and York Chemistry MChem project students did a fantastic job of the structural analyses."

Professor Malcolm McConville from the University of Melbourne said: "As mannogen metabolism is critical for the survival of these parasites, developing inhibitors to block the enzymes that regulate this carbohydrate store is a potential way to specifically kill Leishmania parasites. We can exploit the parasite's food preference for mannogen and specifically target this metabolic pathway, without side-effects to humans.

"Similar enzymes and carbohydrates are made by other pathogens, such as the bacteria that cause tuberculosis, and this work may contribute to developing new classes of drugs to treat other infectious diseases."

Leishmania are able to persist for many years in their human host by hiding inside immune cells, such as macrophages. Macrophages are normally responsible for killing invading pathogens, but Leishmania are able to avoid this fate and grow stealthily within these host cells, eventually forming large 'granuloma' lesions that can lead to open ulcerating sores, organ damage and, in some cases, death.

Many people who carry the parasite remain asymptomatic, but immunosuppressed individuals, for example those with HIV/AIDS or suffering from malnutrition, are particularly vulnerable. Until recently very little was known about how Leishmania managed to grow within these host cells and resist most antibiotics.

Leishmaniasis is increasing in many regions of the world, including the Middle East, Africa and Central America where there are regional conflicts and breakdown in health services. There is currently no effective vaccine against it.

Credit: 
University of York

Measuring ethanol's deadly twin

image: The millimeter-sized black dot in the center of the gold section is the alcohol sensor.

Image: 
Van den Broek J et al., Nature Communications, 2019

Methanol is sometimes referred to as ethanol's deadly twin. While the latter is the intoxicating ingredient in wine, beer and schnapps, the former is a chemical that becomes highly toxic when metabolised by the human body. Even a relatively small amount of methanol can cause blindness or even prove fatal if left untreated.

Cases of poisoning from the consumption of alcoholic beverages tainted with methanol occur time and again, particularly in developing and emerging countries, because alcoholic fermentation also produces small quantities of methanol. Whenever alcohol is unprofessionally distilled in backyard operations, significant amounts of methanol may end up in the liquor. Beverages that have been adulterated with windscreen washer fluid or other liquids containing methanol are another potential cause of poisoning.

Beverage analyses and the breath test

Until now, methanol could be distinguished from ethanol only in a chemical analysis laboratory. Even hospitals require relatively large, expensive equipment in order to diagnose methanol poisoning. "These appliances are rarely available in emerging and developing countries, where outbreaks of methanol poisoning are most prevalent," says Andreas Güntner, a research group leader at the Particle Technology Laboratory of ETH Professor Sotiris Pratsinis and a researcher at the University Hospital Zurich.

He and his colleagues have now developed an affordable handheld device based on a small metal oxide sensor. It is able to detect adulterated alcohol within two minutes by "sniffing out" methanol and ethanol vapours from a beverage. Moreover, the tool can also be used to diagnose methanol poisoning by analysing a patient's exhaled breath. In an emergency, this helps ensure the appropriate measures are taken without delay.

Separating methanol from ethanol

There's nothing new about using metal oxide sensors to measure alcoholic vapours. However, this method was unable to distinguish between different alcohols, such as ethanol and methanol. "Even the breathalyser tests used by the police measure only ethanol, although some devices also erroneously identify methanol as ethanol," explains Jan van den Broek, a doctoral student at ETH and the lead author of the study.

First, the ETH scientists developed a highly sensitive alcohol sensor using nanoparticles of tin oxide doped with palladium. Next, they used a trick to differentiate between methanol and ethanol. Instead of analysing the sample directly with the sensor, the two types of alcohol are first separated in an attached tube filled with a porous polymer, through which the sample air is sucked by a small pump. As its molecules are smaller, methanol passes through the polymer tube more quickly than ethanol.
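
The sketch below illustrates that principle in code - identifying each alcohol from when its signal peak arrives at the sensor - though the retention times and signal shapes are invented, and this is not the ETH device's actual data processing.

```python
# Toy sketch of the separation idea (not the ETH device's firmware): the two
# alcohols reach the sensor at different times after passing through the
# polymer column, so the arrival time of each signal peak identifies the
# species. Retention times and peak shapes below are invented.
import numpy as np

t = np.linspace(0, 120, 1201)                    # seconds

def peak(t0, height, width=5.0):
    return height * np.exp(-((t - t0) ** 2) / (2 * width ** 2))

# Assumed behaviour: methanol elutes earlier (smaller molecule), ethanol later.
signal = peak(t0=30.0, height=0.8) + peak(t0=75.0, height=2.5)

# Crude peak picking: local maxima above a noise threshold.
idx = np.where((signal[1:-1] > signal[:-2]) &
               (signal[1:-1] > signal[2:]) &
               (signal[1:-1] > 0.1))[0] + 1
for i in idx:
    label = "methanol-like (early)" if t[i] < 50 else "ethanol-like (late)"
    print(f"peak at {t[i]:5.1f} s -> {label}")
```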

The measuring device proved to be exceptionally sensitive. In laboratory tests, it detected even trace amounts of methanol contamination selectively in alcoholic beverages, down to the low legal limits. Furthermore, the scientists analysed breath samples from a person who had previously drunk rum. For test purposes, the researchers subsequently added a small quantity of methanol to the breath sample.

Patent pending

The researchers have filed a patent application for the measuring method. They are now working to integrate the technology into a device that can be put to practical use. "This technology is low cost, making it suitable for use in developing countries as well. Moreover, it's simple to use and can be operated even without laboratory training, for example by authorities or tourists," Güntner says. It is also ideal for quality control in distilleries.

Methanol is more than just a nuisance in conjunction with alcoholic beverages; it is also an important industrial chemical - and one that might come to play an even more important role: methanol is being considered as a potential future fuel, since vehicles can be powered with methanol fuel cells. So a further application for the new technology could be as an alarm sensor to detect leaks in tanks.

Credit: 
ETH Zurich

Violent video games blamed more often for school shootings by white perpetrators

WASHINGTON - People are more likely to blame violent video games as a cause of school shootings by white perpetrators than by African American perpetrators, possibly because of racial stereotypes that associate minorities with violent crime, according to new research published by the American Psychological Association.

Researchers analyzed more than 200,000 news articles about 204 mass shootings over a 40-year period and found that video games were eight times more likely to be mentioned when the shooting occurred at a school and the perpetrator was a white male than when the shooter was an African American male. Another experiment conducted with college students had similar findings.

"When a violent act is carried out by someone who doesn't match the racial stereotype of what a violent person looks like, people tend to seek an external explanation for the violent behavior," said lead researcher Patrick Markey, PhD, a psychology professor at Villanova University. "When a white child from the suburbs commits a horrific violent act like a school shooting, then people are more likely to erroneously blame video games than if the child was African American."

The research was published in the journal Psychology of Popular Media Culture.

Numerous scientific studies have not found a link between violent video games and mass shootings, but some politicians and media outlets often cite violent video games as a potential cause, particularly for school shootings. Video games are often associated with young people even though the average age of players is in the 30s, Markey said.

"Video games are often used by lawmakers and others as a red herring to distract from other potential causes of school shootings," Markey said. "When a shooter is a young white male, we talk about violent video games as a cause for the shooting. When the shooter is an older man or African American, we don't."

In one experiment in this study, 169 college students (65 percent female, 88 percent white) read a mock newspaper article describing a fictional mass shooting by an 18-year-old male youth who was described as an avid fan of violent video games. Half of the participants read an article featuring a small mug shot of a white shooter while the other half saw a mug shot of an African American shooter. In their responses to a questionnaire, participants who read the article with the photo of a white shooter were significantly more likely to blame video games as a factor in causing the teen to commit the school shooting than participants who saw an African American shooter.

Participants who didn't play video games also were more likely to blame violent video games for school shootings. Participants were asked if the perpetrator's "social environment" contributed to the school shooting, but their responses to that question didn't affect the findings.

The researchers also created a large database by collecting 204,796 news articles about 204 mass shootings in the United States dating from 1978 (a year after the release of the Atari 2600 game console) to 2018. A mass shooting was defined as having three or more victims, not including the shooter, that wasn't identifiably related to gangs, drugs or organized crime. The analysis found different results for school shootings than mass shootings in other settings. Video games were mentioned in 6.8 percent of the articles about school shootings with white perpetrators, compared with 0.5 percent for school shootings with African American perpetrators. However, video games were mentioned at essentially the same frequency in news articles about mass shootings at other locations for white shooters (1.8 percent) or African American shooters (1.7 percent).
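
A toy version of this kind of content analysis might look like the sketch below, which counts the share of articles mentioning video games by shooting location and perpetrator race; the example articles are invented and the code is not the authors' analysis pipeline.

```python
# Toy sketch of the content analysis described above (not the authors' code):
# count the share of articles mentioning video games, grouped by shooting
# location and perpetrator race. The example articles are invented.
articles = [
    {"location": "school", "race": "white", "text": "Critics blamed violent video games."},
    {"location": "school", "race": "white", "text": "The community mourned the victims."},
    {"location": "school", "race": "African American", "text": "Police cited gang ties."},
    {"location": "other", "race": "white", "text": "A workplace dispute escalated."},
    {"location": "other", "race": "African American", "text": "The motive remains unclear."},
]

def mention_rate(subset):
    """Fraction of articles in `subset` whose text mentions video games."""
    return sum("video game" in a["text"].lower() for a in subset) / len(subset)

for location in ("school", "other"):
    for race in ("white", "African American"):
        group = [a for a in articles if a["location"] == location and a["race"] == race]
        if group:
            print(f"{location} / {race}: {mention_rate(group):.0%} mention video games")
```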

In 2015, the APA Council of Representatives issued a resolution based on a task force report about violent video games. The resolution noted that more than 90 percent of children in the United States played video games, and 85 percent of video games on the market contained some form of violence. The task force's review of relevant research found an association between violent video game exposure and some aggressive behavior but insufficient research linking violent video games to lethal violence. However, some recent research hasn't found any link between violent video games and aggressive behavior.

Blaming violent video games for school shootings by white perpetrators could be a sign of a larger racial issue where African American perpetrators are assigned a greater degree of culpability for their crimes, which could lead to unfair treatment in the justice system, Markey said.

Credit: 
American Psychological Association

Physicians report high refusal rates for the HPV vaccine and need for improvement

Despite its proven success at preventing cancer, many adolescents are still not getting the HPV vaccine. A new study from the University of Colorado School of Medicine at the Anschutz Medical Campus shows that physicians' delivery and communication practices must improve to boost vaccination completion rates.

Health care providers must also learn to deal with parents hesitant to get their children vaccinated with the HPV vaccine.

The study, published today in Pediatrics, is the first to examine pediatricians' and family physicians' delivery practices for the vaccine since the new 2-dose schedule was introduced for adolescents aged 11 or 12 years.

"A physician recommendation is one of the most important factors in vaccine acceptance by parents," said Allison Kempe, MD, MPH, lead author and professor of pediatrics at the University of Colorado School of Medicine. "However, we're seeing a lack of understanding from healthcare providers about the need for vaccination early in adolescence and high rates of refusal on the part of parents. The vaccine is underutilized, with less than half of American adolescents completing the vaccination. We need to maximize methods of introducing the vaccine that we know to be more effective, as well as the use of reminder and delivery methods at the practice in order to improve this rate."

Every year, HPV causes over 33,500 cases of cancer in women and men in the United States, according to the Centers for Disease Control and Prevention.

"The earlier someone is vaccinated, the better the immune system responds. It also increases the chances of being vaccinated before having exposure to HPV strains," Kempe said. "If we can increase the rate of vaccination in early adolescence, then we can prevent cancers that develop in later years."

The study surveyed 588 pediatricians and family physicians and found that refusal rates from parents remain high, especially for 11 to 12-year-olds, the target population for vaccination.

But physicians who use a "presumptive style" approach have higher acceptance rates. Presumptive style means physicians introduce the HPV vaccine and recommend it in the same manner and as strongly as the other recommended adolescent vaccines for meningitis and Tdap.

For example, a doctor could say, "We've got three vaccines today: Tdap, HPV and Meningitis," rather than isolating HPV as an option that is not as important.

Still, the survey found some encouraging signs:

Despite a high refusal rate, the proportion of pediatricians who strongly recommend the vaccine increased from 60% in 2013 to 85% in 2018 for 11- to 12-year-old females, and from 52% to 83% for 11- to 12-year-old males.

Some 89% of pediatricians and 79% of family physicians reported more adolescents under age 15 are completing the HPV series now that only 2 doses are recommended.

Along with improving physician communication styles, HPV delivery could also be optimized by increased use of standing orders and alert systems in the medical record to remind providers of the need for vaccination at the point of care.

Credit: 
University of Colorado Anschutz Medical Campus

Dartmouth research advances noise cancelling for quantum computers

HANOVER, N.H. - September 16, 2019 - A team from Dartmouth College and MIT has designed and conducted the first lab test to successfully detect and characterize a class of complex, "non-Gaussian" noise processes that are routinely encountered in superconducting quantum computing systems.

The characterization of non-Gaussian noise in superconducting quantum bits is a critical step toward making these systems more precise.

The joint study, published in Nature Communications, could help accelerate the realization of quantum computing systems. The experiment was based on earlier theoretical research conducted at Dartmouth and published in Physical Review Letters in 2016.

"This is the first concrete step toward trying to characterize more complicated types of noise processes than commonly assumed in the quantum domain," said Lorenza Viola, a professor of physics at Dartmouth who led the 2016 study as well as the theory component of the present work. "As qubit coherence properties are being constantly improved, it is important to detect non-Gaussian noise in order to build the most precise quantum systems possible."

Quantum computers differ from traditional computers by going beyond the binary "on-off" sequencing favored by classical physics. Quantum computers rely on quantum bits - also known as qubits - that are built out of atomic and subatomic particles.

Essentially, qubits can be placed in a combination of both "on" and "off" positions at the same time. They can also be "entangled," meaning that the properties of one qubit can influence another over a distance.

Superconducting qubit systems are considered one of the leading contenders in the race to build scalable, high-performing quantum computers. But, like other qubit platforms, they are highly sensitive to their environment and can be affected by both external noise and internal noise.

External noise in quantum computing systems could come from control electronics or stray magnetic fields. Internal noise could come from other uncontrolled quantum systems such as material impurities. The ability to reduce noise is a major focus in the development of quantum computers.

"The big barrier preventing us from having large-scale quantum computers now is this noise issue." said Leigh Norris, a postdoctoral associate at Dartmouth that co-authored the study. "This research moves us toward understanding the noise, which is a step toward cancelling it, and hopefully having a reliable quantum computer one day."

Unwanted noise is often described in terms of simple "Gaussian" models, in which the probability distribution of the random fluctuations of noise creates a familiar, bell-shaped Gaussian curve. Non-Gaussian noise is harder to describe and detect because it falls outside the range of validity of these assumptions and because there may simply be less of it.

Whenever the statistical properties of noise are Gaussian, a small amount of information can be used to characterize the noise - namely, the correlations at only two distinct times, or equivalently, in terms of a frequency-domain description, the so-called "noise spectrum."

Thanks to their high sensitivity to the surrounding environment, qubits can be used as sensors of their own noise. Building on this idea, researchers have made progress in developing techniques for identifying and reducing Gaussian noise in quantum systems, similar to how noise-cancelling headphones work.

Although non-Gaussian noise is less common than Gaussian noise, identifying and cancelling it is an equally important challenge in the optimal design of quantum systems.

Non-Gaussian noise is distinguished by more complicated patterns of correlations that involve multiple points in time. As a result, much more information about the noise is required in order for it to be identified.

In the study, researchers were able to approximate characteristics of non-Gaussian noise using information about correlations at three different times, corresponding to what is known as the "bispectrum" in the frequency domain.
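
The sketch below illustrates the distinction in code: the power spectrum is built from two-time correlations, whereas a bispectrum estimate correlates three frequency components and averages to zero for purely Gaussian noise. The simulated noise and the segment-averaging scheme are illustrative assumptions, not the Dartmouth-MIT protocol.

```python
# Illustrative sketch (not the Dartmouth-MIT protocol): for simulated noise
# records, the power spectrum uses two-time correlations, while a bispectrum
# estimate correlates three frequency components and averages to zero for
# purely Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n, nseg = 4096, 64
gaussian = rng.normal(size=n)
non_gaussian = gaussian**2 - 1.0   # squared noise is skewed, hence non-Gaussian

def power_spectrum(x):
    X = np.fft.rfft(x)
    return np.abs(X) ** 2 / len(x)

def bispectrum_point(x, k1, k2, nseg=nseg):
    """Segment-averaged estimate of B(k1, k2) = <X(k1) X(k2) conj(X(k1 + k2))>."""
    seglen = len(x) // nseg
    acc = 0.0 + 0.0j
    for i in range(nseg):
        X = np.fft.fft(x[i * seglen:(i + 1) * seglen])
        acc += X[k1] * X[k2] * np.conj(X[k1 + k2]) / seglen
    return acc / nseg

print("spectrum (Gaussian), bin 10 :", round(power_spectrum(gaussian)[10], 3))
print("|bispectrum| (Gaussian)     :", round(abs(bispectrum_point(gaussian, 10, 20)), 3))
print("|bispectrum| (non-Gaussian) :", round(abs(bispectrum_point(non_gaussian, 10, 20)), 3))
```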

"This is the first time that a detailed, frequency-resolved characterization of non-Gaussian noise has been able to be done in a lab with qubits. This result significantly expands the toolbox that we have available for doing accurate noise characterization and therefore crafting better and more stable qubits in quantum computers," said Viola.

A quantum computer that cannot sense non-Gaussian noise could be easily confused between the quantum signal it is supposed to process and unwanted noise in the system. Protocols for achieving non-Gaussian noise spectroscopy did not exist until the Dartmouth study in 2016.

While the MIT experiment to validate the protocol won't immediately make large-scale quantum computers practically viable, it is a major step toward making them more precise.

"This research started on the white board. We didn't know if someone was going to be able to put it into practice, but despite significant conceptual and experimental challenges, the MIT team did it," said Felix Beaudoin, a former Dartmouth postdoctoral student in Viola's group who also played an instrumental role in bridging between theory and experiment in the study.

"It's been an absolute joy to collaborate with Lorenza Viola and her fantastic theory team at Dartmouth," said William Oliver, a professor of physics at MIT. "We've been working together for years now on several projects and, as quantum computing transitions from scientific curiosity to technical reality, I anticipate the need for more such interdisciplinary and interinstitutional collaboration."

According to the research team, there are still years of additional work required in order to perfect the detection and cancellation of noise in quantum systems. In particular, future research will move from a single-sensor system to a two-sensor system, enabling the characterization of noise correlations across different qubits.

Credit: 
Dartmouth College

Hope for coral recovery may depend on good parenting

video: Scientists at USC and in Australia study coral to learn how they adapt to global warming.

Image: 
University of Southern California

The fate of the world's coral reefs could depend on how well the sea creatures equip their offspring to cope with global warming.

About half the world's coral has been lost due to warming seas that make their world hostile. Instead of vivid and flower-like, corals bleach pale as temperatures rise. This happens because the peculiar animal cohabitates with algae, which are expelled under stress. When that happens, coral lose their color and a life partner that sustains them, so they starve.

Yet, hope occurs in aquariums at the USC campus near downtown Los Angeles and at the Australian Institute of Marine Science. There, biologists study coral's unusual ability to shuffle their so-called symbionts -- the algae colonies inside their cells -- as a coping mechanism to potentially gain an advantage in a changing environment.

For the first time, the researchers have shown that adult coral can pass along this ability to shuffle their symbionts to their offspring. It's a process that occurs in addition to traditional DNA transfer, and it had never been seen until scientists began captive breeding research in labs on both sides of the Pacific Ocean.

"What we're finding is that corals can pass their shuffled complement of algal partners, or symbionts, to their offspring to bestow a potential survival advantage, and that's a new discovery," said Carly Kenkel, an assistant professor of biology at the USC Dornsife College of Letters, Arts and Sciences. "We care about this because coral reefs do so much for us. A reef provides breakwater for storms, fish protein people need and biodiversity we love and find beautiful."

The findings appear in a research paper published today in Scientific Reports.

Scientists have known for a long time that coral and algae live in mutual harmony. The two creatures live as one: a soft-bodied polyp animal, similar to a sea anemone or jellyfish, and algae living within its cells. The animal provides the algae with safety and substances for photosynthesis; the algae produce oxygen, help remove wastes and supply the coral with energy. Corals use the energy to make calcium carbonate, the rigid architecture that builds reefs, while the algae contribute to the creatures' jewel-tone hues that make coral spectacular.

They live amicably together until environmental stress disrupts the partnership. When this happens, some coral succumb whereas others are capable of shuffling their symbionts, favoring some algae over others depending on water conditions, competition or available nutrients.

"It's a messy divorce," Kenkel said.

Yet in this breakup, the kids could benefit. Kenkel wanted to understand if parent coral could pass along the reshuffled symbionts to their offspring. It's a tricky proposition because the algae exist independently of the cell nucleus and therefore are not part of the nuclear DNA transfer from parent to offspring during reproduction.

Her curiosity took her to the Great Barrier Reef and Orpheus Island, off northeast Australia, where she joined scientists from James Cook University and the Australian Institute of Marine Science. The research team focused on a particular coral, Montipora digitata, which is common to the western Pacific Ocean and large swaths of the Great Barrier Reef.

The scientists focused on two consecutive spawning seasons: one under normal conditions during 2015 and the other during the global mass coral bleaching event of 2016. Using DNA sequencing, the scientists screened which corals showed potential to shuffle their symbionts and if the change was reflected in gametes. They found that the numbers and types of algal cells differed considerably from one year to the next, as measured in cell densities and photosynthesis output. It's a finding consistent with other research.

Montipora digitata coral can package algae in their eggs when they reproduce. In looking at the eggs between the two years, they discovered that rearrangements of the algae communities in the adults were also reflected in the coral's eggs, indicating that they could be passed down to offspring from the parents.

"To our knowledge, this is the first evidence that shuffled Symbiodiniaceae (symbiont) communities ... can be inherited by offspring and supports the hypothesis that shuffling in microbial communities may serve as a mechanism of rapid coral acclimation to changing environmental conditions," the study said.

The process is perhaps similar to how mitochondrial DNA works in humans. In that analogy, the mitochondria -- energy-producing units inside the cell but outside the nucleus -- pass genetic material to offspring via the mother's egg. However, the researchers have not identified the mechanism for transfer in coral; they plan to answer that mystery in the next study.

The findings show coral may be more adaptable than thought, but is it enough?

Corals face an enormous challenge as ocean warming is increasing. According to a United Nations report, the world's coral reefs are at the epicenter for climate change impacts and species loss. If the world warms another 0.9 degrees Fahrenheit, which is likely, coral reefs will probably dwindle by 70% to 90%. A gain of 1.8 degrees, the report says, means 99% of the world's coral will be in trouble.

In some regions, the threat to coral is already severe. For example, as much as 80% of Caribbean Sea coral has been lost in the past three decades, according to the Smithsonian Institution.

"Corals have more mechanisms than we thought to deal with climate change, but they're fighting with a tiny sword against a foe that's like a tank," Kenkel said. "Their adaptability may not be enough. They need time so they can adapt."

Credit: 
University of Southern California

In mice: Transplanted brain stem cells survive without anti-rejection drugs

image: Confocal image of GRPs in the brain

Image: 
Johns Hopkins Medicine

In experiments in mice, Johns Hopkins Medicine researchers say they have developed a way to successfully transplant certain protective brain cells without the need for lifelong anti-rejection drugs.

A report on the research, published Sept. 16 in the journal Brain, details the new approach, which selectively circumvents the immune response against foreign cells, allowing transplanted cells to survive, thrive and protect brain tissue long after stopping immune-suppressing drugs.

The ability to successfully transplant healthy cells into the brain without the need for conventional anti-rejection drugs could advance the search for therapies that help children born with a rare but devastating class of genetic diseases in which myelin, the protective coating around neurons that helps them send messages, does not form normally. Approximately 1 of every 100,000 children born in the U.S. will have one of these diseases, such as Pelizaeus-Merzbacher disease. This disorder is characterized by infants missing developmental milestones such as sitting and walking, having involuntary muscle spasms, and potentially experiencing partial paralysis of the arms and legs, all caused by a genetic mutation in the genes that form myelin.

"Because these conditions are initiated by a mutation causing dysfunction in one type of cell, they present a good target for cell therapies, which involve transplanting healthy cells or cells engineered to not have a condition to take over for the diseased, damaged or missing cells," says Piotr Walczak, M.D., Ph.D., associate professor of radiology and radiological science at the Johns Hopkins University School of Medicine.

A major obstacle to replacing these defective cells is the mammalian immune system. The immune system works by rapidly identifying 'self' or 'nonself' tissues and mounting attacks to destroy nonself or "foreign" invaders. While beneficial when targeting bacteria or viruses, this response is a major hurdle for transplanted organs, tissue or cells, which are also flagged for destruction. Traditional anti-rejection drugs, which broadly and unspecifically tamp down the immune system, frequently work to fend off tissue rejection but leave patients vulnerable to infection and other side effects. Patients need to remain on these drugs indefinitely.

In a bid to stop the immune response without the side effects, the Johns Hopkins Medicine team sought ways to manipulate T cells, the system's elite infection-fighting force that attacks foreign invaders.

Specifically, Walczak and his team focused on the series of so-called "costimulatory signals" that T cells must encounter in order to begin an attack.

"These signals are in place to help ensure these immune system cells do not go rogue, attacking the body's own healthy tissues," says Gerald Brandacher, M.D., professor of plastic and reconstructive surgery and scientific director of the Vascularized Composite Allotransplantation Research Laboratory at the Johns Hopkins University School of Medicine and co-author of this study.

The idea, he says, was to exploit the natural tendencies of these costimulatory signals as a means of training the immune system to eventually accept transplanted cells as "self" permanently.

To do that, the investigators used two antibodies, CTLA4-Ig and anti-CD154, which keep T cells from beginning an attack when encountering foreign particles by binding to the T cell surface, essentially blocking the 'go' signal. This combination has previously been used successfully to block rejection of solid organ transplants in animals, but had not yet been tested for cell transplants to repair myelin in the brain, says Walczak.

In a key set of experiments, Walczak and his team injected mouse brains with the protective glial cells that produce the myelin sheath that surrounds neurons. These specific cells were genetically engineered to glow so the researchers could keep tabs on them.

The researchers then transplanted the glial cells into three types of mice: mice genetically engineered to not form the glial cells that create the myelin sheath, normal mice and mice bred to be unable to mount an immune response.

Then the researchers used the antibodies to block an immune response, stopping treatment after six days.

Each day, the researchers used a specialized camera that could detect the glowing cells and capture pictures of the mouse brains, looking for the relative presence or absence of the transplanted glial cells. Cells transplanted into control mice that did not receive the antibody treatment immediately began to die off, and their glow was no longer detected by the camera by day 21.

The mice that received the antibody treatment maintained significant levels of transplanted glial cells for over 203 days, showing they were not killed by the mouse's T cells even in the absence of treatment.

"The fact that any glow remained showed us that cells had survived transplantation, even long after stopping the treatment," says Shen Li, M.D., lead author of the study. "We interpret this result as a success in selectively blocking the immune system's T cells from killing the transplanted cells."

The next step was to see whether the transplanted glial cells survived well enough to do what glial cells normally do in the brain -- create the myelin sheath. To do this, the researchers looked for key structural differences between mouse brains with thriving glial cells and those without, using MRI images. In the images, the researchers saw that the cells in the treated animals were indeed populating the appropriate parts of the brain.

Their results confirmed that the transplanted cells were able to thrive and assume their normal function of protecting neurons in the brain.

Walczak cautioned that these results are preliminary. They were able to deliver these cells and allow them to thrive in a localized portion of the mouse brain.

In the future, they hope to combine their findings with studies on cell delivery methods to the brain to help repair the brain more globally.

Credit: 
Johns Hopkins Medicine

New research: More than half of female homicide victims are killed by their partner

image: Intimate partner homicide -- that is women who are killed by their partner -- constitutes a significant proportion of the homicide statistics in Denmark. A new and extensive research study from the Department of Forensic Medicine at Aarhus University examines all homicides in Denmark over a quarter of a century.

Image: 
Lars Kruse, Aarhus University

Out of the 536 women who were killed in Denmark between 1992 and 2016, 300 were killed by their partner. This figure corresponds to 57 per cent of all homicides with female victims.

This is shown by a survey carried out by the Department of Forensic Medicine at Aarhus University, Denmark. As part of his PhD dissertation, forensic pathologist Asser Hedegård Thomsen from the department has reviewed all Danish homicide cases in the period 1992 to 2016 - a total of 1,417 homicides. The study has just been published in the international scientific journal Forensic Science International: Synergy.

Asser Hedegård Thomsen is not surprised that intimate partner homicides are so prevalent in the statistics. This corresponds to what he sees on a daily basis as a forensic pathologist at Aarhus University.

"Gang-related homicides takes up a lot of space in the media and on the political agenda, and of course they're also serious, but compared to intimate partner homicides they actually account for a very small part of the total," he says.

Men are also killed by a partner. During the twenty-five years in question, 79 men were killed by their partner. However, the research project from Aarhus University shows that the homicides often took place after prior threats and violence against the woman. This knowledge is also supported by international studies.

"You can read about it in the police reports, but it can also be observed from the forensic medicine examinations that clearly show the injuries to the women," says Asser Hedegård Thomsen.

This is the first time since 1970 that such detailed data on homicides in Denmark has been produced. The lack of detailed, up-to-date knowledge about homicides was also what led Asser Hedegård Thomsen to begin collating and systemising data on homicides - initially alongside his full-time work as a forensic pathologist and subsequently as part of a PhD project.

"My aim is to create knowledge that can be utilised as a background for political action, and to do that we must have precise figures that people can count on," he says.

After intimate partner homicides, the next most typical homicides happen in nightlife settings, usually under the influence of alcohol or drugs. Here the majority of victims are men, he says.

According to Asser Hedegård Thomsen, the family is one of the most dangerous groups to be involved in - viewed from a forensic medicine perspective. One in four homicides in Denmark is carried out by a partner, and women are the primary victims. Asser Hedegård Thomsen's study documents that more than two-thirds of the victims of homicides within the family are women.

International research shows that although the number of homicides in Europe is falling, homicides within the family constitute an increasing proportion of the statistics, because they are not declining as steadily as crime-related homicides. According to international studies, the explanation for the decline in crime-related homicides lies in less abuse and less violence in general.

The basis for Asser Hedegård Thomsen's research is autopsy reports, police reports and photos from all homicide cases between 1992 and 2016. Information is registered about the homicide itself, the victims, perpetrators, mutual relations and motive.

The article in Forensic Science International: Synergy is the first of three which together form Asser Hedegård Thomsen's PhD dissertation. For the final two articles in the PhD dissertation, each individual lesion and organ injury suffered by the homicide victims has been registered with a view to demonstrating changes in injury patterns and measuring the effect of improved medical treatment.

Credit: 
Aarhus University

A modelling tool to rapidly predict weed spread risk

image: Each branch of Hudson pear is thickly covered in long barbed spines.

Image: 
Queensland Department of Agriculture and Fisheries.

A new statistical modelling tool will enable land management authorities to predict where invasive weed species are most likely to grow so they can find and eliminate plants before they have time to spread widely.

In the study, published in the journal Methods in Ecology and Evolution, the researchers developed a tool that uses information about the features of a weed species and the geography of the area in which it has been reported to predict the specific locations to which the weed is likely to spread first.

Dr Jens Froese, a postdoctoral fellow with CSIRO, is the lead author of the paper, co-authored with QUT's Associate Professor Grant Hamilton and Alan Pearse. Dr Froese was with QUT's Quantitative Applied Spatial Ecology Group when the research was conducted.

"When a weed is first introduced, population growth and spread is typically slow," Dr Froese said.

"This 'invasion lag' often presents the only window of opportunity where weed eradication or effective containment can be achieved, and long-term negative impacts avoided.

"Anyone who has ever battled with a bad weed infestation in their backyard knows it's best to get in early and decisively.

"Responding to new weed incursions early and rapidly is very important, but decisions in the field about where to target surveillance and control activities are often made under considerable time, knowledge and capacity constraints."

Professor Hamilton said the tool developed by the researchers used a Bayesian statistical model and GIS mapping software to predict weed spread risk quickly but accurately.

"The model looks at things like habitat suitability, where the weed is likely to grow well and reproduce, as well as habitat susceptibility, which is where weed seeds are likely to arrive from the source plants," Professor Hamilton said.

The tool would be used by land management authorities to identify at-risk areas in need of careful monitoring, and they would then decide on the best method to remove the weed.

"A key feature of our tool is that it allows land managers to evaluate these at-risk areas relatively quickly and with limited data," Professor Hamilton said.

The researchers have posted the free tool online, via a collection of web apps called riskmapr, for land management authorities and researchers to use.

The tool can be used for any weed species in the early stages of invasion.

To test the tool's effectiveness, the researchers evaluated data from two species that are extremely invasive in Queensland: the fast-growing Mexican bean tree and the spiny cactus Hudson pear.

They took locations from early reports of the weeds, used the modelling tool to predict high risk areas, and compared that to how the weeds had actually spread in later years.

"We found that locations predicted as high risk were much more likely to be actually invaded by Mexican bean tree and Hudson pear, than locations with moderate or low invasion risk.

These results suggest that surveillance and control activities can be confidently focused on high-risk areas," Dr Froese said.
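As a rough sketch of that kind of retrospective check (not the published analysis), each surveyed location can be scored with the model's risk value and the scores compared between locations that were later invaded and those that were not. The file name and column names below are assumptions for illustration.

# Compare predicted risk scores against later invasion records.
import pandas as pd
from sklearn.metrics import roc_auc_score

obs = pd.read_csv("later_surveys.csv")   # hypothetical: one row per surveyed location
# obs["risk"]: model risk score at the location when predictions were made
# obs["invaded"]: 1 if the weed was recorded there in later surveys, else 0

# How well do the risk scores separate invaded from non-invaded locations?
auc = roc_auc_score(obs["invaded"], obs["risk"])
print(f"Discrimination between invaded and non-invaded locations: AUC = {auc:.2f}")

# Mean predicted risk by outcome gives a simpler view of the same comparison.
print(obs.groupby("invaded")["risk"].mean())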

Credit: 
Queensland University of Technology

Reduced-dose IMRT with cisplatin meets predetermined benchmarks for PFS and swallowing-related QOL

Results of the NRG Oncology clinical trial NRG-HN002 indicated that the combination of intensity-modulated radiotherapy (IMRT) and cisplatin was able to meet acceptability criteria for progression-free survival (PFS) and swallowing-related quality of life for patients who have p16-positive, non-smoking-associated oropharyngeal cancer. These results were highlighted during the plenary session at the American Society for Radiation Oncology's (ASTRO) Annual Meeting in September 2019, and the presentation received the ASTRO and ABR Foundation's Steven A. Leibel Memorial Award at the meeting. The abstract was one of four chosen from over 3,000 submitted abstracts for the plenary session.

NRG-HN002 was designed with two parallel experimental arms, consisting of either IMRT with cisplatin over six weeks or modestly accelerated IMRT alone over five weeks. The primary goal of the trial was to determine which treatment option would achieve an acceptable PFS rate without unacceptable swallowing-related quality of life as measured by the MD Anderson Dysphagia Inventory (MDADI). Additionally, overall survival (OS) was tracked as a secondary endpoint on both treatment arms. Following treatment, the 2-year PFS rate was 90.5% and the 1-year mean MDADI score was 85.3 points for patients who received IMRT and cisplatin. These rates indicate that IMRT and cisplatin treatment met the acceptability criteria for both 2-year PFS and 1-year MDADI. Patients on the IMRT and cisplatin arm reported higher rates of grade 3 or greater acute toxicities than patients who received IMRT alone, but grade 3 or greater late toxicity rates and overall survival rates were statistically similar between the treatment arms.

"Based on the study design and the 2-year PFS estimate of 90.5% for the IMRT and cisplatin arm, there is data evidence to conclude that the 2-year PFS for the IMRT and cisplatin arm in this population is greater than 85%. In the radiation alone arm, there is not enough data evidence to conclude that the 2-year PFS in this population is greater than 85%, despite the 2-year PFS estimate of 87.6% in this arm. Both treatment arms were able to pass the MDADI threshold," stated Sue S. Yom, MD, PhD, of the University of California, San Francisco and principal investigator of NRG-HN002.

NRG-HN002 enrolled 306 eligible patients, who were randomly assigned to receive either IMRT with cisplatin (157) or IMRT alone (149). Of the 157 patients assigned to receive IMRT and cisplatin, 80.9% completed 5 or more cisplatin cycles and 72.6% received at least 200 mg/m2. On the IMRT and cisplatin arm, the reported rates of acute toxicity were 15.1% grade 4 and 64.5% grade 3, and the rates of late toxicity were 1.3% grade 4 and 20.0% grade 3. Patients on the IMRT alone arm experienced 2.0% grade 4 and 50.3% grade 3 acute toxicities, and 1.4% grade 4 and 16.7% grade 3 late toxicities. No grade 5 toxicities were reported on either arm. Estimated 2-year OS rates were 96.7% for IMRT and cisplatin and 97.3% for IMRT alone. Nine patients withdrew consent from the trial and five participants did not have 2-year assessments to include in the results.

Credit: 
NRG Oncology

Cosmetic changes are equivalent after WBI vs PBI for women with early stage breast cancer

Results from the Quality of Life substudy of the NRG Oncology clinical trial NSABP B-39/RTOG 0413 indicate that women rated post-lumpectomy partial breast irradiation (PBI) as equivalent to whole breast irradiation (WBI) in terms of cosmetic outcomes and satisfaction from baseline to three years following radiotherapy treatment. Treating physicians from the accruing sites rated PBI as inferior to WBI, while physicians who performed central review of digital photos, blinded to treatment arm and time point, rated the cosmetic outcome of PBI as equivalent to WBI. These results were presented during the plenary session at the annual meeting of the American Society for Radiation Oncology (ASTRO). The abstract was one of four chosen from over 3,000 submitted abstracts for the plenary session.

The initial results of this phase III trial were reported at the San Antonio Breast Cancer Symposium in 2018 and indicated that, although outcomes were close, PBI was not considered equivalent to WBI because the hazard ratio between treatment arms fell short of meeting statistical equivalence at 10 years following treatment. Due to the small differences in ipsilateral breast tumor recurrence (IBTR) rates of less than 1% between trial arms, researchers determined that PBI treatment could still be beneficial for a portion of breast cancer patients undergoing lumpectomy for breast conservation.

"It is crucial that we provide breast cancer patients with the most up-to-date information possible as they are facing decisions about radiation treatment options for breast-conservation. Partial breast irradiation delivered over several days has the potential to reduce the time commitment, burden and healthcare costs associated with WBI. For those breast cancer patients that PBI and WBI result in similar cancer control, the appearance of the breast after treatment may be the key factor in selecting the radiation method they will pursue after lumpectomy. This affects thousands of breast cancer patients annually," stated Julia R. White, MD, of the Ohio State University Comprehensive Cancer Center and the lead author of the NRG-NSABP B-39/RTOG 0413 manuscript. "This substudy of NSABP B-39/RTOG 0413 was intended to address these questions regarding cosmetic outcome from PBI and WBI to inform patients and providers who are facing decisions about which radiotherapy method to choose following breast conserving surgery"

NRG Oncology's Quality of Life substudy of NSABP B-39/RTOG 0413 was designed to determine patient-reported outcomes, and the focus of this analysis is the cosmetic outcome of PBI versus WBI. Women were analyzed separately depending on whether or not they received chemotherapy as part of their treatment, with the primary focus of this presentation on the outcomes with the chemotherapy-use groups combined. Of the 900 women analyzed from this trial, 420 received chemotherapy and 480 did not. Cosmetic outcome was measured by patient-rated global cosmetic score (GCS) and satisfaction, physician-rated GCS, and digital photos of the breasts, all collected from baseline through 1 year and again at 3 years following radiotherapy. The digital photos documenting cosmetic outcome at each time point were scored for GCS by central physician review blinded to treatment, breast treated, chemotherapy use, and time point.

The change in the global cosmetic score over time in the PBI and WBI groups was examined using longitudinal mixed models controlling for chemotherapy use (yes vs. no), race (white vs. not white), age (continuous), axillary dissection (yes vs. no), and the treatment-by-time-point interaction. Based on the patient-rated GCS in the entire group, PBI was equivalent to WBI in cosmetic outcome. Based on the GCS rated by physicians at the accruing sites, PBI was worse than WBI at 36 months, but the change in GCS over time was equivalent. Cosmetic outcomes based on GCS rated by central review of digital photos by physicians blinded to treatment arm and time point indicated that PBI and WBI were equivalent.
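As a rough sketch of what such a longitudinal mixed model looks like in code (not the trial's actual analysis), the fixed effects mirror the covariates listed above and a random intercept per patient accounts for repeated measures over the time points. The data file and column names ("gcs", "arm", "timepoint", "chemo", "race", "age", "axillary", "patient_id") are assumptions for illustration.

# Minimal sketch of a longitudinal mixed model with statsmodels;
# illustrative only, not the NSABP B-39/RTOG 0413 analysis code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cosmesis_long.csv")   # hypothetical long-format data: one row per patient per time point

# Fixed effects: treatment arm, time point and their interaction, plus
# chemotherapy use, race, age and axillary dissection; random intercept
# per patient handles the repeated measurements.
model = smf.mixedlm(
    "gcs ~ arm * timepoint + chemo + race + age + axillary",
    data=df,
    groups=df["patient_id"],
)
result = model.fit()
print(result.summary())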

Credit: 
NRG Oncology