Culture

Researchers and rats play 'hide and seek,' illuminating playful behavior in animals

Rats can be taught to play hide and seek with humans and can become quite skilled at the game, according to a new study, which presents a novel paradigm for studying the neurobiology of playful behavior in animals. The inherent characteristics of animal play – that it is free and provides no benefit beyond the game itself – make it difficult to evaluate with the traditional methods of neuroscience, which often rely on strict control and conditioning. As a result, very little is known about the prevalence or neural basis of playful behaviors in animals.

Annika Reinhold and colleagues taught rats a simplified, rat-versus-human version of "Hide and Seek," a game played by humans across cultures worldwide. Within a few weeks, the rats were not only able to play the game but learned to alternate between hiding and seeking roles, performing each at a highly proficient level. According to Reinhold et al., when seeking, the animals learned to look for a hidden human and to keep looking until they found them; when hiding, the rats remained in place until discovered by the human player. Rather than food, the authors rewarded successful hiding and seeking with playful social interactions, such as tickling, petting or rough-and-tumble-like play.

The results show that the animals became more strategic players over time – employing systematic searches, using visual cues and investigating the past hiding places of their human counterparts. When hiding, they remained silent and changed their locations, preferring to conceal themselves in opaque cardboard boxes rather than transparent ones. The authors also observed rat vocalizations unique to each role, and associated neuronal recordings revealed intense activity in the prefrontal cortex that varied with game events.

Credit: 
American Association for the Advancement of Science (AAAS)

JILA's novel atomic clock design offers 'tweezer' control

image: JILA/NIST physicist Adam Kaufman adjusts the set-up for a laser that controls and cools the strontium atoms in the optical tweezer clock. The atoms are trapped individually by 10 tweezers -- laser light focused into tiny spots -- inside the square orange container behind Kaufman's hand.

Image: 
Burrus/NIST

JILA physicists have demonstrated a novel atomic clock design that combines near-continuous operation with strong signals and high stability, features not previously found together in a single type of next-generation atomic clock. The new clock, which uses laser "tweezers" to trap, control and isolate the atoms, also offers unique possibilities for enhancing clock performance using the tricks of quantum physics.

Described in a paper to be published online Sept. 12 by the journal Science, the new clock platform is an array of up to 10 strontium atoms confined individually by 10 optical tweezers, which are created by an infrared laser beam aimed through a microscope and deflected into 10 spots.

JILA is a joint research and training institute operated by the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder.

While JILA researchers have yet to fully evaluate the new clock's performance, preliminary data suggest the design is promising. The tweezer clock is "on duty" self-verifying its performance 96% of the time because it needs little downtime to prepare new atoms, and the atoms are well-isolated so they are less likely to interfere with one another. Both of these strengths are shared with one of the world's leading clocks, a clock based on a single ion (electrically charged atom). The tweezer clock also can provide the strong signals and stability of a multi-atom lattice clock, which traps atoms in a grid of laser light.

"The tweezer design's long-term promise as a competitive clock is rooted in its unique balancing of these capabilities," JILA/NIST physicist and project leader Adam Kaufman said.

Next-generation atomic clocks stabilize the color, or frequency, of a laser to atoms "ticking" between two energy levels. The tweezer clock traps and controls atoms individually to maintain ticking stability and detects this behavior without losing them, and thus can reuse the same atoms many times without needing to constantly reload new ones.

"The tweezer design addresses various issues with other atomic clocks," Kaufman said. "Using our technique, we can hold onto atoms and reuse them for as long as 16 seconds, which improves the duty cycle--the fraction of time spent using the atoms' ticking to correct the laser frequency--and precision. The tweezer clock can also get a single atom very rapidly into a trap site, which means there is less interference and you get a more stable signal for a longer time."

NIST and JILA researchers have been building next-generation atomic clocks for many years. These clocks operate at optical frequencies, which are much higher than current time standards based on microwave frequencies. The research is helping to prepare for the future international redefinition of the second, which has been based on the cesium atom since 1967. Optical clocks also have applications beyond timekeeping such as measuring Earth's shape based on gravity measurements (called geodesy), searching for the elusive dark matter thought to make up most of the matter in the universe, and enhancing quantum information sciences.

To create the tweezer clock, an infrared laser beam is aimed into a microscope and focused to a small spot. Radio waves at 10 different frequencies are applied sequentially to a special deflector to create 10 spots of light for trapping individual atoms. The traps are refilled every few seconds from a pre-chilled cloud of atoms overlapped with the tweezer light.

The atoms held by the tweezers are excited by a laser stabilized by a silicon crystal cavity, in which light bounces back and forth at a specific frequency. This "clock laser" light--provided by co-author and NIST/JILA Fellow Jun Ye's lab--is applied perpendicular to the tweezer light, along with an applied magnetic field. Non-destructive imaging reveals whether the atoms are ticking properly; the atoms only emit light, or fluoresce, when in the lower-energy state.

Too many atoms in the system can lead to collisions that destabilize the clock, so to get rid of extra atoms, the researchers apply a pulse of light to create weakly bound molecules, which then break apart and escape the trap. This leaves each tweezer site either empty or holding a single atom; on each run of the experiment, a given site has about a 50% chance of containing one atom. Having at most one atom per site keeps the ticking stable for longer time periods.
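The roughly 50% single-atom occupancy described above is simple binomial arithmetic; as a rough illustration (the function and parameters here are hypothetical, not from the paper), a short Python sketch estimates how many of the 10 tweezer sites hold an atom on a typical run:

```python
import random

def mean_filled_sites(n_sites=10, p_single=0.5, trials=10_000, seed=42):
    """Simulate stochastic loading: after the light pulse removes atom
    pairs, each site independently holds one atom (probability
    p_single) or sits empty. Returns the average number of filled
    sites over many simulated runs."""
    rng = random.Random(seed)
    total = sum(sum(rng.random() < p_single for _ in range(n_sites))
                for _ in range(trials))
    return total / trials

print(mean_filled_sites())  # on average about 5 of the 10 sites are filled
```

Under this toy model, a 10-tweezer array contributes around five atoms of signal per run, which illustrates why scaling up to more tweezers and atoms, as the team plans, would strengthen the clock signal.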

Like ordinary metal tweezers, the laser tweezers offer pinpoint control, which enables researchers to vary the spacing between atoms and tweak their quantum properties. Kaufman has previously used optical tweezers to "entangle" two atoms, a quantum phenomenon that links their properties even at a distance. The tweezers are used to excite the atoms so their electrons are more weakly bound to the nucleus. This "fluffy" state makes it easier to trap the atoms in opposing internal magnetic states called spin up and spin down. Then a process called spin exchange entangles the atoms. Special quantum states like entanglement can improve measurement sensitivity and thus may enhance clock precision.

The research team now plans to build a larger clock and formally evaluate its performance. Specifically, the researchers plan to use more tweezers and atoms, with a target of about 150 atoms. Kaufman also plans to add entanglement, which could improve clock sensitivity and performance and, in a separate application, perhaps provide a new platform for quantum computing and simulation.

Credit: 
National Institute of Standards and Technology (NIST)

Stem cell researchers reactivate 'back-up genes' in the lab

image: "The first step towards developing new treatments is figuring out how X chromosome reactivation actually works." PhD researchers Adrian Janiszewski (left) and Irene Talon (middle) with Assistant Professor Vincent Pasque (right) from KU Leuven.

Image: 
KU Leuven

Vincent Pasque and his team at KU Leuven have unravelled parts of a mechanism that may one day help to treat Rett syndrome and other genetic disorders linked to the X chromosome.

Women and most female mammals have two X chromosomes, but only one of these is active in any given cell. This active X chromosome is selected through a flip-of-the-coin process in the very early stages of embryonic development: each chromosome has a 50/50 chance of remaining active and expressing its genes, or of being silenced through a process called X chromosome inactivation.

X chromosome inactivation is a perfectly normal process, but the consequences can be devastating when one of the X chromosomes carries a defective gene. This is the case in female patients with Rett syndrome: after one chromosome in each cell becomes inactive, about half of the patient's cells will use the defective gene. Once born, the girls suffer a progressive loss of motor skills and speech. Male patients with Rett syndrome have only one X chromosome and therefore no healthy copy of the gene to compensate for the defective one; these patients usually die before birth.
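The flip-of-the-coin selection described above can be illustrated with a short Monte Carlo sketch (a hypothetical illustration, not the authors' analysis): with random X inactivation, roughly half of a patient's cells end up relying on the defective copy.

```python
import random

def fraction_using_defective_x(n_cells=100_000, seed=1):
    """Each cell independently silences one of its two X chromosomes
    with equal probability; return the fraction of cells left
    expressing the X that carries the defective gene."""
    rng = random.Random(seed)
    hits = sum(rng.random() < 0.5 for _ in range(n_cells))
    return hits / n_cells

print(fraction_using_defective_x())  # close to 0.5
```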

So how can we treat Rett syndrome and other X-linked disorders? In theory, the answer is simple: in cells that use the defective gene, we reactivate the healthy copy on the inactive X chromosome. In practice, however, that's easier said than done.

Stem cell researchers from the Vincent Pasque Lab at KU Leuven, together with researchers from the Jean-Christophe Marine lab (VIB/KU Leuven) and the Edith Heard lab (EMBL, Germany) have now solved part of the puzzle. In a paper published in Genome Research, they present new findings on the underlying mechanism of X chromosome reactivation.

Reactivating genes in the lab

The first step towards developing new treatments is figuring out how X chromosome reactivation actually works, explains Adrian Janiszewski (KU Leuven), a co-lead author of the study. "Under normal circumstances, inactive X chromosomes only become active again during one of the very early stages of embryonic development. Rather than studying embryos, we used a technique known as cell reprogramming: we took adult cells from female mice and reprogrammed them in the culture dish into so-called induced pluripotent stem cells or iPS cells, which resemble embryonic stem cells but are not derived from early embryos."

Assistant Professor Vincent Pasque, the senior author of this study, continues: "Working with iPS cells has numerous advantages. Most importantly, when you reprogramme female adult cells into iPS cells, both X chromosomes become active again. In other words: X chromosome reactivation starts happening right under your microscope."

Irene Talon (KU Leuven), the second co-lead author of the study, continues: "We monitored almost 200 different X-linked genes throughout the X chromosome reactivation process. What we found is that reactivation happens gradually: different genes require different amounts of time to become active again. Our findings suggest that this difference in speed is explained by a combination of the gene's location in the 3D structure of the X chromosome and the activity of particular proteins (transcription factors) and enzymes (histone deacetylases)."

Long-term therapeutic potential

While this is one part of the puzzle, a lot of work remains to be done, Vincent Pasque concludes. "It's important to remember that we're talking about very fundamental research here. Contributing to the development of a cure for Rett syndrome and similar disorders is our long-term goal, but it will take us a while to get there, and there are many hurdles to overcome."

"We still need to figure out how to use the mechanism for a single gene, how to do it safely in patients, and how to target the right cells in the brain. We do not yet know how to overcome these formidable challenges but we do know that gaining a fundamental understanding of how things work is the crucial first step. That's how science works: it's a slow process."

"Now that we have pinpointed three factors involved in X chromosome reactivation, we can start experimenting to find out more about their precise role. We need to know the ins and outs of X chromosome reactivation before we can try to use the mechanism for therapeutic purposes. This is why fundamental research is so important."

Credit: 
KU Leuven

Cells that make bone marrow also travel to the womb to help pregnancy

image: An immunofluorescent section of a pregnant mouse uterus on embryonic day E9.5 showing the distribution of bone marrow-derived cells, labeled with green fluorescent protein (GFP). Cell nuclei are labeled with DAPI (blue). Numerous bone marrow-derived cells are recruited to the pregnant uterus where they contribute to embryo implantation and pregnancy.

Image: 
Reshef Tal

Bone marrow-derived cells play a role in changes to the mouse uterus before and during pregnancy, enabling implantation of the embryo and reducing pregnancy loss, according to research published September 12 in the open-access journal PLOS Biology. Although the study was done in mice, it raises the possibility that dysfunction of bone marrow-derived progenitor cells may contribute to implantation failure and pregnancy loss in women.

Bone marrow progenitors can become either blood or tissue cells. Within the uterus they differentiate into endometrial tissue cells in the lining of the womb, but until now it was not known whether they have a function in pregnancy. Reshef Tal and colleagues from Yale School of Medicine developed a bone marrow transplantation protocol that preserved ovarian and reproductive function, allowing them for the first time to track these cells in pregnancy. They showed that during pregnancy, cells from the bone marrow were preferentially recruited to the uterus and concentrated near the site of implantation, on the maternal side of the placenta. The authors demonstrate that after reaching the uterus, these cells proliferate and become so-called decidual cells, specialized uterine cells that are critical for nurturing the embryo and supporting its implantation.

The authors then made use of mice that lack Hoxa11, a protein found in uterine cells as well as in bone marrow-derived progenitor cells; these mice are known to have defects in the womb lining and are unable to become pregnant, while mice with partial Hoxa11 deficiency can become pregnant but have recurrent pregnancy losses. Strikingly, the authors found that after receiving bone marrow transplants from healthy mice, the Hoxa11-deficient mice switched on genes involved in preparing the womb lining for pregnancy and became pregnant. In mice with partial Hoxa11 deficiency, a bone marrow transplant from healthy mice prevented pregnancy loss, resulting in normal litter sizes.

Recurrent pregnancy loss in humans affects 1-2% of couples and is usually unexplained. Hoxa11 production has been implicated in human implantation and several studies have shown that levels of Hoxa11 protein are decreased in conditions associated with pregnancy failure such as endometriosis, submucosal leiomyomas and pregnancy loss. The findings reveal the role of Hoxa11-positive bone marrow-derived progenitor cells in the mouse uterus and the authors suggest further work be done to investigate the role of these cells in human implantation and pregnancy.

The authors of the study say, "The common thinking about bone marrow in relation to pregnancy is that it is the origin of many immune cells which play important roles at the maternal-fetal interface to promote successful pregnancy. This study shows for the first time that adult bone marrow is also a source of non-immune cells in the pregnant uterus. We demonstrate that bone marrow progenitors are mobilized to the circulation in pregnancy and home to the uterus where they become decidual cells and have profound effects on gene expression in the uterine environment and ultimately help prevent pregnancy loss."

The authors go on to say, "We are currently translating these findings into humans to better understand the role that these cells play in recurrent implantation failure and recurrent pregnancy loss, two conditions that are unexplained in the majority of cases and have no effective treatment. The findings of this study open exciting new avenues for research into the cause of these conditions as well as developing new treatments for women suffering from them."

Credit: 
PLOS

Big game hunting for a more versatile catalyst

image: Betley and his team of collaborators have characterized the architecture of a copper-nitrenoid complex, a catalyst hunted for over a half century.

Image: 
Harvard University

To make soap, just insert an oxygen atom into a carbon-hydrogen bond. The recipe may sound simple. But carbon-hydrogen bonds, like gum stuck in hair, are difficult to pull apart. Since they provide the foundation for far more than just soap, finding a way to break that stubborn pair could revolutionize how chemical industries produce everything from pharmaceuticals to household goods.

Now, researchers at Harvard University and Cornell University have done just that: for the first time, they discovered exactly how a reactive copper-nitrene catalyst--which, like the peanut butter used to loosen gum's grip on hair, helps nudge a chemical reaction to occur--could transform one of those strong carbon-hydrogen bonds into a carbon-nitrogen bond, a valuable building block for chemical synthesis.

In a paper published in Science, Kurtis Carsch, a Ph.D. student in the Graduate School of Arts and Sciences at Harvard University, Ted Betley, the Erving Professor of Chemistry at Harvard, Kyle Lancaster, Associate Professor of Chemistry at Cornell University, and their team of collaborators, not only describe how a reactive copper-nitrene catalyst performs its magic, but also how to bottle the tool to break those stubborn carbon-hydrogen bonds and make products like solvents, detergents, and dyes with less waste, energy, and cost.

Industries often forge the foundation of such products (amines) through a multi-step process: First, raw alkane materials are converted to reactive molecules, often with high-cost, sometimes noxious catalysts. Then, the transformed substrate needs to exchange a chemical group, which often requires a whole new catalytic system. Avoiding that intermediate step--and instead instantly inserting the desired function directly into the starting material--could reduce the overall materials, energy, cost, and potentially even the toxicity of the process.

That's what Betley and his team aimed to do: find a catalyst that could skip chemical steps. Even though researchers have hunted for the exact make-up of a reactive copper-nitrene catalyst for over half a century, and even speculated that copper and nitrogen might form the core of the chemical tool, the exact arrangement of the pair's electrons remained unknown. "Electrons are like real estate, man. Location is everything," Betley said.

"The disposition of electrons in a molecule is intimately tied to its reactivity," said Lancaster, who, along with Ida DiMucci, a graduate student in his lab, helped establish the inventories of electrons on the copper and the nitrogen. Using X-ray spectroscopy to find energies where photons would be absorbed--the mark of an electron's absence--they found two distinct holes on the nitrogen.

"This flavor of nitrogen--in which you have these two electrons missing--has been implicated in reactivity for decades, but nobody has provided direct experimental evidence for such a species."

They have now. Typically, if a copper atom binds to a nitrogen, both give up some of their electrons to form a covalent bond, in which they share the electrons equitably. "In this case," Betley said, "it's the nitrogen with two holes on it, so it has two free radicals and it's just bound by a lone pair into the copper."

That binding prevents the volatile nitrene from whizzing off and performing destructive chemistry with whatever gets in its way. When someone gets a cut on their leg, for example, the body sends out a reactive oxygen species, similar to these nitrene radicals. The reactive oxygen species attacks invading parasites or infectious agents, but they can damage DNA, too.

So, to contain the reactive nitrene, first-author Carsch built a massive cage in the form of a ligand. The ligand--like organic shrubbery surrounding the copper nitrene pair--keeps the catalyst intact. Cut back that shrubbery and introduce another substance--like a carbon-hydrogen bond--and the fiery nitrene gets to work.

Betley calls the catalyst a skeleton key, a tool with the potential to unlock bonds that would otherwise be too strong to use in synthesis. "Hopefully, we can generate these chemical species that are now going to be so reactive that they render the most inert kind of substances we have around us as something we can play with," he said. "That would be really, really powerful." Since the building blocks--like copper and amines--are abundant and cheap, the skeleton key could unlock more practical ways to make pharmaceuticals or household products.

When Carsch first made the molecule, "he was literally bounding with joy," Betley said. "I was like, 'OK, settle down.'" But the results got more interesting: the nitrene reacts better than expected even though "the molecule has no right to be stable," and the bonding structure looked different than any of the designs proposed during the last six decades of research. "Had we proposed it at the outset, I think people would have scoffed at us."

Even though Betley chased this elusive species--what Lancaster calls "big game hunting"--ever since he launched his lab in 2007, he cares less about his win and more about his collaborators. "I get all my enjoyment from seeing Kurtis and my other students get super fired up about what they've actually been able to make." Carsch faced both critics and chemical walls but persisted in his hunt nonetheless. "I'm glad he's stubborn, as stubborn as I am," Betley said. They both might be as stubborn as the bonds they can now break.

At Cornell, when Lancaster and fifth-year graduate student DiMucci confirmed the findings, he "sent a rather colorful email" to the Betley team. But he, too, credits his collaborators. DiMucci spent seven days at the Stanford Synchrotron Radiation Lightsource analyzing the catalyst's electronic structure with their team. "Without their new experimental capabilities," Lancaster said, "we really would not have had the signal to noise and the low background that made identifying this thing pretty easy."

Next, the team could draw inspiration from this new design to build catalysts with even broader-reaching applications, like mirroring nature's way of converting dangerous methane into methanol. "A real holy grail would be to say, 'OK, that C-H bond there, that particular one in this molecule, I want to turn that into a C-N bond or a C-O bond,'" Lancaster said. That may be a distant goal, but his so-called "dream team" could be the right one to hunt down the solution.

Credit: 
Harvard University

UBC researchers design roadmap for hydrogen supply network

image: Hoda Talebian

Image: 
UBC

Transportation is the largest source of greenhouse gas emissions in British Columbia. Researchers at the University of British Columbia have developed a hydrogen supply chain model that can enable the adoption of zero-emission, hydrogen-powered cars--transforming them from a novelty into everyday transportation in just 30 years.

In a new study published this week, UBC researchers provide an analysis of the infrastructure needed to support hydrogen cars, SUVs and mini vans in British Columbia. They recommend a refuelling infrastructure extending from Prince George in the north to Kamloops and Vancouver in the south and Victoria in the west. Production plants would capture by-product hydrogen from chemical plants or produce it from water electrolysis and steam methane reforming. A network of refuelling stations would be established to serve consumers in major urban centres.

"Hydrogen-powered vehicles are a strong alternative to battery electric vehicles, which don't always meet fast-refuelling, long-distance or cold-weather requirements," says lead author Hoda Talebian, a PhD candidate in the department of mechanical engineering at UBC. "We believe we have created the most comprehensive model for hydrogen adoption in a region like B.C., where demand for these types of vehicles is still low."

The researchers, all affiliated with UBC's Clean Energy Research Centre (CERC), analyzed future demand for light-duty hydrogen vehicles and included the potential effects of policy tools like B.C.'s carbon tax and the low carbon fuel standard.

"Provided B.C. maintains those policies, and assuming enough hydrogen vehicles are available, our model sees hydrogen demand growing significantly every year," said co-author and CERC program manager Omar Herrera.

The researchers note that hydrogen cars like the Toyota Mirai and Hyundai's Nexo are already available in B.C., and a public retail hydrogen station opened in Vancouver last year--Canada's first. By 2020, Greater Vancouver and Victoria are projected to have a network of six stations.

"The momentum for hydrogen vehicles is growing, and B.C. is leading developments in Canada by providing supports like car sales rebates and incentives for building fuelling stations," said senior study author Walter Mérida, an engineering professor at UBC who studies clean energy technologies and leads the transportation futures research group in the faculty of applied science.

"However, we need a solid refuelling network to truly promote mass adoption. We hope that our framework contributes to its development and to the CleanBC plan, which includes a zero-emission vehicle mandate by 2040."

"We do see a future where hydrogen can be economically competitive with gasoline, while significantly reducing greenhouse gas emissions," added Mérida. "This study is part of a broad, multidisciplinary effort on the future of transportation. As the energy system becomes smart and decarbonized, hydrogen will become a critical bridge between renewable energy and transportation."

Credit: 
University of British Columbia

Early detection is key: Screening test could improve lives of cats with heart disease

image: This new, two-minute screening technique could help save cats from dying prematurely of heart disease. Developed by Morris Animal Foundation-funded researchers at the Cummings School of Veterinary Medicine at Tufts University, the protocol can be used by veterinarians in general practice to increase detection of cardiac issues in cats that aren't outwardly showing signs of disease.

Image: 
Dr. Elizabeth Rozanski, Cummings School of Veterinary Medicine at Tufts University

DENVER/September 12, 2019 - A new, two-minute screening technique could help save cats from dying prematurely of heart disease. Morris Animal Foundation-funded researchers at the Cummings School of Veterinary Medicine at Tufts University recently developed a focused cardiac ultrasound (FCU) protocol for use by veterinarians in general practice to increase detection of cardiac issues in cats that aren't outwardly showing signs of disease. The team published their study in the Journal of Veterinary Internal Medicine.

"Heart disease is one of the biggest killers of our cats. It's very common but often undiagnosed, as many cats don't reveal symptoms," said Dr. Elizabeth Rozanski, veterinary researcher, clinician and associate professor at Cummings School. "This method is something small animal practitioners can add to their yearly physical exams as cats get older to catch heart disease earlier."

Some studies indicate that up to 20% of cats are affected by heart disease, and many don't show any noticeable signs of distress until they are already in heart failure. Cats hide disease well, having evolved to mask illness and vulnerability to avoid predation, and their typically sedentary lifestyles further conceal signs of disease.

Full echocardiograms can successfully detect heart problems, but they can be costly, require special training and are usually reserved for cats already showing distress - often too late to make a difference. Dr. Rozanski proposed that an FCU, an abbreviated echocardiogram using equipment already available in a practice, could serve as a screening tool to determine whether a cat should receive a more in-depth evaluation.

FCUs already help detect hidden heart disease in human patients and require less instruction. Veterinarians could be trained to specifically look for some easily measurable criteria of feline heart disease. Based on their findings, they could recommend further evaluation if cats met the criteria for heart disease as determined by the FCU.

To test this, Dr. Rozanski's team taught 22 general practice veterinarians in the New England and Philadelphia areas to perform FCUs on about 300 cats. None of the practitioners had any prior formal cardiac ultrasound training. The cats were all at least 1 year old, and none had shown any clinical signs of heart disease.

The veterinarians first performed standard physical examinations and electrocardiograms on each of their cats. Then they performed FCUs and were asked to indicate yes, no or equivocal as to whether they believed clinically-significant heart disease was present. A board-certified cardiologist then evaluated each of the cats to confirm their status.

Even with limited training, the veterinarians were 93% successful at diagnosing cats with moderate heart disease and 100% successful at diagnosing severe heart disease.

Dr. Rozanski has already helped produce a video to teach veterinarians in general practice how to perform this technique. She also said the practitioners she trained are now using it on a regular basis.

"This appears to be a very feasible and useful tool for general practitioners to accurately identify cats that would benefit from going to see a cardiologist," said Dr. Janet Patterson-Kane, Morris Animal Foundation Chief Scientific Officer. "Early detection is so important, not only for cats, but for the owners who love them."

Credit: 
Morris Animal Foundation

Princeton researchers explore how a carbon-fixing organelle forms via phase separation

image: The left panel shows a type of single-celled algae known as Chlamydomonas reinhardtii with its chloroplast (purple) containing only a single pyrenoid (green). The SAGA1 mutant shown on the right displays several extra pyrenoids.

Image: 
Alan Itakura

Plants, algae and other photosynthetic organisms remove carbon dioxide from the air, incorporating it into starches in a process known as carbon fixation. In green algae, which contribute up to a third of global carbon fixation, this activity is greatly enhanced by an organelle called the pyrenoid. A new paper by Princeton researcher Martin Jonikas, assistant professor of molecular biology, and colleagues investigates a gene important for regulating pyrenoid shape and number, enhancing our understanding of this essential component of the global carbon cycle. The paper appeared online in the journal Proceedings of the National Academy of Sciences on August 27, 2019.

In algae, as in plants, the task of carbon fixation is carried out by an enzyme known as Rubisco within the chloroplast, the cellular compartment where photosynthesis takes place. In plants, Rubisco occurs throughout the chloroplast, but in algae, Rubisco molecules cluster together to form a distinct structure, the pyrenoid.

This structure assembles via a process known as phase separation in a manner similar to how oil forms clusters when placed in water. The pyrenoid is surrounded by a starch-based sheath. In most species, the pyrenoid also contains tubules that extend into it from the thylakoid, where light-dependent reactions of photosynthesis occur. Thylakoid tubules convey concentrated carbon dioxide to Rubisco, greatly improving the enzyme's efficiency--something that is very important for algae since they live in aquatic environments where carbon dioxide can be hard to access.

Prior studies by Jonikas' group have shown that the pyrenoid is not a permanent feature of the cell, but instead dissolves during cell division then re-forms as small clusters of phase-separated proteins that coalesce into a larger mass. Algal chloroplasts normally have only one pyrenoid because, like oil poured onto water, phase-separated protein clusters aggregate together to minimize their exposed surface area.

While conducting a screen for genes affecting pyrenoid function in the green alga Chlamydomonas reinhardtii, graduate student and the study's first author Alan Itakura and postdoctoral researcher Leif Pallesen, both in the Jonikas group, uncovered a gene called SAGA1 (Starch Granules Abnormal-1), the loss of which causes cells to grow poorly. When the researchers, including the study's co-first author, Kher Xing (Cindy) Chan in Howard Griffiths' group at the University of Cambridge, examined the mutant cells, they noticed that SAGA1 mutants possess multiple pyrenoids--up to 10 per cell. This was surprising since normal cells almost always contain just one pyrenoid. Intrigued, the team decided to investigate further.

Because the SAGA1 protein is predicted to contain a starch-binding domain, the researchers first explored whether loss of the SAGA1 gene affects the architecture of the starch plates that make up the pyrenoid sheath. Indeed, the pyrenoids in SAGA1-deficient cells have fewer and abnormally elongated starch plates in their sheaths. The authors also found evidence that the SAGA1 protein binds to Rubisco. Together, these data suggest that SAGA1 helps direct the proper formation of the pyrenoid's starch sheath, and the attachment of Rubisco to it.

But why would loss of SAGA1 affect pyrenoid number? The study results suggest that the increased surface area of the defective starch sheaths leads to the formation of multiple pyrenoids. Normally, the starch plates are sized appropriately to create a single pyrenoid, but the elongated starch plates in SAGA1 mutants pinch off portions of the matrix, resulting in extra pyrenoids.

Although this model explains why more pyrenoids might appear in SAGA1 mutants, it doesn't explain why excess pyrenoids hamper cell growth. The authors found that Rubisco levels are unchanged in SAGA1 mutants, suggesting that the same amount of protein is distributed across multiple pyrenoids. However, the researchers noticed that most of these extra pyrenoids lack a thylakoid tubule network. Pyrenoids without a thylakoid network would be starved of carbon dioxide, suggesting the Rubisco they contain is idled and not contributing to growth.

The work provides a useful new model to explain how a peripheral component, the starch sheath, helps cells regulate their number of pyrenoids. The authors suggest that such a mechanism may also apply to the biogenesis of other phase-separated organelles such as stress granules.

Credit: 
Princeton University

Elaborate Komodo dragon armor defends against other dragons

image: Bony plates called osteoderms (colored orange) cover the skull of an adult Komodo dragon.

Image: 
The University of Texas at Austin / Jackson School of Geosciences.

Just beneath their scales, Komodo dragons wear a suit of armor made of tiny bones. These bones cover the dragons from head to tail, creating a "chain mail" that protects the giant predators. However, the armor raises a question: What does the world's largest lizard - the dominant predator in its natural habitat - need protection from?

After scanning Komodo dragon specimens with high-powered X-rays, researchers at The University of Texas at Austin think they have an answer: other Komodo dragons.

Jessica Maisano, a scientist in the UT Jackson School of Geosciences, led the research, which was published Sept. 10 in the journal The Anatomical Record. Her co-authors are Christopher Bell, a professor in the Jackson School; Travis Laduc, an assistant professor in the UT College of Natural Sciences; and Diane Barber, the curator of cold-blooded animals at the Fort Worth Zoo.

The scientists came to their conclusion by using computed tomography (CT) technology to look inside and digitally reconstruct the skeletons of two deceased dragon specimens - one adult and one baby. The adult was well-equipped with armor, but it was completely absent in the baby. It's a finding that suggests that the bony plates don't appear until adulthood. And the only thing adult dragons need protection from is other dragons.

"Young komodo dragons spend quite a bit of time in trees, and when they're large enough to come out of the trees, that's when they start getting in arguments with members of their own species," Bell said. "That would be a time when extra armor would help."

Many groups of lizards have bones embedded in their skin called osteoderms. Scientists have known about osteoderms in Komodo dragons since at least the 1920s, when naturalist William D. Burden noted their presence as an impediment to the mass production of dragon leather. But since the skin is the first organ removed when making a skeleton, scientists do not have much information about how they are shaped or arranged inside the skin.

The researchers were able to overcome this issue by examining the dragons at UT's High-Resolution X-ray Computed Tomography Facility, which is managed by Maisano. The lab's CT scanners are similar to a clinical CT scanner but use higher-energy X-rays and finer detectors to reveal the interiors of specimens in great detail.

Due to size constraints of the scanner, the researchers scanned only the head of the nearly 9-foot-long adult Komodo dragon, which was donated by the Fort Worth Zoo after it died at 19½ years old. The San Antonio Zoo donated the 2-day-old baby specimen.

The CT scans revealed that the osteoderms in the adult Komodo dragon were unique among lizards in both their diversity of shapes and sheer coverage. Whereas the heads of other lizards examined by the researchers for comparison usually had one or two shapes of osteoderms, and sometimes large areas free of them, the Komodo had four distinct shapes and a head almost entirely encased in armor. The only areas lacking osteoderms on the head of the adult Komodo dragon were around the eyes, nostrils, mouth margins and pineal eye, a light-sensing organ on the top of the head.

"We were really blown away when we saw it," Maisano said. "Most monitor lizards just have these vermiform (worm-shaped) osteoderms, but this guy has four very distinct morphologies, which is very unusual across lizards."

The adult dragon that the researchers examined was among the oldest known Komodo dragons living in captivity when it died. Maisano said that the advanced age may partially explain its extreme armor; as lizards age, their bones may continue to ossify, adding more and more layers of material, until death. She said that more research on Komodo dragons of different ages can help reveal how their armor develops over time - and may help pinpoint when Komodos first start to prepare for battle with other dragons.

Credit: 
University of Texas at Austin

Device generates light from the cold night sky

image: In this photograph, the thermoelectric generator harnesses temperature differences to produce renewable electricity without active heat input. Here it is generating light.

Image: 
Aaswath Raman

An inexpensive thermoelectric device harnesses the cold of space without active heat input, generating electricity that powers an LED at night, researchers report September 12 in the journal Joule.

"Remarkably, the device is able to generate electricity at night, when solar cells don't work," says lead author Aaswath Raman (@aaraman), an assistant professor of materials science and engineering at the University of California, Los Angeles. "Beyond lighting, we believe this could be a broadly enabling approach to power generation suitable for remote locations, and anywhere where power generation at night is needed."

While solar cells are an efficient source of renewable energy during the day, there is currently no similar renewable approach to generating power at night. Solar lights can be outfitted with batteries to store energy produced in daylight hours for night-time use, but the addition drives up costs.

The device developed by Raman and Stanford University scientists Wei Li and Shanhui Fan sidesteps the limitations of solar power by taking advantage of radiative cooling, in which a sky-facing surface passes its heat to the atmosphere as thermal radiation, losing some heat to space and reaching a cooler temperature than the surrounding air. This phenomenon explains how frost forms on grass during above-freezing nights, and the same principle can be used to generate electricity, harnessing temperature differences to produce renewable electricity at night, when lighting demand peaks.

Raman and colleagues tested their low-cost thermoelectric generator on a rooftop in Stanford, California, under a clear December sky. The device, which consists of a polystyrene enclosure covered in aluminized mylar to minimize thermal radiation and protected by an infrared-transparent wind cover, sat on a table one meter above roof level, drawing heat from the surrounding air and releasing it into the night sky through a simple black emitter. When the thermoelectric module was connected to a voltage boost converter and a white LED, the researchers observed that it passively powered the light. They further measured its power output over six hours, finding that it generated as much as 25 milliwatts of power per square meter.
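As a rough sanity check (our illustration, not the authors' analysis), the electrical power a thermoelectric module delivers to a matched load scales with the square of the temperature difference it maintains. The component values below are hypothetical placeholders for a commodity bismuth-telluride module, not figures from the paper:

```python
# Back-of-the-envelope sketch of matched-load thermoelectric power density.
# P = (S * dT)^2 / (4 * R) for a module with Seebeck coefficient S and
# internal resistance R driving a load equal to R.
# All numeric inputs below are illustrative assumptions, not measured values.

def te_power_density(seebeck_v_per_k, delta_t_k, internal_ohm, area_m2):
    """Matched-load electrical power per unit emitter area, in W/m^2."""
    open_circuit_v = seebeck_v_per_k * delta_t_k   # Seebeck open-circuit voltage
    power_w = open_circuit_v**2 / (4 * internal_ohm)
    return power_w / area_m2

# Hypothetical module (~0.05 V/K, ~3 ohm) sustaining the few-degree drop that
# nighttime radiative cooling can provide, spread over a ~0.1 m^2 emitter:
p = te_power_density(0.05, 2.0, 3.0, 0.1)
print(f"{p * 1000:.1f} mW/m^2")
```

With these placeholder numbers the estimate lands in the single-digit-to-tens of mW/m^2 range, the same order of magnitude as the 25 mW/m^2 the team reported.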

Since the radiative cooler consists of a simple aluminum disk coated in paint, and all other components can be purchased off the shelf, Raman and the team believe the device can be easily scaled for practical use. The amount of electricity it generates per unit area remains relatively small, limiting its widespread applications for now, but the researchers predict it can be made twenty times more powerful with improved engineering--such as by suppressing heat gain in the radiative cooling component to increase heat-exchange efficiency--and operation in a hotter, drier climate.

"Our work highlights the many remaining opportunities for energy by taking advantage of the cold of outer space as a renewable energy resource," says Raman. "We think this forms the basis of a complementary technology to solar. While the power output will always be substantially lower, it can operate at hours when solar cells cannot."

Credit: 
Cell Press

Bone, not adrenaline, drives fight or flight response

NEW YORK, NY (September 12, 2019)--When faced with a predator or sudden danger, the heart rate goes up, breathing becomes more rapid, and fuel in the form of glucose is pumped throughout the body to prepare an animal to fight or flee.

These physiological changes, which constitute the "fight or flight" response, are thought to be triggered in part by the hormone adrenaline.

But a new study from Columbia researchers suggests that bony vertebrates can't muster this response to danger without the skeleton. The researchers found in mice and humans that almost immediately after the brain recognizes danger, it instructs the skeleton to flood the bloodstream with the bone-derived hormone osteocalcin, which is needed to turn on the fight or flight response.

"In bony vertebrates, the acute stress response is not possible without osteocalcin," says the study's senior investigator Gérard Karsenty, MD, PhD, chair of the Department of Genetics and Development at Columbia University Vagelos College of Physicians and Surgeons.

"It completely changes how we think about how acute stress responses occur."

Why Bone?

"The view of bones as merely an assembly of calcified tubes is deeply entrenched in our biomedical culture," Karsenty says. But about a decade ago, his lab hypothesized and demonstrated that the skeleton has hidden influences on other organs.

The research revealed that the skeleton releases osteocalcin, which travels through the bloodstream to affect the functions of the pancreas, the brain, muscles, and other organs.

A series of studies since then have shown that osteocalcin helps regulate metabolism by increasing the ability of cells to take in glucose, improves memory, and helps animals run faster with greater endurance.

Why does bone have all these seemingly unrelated effects on other organs?

"If you think of bone as something that evolved to protect the organism from danger -- the skull protects the brain from trauma, the skeleton allows vertebrates to escape predators, and even the bones in the ear alert us to approaching danger -- the hormonal functions of osteocalcin begin to make sense," Karsenty says. If bone evolved as a means to escape danger, Karsenty hypothesized that the skeleton should also be involved in the acute stress response, which is activated in the presence of danger.

Osteocalcin Necessary to React to Danger

If osteocalcin helps bring about the acute stress response, it must work fast, in the first few minutes after danger is detected.

In the new study, the researchers presented mice with predator urine and other stressors and looked for changes in the bloodstream. Within 2 to 3 minutes, they saw osteocalcin levels spike.

Similarly, the researchers found that osteocalcin surges in people when they are subjected to the stress of public speaking or cross-examination.

When osteocalcin levels increased, heart rate, body temperature, and blood glucose levels in the mice also rose as the fight or flight response kicked in.

In contrast, mice that had been genetically engineered so that they were unable to make osteocalcin or its receptor were totally indifferent to the stressor. "Without osteocalcin, they didn't react strongly to the perceived danger," Karsenty says. "In the wild, they'd have a short day."

As a final test, the researchers were able to bring on an acute stress response in unstressed mice simply by injecting large amounts of osteocalcin.

Adrenaline Not Necessary for Fight or Flight

The findings may also explain why animals without adrenal glands and adrenal-insufficient patients -- with no means of producing adrenaline or other adrenal hormones -- can develop an acute stress response.

Among mice, this capability disappeared when the mice were unable to produce large amounts of osteocalcin.

"This shows us that circulating levels of osteocalcin are enough to drive the acute stress response," says Karsenty.

Physiology the New Frontier of Biology

Physiology may sound like old-fashioned biology, but new genetic techniques developed in the last 15 years have established it as a new frontier in science.

The ability to inactivate single genes in specific cells inside an animal, and at specific times, has led to the identification of many new inter-organ relationships. The skeleton is just one example; the heart and muscles also exert influence over other organs.

"I have no doubt that there are many more new inter-organ signals to be discovered," Karsenty says, "and these interactions may be as important as the ones discovered in the early part of the 20th century."

Credit: 
Columbia University Irving Medical Center

Diet impacts the sensitivity of gut microbiome to antibiotics, mouse study finds

image: Rod-shaped bacteria commonly found in the gut. A new study sheds light on how diet influences the way antibiotics affect gut bacteria populations.

Image: 
Belenky Lab / Brown University

PROVIDENCE, R.I. [Brown University] -- Antibiotics save countless lives each year from harmful bacterial infections -- but the community of beneficial bacteria that live in human intestines, known as the microbiome, frequently suffers collateral damage.

Peter Belenky, an assistant professor of molecular microbiology and immunology at Brown University, studies ways to minimize this side effect, which can lead to C. diff infections and other life-threatening imbalances in the microbiome. In a new study published on Thursday, Sept. 12, in Cell Metabolism, Belenky and his colleagues found that antibiotics change the composition and metabolism of the gut microbiome in mice, and that a mouse's diet can mitigate or exacerbate these changes.

The findings are a step, Belenky said, toward helping humans to better tolerate antibiotic treatment.

"Doctors now know that each antibiotic prescription has the potential to lead to some very harmful microbiome-related health outcomes, but they do not have reliable tools to protect this critical community while also treating deadly infections," Belenky said. "The goal of my lab is to identify new ways to protect the microbiome, which may alleviate some of the worst antibiotic side effects."

The gut microbiome is an ecosystem comprising trillions of bacteria that have coevolved with their host. This community helps the host in many ways, such as breaking down dietary fiber and maintaining overall intestinal health -- by ensuring that intestinal cells form a tight barrier and by competing with harmful bacteria for resources, Belenky said.

Lead study author Damien Cabral, a doctoral student in Brown's pathobiology program, treated three groups of mice with different antibiotics, then monitored how the composition of bacteria in the mouse guts changed and how the bacteria adapted at a metabolic level after antibiotic treatment.

Amoxicillin, commonly used to treat ear infections and strep throat, drastically reduced the kinds of bacteria present in the gut and changed the genes used by the remaining bacteria. The researchers also studied ciprofloxacin, used to treat urinary tract infections and typhoid fever, and doxycycline, often applied in treating Lyme disease and sinus infections. The changes associated with those drugs were less pronounced.

One type of potentially beneficial bacteria common in the human gut, Bacteroides thetaiotaomicron, actually flourished after amoxicillin treatment. Following treatment, this bacterium increased its reliance on enzymes that digest fiber, a change that appears both to allow it to thrive in the changed ecosystem and to somehow protect it from the antibiotic, Belenky said.

In general, the bacteria decreased the use of genes involved in normal growth, such as making new proteins and DNA. At the same time, they also increased use of genes critical for stress resistance.

Interestingly, Belenky's team found that adding glucose to a mouse's diet -- which is typically high in fiber and low in simple sugars -- increased Bacteroides thetaiotaomicron's susceptibility to amoxicillin. This hints that diet can protect some beneficial gut bacteria from the ravages of antibiotics.

"For a long time we've known that antibiotics impact the microbiome," Belenky said. "We have also known that diet impacts the microbiome. This is the first paper that brings those two facts together."

Belenky cautioned that the study was conducted in rodents, and there is still much to learn about the interplay between host diet, microbiome metabolism and vulnerability to different antibiotics.

"Now that we know diet is important for bacterial susceptibility to antibiotics, we can ask new questions about which nutrients are have an impact and see if we can predict the influence of different diets," he said.

Belenky's team is exploring how different kinds of dietary fibers impact how the microbiome changes after antibiotic treatment, and how diabetes might impact the microbiome's metabolic environment and antibiotic susceptibility.

Credit: 
Brown University

Delaying start of head, neck cancer treatment in underserved, urban patients associated with worse outcomes

Bottom Line: This observational study looked at the factors and outcomes associated with delaying the start of treatment for head and neck squamous cell carcinoma (HNSCC) in an underserved urban population. The analysis included 956 patients with HNSCC treated at a health center in New York City. The authors report that delaying the initiation of treatment beyond 60 days was associated with poorer survival and an increased risk of HNSCC recurrence. Factors associated with delaying treatment included African American race/ethnicity, Medicaid insurance, being underweight and having an initial diagnosis at a different institution. Common reasons for delaying treatment were missed appointments leading up to the initial treatment and extensive pretreatment evaluation. Knowing predictive factors and reasons for delaying treatment can help identify at-risk patients and areas to reduce delay. Limitations of the study include the possibility of miscoding errors with the use of cancer registry and patient medical records.

Authors: Vikas Mehta, M.D., M.P.H., Montefiore Health System, Bronx, New York, and coauthors

(doi:10.1001/jamaoto.2019.2414)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Researchers produce synthetic Hall Effect to achieve one-way radio transmission

image: This is a microstrip circuit used to demonstrate Hall Effect for radio waves.

Image: 
University of Illinois at Urbana-Champaign Department of Mechanical Engineering

Researchers at the University of Illinois at Urbana-Champaign have replicated one of the most well-known electromagnetic effects in physics, the Hall Effect, using radio waves (photons) instead of electric current (electrons). Their technique could be used to create advanced communication systems that boost signal transmission in one direction while simultaneously absorbing signals going in the opposite direction.

The Hall Effect, discovered in 1879 by Edwin Hall, occurs because of the interaction between charged particles and electromagnetic fields. In an electric field, negatively charged particles (electrons) experience a force opposite to the direction of the field. In a magnetic field, moving electrons experience a force in the direction perpendicular to both their motion and the magnetic field. These two forces combine in the Hall Effect, where perpendicular electric and magnetic fields combine to generate an electric current. Light isn't charged, so regular electric and magnetic fields can't be used to generate an analogous "current of light". However, in a recent paper published in Physical Review Letters, researchers have done exactly this with the help of what they call "synthetic electric and magnetic fields".
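The force balance described above is textbook physics, not specific to this paper; for reference, it can be written compactly as:

```latex
% Lorentz force on a carrier of charge q moving with velocity \mathbf{v}:
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)

% In steady state the transverse (Hall) field E_H balances the magnetic
% deflection of carriers drifting at speed v_d, leaving no net sideways force:
q E_H = q\, v_d B \quad\Longrightarrow\quad E_H = v_d B

% For a conductor of thickness t and carrier density n carrying current I
% in a perpendicular field B, this yields the familiar Hall voltage:
V_H = \frac{I B}{n q t}
```

The "synthetic" fields in the present work play the roles of E and B in these expressions, but act on photons rather than on charged carriers.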

Principal investigator Gaurav Bahl's research group has been working on several methods to improve radio and optical data transmission as well as fiber optic communication. Earlier this year, the group exploited an interaction between light and sound waves to suppress the scattering of light from material defects and published its results in Optica. In 2018, team member Christopher Peterson was the lead author in a Science Advances paper which explained a technology that promises to halve the bandwidth needed for communications by allowing an antenna to send and receive signals on the same frequency simultaneously through a process called nonreciprocal coupling.

In the current study, Peterson has provided another promising method to directionally control data transmission using a principle similar to the Hall Effect. Instead of an electric current, the team generated a "current of light" by creating synthetic electric and magnetic fields, which affect light the same way the normal fields affect electrons. Unlike conventional electric and magnetic fields, these synthetic fields are created by varying the structure that light propagates through in both space and time.

"Although radio waves not carry charge and therefore do not experience forces from electric or magnetic fields, physicists have known for several years that equivalent forces can be produced by confining light in structures that vary in space or time," Peterson explained. "The rate of change of the structure in time is effectively proportional to the electric field, and the rate of change in space is proportional to the magnetic field. While these synthetic fields were previously considered separately, we showed that their combination affects photons in the same way that it affects electrons."

By creating a specially designed circuit to enhance the interaction between these synthetic fields and radio waves, the team leveraged the principle of the Hall Effect to boost radio signals going in one direction, increasing their strength, while also stopping and absorbing signals going in the other direction. Their experiments showed that with the right combination of synthetic fields, signals can be transmitted through the circuit more than 1,000 times as effectively in one direction as in the opposite direction. Their research could be used to produce new devices that protect sources of radio waves from potentially harmful interference, or that help ensure sensitive quantum mechanical measurements are accurate. The team is also working on experiments that extend the concept to other kinds of waves, including light and mechanical vibrations, as they look to establish a new class of devices based on applying the Hall Effect outside of its original domain.

Credit: 
University of Illinois Grainger College of Engineering

Distractions distort what's real, study suggests

COLUMBUS, Ohio - We live in a world of distractions. We multitask our way through our days. We wear watches that alert us to text messages. We carry phones that buzz with breaking news.

You might even be reading this story because you got distracted.

A new study suggests that distractions - those pesky interruptions that pull us away from our goals - might change our perception of what's real, making us believe we saw something different from what we actually saw.

Even more troubling, the study suggests people might not realize their perception has changed - to the contrary, they might feel great confidence in what they think they saw.

"We wanted to find out what happens if you're trying to pay attention to one thing and something else interferes," said Julie Golomb, senior author and associate professor of psychology at The Ohio State University. "Our visual environment contains way too many things for us to process in a given moment, so how do we reconcile those pressures?"

The results, published online recently in the Journal of Experimental Psychology: Human Perception and Performance, indicate that, sometimes, we don't.

Results showed that people sometimes confused the color of an object they were supposed to remember with one that was a distraction. Others overcompensated and thought the color they were supposed to remember was even more different from the distraction object than it actually was.

"It implies that there are deeper consequences of having your attention drawn away that might actually change what you are perceiving," said Golomb, who is director of Ohio State's Vision and Cognitive Neuroscience Laboratory. "It showed us that we clearly don't understand the full implications of distraction."

To evaluate how distraction interacts with reality, the researchers showed study participants four different-colored squares on a computer screen. The researchers asked participants to focus on one specific square. But sometimes a bright distractor appeared around a different square, pulling the participant's attention away, even briefly, from the original square of focus.

The researchers then showed study participants a color wheel containing the entire color spectrum and asked them to click on the wheel where the color most closely matched the color of the original square.

Participants then highlighted a range of the color wheel to indicate how confident they were in their choice. Highlighting a narrow range indicated great confidence; highlighting a wider range indicated less confidence.

The results showed that the distraction color "bled" into the focus color in one of two ways: Either people thought the focus square was the color of the distraction square, or they overcompensated, choosing a hue of the focus color that was farther away on the color wheel from the distraction color.

For example, if the focus square was green and the distraction color orange, participants clicked in the blue-green area of the wheel - close to the original color, but farther away from the distraction color, as if to overcompensate.
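Analyses of this kind typically score each response as a signed angular error on the color wheel, so that pulls toward the distractor and overcompensation away from it show up with opposite signs. A minimal sketch of that scoring (our illustration, not the study's code; the hue values are hypothetical):

```python
# Score a color-wheel response as signed angular error relative to the target
# hue, wrapped to the range [-180, 180). Errors shifted toward the distractor
# indicate "bleeding"; errors shifted away indicate overcompensation.

def angular_error(response_deg, target_deg):
    """Signed error in degrees, wrapped to [-180, 180)."""
    return (response_deg - target_deg + 180) % 360 - 180

target, distractor = 120, 60      # hypothetical hues: green target, orange-ish distractor
print(angular_error(115, target))  # -5: small error, shifted toward the distractor
print(angular_error(140, target))  # 20: error shifted away (overcompensation)
```

The modulo wrap matters because hue is circular: a response at 10 degrees and a target at 350 degrees are only 20 degrees apart, not 340.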

Even more striking, the results showed participants were just as confident when they clicked on the distraction color as when they selected the correct color.

"It means that, on average, those two types of responses were associated with the same confidence range size," Golomb said. "On the trials where they reported the distractor color, they didn't seem aware that it was an error."

This study included 26 participants. Additional research is already underway at Ohio State to attempt to answer more questions about the ways in which distractions interact with reality.

"It raises an interesting consequence for memory - could it be that, if distraction happens with the right timing, you might adopt elements from the distraction into the thing you think you remember? Could it mean that some of our memory errors might be because we perceived something wrong in the first place?" said Jiageng Chen, lead author and graduate student researcher at Ohio State's Vision and Cognitive Neuroscience Laboratory. "We don't know yet, but it is an interesting area for future study."

Credit: 
Ohio State University