New nanomedicine slips through the cracks

image: Laser scanning microscope images of blood vessels in a mouse following treatment.

Image: 
© 2019 Kanjiro Miyata

In a recent study in mice, researchers found a way to deliver specific drugs to parts of the body that are exceptionally difficult to access. Their Y-shaped block catiomer (YBC) binds with certain therapeutic materials, forming a package just 18 nanometers wide. The package is less than one-fifth the size of those produced in previous studies, so it can pass through much smaller gaps. This allows YBCs to slip through tight barriers in cancers of the brain or pancreas.

The fight against cancer is waged on many fronts. One promising field is gene therapy, which targets genetic causes of diseases to reduce their effect. The idea is to inject a nucleic acid-based drug into the bloodstream - typically small interfering RNA (siRNA) - which binds to a specific problem-causing gene and deactivates it. However, siRNA is very fragile and needs to be protected within a nanoparticle or it breaks down before reaching its target.

"siRNA can switch off specific gene expressions that may cause harm. They are the next generation of biopharmaceuticals that could treat various intractable diseases, including cancer," explained Associate Professor Kanjiro Miyata of the University of Tokyo, who jointly supervised the study. "However, siRNA is easily eliminated from the body by enzymatic degradation or excretion. Clearly a new delivery method was called for."

Presently, nanoparticles are about 100 nanometers wide, one-thousandth the thickness of paper. This is small enough to grant them access to the liver through its leaky blood vessel walls. However, some cancers are harder to reach. Pancreatic cancer is surrounded by fibrous tissues, and cancers in the brain are shielded by tightly connected vascular cells. In both cases the gaps available are much smaller than 100 nanometers. Miyata and colleagues created an siRNA carrier small enough to slip through these gaps in the tissues.

"We used polymers to fabricate a small and stable nanomachine for the delivery of siRNA drugs to cancer tissues with a tight access barrier," said Miyata. "The shape and length of component polymers is precisely adjusted to bind to specific siRNAs, so it is configurable."

The team's nanomachine is called a Y-shaped block catiomer because two component molecules of polymeric materials are connected in a Y-shaped formation. The YBC has several sites of positive charge, which bind to negative charges in siRNA. The number of positive charges in the YBC can be controlled to determine which kind of siRNA it binds with. When YBC and siRNA are bound, they are called a unit polyion complex (uPIC), which is under 20 nanometers in size.

"The most surprising thing about our creation is that the component polymers are so simple, yet uPIC is so stable," concluded Miyata. "It has been a great but worthy challenge over many years to develop efficient delivery systems for nucleic acid drugs. It is early days, but I hope to see this research progress from mice to help treat people with hard-to-treat cancers one day."

Credit: 
University of Tokyo

Research sheds light on genomic features that make plants good candidates for domestication

image: Left: highly branched plants of teosinte, a wild relative of corn. Right: tiny pods on the vine of Glycine soja, wild relative of soybean. New research sheds light on how domestication affects the genomes of corn and soybeans.

Image: 
Sherry Flint-Garcia (teosinte) and Scott Jackson (Glycine soja)

AMES, Iowa - New research published this week identifies the genomic features that might have made domestication possible for corn and soybeans, two of the world's most critical crop species.

The research, published Thursday in the peer-reviewed academic journal Genome Biology, has implications for how scientists understand domestication, or the process by which humans have been able to breed plants for desirable traits through centuries of cultivation. The researchers drew on vast amounts of data on the genomes of corn and soybeans and compared particular sections of the genomes of wild species and domestic varieties, noting where the genomes diverged most markedly.

Iowa State University researchers worked with scientists from the University of Georgia, Cornell University and the University of Minnesota. The researchers studied more than 100 accessions in comparisons of corn with teosinte, its progenitor species. They also looked at 302 accessions from a dataset of wild and domesticated soybeans.

"We sliced the genomes into specific sections and compared them," said Jianming Yu, professor of agronomy and Pioneer Distinguished Chair in Maize Breeding. "It's a fresh angle not many have looked at concerning genome evolution and domestication. We searched for 'macro-changes,' or major genome-wide patterns - and we found them."

Human cultivation created a bottleneck in the genetic material of corn and soybeans, Yu said. As humans selected for particular traits they found desirable in their crops, they limited the genetic variation available in the plants' genomes. However, the researchers found several areas in the genomes of the studied species where divergence seemed to concentrate.

"These patterns in genome-wide base changes offer insight into how domestication affects the genetics of species," said Jinyu Wang, the first author of the paper and a graduate student in agronomy.

Variation in nucleotide bases between wild and domesticated species appeared more pronounced in non-genic portions of the genomes, or the parts of the genomes that do not code for proteins. The study also found greater variation in pericentromeric regions, or in areas near the centromere of chromosomes, and in areas of high methylation, or areas in which methyl groups are added to a DNA molecule. Methylation can change the activity of a DNA segment without changing its sequence.

The study looked at the occurrence of mutations in the genomes of the domesticated crops and their progenitor species.

"We now think it's likely that good candidates for domestication, such as corn and soybeans, occupy a middle ground in their willingness to mutate," said Xianran Li, adjunct associate professor of agronomy and a co-corresponding author of the study.

"If there's no mutation, then everything stays the same and we don't have evolution," Yu said. "But too many mutations can wipe out a species."

The study's findings pointed to important links between UV radiation from the sun and genome evolution. UV radiation is a natural mutagen, and it leaves a distinctive footprint when it acts, Yu said. The study's authors found many more of these footprints in modern corn and soybeans than in their wild relatives.
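
The UV footprint is well characterized: C-to-T changes (and tandem CC-to-TT changes) at sites where two pyrimidine bases sit side by side. As a rough illustration of how such footprints can be counted, here is a toy sketch; the sequences and the simplified signature definition are assumptions for illustration, not the study's method:

    # Sketch: count candidate UV-signature mutations, i.e. C->T changes
    # at dipyrimidine sites, between an ancestral and a derived sequence.
    PYRIMIDINES = {"C", "T"}

    def uv_footprints(ancestral, derived):
        count = 0
        for i in range(1, len(ancestral)):
            dipyrimidine = (ancestral[i - 1] in PYRIMIDINES
                            and ancestral[i] in PYRIMIDINES)
            if dipyrimidine and ancestral[i] == "C" and derived[i] == "T":
                count += 1
        return count

    print(uv_footprints("ACCGTC", "ACTGTC"))  # 1: the CC site mutated C->T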

Credit: 
Iowa State University

Genetic testing in women diagnosed with breast cancer decreases cost of care nationwide

image: Cancer prognostic tests such as Oncotype DX and MammaPrint appear to lower the cost of breast cancer care in the US. Such tests provide a score, based on a tumor's molecular makeup, that indicates when a woman can forgo expensive chemotherapy.

Image: 
Genomic Health

WASHINGTON -- A new study suggests that Oncotype DX-guided treatment could reduce the cost of the first year of breast cancer care in the U.S. by about $50 million (about 2 percent of overall first-year costs). The study by Georgetown Lombardi Comprehensive Cancer Center and National Cancer Institute researchers was published April 24 in JNCI.

The landmark TAILORx (Trial Assigning IndividuaLized Options for Treatment) trial results suggested that the Oncotype DX® gene test can offer women valuable information about treatment options, potentially sparing 70 percent of women newly diagnosed with the most common subtype of breast cancer from needing chemotherapy. The findings apply only to women with early-stage breast cancer that is hormone receptor-positive (ER/PR+), HER2/neu-negative, and has not spread to the lymph nodes.

The projected cost savings in the new study rest on two factors: an increase of about 50 percent in gene testing costs, assuming that all newly diagnosed women are tested, offset by an expected drop of approximately 8 percent in the overall cost of chemotherapy as fewer women are recommended this form of treatment. Chemotherapy is considerably more expensive than gene testing, so the 8 percent reduction in its cost is greater, dollar for dollar, than the 50 percent increase in the cost of gene testing.

"Individual women's decisions should not be about dollars and cents, but what is right for them based on consideration of the best evidence and personal preferences," says Jeanne S. Mandelblatt, MD, MPH, professor of oncology and medicine at Georgetown Lombardi.

Mandelblatt and her colleagues examined the monetary impact in the U.S. of providing care based on evidence from TAILORx. The researchers looked at statistics on gene testing and chemotherapy use in National Cancer Institute and Medicare databases, before and after the TAILORx trial results were announced in 2018. The periods during which costs accrue are usually grouped into three timeframes: initial costs (diagnosis and treatment), terminal costs (last year of life) and everything in between (continuing care costs). This study only estimated initial costs.

The estimated cost of an individual Oncotype DX test in this analysis was about $3,400, based on Medicare reimbursement rates. Many insurers, including Medicare, cover the cost for most women diagnosed with early-stage breast cancer. Another gene test that is used less often in the U.S., called MammaPrint®, has similar costs.

The investigators estimated that, prior to 2018, the mean initial costs of healthcare nationwide for newly diagnosed women with breast cancer had been fairly stable for a number of years and included two components: chemotherapy costs of $2.701 billion and Oncotype DX testing costs estimated at $115 million, for a total healthcare cost of $2.816 billion in the first year after diagnosis. The researchers estimated that only 34.8 to 57.2 percent of women received Oncotype DX testing in this period, as the clinical application of such tests was still uncertain.

In 2018, the TAILORx findings showed no benefit from chemotherapy for women whose tumors had lower risks for recurrence based on Oncotype DX scores. For their modeling, the researchers projected that all women with scores of 0-25 (low to intermediate risk) would forgo chemotherapy starting in 2018, so those treatment costs would go down by 8 percent, saving $338 million in initial chemotherapy costs. The researchers also assumed 100 percent of women would get Oncotype DX testing, raising those costs by $116 million (from $115 to $231 million). The total initial costs for this period were therefore projected to be $2.766 billion.

Comparing initial 12-month costs of care before and after 2018, the researchers projected that combined treatment and testing costs would decrease from $2.816 billion to $2.766 billion, a net savings of about $50 million (a 1.8 percent decrease).
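
The arithmetic behind those headline numbers can be checked directly. A minimal sketch using only the totals reported above (figures in billions of dollars):

    # Reproduce the reported first-year cost comparison ($billions).
    pre_total = 2.701 + 0.115       # chemotherapy + Oncotype DX testing
    post_total = 2.766              # projected total after TAILORx
    savings = pre_total - post_total
    print(f"pre-2018 total:  ${pre_total:.3f}B")    # $2.816B
    print(f"projected total: ${post_total:.3f}B")   # $2.766B
    print(f"net savings: ${savings * 1000:.0f}M "
          f"({savings / pre_total:.1%} decrease)")  # ~$50M (1.8%)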

"This study only answers the question about whether, in the first 12 months after diagnosis, costs of gene testing are likely to be offset by savings in avoided costs of chemotherapy - and the answer is yes. We did not estimate how the trial results could diffuse into medical practice, since those data will not be available for several years," Mandelblatt says. "The gene tests are not perfect predictors of who will ultimately have a recurrence of breast cancer, so it will be important to model the long-term outcomes and costs from diagnosis to death."

Credit: 
Georgetown University Medical Center

Smelling with your tongue

image: Dr. Ozdener is a cell biologist at the Monell Center.

Image: 
Monell Center

PHILADELPHIA (April 24, 2019) - Scientists from the Monell Center report that functional olfactory receptors, the sensors that detect odors in the nose, are also present in human taste cells found on the tongue. The findings suggest that interactions between the senses of smell and taste, the primary components of food flavor, may begin on the tongue and not in the brain, as previously thought.

"Our research may help explain how odor molecules modulate taste perception," said study senior author Mehmet Hakan Ozdener, MD, PhD, MPH, a cell biologist at Monell. "This may lead to the development of odor-based taste modifiers that can help combat the excess salt, sugar, and fat intake associated with diet-related diseases such as obesity and diabetes."

While many people equate flavor with taste, the distinctive flavor of most foods and drinks comes more from smell than from taste. Taste, which detects sweet, salty, sour, bitter, and umami (savory) molecules on the tongue, evolved as a gatekeeper to evaluate the nutrient value and potential toxicity of what we put in our mouths. Smell provides detailed information about the quality of food flavor - for example, is that banana, licorice, or cherry? The brain combines input from taste, smell, and other senses to create the multi-modal sensation of flavor.

Until now, taste and smell were considered to be independent sensory systems that did not interact until their respective information reached the brain. Ozdener was prompted to challenge this belief when his 12-year-old son asked him if snakes extend their tongues so they can smell.

In the study, published online ahead of print in Chemical Senses, Ozdener and colleagues used methods developed at Monell to maintain living human taste cells in culture. Using genetic and biochemical methods to probe the taste cell cultures, the researchers found that the human taste cells contain many key molecules known to be present in olfactory receptors.

They next used a method known as calcium imaging to show that the cultured taste cells respond to odor molecules in a manner similar to olfactory receptor cells.

Together, the findings provide the first demonstration of functional olfactory receptors in human taste cells, suggesting that olfactory receptors may play a role in the taste system by interacting with taste receptor cells on the tongue. Supporting this possibility, other experiments by the Monell scientists demonstrated that a single taste cell can contain both taste and olfactory receptors.

"The presence of olfactory receptors and taste receptors in the same cell will provide us with exciting opportunities to study interactions between odor and taste stimuli on the tongue," said Ozdener.

In addition to providing insight into the nature and mechanisms of smell and taste interactions, the findings also may provide a tool to increase understanding of how the olfactory system detects odors. Scientists still do not know what molecules activate the vast majority of the 400 different types of functional human olfactory receptors. Because the cultured taste cells respond to odors, they potentially could be used as screening assays to help identify which molecules bind to specific human olfactory receptors.

Moving forward, the scientists will seek to determine whether olfactory receptors are preferentially located on a specific taste cell type, for example, sweet- or salt-detecting cells. Other studies will explore how odor molecules modify taste cell responses and, ultimately, human taste perception.

Credit: 
Monell Chemical Senses Center

Socioeconomic deprivation increases risk of developing chronic kidney disease in England

The George Institute for Global Health at the University of Oxford investigated whether there is an association between socioeconomic deprivation and the onset of advanced chronic kidney disease (CKD) and related outcomes, specifically end-stage renal disease (ESRD). Deprivation was measured using the English Index of Multiple Deprivation, which combines income, employment, health and disability, education, skills and training, barriers to housing and services, crime, and living environment into a single score by geographical area.
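
For readers unfamiliar with the index, it is essentially a weighted combination of domain scores. A rough sketch of the idea (the weights follow the published English IMD domain weighting, but the domain scores below are invented, and the real index first standardizes and transforms domain ranks):

    # Sketch: combining deprivation domains into a single IMD-style score.
    WEIGHTS = {
        "income": 0.225, "employment": 0.225,
        "education": 0.135, "health": 0.135,
        "crime": 0.093, "barriers_to_housing": 0.093,
        "living_environment": 0.093,
    }

    def imd_score(domain_scores):
        """Weighted sum of (already standardized) domain scores."""
        return sum(WEIGHTS[d] * s for d, s in domain_scores.items())

    # Invented domain scores for one geographical area:
    area = {"income": 0.8, "employment": 0.7, "education": 0.6,
            "health": 0.5, "crime": 0.4, "barriers_to_housing": 0.3,
            "living_environment": 0.2}
    print(round(imd_score(area), 3))  # ~0.57 for this toy input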

They found that adjusting for known risk factors for CKD development, such as body mass index, high blood pressure, diabetes and cardiovascular disease, reduced the excess risk of CKD in the most deprived group (the bottom 20% of the population) relative to the least deprived group (top 20%) by 32%, but the disadvantage remained marked: the most deprived still had a 36% higher risk.

"CKD is generally associated with old age, cardiovascular disease, diabetes and high blood pressure. Using contemporary NHS data, we were able to directly explore the relationship between socioeconomic deprivation and risk of advanced CKD and ESRD, and our findings bear relevance to the 2.6 million people in England currently living with CKD [1]", said Dr Misghina Weldegiorgis, Epidemiologist at The George Institute, who led the research using participants in the Clinical Practice Research Datalink.

Socioeconomic deprivation adversely influences the development of CKD and its progression to ESRD by exacerbating these established risk factors. Deprived patients may also have poorer access than more advantaged peers to high-quality care and to treatments that improve survival, including kidney transplantation.

"The good news for clinicians is that the greater risk of developing advanced CKD experienced by the most socioeconomically deprived may be modifiable by established risk factors. Appropriate management of these is crucial to delay the progression of CKD to ESRD and so improve patients' quality of life and survival, irrespective of their socioeconomic status".

"However, wider systemic change beyond healthcare will be required if the general population is to experience risk of CKD equitably, regardless of their socioeconomic status", added Weldegiorgis.

CKD is a long-term, irreversible condition where the kidneys do not work as well as they should to 'clean the blood', and makes individuals more susceptible to worsened kidney functioning at times when they may be unwell for other reasons [2]. Those with advanced CKD have an increased risk of hospital admission and death [3].

The research team recommend that interventions be tailored to engage socioeconomically deprived groups through targeted communication to improve screening and treatment rates; close follow-up of patients following diagnosis; and targeted public health education to reduce behavioural risk factors.

Credit: 
University of Oxford

Shining light on rare nerve tumors illuminates a fresh path for fighting cancer

image: New findings published in Science Signaling suggest that targeting mechanical signals between cells may become a fresh approach to fighting cancer. The concept flows from discovering a network of proteins that appear to interact with a protein called Merlin when people have the rare disease neurofibromatosis 2 (NF2). Merlin is closely associated with a network of signaling proteins at cellular junctions. This implicates Merlin in signaling at these structures and suggests that this signaling is lost in tumor cells when Merlin does not function, say experts at Cincinnati Children's. This image shows where these proteins are localized in cells. Merlin is glowing green while Merlin-interacting proteins glow red. The thick yellow line is a cell junction where Merlin and this network of interacting proteins are both located.

Image: 
Cincinnati Children's

(CINCINNATI) When a protein named "Merlin" fails to do its job, people can develop slow-growing auditory nerve tumors that can disrupt hearing and balance. This rare condition is called neurofibromatosis 2 (NF2).

Now scientists at Cincinnati Children's have discovered much more about how Merlin does its job--by working behind the scenes through a network of more than 50 other proteins. For people afflicted by NF2, this discovery advances the hunt for drug therapies for a condition that, so far, has required surgery to treat.

But the findings also advance a concept with far wider implications for cancer research--that tumors have an impaired response to physical forces and other mechanical signals. This opens up new possibilities for developing novel cancer therapies.

These findings, led by first author Robert Hennigan, PhD, and senior author Nancy Ratner, PhD, were published online April 23, 2019, in Science Signaling. The paper reflects the culmination of more than seven years of research.

"The neurofibromatosis community will be very interested in these findings," says Ratner, a leading authority on neurofibromatosis. "This is the first time this technique (called proximity biotinylation) has been used at Cincinnati Children's and one of only a few findings based on this approach worldwide. Long term, this approach has lots of potential applications."

What is NF2?

NF2 is a rare genetic condition that affects approximately one in 40,000 people. The condition produces tumors along nerve fibers, including the auditory nerve. Such tumors tend to start causing symptoms in the teen years, then get worse into adulthood.

The tumors are "benign" in the sense that they do not spread to other organs. However, when they grow in the confined space of the skull they can severely disrupt hearing and balance. Once the tumors become enough of a threat, surgeons remove as much tumor tissue as they can.

The NF2 gene produces the oddly named Merlin protein. That magical name loosely stands for "Moesin-Ezrin-Radixin-Like Protein," which connects the NF2-generated protein to a larger protein family.

What did the new study reveal?

"This approach represents an entirely new way of thinking about NF2," says Hennigan, a cancer biologist who has worked with Ratner for more than a decade. "Instead of shining a flashlight in a dark room to spot one protein involved, this study was like turning on the lights and seeing all the proteins in the room."

The key implications include:

A big step forward for battling neurofibromatosis. The paper produced a "census" of protein binding information that identifies more than 50 other proteins in close proximity to Merlin.

Each of these relationships will require further exploration, and drug development will require several more years of study, Hennigan says. However, one potential target already stands out: a protein called ASPP2.

Evidence builds for mechanical signaling as a cancer control target. The study reports that ASPP2, a regulator of the widely studied tumor suppressor protein p53, is directly connected to Merlin.

"Finding that protein in our census was like finding a neon sign," Hennigan says.

Based on new information about how Merlin works, the ASPP2 connection suggests that targeting mechanical signaling pathways could become an important approach to stopping tumor growth when other treatments fail.

A boon for lab science. The techniques used in this study demonstrate a cheaper, faster and better way of determining how proteins bind to each other--which could influence a wide range of disease research.

Surveying a cellular neighborhood with tools derived from camels and fireflies

Using a technique called proximity biotinylation, the lab team revealed not just which proteins interact with Merlin, but where the action happens. Merlin operates at the fringes, in places called cell junction complexes. That location matters, Hennigan says.

When functioning properly, the data suggest that Merlin helps cells understand their physical environment, including which other cells belong in their neighborhood, Hennigan says. When the street gets too crowded with cells jostling each other, Merlin shuts down excess construction, possibly through ASPP2, and other proteins identified in this study. But when Merlin fails, a tumor can settle in, grow and start pushing neighboring cells around to make room.

"Previously, researchers suspected that Merlin's growth-suppressing function likely worked by inhibiting signals generated by external growth factors," Hennigan says. "But this study instead suggests that Merlin responds to physical cues like mechanical force."

Diving this deep into the inner workings of NF2 required unusual effort. The proximity biotinylation technique was developed more than a decade ago and has emerged as a tool for identifying protein-protein interactions. But for this study, the lab team had to adapt a range of new technologies to develop rapid and sensitive methods for confirming that proteins bind to each other.

"Instead of a more complicated process that can take two days to get a result, and still leave some questions about accuracy, this process can be done in an hour and a half with 10 times the sensitivity," Hennigan says. "There is, essentially, no question that the proteins we found are actually binding to each other."

What's next?

Now that scientists can "see all the proteins in the room," the next major steps in battling NF2 will be determining how the proteins function as a network, then finding ways to influence the network.

"You can't understand how to fix something that's malfunctioning until you understand how it is supposed to work," Hennigan says. "These results have provided a dramatic change in our perspective."

Credit: 
Cincinnati Children's Hospital Medical Center

Carbon dioxide from Silicon Valley affects the chemistry of Monterey Bay

image: This map shows how carbon dioxide blows from land areas out across Monterey Bay with morning land breezes.

Image: 
Base image: Google Earth

MOSS LANDING, CA--MBARI researchers recently measured high concentrations of carbon dioxide in air blowing out to sea from cities and agricultural areas, including Silicon Valley. In a new paper in PLOS ONE, they calculate that this previously undocumented process could increase the amount of carbon dioxide dissolving into coastal ocean waters by about 20 percent.

Extending their calculations to coastal areas around the world, the researchers estimate that this process could add 25 million additional tons of carbon dioxide to the ocean each year, which would account for roughly one percent of the ocean's total annual carbon dioxide uptake. This effect is not currently included in calculations of how much carbon dioxide is entering the ocean because of the burning of fossil fuels.

Less than half of the carbon dioxide that humans have released over the past 200 years has remained in the atmosphere. The remainder has been absorbed in almost equal proportions by the ocean and terrestrial ecosystems. How quickly carbon dioxide enters the ocean in any particular area depends on a number of factors, including the wind speed, the temperature of the water, and the relative concentrations of carbon dioxide in the surface waters and in the air just above the sea surface.
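
That dependence is commonly captured in a bulk flux formula of the form F = k * K0 * (pCO2_air - pCO2_sea) for uptake, where the gas transfer velocity k grows with wind speed and the solubility K0 depends on temperature. A hedged sketch follows; the quadratic wind-speed form of k is a widely used parameterization, and every numeric value below is an illustrative placeholder rather than an MBARI measurement:

    # Sketch: bulk air-sea CO2 flux. Positive = CO2 dissolving into the sea.
    def co2_flux_into_ocean(u10, k0, pco2_air, pco2_sea):
        # Gas transfer velocity in cm/h, quadratic in 10-m wind speed
        # (a common parameterization; coefficient assumed here).
        k_cm_per_h = 0.251 * u10 ** 2
        k_m_per_h = k_cm_per_h / 100.0
        # k0 in mol m^-3 uatm^-1; pCO2 values in uatm.
        return k_m_per_h * k0 * (pco2_air - pco2_sea)  # mol m^-2 h^-1

    # Illustrative morning scenario: a land breeze carries CO2-rich air
    # offshore, raising the overlying pCO2 from 410 to 450 uatm.
    baseline = co2_flux_into_ocean(5.0, 3.2e-5, pco2_air=410, pco2_sea=380)
    enriched = co2_flux_into_ocean(5.0, 3.2e-5, pco2_air=450, pco2_sea=380)
    print(f"enriched-air flux is {enriched / baseline:.1f}x baseline")  # 2.3x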

MBARI has been measuring carbon dioxide concentrations in the air and seawater of Monterey Bay almost continuously since 1993. But it wasn't until 2017 that researchers began looking carefully at the atmospheric data collected from sea-surface robots. "One of our summer interns, Diego Sancho-Gallegos, analyzed the atmospheric carbon dioxide data from our research moorings and found much higher levels than expected," explained MBARI Biological Oceanographer Francisco Chavez.

Chavez continued, "If these measurements had been taken on board a ship, researchers would have thought the extra carbon dioxide came from the ship's engine exhaust system and would have discounted them. But our moorings and surface robots do not release carbon dioxide to the atmosphere."

In early 2018 MBARI Research Assistant Devon Northcott started working on the data set, analyzing hourly carbon dioxide concentrations in the air over Monterey Bay. He noticed another striking pattern--carbon dioxide concentrations peaked in the early morning.

Although atmospheric scientists had previously noticed early-morning peaks in carbon dioxide concentrations in some cities and agricultural areas, this was the first time such peaks had been measured over ocean waters. The finding also contradicted a common scientific assumption that concentrations of carbon dioxide over ocean areas do not vary much over time or space.

Northcott was able to track down the sources of this extra carbon dioxide using measurements made from a robotic surface vessel called a Wave Glider, which travels back and forth across Monterey Bay making measurements of carbon dioxide in the air and ocean for weeks at a time.

"Because we had measurements from the Wave Glider at many different locations around the bay," Northcott explained, "I could use the Wave Glider's position and the speed and direction of the wind to triangulate the direction the carbon dioxide was coming from."

The data suggested two main sources for the morning peaks in carbon dioxide--the Salinas and Santa Clara Valleys. The Salinas Valley is one of California's largest agricultural areas, and many plants release carbon dioxide at night, which may explain why there was more carbon dioxide in the air from this region. Santa Clara Valley [aka Silicon Valley] is a dense urban area, where light winds and other atmospheric conditions in the early morning could concentrate carbon dioxide released from cars and factories.

Typical morning breezes blow directly from the Salinas Valley out across Monterey Bay. Morning breezes also carry air from the Santa Clara Valley southward and then west through a gap in the mountains (Hecker Pass) and out across Monterey Bay.

"We had this evidence that the carbon dioxide was coming from an urban area," explained Northcott. "But when we looked at the scientific literature, there was nothing about air from urban areas affecting the coastal ocean. People had thought about this, but no one had measured it systematically before."

The researchers see this paper not as a last word, but as a "wake-up call" to other scientists. "This brings up a lot of questions that we hope other researchers will look into," said Chavez. "One of the first and most important things would be to make detailed measurements of carbon dioxide in the atmosphere and ocean in other coastal areas. We need to know if this is a global phenomenon. We would also like to get the atmospheric modelling community involved."

"We've estimated that this could increase the amount of carbon dioxide entering coastal waters by roughly 20 percent," said Chavez. "This could have an effect on the acidity of seawater in these areas. Unfortunately, we don't have any good way to measure this increase in acidity because carbon dioxide takes time to enter the ocean and carbon dioxide concentrations vary dramatically in coastal waters."

"There must be other pollutants in this urban air that are affecting the coastal ocean as well," he added.

"This is yet another case where the data from MBARI's autonomous robots and sensors has led us to new and unexpected discoveries," said Chavez. "Hopefully other scientists will see these results and will want to know if this is happening in their own backyards."

Credit: 
Monterey Bay Aquarium Research Institute

Less colonialism could improve archaeological output, says anthropologist

image: An excavation site in Petra, Jordan.

Image: 
Allison Mickel

Archaeological excavation has, historically, operated in a very hierarchical structure, according to archaeologist Allison Mickel. The history of the enterprise is deeply entangled with Western colonial and imperial pursuits, she says. Excavations have been, and often still are, according to Mickel, led by foreigners from the West, while dependent on the labor of scores of people from the local community to perform the manual labor of the dig.

In a recently published paper examining some of this history, specifically in the context of archaeological excavations undertaken in the Middle East, Mickel writes: "Even well into the 20th century, locally hired excavation workers continued to benefit little from working on archaeological projects, still predominantly directed by European and American researchers who paid extremely low wages and did not share their purpose, progress, hypotheses, or conclusions with local community members."

Over time, excavation teams have gotten smaller, but hiring and labor practices remain the same, explains Mickel, an assistant professor of anthropology at Lehigh University who specializes in the Middle East.

"We haven't really changed the hierarchy of how we hire or the fact that workers are paid minimum wage--sometimes as little as a few dollars a day, which is not very much to spend even in their own context, for work that is dangerous and has a lot of risk to it," she says.

In a new paper, "Essential Excavation Experts: Alienation and Agency in the History of Archaeological Labor," published in Archaeologies: Journal of the World Archaeological Congress, Mickel illuminates how 19th-century archaeologists working in the Middle East managed local labor in ways that reflected capitalist labor management models. She focuses on two case studies from early Middle Eastern archaeology, examining the memoirs of two 19th-century archaeologists: Italian archaeologist Giovanni Battista Belzoni, known for his work in Egypt, and British archaeologist Sir Austen Henry Layard, best known for his work in Nimrud, an ancient Assyrian city about 20 miles south of Mosul, Iraq.

Mickel's analysis reveals the different ways local laborers responded to similar conditions. Her examination ultimately reveals how much archaeological knowledge has fundamentally relied upon the active choices made by the local laborers who do the digging.

Divergent responses to exploitative labor practices

Mickel argues that the framework established by the German philosopher and economist Karl Marx for the capitalist mode of production can be seen in 19th-century archaeological work in the Middle East--and, in many ways, in archaeological projects today. This includes Marx's assertion that, she writes, "...the capitalist mode of production leads to workers experiencing a sense of powerlessness and an inability to fulfill the potential of their own skills, expertise, and abilities."

In Mickel's analysis, Belzoni's approach to securing and retaining local laborers for his work in Egypt, which began in 1816, exemplified the conditions of modes of production that lead to his workers' "...alienation in the Marxist sense," beginning with how little he paid them.

She writes: "Monetarily devaluing the archaeological work of native Egyptians in this way engenders an understanding that archaeological labor is quite literally of little worth--one that in Marx's view deeply impacts the self-image of the workers in a production process. Not only were the workers paid next to nothing for performing the manual labor of Belzoni's endeavors, they were also not involved in the conceptualization of the project. In the end, the antiquities were subsequently shipped thousands of miles away, challenging both ideologically and spatially any relationship between the workers and the archaeological objects being unearthed through excavation, as well as the knowledge gleaned from them."

Mickel also writes about Belzoni's use of strongarm tactics to maintain the workforce he employed, including resorting to physical violence and bribery--strategies Belzoni used, in one example, on a foreman to force laborers to return to work during a strike.

During his famed excavation of the Memnon Head in 1816, Belzoni had to leave the site for an extended period of time in order to raise funds. He believed, writes Mickel, "...that the workers and their families were too lazy to dig on their own..."

"Indeed," she continues, "no substantial digging proceeded in Belzoni's absence by the time he returned. The reasons for this surely have nothing to with any indolence on the part of the native Egyptian workforce, but rather can be explained in terms of alienation."

In examining Layard's memoir, Mickel finds that although Layard worked in the same region and during the same period as Belzoni, his workers responded to similar working conditions very differently.

"Operating under extremely similar circumstances," writes Mickel, "the groups of workers examined here made very divergent decisions about how best to respond to an exploitative labor system, whether to rise up demonstratively against it or to resist the devaluation of their work by establishing themselves as essential to the production of artifacts and historical knowledge."

Layard's strategies for hiring and managing a local labor force had much in common with Belzoni's, including elements of capitalist labor relations such as low wages. Additionally, Layard's memoirs suggest "...that he viewed the total excavation endeavor as metaphorically signifying the superiority of Western civilization over Oriental peoples and cultures."

And yet Layard's workmen, explains Mickel, often appear in his writing as trusted experts in the excavation process: "These men developed impressive excavation abilities that Layard himself recognized, repeatedly hiring the same groups of people for season after season and site after site. One native Assyrian man whom he hired again and again, Hormuzd Rassam, ultimately went on to lead his own excavations on behalf of the British Museum at places like Nimrud and Nineveh; Rassam even published his own archaeological memoirs for popular distribution, like Layard and other archaeologists of the time."

Mickel compares these two contexts and concludes: "Operating under extremely similar circumstances, the groups of workers examined here made very divergent decisions about how best to respond to an exploitative labor system, whether to rise up demonstratively against it or to resist the devaluation of their work by establishing themselves as essential to the production of artifacts and historical knowledge."

Focusing attention on the divergent decisions these two groups of laborers made reveals how much is owed to archaeological workers' localized responses to a structure designed to maximize benefit to the archaeologists and minimize workers' control within the project, asserts Mickel.

She writes: "What would the archaeological record look like if this was not the case? How would archaeological knowledge be transformed if the means of its production were not controlled by archaeologists alone but shared with local stakeholders?"

Digging and questioning

As part of her work, Mickel supervises and participates in excavations at sites such as Petra, Jordan, and Catalhoyuk, Turkey, while researching the history of archaeology and its contemporary practice.

Mickel has spent two to three months each summer in Turkey and Jordan, and between 2011 and 2015 spent a year at both sites, conducting dissertation fieldwork on a Fulbright grant.

"What I find in [Petra and Catalhoyuk] is relevant to a lot of other contexts because archaeology is fairly regional in its practice," she says.

Beyond digging, Mickel examines records of archaeological excavations for the individuals listed as site workers. She visits their homes and asks questions about the site workers' experiences on the excavations.

"I found that this system has led to one in which workers are doing this dance all the time in archaeology where they are integral to carrying out an excavation, they work for almost nothing, they are good at what they do, they have decades of experience in addition to generational knowledge that's been handed down. ... Most of these people, for context, their fathers worked in archaeology, their grandfathers worked in archaeology--it's almost like a family business for them to be there. So they have a ton of knowledge, but if I tell them how much I admire their expertise, they react really negatively to that label of expertise."

Mickel believes that improving labor practices would benefit not just workers, but archaeology as a whole. She argues that the field could produce better science if archaeologists changed their labor practices.

"This isn't charity work," says Mickel. "If we want to have better archaeology, if we want to know more about the past, then we need to find ways to benefit from the knowledge that local people have been hiding for decades and decades and decades from us."

Credit: 
Lehigh University

Researchers describe the mechanism of a protein upon infection by 'Fasciola hepatica'

Fasciola hepatica is a parasite that causes losses of around $3.2 billion in the agricultural sector every year worldwide. The worm, about two centimeters long as an adult, mainly affects ruminants, with water or raw vegetables acting as vehicles of infection. Moreover, in developing countries with deficient sanitary control systems, more than five million people have been infected. Though the disease does not have a high death rate, it causes liver damage and makes the host more prone to catching other diseases.

Now, for the first time, several research groups at the University of Cordoba have described how the parasite induces overexpression of a protein that helps determine whether the pathogen can make itself at home within the infected animal. The gene in question is FOXP3, expressed in regulatory lymphocytes that suppress the immune response of the infected organism; in effect, the protein sends the body's defense system the false message that everything is alright. According to the results of the research, from a sheep's first day of infection, this gene's expression increases in the tissues through which the pathogen is circulating. This increase is no coincidence: the parasite itself appears to stimulate the gene's expression in order to dampen the host's immune response and survive better within the host.

This is one of the main results of the research published in Scientific Reports, but certainly not the only one. The same study also validated three reference or housekeeping genes in sheep to be used as controls in quantitative PCR, a molecular biology technique that can quickly, easily and simultaneously quantify the transcription of dozens of genes involved in a specific biological process.
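
Validated reference genes matter because quantitative PCR measures a target gene's expression relative to them, typically via the standard delta-delta-Ct calculation. A minimal sketch of that calculation, with all Ct values invented for illustration:

    # Sketch: relative expression by the delta-delta-Ct method,
    # normalized to a validated housekeeping (reference) gene.
    def fold_change(ct_target_treated, ct_ref_treated,
                    ct_target_control, ct_ref_control):
        d_ct_treated = ct_target_treated - ct_ref_treated
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control
        return 2 ** -dd_ct  # assumes ~100% amplification efficiency

    # Hypothetical Ct values: a target gene in infected vs. uninfected
    # sheep tissue, each measured against a stable reference gene.
    print(fold_change(24.0, 18.0, 27.0, 18.0))  # 8.0-fold upregulation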

These genes were selected from among ten candidate genes analyzed, and from now on they will allow researchers to delve more deeply into the host-pathogen relationship, because they provide the markers that science needs to analyze cytokine activity. Cytokines are a kind of molecular middleman, able to activate the inflammatory cells in charge of immune responses.

These proteins spend little time in the blood and are therefore hard to analyze with the most common kinds of blood tests. Nevertheless, as explained by José Pérez Arévalo, one of the main researchers on this study, identifying new genetic markers will enable future quantitative PCR studies of cytokines in sheep to be carried out successfully.

This research on sheep, one of the main victims of Fasciola hepatica, could mean an important step toward improving the effectiveness of vaccines in the future. Although some vaccines have shown promising results in the laboratory, none have attained enough protection to be developed commercially. For this reason, current control of the disease is based on drugs, which carry high costs, make the parasite more resistant, and leave residues in products such as milk and meat.

Credit: 
University of Córdoba

Microglia, immune cells of the central nervous system, shown to regulate neuroinflammation

image: Retinal microglia (green), the resident immune cell of the central nervous system, and the retinal vasculature (magenta), in a retinal flat mount.

Image: 
Connor Laboratory - Dong Ho Park, M.D.

A research team at Massachusetts Eye and Ear has shown that microglia, the primary immune cells of the central nervous system--including the retina of the eye--serve as "gatekeepers," or biosensors and facilitators, of neuroinflammation in a preclinical model of autoimmune uveitis. Uveitis is one of the leading causes of blindness, accounting for approximately 10% of significant visual impairment worldwide.

In a report published online today in Proceedings of the National Academy of Sciences (PNAS), the researchers describe, for the first time, a role for microglia in directing the initiation of autoimmune uveitis by orchestrating the inflammatory response within the retina. In reaction to disease induction, microglia closely associate with the retinal vasculature and facilitate inflammatory immune cell entry past the blood-brain, or ocular, barrier into the retina. When the researchers depleted microglia in this model, they observed that the disease was completely blocked.

"Normally, the blood brain barrier serves as an impediment and prevents the immune response from going into tissues of the central nervous system, including the retina. However, our results provide clear evidence, that in the context of uveitis, microglia can facilitate entry of inflammatory immune cells into the retina, and enable the host immune responses to attack cells that are not normally recognized by the immune system," said senior author Kip M. Connor, PhD, vision researcher at Mass. Eye and Ear and Associate Professor of Ophthalmology at Harvard Medical School. "Until now, the role of microglia in retinal disease has not been fully understood, but our research shows--for the first time--that these cells serve as gatekeepers from the immune system to the central nervous system. This gateway not only has implications for treating uveitis, but may provide future avenues for drug delivery across the blood brain barrier for other diseases of the central nervous system."

Despite significant advances in research and therapeutics, the prevalence of uveitis has not been reduced in the past 30 years. Uveitis is characterized as inflammation of the retina as well as the uveal tissues, optic nerve and vitreous, wherein a large influx of immune cells into the eye coincides with elevated inflammatory destruction. Autoimmune uveitis occurs in a variety of diseases, including Behçet's disease, sarcoidosis, and Vogt-Koyanagi-Harada disease. Patients with uveitis often suffer serious visual loss after persistent inflammation due to immune-mediated damage to the neuronal cells of the retina.

Since microglia have multiple phenotypes and different stages of activation that can be associated with either harmful or beneficial effects in disease pathogenesis, their role and function in disease progression are not well defined. Researchers across all fields of medicine have recently begun to elucidate the function of microglial cells in various conditions. For example, in Alzheimer's, Parkinson's and other neurodegenerative diseases of the brain, microglia are thought to be harmful. In ophthalmology, it is known that microglial cells are activated in response to a number of developmental and disease indications, and their roles in disease are thought to be context dependent: they can be either beneficial or harmful.

"These findings provide the first insights into how microglia respond and function during a systemic autoimmune disease targeting the eye," said lead author Yoko Okunuki, MD, PhD, an investigator in Dr. Connor's laboratory and Instructor of Ophthalmology at Harvard Medical School.

"This novel work by Dr. Connor and colleagues identifies that microglia regulate entry through the blood-retinal barrier, and it is our hope that these finding can be harnessed for future targeted therapies for uveitis," says Joan W. Miller, MD, the David Glendenning Cogan Professor and Chair of Ophthalmology at Harvard Medical School, Chief of Ophthalmology at Mass. Eye and Ear and Massachusetts General Hospital, and Ophthalmologist-in-Chief at Brigham and Women's Hospital. "It is becoming increasingly clear that microglia are involved in a number of retinal disorders as well as neuroinflammatory disorders of the central nervous system.

Credit: 
Mass Eye and Ear

Slime mold absorbs substances to memorize them

image: A fusion of the venous networks of two blobs.

Image: 
© David Villa / CNRS Photothèque

Physarum polycephalum is a complex single-celled organism that has no nervous system. It can learn and can transfer its knowledge to fellow slime moulds via fusion. How it does so was a mystery. Researchers at the Centre de Recherches sur la Cognition Animale (CNRS/UT3 Paul Sabatier) have recently demonstrated that slime moulds learn to tolerate a substance by absorbing it.

This discovery stems from an observation: slime moulds only exchange information when their venous networks fuse. In that case, does knowledge circulate through these veins? Is it the substance that the slime mould gets used to that supports its memory?

First, the team of scientists forced the slime moulds to cross salty environments for six days to habituate them to salt. Then they measured the salt concentration inside the slime moulds: the habituated moulds contained ten times more salt than "naive" slime moulds. The researchers then placed the habituated slime moulds in a neutral environment and observed that they excreted the absorbed salt within two days, losing the "memory". This experiment therefore suggested a link between the salt concentration within the organism and the "memory" of the habituation.

To go further and confirm this hypothesis, the scientists introduced the "memory" into naive blobs by injecting a salt solution directly into the organisms. Two hours later, the slime moulds were no longer naive and behaved like slime moulds that had undergone six days of training.

When environmental conditions deteriorate, slime moulds can enter a dormant stage. The researchers demonstrated that slime moulds habituated to salt retained the absorbed salt through the dormant stage and could keep the knowledge for up to a month.

The results of this study show that an aversive substance can itself serve as the support of the slime mould's memory. The researchers are now trying to establish whether slime moulds can memorise several aversive substances at the same time and to what extent they can get used to them.

Credit: 
CNRS

How does wildlife fare after fires?

Fire ecologists and wildlife specialists at La Trobe University have made key discoveries about how wildlife recovers after bushfires, and what Australian conservationists can do to assist the process.

The study, published this week in the journal Wildlife Research, looks at various reserves in the Australian state of Victoria after bushfires had taken place. It finds that the area surrounding a fire dictates which species survive and go on to thrive.

Key findings of the study included:

Invasive species such as Australian ravens, magpies and house mice were commonly found recolonising burnt areas surrounded by agriculture;

Native species such as crested bellbirds, hopping mice and white-eared honeyeater were commonly found recolonising burnt areas surrounded by mallee vegetation; and

Other native species such as Major Mitchell's cockatoos, mallee ringnecks and white-winged choughs were commonly found recolonising burnt areas surrounded by a mix of mallee vegetation and sparse grassy woodland.

To minimise the damage from large bushfires and to protect important species and vegetation, land managers use strategic burns to create firebreaks - vital in slowing the spread of fire.

La Trobe honours student Angela Simms, author of the study, said it opens up an important discussion on how conservationists can make informed decisions about planned, strategic burning.

"With our dry climate, Australia has seen more than its fair share of destructive bushfires. This, in conjunction with increasing habitat loss more generally, means that it's not uncommon to see localized extinctions of species after a fire.

"We can conclude that the surrounding habitat and vegetation to a fire directly determines the species that will recolonize that area.

"From this, we know the wider landscape context - particularly surrounding species and vegetation - should be a key consideration for conservation programs when planning locations for strategic burns.

"But it also tells us a lot about what to expect when a bushfire does take place, and what preparations they can undertake to restore native wildlife that could be vulnerable when needed.

"There is still much to learn about post-fire fauna communities but this study marks the start of a journey to better understand how we can protect important wildlife before, during and after fires."

Credit: 
La Trobe University

A universal framework combining genome annotation and undergraduate education

image: Workflow describing various steps in the manual annotation of protein-coding genes.

Image: 
Prashant Hosmani

As genome sequencing becomes cheaper and faster, resulting in an exponential increase in data, the need for efficiency in predicting gene function is growing, as is the need to train the next generation of scientists in bioinformatics. Researchers in the lab of Lukas Mueller, a faculty member of the Boyce Thompson Institute (BTI), have developed a strategy to fulfill both of these needs, benefiting students and researchers in the process.

The Mueller Lab created a framework using the tremendous influx of new genome sequences as a training resource for undergraduates interested in learning genome annotation. This framework was published online in PLOS Computational Biology on April 3, 2019.

What is genome annotation, and why is it important?

After researchers determine the sequence of the millions of base pairs of DNA in an organism's genome, they need to figure out two things: which DNA segments are genes that encode proteins, and what are those proteins' functions. This process of identifying genes and predicting their functions is called genome annotation.

"The prediction of genes and their functions is what most biologists are interested in. That's where most understanding of biological processes is happening," says Prashant Hosmani, a Bioinformatics Analyst in the Mueller Lab and first author on the paper.

A genome is annotated by comparing its sequence to gene sequences from other related organisms. The most accurate method of genome annotation is manual curation, where a person does the analysis. In contrast, utilizing a computer program to identify genes and their functions is faster but is sometimes less accurate.

"Manual annotation is very time-intensive and thus expensive," said Surya Saha, Senior Bioinformatics Analyst in the Mueller Lab and the project coordinator. "The trick is to do both: first use automatic annotation, and then focus on genes and biochemical pathways of interest and annotate them manually."

The paper outlines a set of logical steps to begin an undergraduate annotation program from the ground up. When students first join the project, they are trained by team leaders and expert annotators on the tools of the trade.

Throughout the project, students keep careful notes of their research and results, ultimately compiling them into a report about the biochemical pathway of interest and the member gene families, which may be published. Indeed, this method has been used to generate a peer-reviewed publication with more than 20 undergraduate authors.

"Working is one thing, and receiving an acknowledgement for that work is also really important," says Hosmani. "That acts as a real motivation for the students."

Other student benefits include working with international collaborators, networking, practicing communication and peer review skills, and gaining valuable insights into career options. Undergraduates may also receive research or capstone project credits for their work, which increases their commitment to the project. More and more science-based graduate programs also require knowledge of bioinformatics, so these skills will prove valuable in many fields.

In the end, the researchers gain high-quality genome annotations for any species--not just plants--which offer a better understanding of how the organism functions, ultimately benefitting society in many fields, such as agriculture, biofuels and medicine.

The authors hope that other institutions will adapt and build upon this framework, no matter their size, access to resources or annotation goals. To make the framework easy to use, the authors designed their figures and tables to be stand-alone and printer-friendly for easy reference.

"Anybody who has a research problem, a sequenced genome, and interested students can implement a system by building off our workflow," said Saha.

Credit: 
Boyce Thompson Institute

Video plus brochure helps patients make lung cancer scan decision

image: Video and brochure more effective than brochure alone in helping patients decide whether or not to get a low-dose CT scan to detect lung cancer early.

Image: 
ATS

April 19, 2019-- Adding a short video describing the potential benefits and risks of low-dose CT screening for lung cancer to an informational brochure increased patients' knowledge, and reduced their conflicted feelings about whether to undergo the scan, more than the brochure alone, according to a randomized, controlled trial published online in the Annals of the American Thoracic Society.

In "Impact of a Lung Cancer Screening Information Film on Informed Decision-Making: A Randomized Trial," Sam M. Janes, MBBS, PhD, and co-authors report on a study of 229 participants at a London hospital who met any of three criteria used to select patients who may benefit from the screening. One criterion was the U.S. Preventative Services Task Force recommendation of a 30 or more pack-year smoking history in patients who quit less than 15 years ago.

The authors note that studies show that fewer than 2 percent of the 7.6 million former smokers in the U.S. who are eligible for the screening actually undergo the CT scan, despite the fact that it has been shown to reduce lung cancer mortality by 20 percent.

The most significant potential harms from screening come from detecting nodules, or small masses of tissue, that are usually benign but cause anxiety and may require additional scans or biopsies to determine if they are cancerous or not.

Dr. Janes, senior author and head of the Respiratory Research Department at University College London and director of London's Lung Cancer Board, said the goal of the video was to produce a tool that would help facilitate a conversation between patients and their physicians and lead to shared decision-making--a requirement for Medicare and Medicaid reimbursement in the U.S.

"We used feedback from other patients eligible for lung cancer screening to create a film that individuals from a variety of educational backgrounds could understand and that presented the information in a clear, simple and palatable manner," he added.

The researchers used a questionnaire to measure participants' knowledge before and after reading the 10-page brochure or reading the brochure and watching the five-and-a-half-minute video. The researchers also measured the level of conflict the participants experienced in deciding whether to be screened or not.

At enrollment, participants answered five of 10 questions correctly on average. After reading the brochure, participants in that arm of the study answered seven questions correctly on average. After reading the brochure and watching the video, those in that arm answered eight questions correctly.

Those who read the brochure and watched the video were more likely than those who read only the brochure to answer two particular questions correctly. The first concerned the fact that uncertainty about whether a pulmonary nodule found on the scan is cancerous does not mean that the risk of cancer is high. The second concerned the fact that the amount of radiation produced by a single scan is about the equivalent of a year's background radiation.

Members of both groups were equally likely to undergo the scan, with more than three in four choosing to be screened. However, those who watched the video as well as read the brochure were more certain of their decision to either have the scan done or not. On a scale of zero to nine, their level of certainty was 8.5, compared to 8.2 among those who read only the brochure.

Study limitations include the fact that all participants were enrolled in a larger lung cancer study, and half of them would have seen the informational booklet before participating in the study reported in AnnalsATS.

"There is an urgent unmet need to provide information to individuals considering lung cancer screening, but for this to be done in a non-intimidating, friendly and simple way," said Mamta Ruparel, MBBS, PhD, lead study author and a researcher at the Lungs for Living Research Centre at University College London. "This study demonstrates that an information film can enhance shared decision-making, while reducing the conflicted feelings patients may have about undergoing the procedure without reducing low-dose CT screening participation."

Credit: 
American Thoracic Society

Russian blue chips prove their pricing potential

Researchers from HSE University and London Business School have studied the price dynamics of Russian companies' stocks and depositary receipts. The research indicates that, thanks to the differences between these prices, there are opportunities for profitable trading with zero or, at least, minimal risk.

https://www.tandfonline.com/doi/full/10.1080/1540496X.2018.1564276?scroll=top&needAccess=true

This was the first time such research had been carried out on Russian companies. The key data were drawn from the Datastream database, in which the researchers tracked the prices of Russian companies' depositary receipts. They analyzed only companies whose stocks were highly liquid between 2006 and 2013 and were also traded on exchanges outside Russia, mainly in London and Frankfurt. A total of nine top Russian companies were included in the sample: Lukoil, Rosneft, Sberbank, Surgutneftegas, Gazpromneft, NLMK, MMK, Uralkali, and PIK Group.

First, the researchers looked at opportunities for conversion arbitrage deals. This investment strategy aims to earn a profit with zero risk by almost simultaneously buying and selling related assets (stocks and depositary receipts of the same company) on different stock exchanges. For example, an investor buys a company's shares at one price on the Frankfurt Stock Exchange and immediately sells the same shares at a higher price on the Moscow Exchange. The conversion usually carries transaction costs, but the amount is insignificant.
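
As an illustration, the sketch below works through the arithmetic of a single conversion deal. All prices, quantities and fee levels are invented for the example and do not come from the study.

    # Hypothetical conversion arbitrage: buy depositary receipts on
    # one exchange, sell the equivalent shares on another almost
    # simultaneously. All numbers are invented for illustration.
    frankfurt_price = 98.00   # price per receipt, Frankfurt
    moscow_price = 100.00     # price per equivalent share, Moscow
    quantity = 1000
    fee_rate = 0.001          # assumed 0.1% transaction cost per leg

    gross = (moscow_price - frankfurt_price) * quantity
    costs = fee_rate * (frankfurt_price + moscow_price) * quantity
    profit = gross - costs
    print(f"riskless profit: {profit:.2f}")  # positive => the deal pays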

Another investment strategy discussed in the paper is arbitrage without conversion, or a 'buy-and-hold' approach. This, too, involves speculating on the difference in prices between the same company's shares and depositary receipts. Unlike conversion arbitrage, however, the deals are not concluded immediately but over a period of time: the investor buys stocks when their prices on the two exchanges differ and sells when the prices equalize.

The 'buy-and-hold' strategy is considered to be the riskier approach. Theoretically, the prices for the same asset on two different exchanges should equalize, but in reality, this can be a long and not always predictable process. Therefore, the investor may have to bear losses.
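
The mechanics, and the risk, can be sketched in a few lines. The daily spread series, entry threshold and exit rule below are invented for the example, not taken from the paper.

    # Hypothetical 'buy-and-hold' arbitrage: open a position when the
    # share/receipt spread is wide, close it once prices converge.
    spreads = [2.5, 2.1, 1.4, 0.6, 0.1, -0.2]  # share price minus receipt price

    ENTRY = 2.0        # assumed entry threshold
    entry_spread = None
    for day, spread in enumerate(spreads):
        if entry_spread is None and spread >= ENTRY:
            entry_spread = spread  # buy the cheaper receipt, sell the share
            print(f"day {day}: position opened at spread {spread}")
        elif entry_spread is not None and spread <= 0:
            print(f"day {day}: closed, captured {entry_spread - spread:.1f} per share")
            entry_spread = None
            break
    # If the loop ends with entry_spread still set, the prices never
    # converged and the investor is left bearing exactly the risk
    # described above.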

The analysis indicates that such divergence is more common when depositary receipts on international exchanges are priced lower than the corresponding shares. The research also notes that these price differences may persist for long periods, so opportunities for profitable arbitrage deals do arise. 'Buy-and-hold' strategies, despite being riskier, tend to generate returns about twice as high as those from conversion strategies.

Credit: 
National Research University Higher School of Economics