Saturday, 18 January 2020

Meteorite contains the oldest material on Earth: 7-billion-year-old stardust


Illustration of meteor entering Earth's atmosphere

Stars have life cycles. They're born when bits of dust and gas floating through space find each other, collapse in on one another and heat up. They burn for millions to billions of years, and then they die. When they die, they pitch the particles that formed in their winds out into space, and those bits of stardust eventually form new stars, along with new planets and moons and meteorites. And in a meteorite that fell fifty years ago in Australia, scientists have now discovered stardust that formed 5 to 7 billion years ago -- the oldest solid material ever found on Earth.
"This is one of the most exciting studies I've worked on," says Philipp Heck, a curator at the Field Museum, associate professor at the University of Chicago, and lead author of a paper describing the findings in the Proceedings of the National Academy of Sciences. "These are the oldest solid materials ever found, and they tell us about how stars formed in our galaxy."
The materials Heck and his colleagues examined are called presolar grains -- minerals formed before the Sun was born. "They're solid samples of stars, real stardust," says Heck. These bits of stardust became trapped in meteorites, where they remained unchanged for billions of years, making them time capsules from the time before the solar system.
But presolar grains are hard to come by. They're rare, found in only about five percent of meteorites that have fallen to Earth, and they're tiny -- a hundred of the biggest ones would fit on the period at the end of this sentence. But the Field Museum has the largest portion of the Murchison meteorite, a treasure trove of presolar grains that fell in Australia in 1969 and that the people of Murchison, Victoria, made available to science. The presolar grains for this study were isolated from the Murchison meteorite about 30 years ago at the University of Chicago.
"It starts with crushing fragments of the meteorite down into a powder ," explains Jennika Greer, a graduate student at the Field Museum and the University of Chicago and co-author of the study. "Once all the pieces are segregated, it's a kind of paste, and it has a pungent characteristic-it smells like rotten peanut butter."
This "rotten-peanut-butter-meteorite paste" was then dissolved with acid, until only the presolar grains remained. "It's like burning down the haystack to find the needle," says Heck.
Once the presolar grains were isolated, the researchers figured out from what types of stars they came and how old they were. "We used exposure age data, which basically measures their exposure to cosmic rays, which are high-energy particles that fly through our galaxy and penetrate solid matter," explains Heck. "Some of these cosmic rays interact with the matter and form new elements. And the longer they get exposed, the more those elements form.
"I compare this with putting out a bucket in a rainstorm. Assuming the rainfall is constant, the amount of water that accumulates in the bucket tells you how long it was exposed," he adds. By measuring how many of these new cosmic-ray produced elements are present in a presolar grain, we can tell how long it was exposed to cosmic rays, which tells us how old it is.
The researchers learned that some of the presolar grains in their sample were the oldest ever discovered -- based on how many cosmic rays they'd soaked up, most of the grains had to be 4.6 to 4.9 billion years old, and some grains were even older than 5.5 billion years. For context, our Sun is 4.6 billion years old, and Earth is 4.5 billion.
But the age of the presolar grains wasn't the end of the discovery. Since presolar grains are formed when a star dies, they can tell us about the history of stars. And 7 billion years ago, there was apparently a bumper crop of new stars forming -- a sort of astral baby boom.
"We have more young grains that we expected," says Heck. "Our hypothesis is that the majority of those grains, which are 4.9 to 4.6 billion years old, formed in an episode of enhanced star formation. There was a time before the start of the Solar System when more stars formed than normal."
This finding is ammo in a debate among scientists about whether new stars form at a steady rate or whether the rate rises and falls over time. "Some people think that the star formation rate of the galaxy is constant," says Heck. "But thanks to these grains, we now have direct evidence for a period of enhanced star formation in our galaxy seven billion years ago with samples from meteorites. This is one of the key findings of our study."
Heck notes that this isn't the only unexpected thing his team found. As almost a side note to the main research questions, in examining the way that the minerals in the grains interacted with cosmic rays, the researchers also learned that presolar grains often float through space stuck together in large clusters, "like granola," says Heck. "No one thought this was possible at that scale."
Heck and his colleagues look forward to all of these discoveries furthering our knowledge of our galaxy. "With this study, we have directly determined the lifetimes of stardust. We hope this will be picked up and studied so that people can use this as input for models of the whole galactic life cycle," he says.
Heck notes that there are lifetimes' worth of questions left to answer about presolar grains and the early Solar System. "I wish we had more people working on it to learn more about our home galaxy, the Milky Way," he says.
"Once learning about this, how do you want to study anything else?" says Greer. "It's awesome, it's the most interesting thing in the world."
"I always wanted to do astronomy with geological samples I can hold in my hand," says Heck. "It's so exciting to look at the history of our galaxy. Stardust is the oldest material to reach Earth, and from it, we can learn about our parent stars, the origin of the carbon in our bodies, the origin of the oxygen we breathe. With stardust, we can trace that material back to the time before the Sun."
"It's the next best thing to being able to take a sample directly from a star," says Greer.

Astronomers discover class of strange objects near our galaxy's enormous black hole

Center of Milky Way galaxy
Astronomers from UCLA's Galactic Center Orbits Initiative have discovered a new class of bizarre objects at the center of our galaxy, not far from the supermassive black hole called Sagittarius A*. They published their research today in the journal Nature.
"These objects look like gas and behave like stars," said co-author Andrea Ghez, UCLA's Lauren B. Leichtman and Arthur E. Levine Professor of Astrophysics and director of the UCLA Galactic Center Group.
The new objects look compact most of the time and stretch out when their orbits bring them closest to the black hole. Their orbits range from about 100 to 1,000 years, said lead author Anna Ciurlo, a UCLA postdoctoral researcher.
Ghez's research group identified an unusual object at the center of our galaxy in 2005, which was later named G1. In 2012, astronomers in Germany made a puzzling discovery of a bizarre object named G2 in the center of the Milky Way that made a close approach to the supermassive black hole in 2014. Ghez and her research team believe that G2 is most likely two stars that had been orbiting the black hole in tandem and merged into an extremely large star, cloaked in unusually thick gas and dust.
"At the time of closest approach, G2 had a really strange signature," Ghez said. "We had seen it before, but it didn't look too peculiar until it got close to the black hole and became elongated, and much of its gas was torn apart. It went from being a pretty innocuous object when it was far from the black hole to one that was really stretched out and distorted at its closest approach and lost its outer shell, and now it's getting more compact again."
"One of the things that has gotten everyone excited about the G objects is that the stuff that gets pulled off of them by tidal forces as they sweep by the central black hole must inevitably fall into the black hole," said co-author Mark Morris, UCLA professor of physics and astronomy. "When that happens, it might be able to produce an impressive fireworks show since the material eaten by the black hole will heat up and emit copious radiation before it disappears across the event horizon."
But are G2 and G1 outliers, or are they part of a larger class of objects? In answer to that question, Ghez's research group reports the existence of four more objects they are calling G3, G4, G5 and G6. The researchers have determined each of their orbits. While G1 and G2 have similar orbits, the four new objects have very different orbits.
Ghez believes all six objects were binary stars -- a system of two stars orbiting each other -- that merged because of the strong gravitational force of the supermassive black hole. The merging of two stars takes more than 1 million years to complete, Ghez said.
"Mergers of stars may be happening in the universe more often than we thought, and likely are quite common," Ghez said. "Black holes may be driving binary stars to merge. It's possible that many of the stars we've been watching and not understanding may be the end product of mergers that are calm now. We are learning how galaxies and black holes evolve. The way binary stars interact with each other and with the black hole is very different from how single stars interact with other single stars and with the black hole."
Ciurlo noted that while the gas from G2's outer shell got stretched dramatically, its dust inside the gas did not get stretched much. "Something must have kept it compact and enabled it to survive its encounter with the black hole," Ciurlo said. "This is evidence for a stellar object inside G2."
"The unique dataset that Professor Ghez's group has gathered during more than 20 years is what allowed us to make this discovery," Ciurlo said. "We now have a population of 'G' objects, so it is not a matter of explaining a 'one-time event' like G2."
The researchers made observations from the W.M. Keck Observatory in Hawaii and used a powerful technology that Ghez helped pioneer, called adaptive optics, which corrects the distorting effects of the Earth's atmosphere in real time. They conducted a new analysis of 13 years of their UCLA Galactic Center Orbits Initiative data.
In September 2019, Ghez's team reported that the black hole is getting hungrier and it is unclear why. The stretching of G2 in 2014 appeared to pull off gas that may recently have been swallowed by the black hole, said co-author Tuan Do, a UCLA research scientist and deputy director of the Galactic Center Group. The mergers of stars could feed the black hole.
The team has already identified a few other candidates that may be part of this new class of objects, and are continuing to analyze them.
Ghez noted the center of the Milky Way galaxy is an extreme environment, unlike our less hectic corner of the universe.
"The Earth is in the suburbs compared to the center of the galaxy, which is some 26,000 light-years away," Ghez said. "The center of our galaxy has a density of stars 1 billion times higher than our part of the galaxy. The gravitational pull is so much stronger. The magnetic fields are more extreme. The center of the galaxy is where extreme astrophysics occurs -- the X-sports of astrophysics."
Ghez said this research will help to teach us what is happening in the majority of galaxies.

Taking the temperature of dark matter

Distant galaxies
Warm, cold, just right? Physicists at the University of California, Davis are taking the temperature of dark matter, the mysterious substance that makes up about a quarter of our universe.
We have very little idea of what dark matter is, and physicists have yet to detect a dark matter particle. But we do know that the gravity of clumps of dark matter can distort light from distant objects. Chris Fassnacht, a physics professor at UC Davis, and colleagues are using this distortion, called gravitational lensing, to learn more about the properties of dark matter.
The standard model for dark matter is that it is 'cold,' meaning that the particles move slowly compared to the speed of light, Fassnacht said. This is also tied to the mass of dark matter particles. The lower the mass of the particle, the 'warmer' it is and the faster it will move.
The model of cold (more massive) dark matter holds at very large scales, Fassnacht said, but doesn't work so well on the scale of individual galaxies. That's led to other models including 'warm' dark matter with lighter, faster-moving particles. 'Hot' dark matter with particles moving close to the speed of light has been ruled out by observations.
Former UC Davis graduate student Jen-Wei Hsueh, Fassnacht and colleagues used gravitational lensing to put a limit on the warmth and therefore the mass of dark matter. They measured the brightness of seven distant gravitationally lensed quasars to look for changes caused by additional intervening blobs of dark matter and used these results to measure the size of these dark matter lenses.
If dark matter particles are lighter, warmer and more rapidly moving, then they will not form structures below a certain size, Fassnacht said.
"Below a certain size, they would just get smeared out," he said.
The results put a lower limit on the mass of a potential dark matter particle while not ruling out cold dark matter, he said. The team's results represent a major improvement over a previous analysis, from 2002, and are comparable to recent results from a team at UCLA.
Fassnacht hopes to continue adding lensed objects to the survey to improve the statistical accuracy.
"We need to look at about 50 objects to get a good constraint on how warm dark matter can be," he said.

Deep learning enables real-time imaging around corners

Busy intersection
Researchers have harnessed the power of a type of artificial intelligence known as deep learning to create a new laser-based system that can image around corners in real time. With further development, the system might let self-driving cars "look" around parked cars or busy intersections to see hazards or pedestrians. It could also be installed on satellites and spacecraft for tasks such as capturing images inside a cave on an asteroid.
"Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds," said research team leader Christopher A. Metzler from Stanford University and Rice University. "These attributes enable applications that wouldn't otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner."
In Optica, The Optical Society's journal for high-impact research, Metzler and colleagues from Princeton University, Southern Methodist University, and Rice University report that the new system can distinguish submillimeter details of a hidden object from 1 meter away. The system is designed to image small objects at very high resolutions but can be combined with other imaging systems that produce low-resolution room-sized reconstructions.
"Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense," said co-author Felix Heide from Princeton University. "Our work takes a step toward enabling its use in a variety of such applications."
Solving an optics problem with deep learning
The new imaging system uses a commercially available camera sensor and a powerful, but otherwise standard, laser source that is similar to the one found in a laser pointer. The laser beam bounces off a visible wall onto the hidden object and then back onto the wall, creating an interference pattern known as a speckle pattern that encodes the shape of the hidden object.
Reconstructing the hidden object from the speckle pattern requires solving a challenging computational problem. Short exposure times are necessary for real-time imaging but produce too much noise for existing algorithms to work. To solve this problem, the researchers turned to deep learning.
"Compared to other approaches for non-line-of-sight imaging, our deep learning algorithm is far more robust to noise and thus can operate with much shorter exposure times," said co-author Prasanna Rangarajan from Southern Methodist University. "By accurately characterizing the noise, we were able to synthesize data to train the algorithm to solve the reconstruction problem using deep learning without having to capture costly experimental training data."

Molecular switch for repairing central nervous system disorders

Neurons illustration
A molecular switch has the ability to turn on a substance in animals that repairs neurological damage in disorders such as multiple sclerosis (MS), Mayo Clinic researchers discovered. The early research in animal models could advance an already approved Food and Drug Administration therapy and also could lead to new strategies for treating diseases of the central nervous system.
Research by Isobel Scarisbrick, Ph.D., published in the Journal of Neuroscience finds that by genetically switching off a receptor activated by blood proteins, named Protease Activated Receptor 1 (PAR1), the body switches on regeneration of myelin, a fatty substance that coats and protects nerves.
"Myelin regeneration holds tremendous potential to improve function. We showed when we block the PAR1 receptor, neurological healing is much better and happens more quickly. In many cases, the nervous system does have a good capacity for innate repair," says Dr. Scarisbrick, principal investigator and senior author. "This sets the stage for development of new clinically relevant myelin regeneration strategies."
Myelin, Thrombin and the Nervous System
Myelin acts like a wire insulator that protects electrical signals sent through the nervous system. Demyelination, or injury to the myelin, slows electrical signals between brain cells, resulting in loss of sensory and motor function. Sometimes the damage is permanent. Demyelination is found in disorders such as MS, Alzheimer's disease, Huntington's disease, schizophrenia and spinal cord injury.
Thrombin is a protein in blood that aids in healing. However, too much thrombin triggers the PAR1 receptor found on the surface of cells, and this blocks myelin production. Oligodendrocyte progenitor cells capable of myelin regeneration are often found at sites of myelin injury, including demyelinating injuries in multiple sclerosis.
"These oligodendroglia fail to differentiate into mature myelin regenerating cells for reasons that remain poorly understood," says Dr. Scarisbrick. "Our research identifies PAR1 as a molecular switch of myelin regeneration. In this study, we demonstrate that blocking the function of the PAR1, also referred to as the thrombin receptor, promotes myelin regeneration in two unique experimental models of demyelinating disease."
The Research
The research focused on two mouse models. One was an acute model of myelin injury and the other studied chronic demyelination, each modeling unique features of myelin loss present in MS, Alzheimer's disease and other neurological disorders. Researchers genetically blocked PAR1 to block the action of excess thrombin.
The research not only discovered a new molecular switch that turns on myelin regeneration, but also discovered a new interaction between the PAR1 receptor and a very powerful growth system called brain-derived neurotrophic factor (BDNF). BDNF is like a fertilizer for brain cells that keeps them healthy, functioning and growing.
Significantly, the researchers found that a current Food and Drug Administration-approved drug that inhibits the PAR1 receptor also showed ability to improve myelin production in cells tested in the laboratory.
"It is important to say that we have not and are not advocating that patients take this inhibitor at this time," says Dr. Scarisbrick. "We have not used the drug in animals yet, and it is not ready to put in patients for the purpose of myelin repair. Using cell culture systems, we are showing that this has the potential to improve myelin regeneration."

A replacement for exercise?

Mouse in exercise wheel 
Whether it be a brisk walk around the park or high intensity training at the gym, exercise does a body good. But what if you could harness the benefits of a good workout without ever moving a muscle?
Michigan Medicine researchers studying a class of naturally occurring protein called Sestrin have found that it can mimic many of exercise's effects in flies and mice. The findings could eventually help scientists combat muscle wasting due to aging and other causes.
"Researchers have previously observed that Sestrin accumulates in muscle following exercise," said Myungjin Kim, Ph.D., a research assistant professor in the Department of Molecular & Integrative Physiology. Kim, working with professor Jun Hee Lee, Ph.D. and a team of researchers wanted to know more about the protein's apparent link to exercise. Their first step was to encourage a bunch of flies to work out.
Taking advantage of Drosophila flies' normal instinct to climb up and out of a test tube, their collaborators Robert Wessells, Ph.D. and Alyson Sujkowski of Wayne State University in Detroit developed a type of fly treadmill. Using it, the team trained the flies for three weeks and compared the running and flying ability of normal flies with that of flies bred to lack the ability to make Sestrin.
"Flies can usually run around four to six hours at this point and the normal flies' abilities improved over that period," says Lee. "The flies without Sestrin did not improve with exercise."
What's more, when they overexpressed Sestrin in the muscles of normal flies, essentially maxing out their Sestrin levels, they found those flies had abilities above and beyond the trained flies, even without exercise. In fact, flies with overexpressed Sestrin didn't develop more endurance when exercised.
The beneficial effects of Sestrin include more than just improved endurance. Mice without Sestrin lacked the improved aerobic capacity, improved respiration and fat burning typically associated with exercise.
"We propose that Sestrin can coordinate these biological activities by turning on or off different metabolic pathways," says Lee. "This kind of combined effect is important for producing exercise's effects."

Beauty sleep could be real, say body clock biologists

Woman waking up from bed 
Biologists from The University of Manchester have explained for the first time why having a good night's sleep really could prepare us for the rigours of the day ahead.


The study, carried out in mice and published in Nature Cell Biology, shows how the body clock mechanism boosts our ability to maintain our bodies when we are most active.
And because we know the body clock is less precise as we age, the discovery, argues lead author Professor Karl Kadler, may one day help unlock some of the mysteries of aging.
The discovery throws fascinating light on the body's extracellular matrix -- which provides structural and biochemical support to cells in the form of connective tissue such as bone, skin, tendon and cartilage.
Over half our body weight is matrix, and half of this is collagen -- and scientists have long understood it is fully formed by the time we reach the age of 17.
But now the researchers have discovered there are two types of fibrils -- the rope-like structures of collagen that are woven by the cells to form tissues.
Thicker fibrils measuring about 200 nanometres in diameter -- roughly ten thousand times thinner than a pinhead -- are permanent and stay with us throughout our lives, unchanged from the age of 17.
But thinner fibrils measuring about 50 nanometres, the researchers find, are sacrificial, breaking as we subject the body to the rigours of the day and replenishing when we rest at night.
The collagen was analysed by mass spectrometry, and the mouse fibrils were observed using state-of-the-art volumetric electron microscopy -- funded by the Wellcome Trust -- every four hours over two days.
When the body clock genes were knocked out in mice, the thin and thick fibrils were amalgamated randomly.
"Collagen provides the body with structure and is our most abundant protein, ensuring the integrity, elasticity and strength of the body's connective tissue," said Professor Kadler.
"It's intuitive to think our matrix should be worn down by wear and tear, but it isn't and now we know why: our body clock makes an element which is sacrificial and can be replenished, protecting the permanent parts of the matrix.
He added: "So if you imagine the bricks in the walls of a room as the permanent part, the paint on the walls could be seen as the sacrificial part which needs to be replenished every so often.
"And just like you need to oil a car and keep its radiator topped up with water, these thin fibrils help maintain the body's matrix."
"Knowing this could have implications on understanding our biology at its most fundamental level. It might, for example, give us some deeper insight into how wounds heal, or how we age.

Scientists breach brain barriers to attack tumors

Illustration of human brain and tumor
The brain is a sort of fortress, equipped with barriers designed to keep out dangerous pathogens. But protection comes at a cost: These barriers interfere with the immune system when faced with dire threats such as glioblastoma, a deadly brain tumor for which there are few effective treatments.
Yale researchers have found a novel way to circumvent the brain's natural defenses when they're counterproductive by slipping immune system rescuers through the fortresses' drainage system, they report Jan. 15 in the journal Nature.
"People had thought there was very little the immune system could do to combat brain tumors," said senior corresponding author Akiko Iwasaki. "There has been no way for glioblastoma patients to benefit from immunotherapy."
Iwasaki is the Waldemar Von Zedtwitz Professor of Immunobiology and professor of molecular, cellular, and developmental biology and an investigator for the Howard Hughes Medical Institute.
While the brain itself has no direct way of disposing of cellular waste, tiny vessels lining the interior of the skull collect tissue waste and dispose of it through the body's lymphatic system, which filters toxins and waste from the body. It is this disposal system that researchers exploited in the new study.
These vessels form shortly after birth, spurred in part by the gene known as vascular endothelial growth factor C, or VEGF-C.
Yale's Jean-Leon Thomas, associate professor of neurology at Yale and senior co-corresponding author of the paper, wondered whether VEGF-C might increase immune response if lymphatic drainage was increased. And lead author Eric Song, a student working in Iwasaki's lab, wanted to see if VEGF-C could specifically be used to increase the immune system's surveillance of glioblastoma tumors. Together, the team investigated whether introducing VEGF-C through this drainage system would specifically target brain tumors.
The team introduced VEGF-C into the cerebrospinal fluid of mice with glioblastoma and observed an increased level of T cell response to tumors in the brain. When combined with immune system checkpoint inhibitors commonly used in immunotherapy, the VEGF-C treatment significantly extended survival of the mice. In other words, the introduction of VEGF-C, in conjunction with cancer immunotherapy drugs, was apparently sufficient to target brain tumors.
"These results are remarkable," Iwasaki said. "We would like to bring this treatment to glioblastoma patients. The prognosis with current therapies of surgery and chemotherapy is still so bleak."

The mysterious, legendary giant squid's genome is revealed


How did the monstrous giant squid -- reaching school-bus size, with eyes as big as dinner plates and tentacles that can snatch prey 10 yards away -- get so scarily big?


Today, important clues about the anatomy and evolution of the mysterious giant squid (Architeuthis dux) are revealed through publication of its full genome sequence by a University of Copenhagen-led team that includes scientist Caroline Albertin of the Marine Biological Laboratory (MBL), Woods Hole.
Giant squid are rarely sighted and have never been caught and kept alive, meaning their biology (even how they reproduce) is still largely a mystery. The genome sequence can provide important insight.
"In terms of their genes, we found the giant squid look a lot like other animals. This means we can study these truly bizarre animals to learn more about ourselves," says Albertin, who in 2015 led the team that sequenced the first genome of a cephalopod (the group that includes squid, octopus, cuttlefish, and nautilus).
Led by Rute da Fonseca at University of Copenhagen, the team discovered that the giant squid genome is big: with an estimated 2.7 billion DNA base pairs, it's about 90 percent the size of the human genome.
Albertin analyzed several ancient, well-known gene families in the giant squid, drawing comparisons with the four other cephalopod species that have been sequenced and with the human genome.
She found that important developmental genes in almost all animals (Hox and Wnt) were present in single copies only in the giant squid genome. That means this gigantic, invertebrate creature -- long a source of sea-monster lore -- did NOT get so big through whole-genome duplication, a strategy that evolution took long ago to increase the size of vertebrates.
So, knowing how this squid species got so giant awaits further probing of its genome.
"A genome is a first step for answering a lot of questions about the biology of these very weird animals," Albertin said, such as how they acquired the largest brain among the invertebrates, their sophisticated behaviors and agility, and their incredible skill at instantaneous camouflage.
"While cephalopods have many complex and elaborate features, they are thought to have evolved independently of the vertebrates. By comparing their genomes we can ask, 'Are cephalopods and vertebrates built the same way or are they built differently?'" Albertin says.
Albertin also identified more than 100 genes in the protocadherin family -- typically not found in abundance in invertebrates -- in the giant squid genome.
"Protocadherins are thought to be important in wiring up a complicated brain correctly," she says. "They were thought they were a vertebrate innovation, so we were really surprised when we found more than 100 of them in the octopus genome (in 2015). That seemed like a smoking gun to how you make a complicated brain. And we have found a similar expansion of protocadherins in the giant squid, as well."
Lastly, she analyzed a gene family that (so far) is unique to cephalopods, called reflectins. "Reflectins encode a protein that is involved in making iridescence. Color is an important part of camouflage, so we are trying to understand what this gene family is doing and how it works," Albertin says.
"Having this giant squid genome is an important node in helping us understand what makes a cephalopod a cephalopod. And it also can help us understand how new and novel genes arise in evolution and development."

Mosquitoes engineered to repel dengue virus

Aedes aegypti mosquito
An international team of scientists has synthetically engineered mosquitoes that halt the transmission of the dengue virus.


Led by biologists at the University of California San Diego, the research team describes details of the achievement in Aedes aegypti mosquitoes, the insects that spread dengue in humans, on January 16 in the journal PLOS Pathogens.
Researchers in UC San Diego Associate Professor Omar Akbari's lab worked with colleagues at Vanderbilt University Medical Center in identifying a broad spectrum human antibody for dengue suppression. The development marks the first engineered approach in mosquitoes that targets the four known types of dengue, improving upon previous designs that addressed single strains.
They then designed the antibody "cargo" to be synthetically expressed in female A. aegypti mosquitoes, which spread the dengue virus.
"Once the female mosquito takes in blood, the antibody is activated and expressed -- that's the trigger," said Akbari, of the Division of Biological Sciences and a member of the Tata Institute for Genetics and Society. "The antibody is able to hinder the replication of the virus and prevent its dissemination throughout the mosquito, which then prevents its transmission to humans. It's a powerful approach."
Akbari said the engineered mosquitoes could easily be paired with a dissemination system, such as a gene drive based on CRISPR/CAS-9 technology, capable of spreading the antibody throughout wild disease-transmitting mosquito populations.
"It is fascinating that we now can transfer genes from the human immune system to confer immunity to mosquitoes. This work opens up a whole new field of biotechnology possibilities to interrupt mosquito-borne diseases of man," said coauthor James Crowe, Jr., M.D., director of the Vanderbilt Vaccine Center at Vanderbilt University Medical Center in Nashville, Tenn.
According to the World Health Organization, dengue virus threatens millions of people in tropical and sub-tropical climates. Severe dengue is a leading cause of serious illness and death among children in many Asian and Latin American countries. The Pan American Health Organization recently reported the highest number of dengue cases ever recorded in the Americas. Dengue victims suffer flu-like symptoms, including severe fevers and rashes, and the disease is especially dangerous for those with compromised immune systems. Serious cases can include life-threatening bleeding. Currently no specific treatment exists, and thus prevention and control depend on measures that stop the spread of the virus.
"This development means that in the foreseeable future there may be viable genetic approaches to controlling dengue virus in the field, which could limit human suffering and mortality," said Akbari, whose lab is now in the early stages of testing methods to simultaneously neutralize mosquitoes against dengue and a suite of other viruses such as Zika, yellow fever and chikungunya.
"Mosquitoes have been given the bad rap of being the deadliest killers on the planet because they are the messengers that transmit diseases like malaria, dengue, chikungunya, Zika and yellow fever that collectively put 6.5 billion people at risk globally," said Suresh Subramani, professor emeritus of molecular biology at UC San Diego and global director of the Tata Institute for Genetics and Society (TIGS). "Until recently, the world has focused on shooting (killing) this messenger. Work from the Akbari lab and at TIGS is aimed at disarming the mosquito instead by preventing it from transmitting diseases, without killing the messenger. This paper shows that it is possible to immunize mosquitoes and prevent their ability to transmit dengue virus, and potentially other mosquito-borne pathogens."

In death of dinosaurs, it was all about the asteroid -- not volcanoes

Illustrated scene of dinosaurs and asteroid 
Volcanic activity did not play a direct role in the mass extinction event that killed the dinosaurs, according to an international, Yale-led team of researchers. It was all about the asteroid.
In a break from a number of other recent studies, Yale assistant professor of geology & geophysics Pincelli Hull and her colleagues argue in a new research paper in Science that environmental impacts from massive volcanic eruptions in India in the region known as the Deccan Traps happened well before the Cretaceous-Paleogene extinction event 66 million years ago and therefore did not contribute to the mass extinction.
Most scientists acknowledge that the mass extinction event, also known as K-Pg, occurred after an asteroid slammed into Earth. Some researchers also have focused on the role of volcanoes in K-Pg due to indications that volcanic activity happened around the same time.
"Volcanoes can drive mass extinctions because they release lots of gases, like SO2 and CO2, that can alter the climate and acidify the world," said Hull, lead author of the new study. "But recent work has focused on the timing of lava eruption rather than gas release."
To pinpoint the timing of volcanic gas emission, Hull and her colleagues compared global temperature change and the carbon isotopes (an isotope is an atom with a higher or lower number of neutrons than normal) from marine fossils with models of the climatic effect of CO2 release. They concluded that most of the gas release happened well before the asteroid impact -- and that the asteroid was the sole driver of extinction.
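A toy version of that comparison, purely for illustration: turn an assumed volcanic CO2-release history into a temperature response using a placeholder climate sensitivity, then check whether the warming peaks before or after the impact. None of the numbers below come from the study.

```python
# Toy sketch of comparing an outgassing history with its climatic effect.
# The release history, CO2 levels and 3 C-per-doubling sensitivity are placeholders.
import numpy as np

time_kyr = np.linspace(-400, 200, 601)            # kyr relative to the impact at t = 0
release = np.where(time_kyr < -150, 1.0, 0.0)     # assumed pulse ending well before impact
co2_ppm = 600 + 300 * np.cumsum(release) / release.sum()

sensitivity = 3.0                                 # degrees C per CO2 doubling (placeholder)
delta_t = sensitivity * np.log2(co2_ppm / co2_ppm[0])

peak_time = time_kyr[np.argmax(delta_t)]
print(f"peak warming of ~{delta_t.max():.1f} C is reached by t = {peak_time:.0f} kyr, "
      "well before the impact at t = 0")
```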
"Volcanic activity in the late Cretaceous caused a gradual global warming event of about two degrees, but not mass extinction," said former Yale researcher Michael Henehan, who compiled the temperature records for the study. "A number of species moved toward the North and South poles but moved back well before the asteroid impact."
Added Hull, "A lot of people have speculated that volcanoes mattered to K-Pg, and we're saying, 'No, they didn't.'"
Recent work on the Deccan Traps, in India, has also pointed to massive eruptions in the immediate aftermath of the K-Pg mass extinction. These results have puzzled scientists because there is no warming event to match. The new study suggests an answer to this puzzle, as well.
"The K-Pg extinction was a mass extinction and this profoundly altered the global carbon cycle," said Yale postdoctoral associate Donald Penman, the study's modeler. "Our results show that these changes would allow the ocean to absorb an enormous amount of CO2 on long time scales -- perhaps hiding the warming effects of volcanism in the aftermath of the event."

Billions of quantum entangled electrons found in 'strange metal'

Partial view of periodic table of the elements 
In a new study, U.S. and Austrian physicists have observed quantum entanglement among "billions of billions" of flowing electrons in a quantum critical material.
The research, which appears this week in Science, examined the electronic and magnetic behavior of a "strange metal" compound of ytterbium, rhodium and silicon as it both neared and passed through a critical transition at the boundary between two well-studied quantum phases.
The study at Rice University and Vienna University of Technology (TU Wien) provides the strongest direct evidence to date of entanglement's role in bringing about quantum criticality, said study co-author Qimiao Si of Rice.
"When we think about quantum entanglement, we think about small things," Si said. "We don't associate it with macroscopic objects. But at a quantum critical point, things are so collective that we have this chance to see the effects of entanglement, even in a metallic film that contains billions of billions of quantum mechanical objects."
Si, a theoretical physicist and director of the Rice Center for Quantum Materials (RCQM), has spent more than two decades studying what happens when materials like strange metals and high-temperature superconductors change quantum phases. Better understanding such materials could open the door to new technologies in computing, communications and more.
The international team overcame several challenges to get the result. TU Wien researchers developed a highly complex materials synthesis technique to produce ultrapure films containing one part ytterbium for every two parts rhodium and silicon (YbRh2Si2). At absolute zero temperature, the material undergoes a transition from one quantum phase that forms a magnetic order to another that does not.
At Rice, study co-lead author Xinwei Li, then a graduate student in the lab of co-author and RCQM member Junichiro Kono, performed terahertz spectroscopy experiments on the films at temperatures as low as 1.4 Kelvin. The terahertz measurements revealed the optical conductivity of the YbRh2Si2 films as they were cooled to a quantum critical point that marked the transition from one quantum phase to another.
"With strange metals, there is an unusual connection between electrical resistance and temperature," said corresponding author Silke Bühler-Paschen of TU Wien's Institute for Solid State Physics. "In contrast to simple metals such as copper or gold, this does not seem to be due to the thermal movement of the atoms, but to quantum fluctuations at the absolute zero temperature."
To measure optical conductivity, Li shined coherent electromagnetic radiation in the terahertz frequency range on top of the films and analyzed the amount of terahertz rays that passed through as a function of frequency and temperature. The experiments revealed "frequency over temperature scaling," a telltale sign of quantum criticality, the authors said.
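A hedged sketch of what such a scaling test looks like in practice: if the conductivity depends on frequency and temperature only through their ratio, curves taken at different temperatures collapse onto a single curve when plotted against omega/T. The "data" below are synthetic and the functional form is a placeholder, not the measured conductivity of YbRh2Si2.

```python
# Frequency-over-temperature (omega/T) scaling check with synthetic data.
import numpy as np

def toy_sigma(omega_thz, temperature_k, alpha=1.0):
    """Synthetic conductivity that depends only on the ratio omega/T."""
    return 1.0 / (1.0 + (omega_thz / temperature_k) ** alpha)

temperatures = [1.4, 3.0, 6.0]        # kelvin, illustrative
x = np.linspace(0.05, 1.5, 50)        # common grid of omega/T values

# Evaluate each temperature's curve at the same omega/T values; if the curves
# coincide, the data "collapse" and omega/T scaling holds.
curves = [toy_sigma(x * T, T) for T in temperatures]
spread = max(np.max(np.abs(c - curves[0])) for c in curves)
print(f"maximum spread between rescaled curves: {spread:.2e}")   # ~0 => collapse
```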
Kono, an engineer and physicist in Rice's Brown School of Engineering, said the measurements were painstaking for Li, who's now a postdoctoral researcher at the California Institute of Technology. For example, only a fraction of the terahertz radiation shined onto the sample passed through to the detector, and the important measurement was how much that fraction rose or fell at different temperatures.
"Less than 0.1% of the total terahertz radiation was transmitted, and the signal, which was the variation of conductivity as a function of frequency, was a further few percent of that," Kono said. "It took many hours to take reliable data at each temperature to average over many, many measurements, and it was necessary to take data at many, many temperatures to prove the existence of scaling.
"Xinwei was very, very patient and persistent," Kono said. "In addition, he carefully processed the huge amounts of data he collected to unfold the scaling law, which was really fascinating to me."
Making the films was even more challenging. To grow them thin enough to pass terahertz rays, the TU Wien team developed a unique molecular beam epitaxy system and an elaborate growth procedure. Ytterbium, rhodium and silicon were simultaneously evaporated from separate sources in the exact 1-2-2 ratio. Because of the high energy needed to evaporate rhodium and silicon, the system required a custom-made ultrahigh vacuum chamber with two electron-beam evaporators.
"Our wild card was finding the perfect substrate: germanium," said TU Wien graduate student Lukas Prochaska, a study co-lead author. The germanium was transparent to terahertz, and had "certain atomic distances (that were) practically identical to those between the ytterbium atoms in YbRh2Si2, which explains the excellent quality of the films," he said.
Si recalled discussing the experiment with Bühler-Paschen more than 15 years ago when they were exploring the means to test a new class of quantum critical point. The hallmark of the quantum critical point that they were advancing with co-workers is that the quantum entanglement between spins and charges is critical.
"At a magnetic quantum critical point, conventional wisdom dictates that only the spin sector will be critical," he said. "But if the charge and spin sectors are quantum-entangled, the charge sector will end up being critical as well."
At the time, the technology was not available to test the hypothesis, but by 2016, the situation had changed. TU Wien could grow the films, Rice had recently installed a powerful microscope that could scan them for defects, and Kono had the terahertz spectrometer to measure optical conductivity. During Bühler-Paschen's sabbatical visit to Rice that year, she, Si, Kono and Rice microscopy expert Emilie Ringe received support to pursue the project via an Interdisciplinary Excellence Award from Rice's newly established Creative Ventures program.
"Conceptually, it was really a dream experiment," Si said. "Probe the charge sector at the magnetic quantum critical point to see whether it's critical, whether it has dynamical scaling. If you don't see anything that's collective, that's scaling, the critical point has to belong to some textbook type of description. But, if you see something singular, which in fact we did, then it is very direct and new evidence for the quantum entanglement nature of quantum criticality."
Si said all the efforts that went into the study were well worth it, because the findings have far-reaching implications.
"Quantum entanglement is the basis for storage and processing of quantum information," Si said. "At the same time, quantum criticality is believed to drive high-temperature superconductivity. So our findings suggest that the same underlying physics -- quantum criticality -- can lead to a platform for both quantum information and high-temperature superconductivity. When one contemplates that possibility, one cannot help but marvel at the wonder of nature."

Sunday, 5 January 2020

'Lost crops' could have fed as many as maize

Make some room in the garden, you storied three sisters: the winter squash, climbing beans and the vegetable we know as corn. Grown together, newly examined "lost crops" could have produced enough seed to feed as many indigenous people as traditionally grown maize, according to new research from Washington University in St. Louis.


But there are no written or oral histories to describe them. The domesticated forms of the lost crops are thought to be extinct.
Writing in the Journal of Ethnobiology, Natalie Mueller, assistant professor of archaeology in Arts & Sciences, describes how she painstakingly grew and calculated yield estimates for two annual plants that were cultivated in eastern North America for thousands of years -- and then abandoned.
Growing goosefoot (Chenopodium, sp.) and erect knotweed (Polygonum erectum) together is more productive than growing either one alone, Mueller discovered. Planted in tandem, along with the other known lost crops, they could have fed thousands.
Archaeologists found the first evidence of the lost crops in rock shelters in Kentucky and Arkansas in the 1930s. Seed caches and dried leaves were their only clues. Over the past 25 years, pioneering research by Gayle Fritz, professor emerita of archaeology at Washington University, helped to establish the fact that a previously unknown crop complex had supported local societies for millennia before maize -- a.k.a. corn -- was adopted as a staple crop.
But how, exactly, to grow them?
The lost crops include a small but diverse group of native grasses, seed plants, squashes and sunflowers -- of which only the squashes and sunflowers are still cultivated. For the rest, there is plenty of evidence that the lost crops were purposefully tended -- not just harvested from free-living stands in the wild -- but there are no instructions left.
"There are many Native American practitioners of ethnobotanical knowledge: farmers and people who know about medicinal plants, and people who know about wild foods. Their knowledge is really important," Mueller said. "But as far as we know, there aren't any people who hold knowledge about the lost crops and how they were grown.
"It's possible that there are communities or individuals who have knowledge about these plants, and it just isn't published or known by the academic community," she said. "But the way that I look at it, we can't talk to the people who grew these crops.
"So our group of people who are working with the living plants is trying to participate in the same kind of ecosystem that they participated in -- and trying to reconstruct their experience that way."
That means no greenhouse, no pesticides and no special fertilizers.
"You have not just the plants but also everything else that comes along with them, like the bugs that are pollinating them and the pests that are eating them. The diseases that affect them. The animals that they attract, and the seed dispersers," Mueller said. "There are all of these different kinds of ecological elements to the system, and we can interact with all of them."
Her new paper reported on two experiments designed to investigate germination requirements and yields for the lost crops.
Mueller discovered that a polyculture of goosefoot and erect knotweed is more productive than either grown separately as a monoculture. Grown together, the two plants have higher yields than global averages for closely related domesticated crops (think: quinoa and buckwheat), and they are within the range of those for traditionally grown maize.
"The main reason that I'm really interested in yield is because there's a debate within archeology about why these plants were abandoned," Mueller said. "We haven't had a lot of evidence about it one way or the other. But a lot of people have just kind of assumed that maize would be a lot more productive because we grow maize now, and it's known to be one of the most productive crops in the world per unit area."
Mueller wanted to quantify yield in this experiment so that she could directly compare yield for these plants to maize for the first time.
But it didn't work out perfectly. She was only able to obtain yield estimates for two of the five lost crops that she tried to grow -- but not for the plants known as maygrass, little barley and sumpweed.
Reporting on the partial batch was still important to her.
"My colleagues and I, we're motivated from the standpoint of wanting to see more diverse agricultural systems, wanting to see the knowledge and management of indigenous people recognized and curiosity about what the ecosystems of North America were like before we had this industrial agricultural system," Mueller said.

Evolution: Revelatory relationship

A new study of the ecology of an enigmatic group of novel unicellular organisms by scientists from Ludwig-Maximilians-Universitaet (LMU) in Munich supports the idea that hydrogen played an important role in the evolution of the Eukaryota, the first nucleated cells.
One of the most consequential developments in the history of biological evolution occurred approximately 2 billion years ago with the appearance of the first eukaryotes -- unicellular organisms that contain a distinct nucleus. This first eukaryotic lineage would subsequently give rise to all higher organisms including plants and animals, but its origins remain obscure. Some years ago, microbiologists analyzed DNA sequences from marine sediments, which shed new light on the problem. These sediments were recovered from a hydrothermal vent at a site known as Loki's Castle (named for the Norse god of fire) on the Mid-Atlantic Ridge in the Arctic Ocean. Sequencing of the DNA molecules they contained revealed that they were derived from a previously unknown group of microorganisms.
Although the cells from which the DNA originated could not be isolated and characterized directly, the sequence data showed them to be closely related to the Archaea. The researchers therefore named the new group Lokiarchaeota.
Archaea, together with Bacteria, are the oldest known lineages of single-celled organisms. Strikingly, the genomes of the Lokiarchaeota indicated that they might exhibit structural and biochemical features that are otherwise specific to eukaryotes. This suggests that the Lokiarchaeota might be related to the last common ancestor of eukaryotes. Indeed, phylogenomic analysis of the Lokiarchaeota DNA from Loki's Castle strongly suggested that they were derived from descendants of one of the last common ancestors of Eukaryota and Archaea. Professor William Orsi of the Department of Earth and Environmental Sciences at LMU, in cooperation with scientists at Oldenburg University and the Max Planck Institute for Marine Microbiology, has now been able to examine the activity and metabolism of the Lokiarchaeota directly. The results support the suggested relationship between Lokiarchaeota and eukaryotes, and provide hints as to the nature of the environment in which the first eukaryotes evolved. The new findings appear in the journal Nature Microbiology.
The most likely scenario for the emergence of eukaryotes is that they arose from a symbiosis in which the host was an archaeal cell and the symbiont was a bacterium. According to this theory, the bacterial symbiont subsequently gave rise to the mitochondria -- the intracellular organelles that are responsible for energy production in eukaryotic cells. One hypothesis proposes that the archaeal host was dependent on hydrogen for its metabolism, and that the precursor of the mitochondria produced it. This "hydrogen hypothesis" posits that the two partner cells presumably lived in an anoxic environment that was rich in hydrogen, and that if they were separated from the hydrogen source they would have become more dependent on one another for survival, potentially leading to an endosymbiotic event. "If the Lokiarchaeota, as the descendants of this putative ur-archaeon, are also dependent on hydrogen, this would support the hydrogen hypothesis," says Orsi. "However, up to now, the ecology of these Archaea in their natural habitat was a matter of speculation."
Orsi and his team have now, for the first time, characterized the cellular metabolism of Lokiarchaeota recovered from sediment cores obtained from the seabottom in an extensive oxygen-depleted region off the coast of Namibia. They did so by analyzing the RNA present in these samples. RNA molecules are copied from the genomic DNA, and serve as blueprints for the synthesis of proteins. Their sequences therefore reflect patterns and levels of gene activity. The sequence analyses revealed that Lokiarchaeota in these samples outnumbered bacteria by 100- to 1000-fold. "That strongly indicates that these sediments are a favorable habitat for them, promoting their activity," says Orsi.
He and his colleagues were able to establish enrichment cultures from the Lokiarchaeota in the sediment samples in the laboratory. This enabled them to study the metabolism of these cells using stable carbon isotopes as markers. The results demonstrated that the microorganisms make use of a complex network of metabolic pathways. Moreover, the data confirmed that Lokiarchaeota indeed use hydrogen for the fixation of carbon dioxide. This process enhances the efficiency of metabolism, and allows these species to maintain high levels of biochemical activity in spite of the energy-limited conditions of their anoxic natural habitat. "Our experimental evidence supports the hydrogen hypothesis for the first eukaryotic cell," says Orsi. "Consequently, the earliest eukaryotes could have originated in oxygen-depleted and hydrogen-rich marine sediments, such as those in which modern Lokiarchaeota are particularly active today."

Life could have emerged from lakes with high phosphorus

Life as we know it requires phosphorus. It's one of the six main chemical elements of life: it forms the backbone of DNA and RNA molecules, acts as the main currency for energy in all cells and anchors the lipids that separate cells from their surrounding environment.
But how did a lifeless environment on the early Earth supply this key ingredient?
"For 50 years, what's called 'the phosphate problem,' has plagued studies on the origin of life," said first author Jonathan Toner, a University of Washington research assistant professor of Earth and space sciences.
The problem is that chemical reactions that make the building blocks of living things need a lot of phosphorus, but phosphorus is scarce. A new UW study, published Dec. 30 in the Proceedings of the National Academy of Sciences, finds an answer to this problem in certain types of lakes.
The study focuses on carbonate-rich lakes, which form in dry environments within depressions that funnel water draining from the surrounding landscape. Because of high evaporation rates, the lake waters concentrate into salty and alkaline, or high-pH, solutions. Such lakes, also known as alkaline or soda lakes, are found on all seven continents.
The researchers first looked at phosphorus measurements in existing carbonate-rich lakes, including Mono Lake in California, Lake Magadi in Kenya and Lonar Lake in India.
While the exact concentration depends on where the samples were taken and during what season, the researchers found that carbonate-rich lakes have up to 50,000 times the phosphorus levels found in seawater, rivers and other types of lakes. Such high concentrations point to the existence of some common, natural mechanism that accumulates phosphorus in these lakes.
Today these carbonate-rich lakes are biologically rich and support life ranging from microbes to Lake Magadi's famous flocks of flamingoes. These living things affect the lake chemistry. So researchers did lab experiments with bottles of carbonate-rich water at different chemical compositions to understand how the lakes accumulate phosphorus, and how high phosphorus concentrations could get in a lifeless environment.
The reason these waters have high phosphorus is their carbonate content. In most lakes, calcium, which is much more abundant on Earth, binds to phosphorus to make solid calcium phosphate minerals, which life can't access. But in carbonate-rich waters, the carbonate outcompetes phosphate to bind with calcium, leaving some of the phosphate unattached. Lab tests that combined ingredients at different concentrations show that calcium binds to carbonate and leaves the phosphate freely available in the water.
"It's a straightforward idea, which is its appeal," Toner said. "It solves the phosphate problem in an elegant and plausible way."
Phosphate levels could climb even higher, to a million times the levels in seawater, when lake waters evaporate during dry seasons, along shorelines, or in pools separated from the main body of the lake.
"The extremely high phosphate levels in these lakes and ponds would have driven reactions that put phosphorus into the molecular building blocks of RNA, proteins, and fats, all of which were needed to get life going," said co-author David Catling, a UW professor of Earth & space sciences.
The carbon dioxide-rich air on the early Earth, some four billion years ago, would have been ideal for creating such lakes and allowing them to reach maximum levels of phosphorus. Carbonate-rich lakes tend to form in atmospheres with high carbon dioxide. Plus, carbon dioxide dissolves in water to create acid conditions that efficiently release phosphorus from rocks.
"The early Earth was a volcanically active place, so you would have had lots of fresh volcanic rock reacting with carbon dioxide and supplying carbonate and phosphorus to lakes," Toner said. "The early Earth could have hosted many carbonate-rich lakes, which would have had high enough phosphorus concentrations to get life started."
Another recent study by the two authors showed that these types of lakes can also provide abundant cyanide to support the formation of amino acids and nucleotides, the building blocks of proteins, DNA and RNA. Before then researchers had struggled to find a natural environment with enough cyanide to support an origin of life. Cyanide is poisonous to humans, but not to primitive microbes, and is critical for the kind of chemistry that readily makes the building blocks of life.

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...