Thursday, 26 October 2023

Solar farms in space are possible

 It's viable to produce low-cost, lightweight solar panels that can generate energy in space, according to new research from the Universities of Surrey and Swansea.

The first study of its kind followed a satellite over six years, observing how the panels generated power and weathered solar radiation over 30,000 orbits.
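For context, that orbit count is consistent with a small satellite in low Earth orbit. A rough back-of-the-envelope check, assuming a typical ~95-minute orbital period (the article does not state the satellite's actual period):

```python
# Rough sanity check: orbits completed by a low-Earth-orbit satellite in six years.
# The ~95-minute orbital period is an assumption typical of LEO, not a figure from the article.
minutes_per_orbit = 95
minutes_in_six_years = 6 * 365.25 * 24 * 60
orbits = minutes_in_six_years / minutes_per_orbit
print(f"Approximate orbits in six years: {orbits:,.0f}")  # roughly 33,000
```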

The findings could pave the way for commercially viable solar farms in space.

Professor Craig Underwood, Emeritus Professor of Spacecraft Engineering at the Surrey Space Centre at the University of Surrey, said:

"We are very pleased that a mission designed to last one year is still working after six. These detailed data show the panels have resisted radiation and their thin-film structure has not deteriorated in the harsh thermal and vacuum conditions of space.

"This ultra-low mass solar cell technology could lead to large, low-cost solar power stations deployed in space, bringing clean energy back to Earth -- and now we have the first evidence that the technology works reliably in orbit."

Researchers from the University of Swansea's Centre for Solar Energy Research developed new solar cells from cadmium telluride. The panels cover a larger area, are more lightweight, and provide far greater power than current technology -- as well as being relatively cheap to manufacture.

Scientists from the University of Surrey designed instruments that measured their performance in orbit. The satellite itself was designed and built at the Surrey Space Centre in partnership with a team of trainee engineers from the Algerian Space Agency (ASAL).

Although the cells' power output became less efficient over time, researchers believe their findings prove that solar power satellites work and could be commercially viable.

Dr Dan Lamb from the University of Swansea said:

"The successful flight test of this novel thin film solar cell payload has leveraged funding opportunities to further develop this technology."

"Large area solar arrays for space applications are a rapidly expanding market and demonstrations such as this help to build on the UK's world class reputations for space technology."

Climate report: 'Uncharted territory' imperils life on Earth

 An international coalition of climate scientists says in a paper published today that the Earth's vital signs have worsened beyond anything humans have yet seen, to the point that life on the planet is imperiled.

William Ripple, a distinguished professor in the Oregon State University College of Forestry, and former OSU postdoctoral researcher Christopher Wolf are the lead authors of the report, and 10 other U.S. and global scientists are co-authors.

"Without actions that address the root problem of humanity taking more from the Earth than it can safely give, we're on our way to the potential collapse of natural and socioeconomic systems and a world with unbearable heat and shortages of food and freshwater," Wolf said.

Published in BioScience, "The 2023 State of the climate report: Entering uncharted territory" notes that 20 of 35 planetary vital signs the authors use to track climate change are at record extremes.

The authors share new data illustrating that many climate-related records were broken by "enormous margins" in 2023, particularly those relating to ocean temperatures and sea ice. They also note an extraordinary Canadian wildfire season that produced unprecedented carbon dioxide emissions.

The report follows by four years the "World Scientists' Warning of a Climate Emergency" published by Ripple and collaborators in BioScience and co-signed by more than 15,000 scientists in 161 countries.

"Life on our planet is clearly under siege," Ripple said. "The statistical trends show deeply alarming patterns of climate-related variables and disasters. We also found little progress to report as far as humanity combating climate change."

Among the key numbers in the report:

  • Fossil fuel subsidies -- actions by governments that artificially lower the cost of energy production, raise the price received by producers or lower the price paid by consumers -- roughly doubled between 2021 and 2022, from $531 billion to just over $1 trillion.
  • Already this year wildfires in Canada have pumped more than 1 gigaton of carbon dioxide into the atmosphere, greater than Canada's total 2021 greenhouse gas emissions of 0.67 gigatons.
  • In 2023, there have already been 38 days with global average temperatures more than 1.5 degrees Celsius above pre-industrial levels. Until this year, such days were a rarity, the authors note.
  • The highest average Earth surface temperature ever recorded came this past July, and there's reason to believe it was the highest surface temperature the planet has seen in the last 100,000 years.

"As scientists, we are hugely troubled by the sudden increases in the frequency and severity of climate-related disasters," said Wolf, now a scientist with Corvallis-based Terrestrial Ecosystems Research Associates. "The frequency and severity of those disasters might be outpacing rising temperatures. By the end of the 21st century, as many as 3 to 6 billion people may find themselves outside the Earth's livable regions, meaning they will be encountering severe heat, limited food availability and elevated mortality rates."

The authors say policies are needed that take aim at the underlying issue of "ecological overshoot." When human demand on the Earth's resources is too large, the result is an array of environmental crises, including biodiversity decline. As long as humanity continues to put extreme pressure on the planet, any strategy that focuses only on carbon or climate will simply redistribute the pressure, they note.

"Our goal is to communicate climate facts and make policy recommendations," Ripple said. "It is a moral duty of scientists and our institutions to alert humanity of any potential existential threat and to show leadership in taking action."

The authors urge transitioning to a global economy that prioritizes human well-being and curtails overconsumption and excessive emissions by the rich. Specific recommendations include phasing out fossil fuel subsidies, transitioning toward plant-based diets, scaling up forest protection efforts and adopting international coal elimination and fossil fuel non-proliferation treaties.

They stress that all climate-related actions must be grounded in equity and social justice, noting that extreme weather and other climate impacts are being disproportionately felt by the poorest people, who have contributed the least to climate change.

Raining cats and dogs: Global precipitation patterns a driver for animal diversity

 Since the HMS Beagle arrived in the Galapagos with Charles Darwin to meet a fateful family of finches, ecologists have struggled to understand a particularly perplexing question: Why is there a ridiculous abundance of species some places on earth and a scarcity in others? What factors, exactly, drive animal diversity?

With access to a mammoth set of global-scale climate data and a novel strategy, a team from the Department of Watershed Sciences in Quinney College of Natural Resources and the Ecology Center identified several factors to help answer this fundamental ecological question. They discovered that what an animal eats (and how that interacts with climate) shapes Earth's diversity.

The work was recently published in the high-impact journal Ecology Letters.

"Historically studies looking at the distribution of species across Earth's latitudinal gradient have overlooked the role of trophic ecology -- how what animals eat impacts where they are found," said Trisha Atwood, author on the study from the Department of Watershed Sciences and the Ecology Center. "This new work shows that predators, omnivores and herbivores are not randomly scattered across the globe. There are patterns to where we find these groups of animals."

Certain locations have an unexpected abundance of meat-eating predators -- parts of Africa, Europe and Greenland. Herbivores are common in cooler areas, and omnivores tend to be more dominant in warm places. Two key factors emerged as crucial in shaping these patterns: precipitation and plant growth.

Precipitation patterns across time play a big role in determining where different groups of mammals thrive, Atwood said. Geographical areas where precipitation varies by season, without being too extreme, had the highest levels of mammal diversity.

"Keep in mind that we aren't talking about the total amount of rain," said Jaron Adkins, lead author on the research. "If you imagine ecosystems around the world on a scale of precipitation and season, certain places in Utah and the Amazon rainforest fall on one end with low variability -- they have steady levels of precipitation throughout the year. Other regions, like southern California, have really high variability, getting about 75 percent of the annual precipitation between December and March."

But the sweet spot for predators and herbivores fell in a middle zone between the two extremes, he said. Places like Madagascar, where precipitation patterns had an equal split between a wet season and a dry one (six months each), had the ideal ecological cocktail for promoting conditions for these two groups. Omnivore diversity tends to thrive in places with very stable climates.
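One simple way to picture the "variability" being described is a seasonality index computed from monthly rainfall. The sketch below uses the coefficient of variation of monthly totals for three made-up sites; this is only an illustrative stand-in, and the precipitation metrics used in the study may differ.

```python
import statistics

# Illustrative monthly precipitation totals (mm) for three hypothetical sites,
# made up to mirror the patterns described in the article.
sites = {
    "steady year-round (Amazon-like)": [200] * 12,
    "even wet/dry split (Madagascar-like)": [250] * 6 + [30] * 6,
    "winter-dominated (southern California-like)": [90, 80, 60, 15, 5, 1, 1, 2, 5, 15, 40, 85],
}

for name, monthly in sites.items():
    # Coefficient of variation: higher values mean more seasonal variability.
    cv = statistics.pstdev(monthly) / statistics.mean(monthly)
    print(f"{name}: seasonality (CV) = {cv:.2f}")
```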

The second important factor the work connected with mammal diversity was the amount of plant growth in an area, measured as "gross primary productivity."

"It makes intuitive sense for plant-eating animals to benefit from plant growth," Adkins said.

But this measure actually impacted carnivores most, according to the research. The strong relationship between predators and plant growth highlights the importance of an abundance of plants on an entire food chain's structural integrity.

"It was surprising that this factor was more important for predators than omnivores and herbivores," Atwood said. "Why this is remains a mystery."

Although evolutionary processes are ultimately responsible for spurring differences in species, climate conditions can impact related factors -- rates of evolutionary change, extinction and animal dispersal -- influencing species and trait-based richness, according to the research.

Animal diversity is rapidly declining in many ecosystems around the world through habitat loss and climate change. This has negative consequences for ecosystems. Forecasting how climate change will disrupt animal systems going forward is extremely important, Atwood said, and this research is a first step in better managing future conditions for animals around the world.

"Animal diversity can act as an alarm system for the stability of ecosystems," Atwood said. "Identifying the ecological mechanisms that help drive richness patterns provides insight for better managing and predicting how diversity could change under future climates."

In addition to Adkins and Atwood, the research included seven authors currently or previously associated with the Department of Watershed Sciences and the Ecology Center: Edd Hammill, Umarfarooq Abdulwahab, John Draper, Marshall Wolf, Catherine McClure, Adrián González Ortiz and Emily Chavez.

Bizarre new fossils shed light on ancient plankton

 A scientist from the University of Leicester has discovered a new type of fossil that reveals life in the oceans half a billion years ago.

The tiny organisms, detailed in a new study in the journal Proceedings of the Royal Society B, resemble modern-day algae and might also give scientists an insight into the climate changes that affected our oceans.

The fossils are microscopic and look like spiny balls connected together. The study's author Dr Tom Harvey, from the University of Leicester School of Geography, Geology and the Environment, said: "When I first saw them, I had no idea what they were. I wondered if they could be animal eggs, or some new type of organism. There's nothing quite like them, living or extinct."

But as further specimens came to light, Dr Harvey identified similarities with modern green algae that live floating in the plankton of ponds and lakes. He explains: "The fossils have the same sort of colonial structure as the modern algae, with cells linking together, explaining their neat, geometric arrangements. Surprisingly, though, the fossil examples lived in the sea, giving a rare glimpse of the early marine plankton."

The importance of the fossils lies in their immense age. They lived around the time when animals were first evolving, during the Cambrian 'explosion' of life -- and this is probably no coincidence. In today's world, phytoplankton provides the fundamental food source for almost all life in the oceans. However, the modern groups of phytoplankton evolved relatively recently, and we do not know which groups inhabited the Cambrian oceans.

Dr Harvey explains: "When we look at modern plankton, we see that algae develop colonies when animals are trying to eat them. It's a defence mechanism. So, the existence of colonial algae in the Cambrian Period suggests that early animals were evolving to feed in the plankton, starting a predator-prey relationship that has continued ever since.

"Considering that the plankton underpins life in the oceans, and fossil plankton helps us build ancient climate models, these small fossils have a big role in telling the history of life on Earth."

The new discovery will prompt a re-think on other early microfossils. For years, scientists have thought that the spiny balls found individually were the dormant cysts of single-celled life.

For Dr Harvey, the new fossils seriously challenge this view: "I wonder if we've been getting it all wrong, and in fact lots of these microfossils were living as colonies in the plankton. It's easy to accidentally break up the fossils as we extract them from the rocks, so we all need to get back to the collections, back to our labs, and find out how common they really were."

Scientists develop new method to create stable, efficient next-gen solar cells

 Next-generation solar materials are cheaper and more sustainable to produce than traditional silicon solar cells, but hurdles remain in making the devices durable enough to withstand real-world conditions. A new technique developed by a team of international scientists could simplify the development of efficient and stable perovskite solar cells, named for their unique crystalline structure that excels at absorbing visible light.

The scientists, including Penn State faculty Nelson Dzade, reported in the journal Nature Energy their new method for creating more durable perovskite solar cells that still achieve a high efficiency of 21.59% conversion of sunlight to electricity.

Perovskites are a promising solar technology because the cells can be manufactured at room temperature using less energy than traditional silicon materials, making them more affordable and more sustainable to produce, according to Dzade, assistant professor of energy and mineral engineering in the John and Willie Leone Family Department of Energy and Mineral Engineering and co-author of the study. But the leading candidates used to make these devices, hybrid organic-inorganic metal halides, contain organic components that are susceptible to moisture, oxygen and heat, and exposure to real-world conditions can lead to rapid performance degradation, the scientists said.

One solution involves turning instead to all-inorganic perovskite materials like cesium lead iodide, which has good electrical properties and a superior tolerance to environmental factors. However, this material is polymorphic, meaning it has multiple phases with different crystalline structures. Two of the photoactive phases are good for solar cells, but they can easily convert to an undesirable non-photoactive phase at room temperature, which introduces defects and degrades the efficiency of the solar cell, the scientists said.

The scientists combined the two photoactive polymorphs of cesium lead iodide to form a phase-heterojunction -- which can suppress the transformation to the undesirable phase, the scientists said. Heterojunctions are formed by stacking different semiconductor materials, like layers in a solar cell, with dissimilar optoelectronic properties. These junctions in solar devices can be tailored to help absorb more energy from the sun and convert it into electricity more efficiently.

"The beautiful thing about this work is that it shows the fabrication of phase heterojunction solar cells by utilizing two polymorphs of the same material is the way to go," Dzade said. "It improves material stability and prevents interconversion between the two phases. The formation of a coherent interface between the two phases allows electrons to flow easily across the device, leading to enhanced power conversion efficiency. That is what we demonstrated in this piece of work."

The researchers fabricated a device that achieved a 21.59% power conversion efficiency, among the highest reported for this type of approach, and excellent stability. The devices maintained more than 90% of the initial efficiency after 200 hours of storage under ambient conditions, Dzade said.

"When scaled from a laboratory to a real-world solar module, our design exhibited a power conversion efficiency of 18.43% for a solar cell area of more than 7 square inches (18.08 centimeters squared)," Dzade said. "These initial results highlight the potential of our approach for developing ultra-large perovskite solar cell modules and reliably assessing their stability."

Dzade modeled the structure and electronic properties of the heterojunction at the atomic scale and found that bringing the two photoactive phases together created a stable and coherent interface structure, which promotes efficient charge separation and transfer -- desirable properties for achieving high efficiency solar devices.

Dzade's colleagues at Chonnam University in South Korea developed the unique dual deposition method for fabricating the device -- depositing one phase with a hot-air technique and the other with triple-source thermal evaporation. Adding small amounts of molecular and organic additives during the deposition process further improved the electrical properties, efficiency and stability of the device, said Sawanta S. Mali, a research professor at Chonnam University in South Korea and lead author on the paper.

"We believe the dual deposition technique we developed in this work will have important implications for fabricating highly efficient and stable perovskite solar cells moving forward," said Nelson Dzade, assistant professor of energy and mineral engineering in the John and Willie Leone Family Department of Energy and Mineral Engineering and co-author of the study.

The researchers said the dual deposition technique could pave the way for the development of additional solar cells based on all inorganic perovskites or other halide perovskite compositions. In addition to extending the technique to different compositions, future work will involve making the current phase-heterojunction cells more durable in real-world conditions and scaling them to the size of traditional solar panels, the researchers said.

"With this approach, we believe it should be possible in the near future to shoot the efficiency of this material past 25%," Dzade said. "And once we do that, commercialization becomes very close."

Also contributing were Chang Kook Hong, professor, and Jyoti Patil, research professor, at Chonnam National University, South Korea; Yu-Wu Zhong, professor, and Jiang-Yang Shao, researcher, at the Institute of Chemistry, Chinese Academy of Sciences; and Sachin Rondiya, assistant professor, Indian Institute of Science.

The National Research Foundation of Korea supported this work. Computer simulations were performed on the Roar Supercomputer in the Institute for Computational and Data Sciences at Penn State.

How quantum light 'sees' quantum sound

Researchers at the University of East Anglia have proposed a new way of using quantum light to 'see' quantum sound.

A new paper published today reveals the quantum-mechanical interplay between vibrations and particles of light, known as photons, in molecules.

It is hoped that the discovery may help scientists better understand the interactions between light and matter on molecular scales.

And it potentially paves the way for addressing fundamental questions about the importance of quantum effects in applications ranging from new quantum technologies to biological systems.

Dr Magnus Borgh from UEA's School of Physics said: "There is a long-standing controversy in chemical physics about the nature of processes where energy from particles of light is transferred within molecules.

"Are they fundamentally quantum-mechanical or classical? Molecules are complex and messy systems, constantly vibrating. How do these vibrations affect any quantum-mechanical processes in the molecule?

"These processes are typically investigated using techniques that rely on polarisation -- the same property of light used in sunglasses to reduce reflections. But this is a classical phenomenon.

"Techniques from quantum optics, the field of physics that studies the quantum nature of light and its interactions with matter on the atomic scale, can offer a way to investigate genuine quantum effects directly in molecular systems."

Quantum behaviour can be revealed by studying correlations in the emitted light from a molecule placed in a laser field. Correlations answer the question of how likely it is that two photons are emitted very close together, and they can be measured using standard techniques.
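The "standard techniques" referred to here are photon-correlation measurements in the style of a Hanbury Brown-Twiss experiment, which count how often two photons arrive within a short window of each other. The toy sketch below estimates such a coincidence rate from simulated arrival times; it is an illustration of the idea, not the analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy photon arrival times (seconds, over a 1-second run) on two detectors.
# In a real Hanbury Brown-Twiss setup these come from a beam splitter and two photon counters.
t1 = np.sort(rng.uniform(0.0, 1.0, 5000))
t2 = np.sort(rng.uniform(0.0, 1.0, 5000))

def coincidences(a, b, window):
    """Count photon pairs arriving within `window` seconds of each other."""
    count, j = 0, 0
    for t in a:
        while j < len(b) and b[j] < t - window:
            j += 1
        k = j
        while k < len(b) and b[k] <= t + window:
            count += 1
            k += 1
    return count

window = 1e-4
measured = coincidences(t1, t2, window)
expected_if_random = len(t1) * len(t2) * 2 * window / 1.0  # expectation for uncorrelated light
print(f"g2(0) estimate: {measured / expected_if_random:.2f}")  # ~1 here; bunching or antibunching shifts it
```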

Ben Humphries, PhD student in theoretical chemistry, at UEA said: "Our research shows that when a molecule exchanges phonons -- quantum-mechanical particles of sound -- with its environment, this produces a recognisable signal in the photon correlations."

While photons are routinely created and measured in laboratories all over the world, phonons -- the corresponding individual quanta of vibration -- cannot in general be measured in the same way.

The new findings provide a toolbox for investigating the world of quantum sound in molecules.

Lead researcher Dr Garth Jones, from UEA's School of Chemistry, said: "We have also computed correlations between photons and phonons.

"It would be very exciting if our paper could inspire the development of new experimental techniques to detect individual phonons directly," he added. 

LIGO surpasses the quantum limit

 In 2015, the Laser Interferometer Gravitational-Wave Observatory, or LIGO, made history when it made the first direct detection of gravitational waves, or ripples in space and time, produced by a pair of colliding black holes. Since then, the U.S. National Science Foundation (NSF)-funded LIGO and its sister detector in Europe, Virgo, have detected gravitational waves from dozens of mergers between black holes as well as from collisions between a related class of stellar remnants called neutron stars. At the heart of LIGO's success is its ability to measure the stretching and squeezing of the fabric of space-time on scales 10 thousand trillion times smaller than a human hair.
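As a quick sense of scale, assuming a human hair is roughly 100 micrometres across (a typical figure, not one given in the article):

```python
# "10 thousand trillion times smaller than a human hair" -- rough scale check.
hair_diameter_m = 100e-6   # assumed typical hair diameter
factor = 1e16              # ten thousand trillion
print(f"Implied displacement scale: {hair_diameter_m / factor:.0e} m")  # ~1e-20 m
```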

As incomprehensibly small as these measurements are, LIGO's precision has continued to be limited by the laws of quantum physics. At very tiny, subatomic scales, empty space is filled with a faint crackling of quantum noise, which interferes with LIGO's measurements and restricts how sensitive the observatory can be. Now, writing in the journal Physical Review X, LIGO researchers report a significant advance in a quantum technology called "squeezing" that allows them to skirt around this limit and measure undulations in space-time across the entire range of gravitational frequencies detected by LIGO.

This new "frequency-dependent squeezing" technology, in operation at LIGO since it turned back on in May of this year, means that the detectors can now probe a larger volume of the universe and are expected to detect about 60 percent more mergers than before. This greatly boosts LIGO's ability to study the exotic events that shake space and time.

"We can't control nature, but we can control our detectors," says Lisa Barsotti, a senior research scientist at MIT who oversaw the development of the new LIGO technology, a project that originally involved research experiments at MIT led by Matt Evans, professor of physics, and Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics and the dean of the School of Science. The effort now includes dozens of scientists and engineers based at MIT, Caltech, and the twin LIGO observatories in Hanford, Washington, and Livingston, Louisiana.

"A project of this scale requires multiple people, from facilities to engineering and optics -- basically the full extent of the LIGO Lab with important contributions from the LIGO Scientific Collaboration. It was a grand effort made even more challenging by the pandemic," Barsotti says.

"Now that we have surpassed this quantum limit, we can do a lot more astronomy," explains Lee McCuller, assistant professor of physics at Caltech and one of the leaders of the new study. "LIGO uses lasers and large mirrors to make its observations, but we are working at a level of sensitivity that means the device is affected by the quantum realm."

The results also have ramifications for future quantum technologies such as quantum computers and other microelectronics as well as for fundamental physics experiments. "We can take what we have learned from LIGO and apply it to problems that require measuring subatomic-scale distances with incredible accuracy," McCuller says.


"When NSF first invested in building the twin LIGO detectors in the late 1990s, we were enthusiastic about the potential to observe gravitational waves," says NSF Director Sethuraman Panchanathan. "Not only did these detectors make possible groundbreaking discoveries, they also unleashed the design and development of novel technologies. This is truly exemplar of the DNA of NSF -- curiosity-driven explorations coupled with use-inspired innovations. Through decades of continuing investments and expansion of international partnerships, LIGO is further poised to advance rich discoveries and technological progress."

The laws of quantum physics dictate that particles, including photons, will randomly pop in and out of empty space, creating a background hiss of quantum noise that brings a level of uncertainty to LIGO's laser-based measurements. Quantum squeezing, which has roots in the late 1970s, is a method for hushing quantum noise or, more specifically, for pushing the noise from one place to another with the goal of making more precise measurements.

The term squeezing refers to the fact that light can be manipulated like a balloon animal. To make a dog or giraffe, one might pinch one section of a long balloon into a small precisely located joint. But then the other side of the balloon will swell out to a larger, less precise size. Light can similarly be squeezed to be more precise in one trait, such as its frequency, but the result is that it becomes more uncertain in another trait, such as its power. This limitation is based on a fundamental law of quantum mechanics called the uncertainty principle, which states that you cannot know both the position and momentum of objects (or the frequency and power of light) at the same time.
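In quantum-optics notation, the balloon picture corresponds to an uncertainty relation between the two "quadratures" of the light field, roughly its phase and its amplitude. A schematic version, in one common convention where the unsqueezed vacuum has equal noise of 1/2 in each quadrature (a textbook-style sketch, not notation taken from the LIGO paper):

```latex
\[
\Delta X_{1}\,\Delta X_{2} \;\ge\; \tfrac{1}{4},
\qquad
\text{squeezed vacuum: }\;
\Delta X_{1} = \tfrac{1}{2}e^{-r},\quad
\Delta X_{2} = \tfrac{1}{2}e^{+r}
\]
```

Here $r$ is the squeezing parameter: the squeezed quadrature becomes quieter by a factor $e^{-r}$ while its conjugate becomes noisier by the same factor, so the product still sits at the quantum bound.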

Since 2019, LIGO's twin detectors have been squeezing light in such a way as to improve their sensitivity to the upper frequency range of gravitational waves they detect. But, in the same way that squeezing one side of a balloon results in the expansion of the other side, squeezing light has a price. By making LIGO's measurements more precise at the high frequencies, the measurements became less precise at the lower frequencies.

"At some point, if you do more squeezing, you aren't going to gain much. We needed to prepare for what was to come next in our ability to detect gravitational waves," Barsotti explains.

Now, LIGO's new frequency-dependent optical cavities -- long tubes about the length of three football fields -- allow the team to squeeze light in different ways depending on the frequency of gravitational waves of interest, thereby reducing noise across the whole LIGO frequency range.

"Before, we had to choose where we wanted LIGO to be more precise," says LIGO team member Rana Adhikari, a professor of physics at Caltech. "Now we can eat our cake and have it too. We've known for a while how to write down the equations to make this work, but it was not clear that we could actually make it work until now. It's like science fiction."

Uncertainty in the quantum realm

Each LIGO facility is made up of two 4-kilometer-long arms connected to form an "L" shape. Laser beams travel down each arm, hit giant suspended mirrors, and then travel back to where they started. As gravitational waves sweep by Earth, they cause LIGO's arms to stretch and squeeze, pushing the laser beams out of sync. This causes the light in the two beams to interfere with each other in a specific way, revealing the presence of gravitational waves.

However, the quantum noise that lurks inside the vacuum tubes that encase LIGO's laser beams can alter the timing of the photons in the beams by minutely small amounts. McCuller likens this uncertainty in the laser light to a can of BBs. "Imagine dumping out a can full of BBs. They all hit the ground and click and clack independently. The BBs are randomly hitting the ground, and that creates a noise. The light photons are like the BBs and hit LIGO's mirrors at irregular times," he said in a Caltech interview.

The squeezing technologies that have been in place since 2019 make "the photons arrive more regularly, as if the photons are holding hands rather than traveling independently," McCuller said. The idea is to make the frequency, or timing, of the light more certain and the amplitude, or power, less certain as a way to tamp down the BB-like effects of the photons. This is accomplished with the help of specialized crystals that essentially turn one photon into a pair of two entangled, or connected, photons with lower energy. The crystals don't directly squeeze light in LIGO's laser beams; rather, they squeeze stray light in the vacuum of the LIGO tubes, and this light interacts with the laser beams to indirectly squeeze the laser light.

"The quantum nature of the light creates the problem, but quantum physics also gives us the solution," Barsotti says.

An idea that began decades ago

The concept for squeezing itself dates back to the late 1970s, beginning with theoretical studies by the late Russian physicist Vladimir Braginsky; Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus at Caltech; and Carlton Caves, professor emeritus at the University of New Mexico. The researchers had been thinking about the limits of quantum-based measurements and communications, and this work inspired one of the first experimental demonstrations of squeezing in 1986 by H. Jeff Kimble, the William L. Valentine Professor of Physics, Emeritus at Caltech. Kimble compared squeezed light to a cucumber; the certainty of the light measurements are pushed into only one direction, or feature, turning "quantum cabbages into quantum cucumbers," he wrote in an article in Caltech's Engineering & Science magazine in 1993.

In 2002, researchers began thinking about how to squeeze light in the LIGO detectors, and, in 2008, the first experimental demonstration of the technique was achieved at the 40-meter test facility at Caltech. In 2010, MIT researchers developed a preliminary design for a LIGO squeezer, which they tested at LIGO's Hanford site. Parallel work done at the GEO600 detector in Germany also convinced researchers that squeezing would work. Nine years later, in 2019, after many trials and careful teamwork, LIGO began squeezing light for the first time.

"We went through a lot of troubleshooting," says Sheila Dwyer, who has been working on the project since 2008, first as a graduate student at MIT and then as a scientist at the LIGO Hanford Observatory beginning in 2013. "Squeezing was first thought of in the late 1970s, but it took decades to get it right."

Too much of a good thing

However, as noted earlier, there is a tradeoff that comes with squeezing. By moving the quantum noise out of the timing, or frequency, of the laser light, the researchers put the noise into the amplitude, or power, of the laser light. The more powerful laser beams then push LIGO's heavy mirrors around causing a rumbling of unwanted noise corresponding to lower frequencies of gravitational waves. These rumbles mask the detectors' ability to sense low-frequency gravitational waves.
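Schematically, the tradeoff can be written as two competing quantum noise terms in the measured strain. The proportionalities below are a textbook-style sketch with constants and LIGO's detailed noise budget omitted; P is the circulating laser power, f the gravitational-wave frequency and e^r the squeeze factor:

```latex
\[
h_{\text{shot}}(f) \;\propto\; \frac{e^{-r}}{\sqrt{P}},
\qquad
h_{\text{rad.\,press.}}(f) \;\propto\; \frac{e^{+r}\,\sqrt{P}}{f^{2}}
\]
```

Frequency-independent squeezing puts the helpful $e^{-r}$ factor on the shot-noise term and the harmful $e^{+r}$ factor on the radiation-pressure term; frequency-dependent squeezing rotates which quadrature is squeezed as a function of frequency, so the benefit lands on whichever term dominates at each frequency.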

"Even though we are using squeezing to put order into our system, reducing the chaos, it doesn't mean we are winning everywhere," says Dhruva Ganapathy, a graduate student at MIT and one of four co-lead authors of the new study. "We are still bound by the laws of physics." The other three lead authors of the study are MIT graduate student Wenxuan Jia, LIGO Livingston postdoc Masayuki Nakano, and MIT postdoc Victoria Xu.

Unfortunately, this troublesome rumbling becomes even more of a problem when the LIGO team turns up the power on its lasers. "Both squeezing and the act of turning up the power improve our quantum-sensing precision to the point where we are impacted by quantum uncertainty," McCuller says. "Both cause more pushing of photons, which leads to the rumbling of the mirrors. Laser power simply adds more photons, while squeezing makes them more clumpy and thus rumbly."

A win-win

The solution is to squeeze light in one way for high frequencies of gravitational waves and another way for low frequencies. It's like going back and forth between squeezing a balloon from the top and bottom and from the sides.

This is accomplished by LIGO's new frequency-dependent squeezing cavity, which controls the relative phases of the light waves in such a way that the researchers can selectively move the quantum noise into different features of light (phase or amplitude) depending on the frequency range of gravitational waves.

"It is true that we are doing this really cool quantum thing, but the real reason for this is that it's the simplest way to improve LIGO's sensitivity," Ganapathy says. "Otherwise, we would have to turn up the laser, which has its own problems, or we would have to greatly increase the sizes of the mirrors, which would be expensive."

LIGO's partner observatory, Virgo, will likely also use frequency-dependent squeezing technology within the current run, which will continue until roughly the end of 2024. Next-generation larger gravitational-wave detectors, such as the planned ground-based Cosmic Explorer, will also reap the benefits of squeezed light.

With its new frequency-dependent squeezing cavity, LIGO can now detect even more black hole and neutron star collisions. Ganapathy says he's most excited about catching more neutron star smashups. "With more detections, we can watch the neutron stars rip each other apart and learn more about what's inside."

"We are finally taking advantage of our gravitational universe," Barsotti says. "In the future, we can improve our sensitivity even more. I would like to see how far we can push it."

The Physical Review X study is titled "Broadband quantum enhancement of the LIGO detectors with frequency-dependent squeezing." Many additional researchers contributed to the development of the squeezing and frequency-dependent squeezing work, including Mike Zucker of MIT and GariLynn Billingsley of Caltech, the leads of the "Advanced LIGO Plus" upgrades that includes the frequency-dependent squeezing cavity; Daniel Sigg of LIGO Hanford Observatory; Adam Mullavey of LIGO Livingston Laboratory; and David McClelland's group from the Australian National University.

The LIGO-Virgo-KAGRA Collaboration operates a network of gravitational-wave detectors in the United States, Italy, and Japan. LIGO Laboratory is operated by Caltech and MIT, and is funded by the NSF with contributions to the Advanced LIGO detectors from Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council), and Australia (Australian Research Council). Virgo is managed by the European Gravitational Observatory (EGO) and is funded by the Centre National de la Recherche Scientifique (CNRS) in France, the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, and the National Institute for Subatomic Physics (Nikhef) in the Netherlands. KAGRA is hosted by the Institute for Cosmic Ray Research (ICRR) at the University of Tokyo and co-hosted by the National Astronomical Observatory of Japan (NAOJ) and the High Energy Accelerator Research Organization (KEK).

Bitcoin mining has 'very worrying' impacts on land and water, not only carbon

 As bitcoin and other cryptocurrencies have grown in market share, they've been criticized for their heavy carbon footprint: Cryptocurrency mining is an energy-intensive endeavor. Mining has massive water and land footprints as well, according to a new study that is the first to detail country-by-country environmental impacts of bitcoin mining. It serves as the foundation for a new United Nations (UN) report on bitcoin mining, also published today.

The study reveals how each country's mix of energy sources defines the environmental footprint of its bitcoin mining and highlights the top 10 countries for energy, carbon, water and land use.* The work was published in Earth's Future, which publishes interdisciplinary research on the past, present and future of our planet and its inhabitants.

"A lot of our exciting new technologies have hidden costs we don't realize at the onset," said Kaveh Madani, a Director at United Nations University who led the new study. "We introduce something, it gets adopted, and only then do we realize that there are consequences."

Madani and his co-authors used energy, carbon, water and land use data from 2020 to 2021 to calculate country-specific environmental impacts for 76 countries known to mine bitcoin. They focused on bitcoin because it is older, more popular and more widely used than other cryptocurrencies.

Madani said the results were "very interesting and very concerning," in part because demand is rising so quickly. But even with more energy-efficient mining approaches, if demand continues to grow, so too will mining's environmental footprints, he said.

Electricity and carbon

If bitcoin mining were a country, it would be ranked 27th in energy use globally. Overall, bitcoin mining consumed about 173 terawatt hours of electricity in the two years from January 2020 to December 2021, about 60% more than the energy used for bitcoin mining in 2018-2019, the study found. Bitcoin mining emitted about 86 megatons of carbon, largely because of the dominance of fossil fuel-based energy in bitcoin-mining countries.

The environmental impact of bitcoin mining fluctuates along with energy supply and demand in a country. When energy is inexpensive, the profitability of mining bitcoin goes up. But when energy is expensive, the value of bitcoin must be high enough to make the cost of mining worth it to the miner, whether it's an individual, a company or a government.

China, the U.S. and Kazakhstan had the largest energy and carbon footprints in 2020-2021.

Water

Globally, bitcoin mining used about 1.65 cubic kilometers of water (roughly 436 billion gallons) in 2020-2021, enough to fill more than 660,000 Olympic-sized swimming pools. China, the U.S. and Canada had the largest water footprints. Kazakhstan and Iran, which along with the U.S. and China have suffered from water shortages, were also in the top-10 list for water footprint.
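The swimming-pool comparison follows directly from the units, assuming the standard 2,500-cubic-metre volume of an Olympic pool:

```python
# Unit check for the 2020-2021 water footprint figure.
water_km3 = 1.65
water_m3 = water_km3 * 1e9        # 1 km^3 = 1e9 m^3
olympic_pool_m3 = 2500            # standard 50 m x 25 m x 2 m pool
gallons = water_m3 * 264.172      # US gallons per cubic metre

print(f"Olympic pools: {water_m3 / olympic_pool_m3:,.0f}")  # ~660,000
print(f"US gallons:    {gallons:.2e}")                      # ~4.4e11
```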

"These are very, very worrying numbers," Madani said. "Even hydropower, which some countries consider a clean source of renewable energy, has a huge footprint."

Land use

The study analyzed land use by considering the area of land affected to produce energy for mining. The land footprint of the server farms themselves is negligible, Madani said. The global land use footprint of bitcoin mining is 1,870 square kilometers (722 square miles), with China's footprint alone taking up 913 square kilometers (353 square miles). The U.S. land footprint is 303 square kilometers (117 square miles) and is likely growing while China's is shrinking.

Most impacted countries

China and the United States, which have two of the largest economies and populations in the world, take the top two spots across all environmental factors. A mix of other countries makes up the remaining eight spots in the top 10. Kazakhstan, Malaysia, Iran and Thailand -- countries to which servers are outsourced and, in some cases, where cryptocurrency mining is subsidized by the government -- appear as well. Canada, Germany and Russia have some of the largest footprints across all categories. Each country engaged in large-scale bitcoin mining affects countries around the world through its carbon emissions, Madani noted.

But the benefits of bitcoin mining may not accrue to the country, or the individuals, doing the work. Cryptocurrency mining is an extractive and, by design, difficult-to-trace process, so the geographic distribution of environmental impacts cannot be assumed to be a map of the biggest digital asset owners.

"It's hard to know exactly who is benefiting from this," Madani said. "The issue now is who is suffering from this."

Already, some countries have potentially seen their resources impacted by cryptocurrency mining. In 2021, Iran faced blackouts. The government blamed bitcoin mining for excessively draining hydropower during a drought and periodically banned the practice.

China banned bitcoin mining and transactions in June 2021; other countries, such as the U.S. and Kazakhstan, have taken up the slack, with their shares of global bitcoin mining increasing by 34% and 10%, respectively.

Madani said the study is not meant to indict bitcoin or other cryptocurrency mining. "We're getting used to these technologies, and they have hidden costs we don't realize," he said. "We want to inform people and industries about what these costs might be before it's too late."

* By the numbers, global bitcoin mining in 2020-2021:

  • Used 173 terawatt hours of electricity (more than most nations)
  • Emitted 86 megatons of carbon (like burning 8.5 billion pounds of coal)
  • Required 1.65 cubic kilometers of water (more than the domestic use of 300 million people in Sub-Saharan Africa)
  • Affected 1,870 square kilometers of land (1.4 times the size of Los Angeles)
  • Got 67% of its energy from fossil fuels, with coal contributing 45%

Plant-based materials give 'life' to tiny soft robots

 A team of University of Waterloo researchers has created smart, advanced materials that will be the building blocks for a future generation of soft medical microrobots.

These tiny robots have the potential to conduct medical procedures, such as biopsy, and cell and tissue transport, in a minimally invasive fashion. They can move through confined and flooded environments, like the human body, and deliver delicate and light cargo, such as cells or tissues, to a target position.

The tiny soft robots are a maximum of one centimetre long and are bio-compatible and non-toxic. The robots are made of advanced hydrogel composites that include sustainable cellulose nanoparticles derived from plants.

This research, led by Hamed Shahsavan, a professor in the Department of Chemical Engineering, portrays a holistic approach to the design, synthesis, fabrication, and manipulation of microrobots. The hydrogel used in this work changes its shape when exposed to external chemical stimulation. The ability to orient cellulose nanoparticles at will enables researchers to program such shape-change, which is crucial for the fabrication of functional soft robots.

"In my research group, we are bridging the old and new," said Shahsavan, director of the Smart Materials for Advanced Robotic Technologies (SMART-Lab). "We introduce emerging microrobots by leveraging traditional soft matter like hydrogels, liquid crystals, and colloids."

The other unique component of this advanced smart material is that it is self-healing, which allows a wide range of robot shapes to be programmed. Researchers can cut the material and paste it back together without using glue or other adhesives to form different shapes for different procedures.

The material can be further modified to respond to magnetic fields, which facilitates moving the soft robots through the human body. As a proof of concept of how the robot would maneuver through the body, researchers steered the tiny robot through a maze by controlling its movement with a magnetic field.

"Chemical engineers play a critical role in pushing the frontiers of medical microrobotics research," Shahsavan said. "Interestingly, tackling the many grand challenges in microrobotics requires the skillset and knowledge chemical engineers possess, including heat and mass transfer, fluid mechanics, reaction engineering, polymers, soft matter science, and biochemical systems. So, we are uniquely positioned to introduce innovative avenues in this emerging field."

The next step in this research is to scale the robot down to submillimeter scales.

Shahsavan's research group collaborated with Waterloo's Tizazu Mekonnen, a professor from the Department of Chemical Engineering, Professor Shirley Tang, Associate Dean of Science (Research), and Amirreza Aghakhani, a professor from the University of Stuttgart in Germany.

Engineered bacteria can lower blood pressure

 Scientists at The University of Toledo have proven that engineered bacteria can lower blood pressure, a finding that opens new doors in the pursuit of harnessing our body's own microbiome to treat hypertension.

The study, published this month in the peer-reviewed journal Pharmacological Research, represents a paradigm shift, said Dr. Bina Joe, a hypertension researcher at UToledo and the paper's senior author.

"The question we always ask is, can we exploit microbiota to help our health, for which optimal blood pressure is a cardinal sign. Until now, we have simply said changes in microbiota play a role in elevated blood pressure or hypertension. Those are important findings, but they don't always have an immediately translational application," she said. "This is the first time we have shown that we really can do this. It's a proof of principle that you can use microbiota to make products that measurably improve your health."

Joe, a Distinguished University Professor and chair of the Department of Physiology and Pharmacology in the UToledo College of Medicine and Life Sciences, is a pioneer in studying the connection between bacteria living in our gut and blood pressure regulation.

In her most recent research, Joe and her team tested Lactobacillus paracasei, a beneficial gut bacterium that was specially modified to produce a protein called ACE2, in lab rats that are predisposed to hypertension and unable to naturally produce ACE2.

ACE2 has drawn considerable interest in recent years because of its role as a key receptor for the virus that causes COVID-19.

However, the protein also negatively regulates the renin-angiotensin system which generates angiotensin II, a hormone that raises blood pressure in a number of ways, including by the constriction of blood vessels.

By feeding rats the engineered Lactobacillus paracasei bacterium as a probiotic, researchers were able to introduce human ACE2 in their guts, which specifically reduced their gut angiotensin II and, in turn, lowered their blood pressure.

Interestingly, though, the blood-pressure-lowering effects were only seen in female rats. Though there was no difference in ACE2 expression between male and female rats, only the female rats saw a decrease in their blood pressure.

Researchers are unsure of exactly why that was the case, but Joe speculates it has something to do with the fact that females, but not males, have two functional copies of ACE2.

The gene encoding ACE2 is located in a region of the X-chromosome which escapes a genetic phenomenon called X-inactivation. It appears, Joe said, that having two functional copies of ACE2 is extremely important for females, because when both copies were lost, females had a much higher level of hypertension compared to males.

"Females therefore appear to readily accept all the help they can get from gut microbiota supplying ACE," she said. "For now, this is a theory that requires further experimental proof."

Even with the differing results between male and female rats, however, Joe said the findings are an important steppingstone between the theory of leveraging bacteria to treat hypertension and other chronic conditions and practically being able to bring it into the clinic.

"There have been questions about microbiome medicine -- is it a fad or is it real? This is an extremely exciting demonstration that we can exploit bacteria to work for us, and it works for high blood pressure, something that affects a significant portion of the population," she said. "It's that ray of hope that you may not need traditional medications to keep your blood pressure in check."

According to data from the U.S. Centers for Disease Control and Prevention, nearly half of U.S. adults have hypertension, and only about one in four of those have their blood pressure under control. Though it rarely exhibits symptoms, uncontrolled blood pressure is a major risk factor for heart attack, stroke and kidney disease.

Joe said more research is needed, including studies on what happens when ACE2-producing bacteria are introduced in animals whose bodies already make the protein naturally, and on any side effects that might come with boosting ACE2 levels in the gut. However, the first-of-its-kind research from UToledo opens a window into the substantial potential of exploiting the bacteria living in our gut for our benefit.

"It is a real possibility that we can use bacteria to correct hypertension. This is a big deal, and the concept could be applied to other diseases," she said. "For example, if you are unable to control your sugar, can we have a bacteria make a protein that can lower your blood glucose? There are still a lot of questions that need answered, but now we know the paradigm works."

Women with a heart healthy diet in midlife are less likely to report cognitive decline later

 Women with diets during middle age designed to lower blood pressure were about 17 percent less likely to report memory loss and other signs of cognitive decline decades later, a new study finds.

Led by researchers from NYU Grossman School of Medicine, the new findings suggest that a mid-life lifestyle modification -- adoption of the Dietary Approaches to Stop Hypertension, or DASH diet -- may improve cognitive function later in life for women, who make up more than two-thirds of those diagnosed with Alzheimer's disease, the most prevalent form of dementia.

The findings, published online today in the journal Alzheimer's & Dementia, have implications for the approximately 6.5 million Americans over age 65 diagnosed with Alzheimer's disease in 2022. That number is expected to more than double by 2060.

"Subjective complaints about daily cognitive performance are early predictors of more serious neurocognitive disorders such as Alzheimer's," said Yu Chen, PhD, MPH, professor in the Department of Population Health and senior author of the study. "With more than 30 years follow-up, we found that the stronger the adherence to a DASH diet in midlife, the less likely women are to report cognitive issues much later in life."

The DASH diet includes a high consumption of plant-based foods that are rich in potassium, calcium, and magnesium and limits saturated fat, cholesterol, sodium, and sugar. Longstanding research shows that high blood pressure, particularly in midlife, is a risk factor for cognitive decline and dementia.

How the Study was Conducted

The investigators analyzed data from 5,116 of the more than 14,000 women enrolled in the NYU Women's Health Study, one of the longest running studies of its kind that examines the impact of lifestyle and other factors on the development of the most common cancers among women, as well as other chronic conditions.

The researchers assessed the study participants' diets using questionnaires administered between 1985 and 1991, at study enrollment, when the participants were, on average, 49 years old. The participants were followed for more than 30 years (to an average age of 79) and then asked to report any cognitive complaints. Participants who did not return questionnaires were contacted by phone.

Self-reported cognitive complaints were assessed using six validated standard questions that are indicative of later mild cognitive impairment, which can lead to dementia. These questions were about difficulties in remembering recent events or shopping lists, understanding spoken instructions or group conversation, or navigating familiar streets.

Of the six cognitive complaints, 33 percent of women reported having more than one. Women who adhered most closely to the DASH diet had a 17 percent reduction in the odds of reporting multiple cognitive complaints.
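For readers less used to odds-based results, a 17 percent reduction in the odds corresponds to an odds ratio of roughly 0.83. The small worked example below treats the 33 percent figure as the baseline probability of reporting multiple complaints, which is a simplifying assumption for illustration only:

```python
# Illustrative conversion of "17% lower odds" into a change in probability.
baseline_p = 0.33                 # share reporting more than one cognitive complaint
baseline_odds = baseline_p / (1 - baseline_p)

odds_ratio = 1 - 0.17             # 17% reduction in odds for highest DASH adherence
adjusted_odds = baseline_odds * odds_ratio
adjusted_p = adjusted_odds / (1 + adjusted_odds)

print(f"Odds ratio: {odds_ratio:.2f}")
print(f"Illustrative probability: {baseline_p:.0%} -> {adjusted_p:.0%}")  # ~33% -> ~29%
```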

"Our data suggest that it is important to start a healthy diet in midlife to prevent cognitive impairment in older age," said Yixiao Song , a lead author of the study.

"Following the DASH diet may not only prevent high blood pressure, but also cognitive issues," said Fen Wu, PhD, an senior associate research scientist and co-led the study.

According to the investigators, future research is needed across multiple racial and ethnic groups to determine the generalizability of the findings.

Can AI grasp related concepts after learning only one?

 Humans have the ability to learn a new concept and then immediately use it to understand related uses of that concept -- once children know how to "skip," they understand what it means to "skip twice around the room" or "skip with your hands up."

But are machines capable of this type of thinking? In the late 1980s, Jerry Fodor and Zenon Pylyshyn, philosophers and cognitive scientists, posited that artificial neural networks -- the engines that drive artificial intelligence and machine learning -- are not capable of making these connections, known as "compositional generalizations." However, in the decades since, scientists have been developing ways to instill this capacity in neural networks and related technologies, but with mixed success, thereby keeping alive this decades-old debate.

Researchers at New York University and Spain's Pompeu Fabra University have now developed a technique -- reported in the journal Nature -- that advances the ability of these tools, such as ChatGPT, to make compositional generalizations. This technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and is on par with, and in some cases better than, human performance. MLC centers on training neural networks -- the engines driving ChatGPT and related technologies for speech recognition and natural language processing -- to become better at compositional generalization through practice.

Developers of existing systems, including large language models, have hoped that compositional generalization will emerge from standard training methods, or have developed special-purpose architectures in order to achieve these abilities. MLC, in contrast, shows how explicitly practicing these skills allows these systems to unlock new powers, the authors note.

"For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization," says Brenden Lake, an assistant professor in NYU's Center for Data Science and Department of Psychology and one of the authors of the paper. "We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison."

In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally -- for instance, to take the word "jump" and then create new word combinations, such as "jump twice" or "jump around right twice." MLC then receives a new episode that features a different word, and so on, each time improving the network's compositional skills.
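As a concrete picture of what one such episode might contain, here is a minimal sketch of an episode as a data structure. The words, primitive actions and composition targets are illustrative stand-ins, not the actual grammar or training code from the Nature paper.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One MLC-style practice episode: a few study examples that introduce a new
    word, plus query instructions the network must interpret compositionally."""
    study: list = field(default_factory=list)    # (instruction, action sequence) pairs
    queries: list = field(default_factory=list)  # held-out instructions to answer

# Illustrative episode: the network sees how "jump" behaves, then must compose it
# with modifiers it has already practiced ("twice", "around right").
episode = Episode(
    study=[("jump", ["JUMP"]),
           ("walk twice", ["WALK", "WALK"])],
    queries=["jump twice", "jump around right twice"],
)

for q in episode.queries:
    print(f"query: {q!r} -> model must produce the composed action sequence")
```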

To test the effectiveness of MLC, Lake, co-director of NYU's Minds, Brains, and Machines Initiative, and Marco Baroni, a researcher at the Catalan Institute for Research and Advanced Studies and professor at the Department of Translation and Language Sciences of Pompeu Fabra University, conducted a series of experiments with human participants that were identical to the tasks performed by MLC.

In these experiments, rather than learning the meanings of actual words -- terms humans would already know -- participants had to learn the meanings of nonsensical terms (e.g., "zup" and "dax") defined by the researchers and apply them in different ways. MLC performed as well as the human participants -- and, in some cases, better. MLC and people also outperformed ChatGPT and GPT-4, which, despite their striking general abilities, showed difficulties with this learning task.

"Large language models such as ChatGPT still struggle with compositional generalization, though they have gotten better in recent years," observes Baroni, a member of Pompeu Fabra University's Computational Linguistics and Linguistic Theory research group. "But we think that MLC can further improve the compositional skills of large language models."

Wednesday, 25 October 2023

Smartphone attachment could increase racial fairness in neurological screening

 Engineers at the University of California San Diego have developed a smartphone attachment that could enable people to screen for a variety of neurological conditions, such as Alzheimer's disease and traumatic brain injury, at low cost -- and do so accurately regardless of their skin tone.

The technology, published in Scientific Reports, has the potential to improve the equity and accessibility of neurological screening procedures while making them widely available on all smartphone models.

The attachment fits over a smartphone's camera and improves its ability to capture clear video recordings and measurements of the pupil, which is the dark center of the eye. Recent research has shown that tracking pupil size changes during certain tasks can provide valuable insight into an individual's neurological functions. For example, the pupil tends to dilate during complex cognitive tasks or in response to unexpected stimuli.

However, tracking pupil size can be difficult in individuals with dark eye colors, such as those with darker skin tones, because conventional color cameras struggle to distinguish the pupil from the surrounding dark iris.

To enhance the visibility of the pupil, UC San Diego engineers equipped their smartphone attachment with a specialized filter that selectively permits a certain range of light into the camera. That range is called far-red light -- the extreme red end of the visible spectrum located just before infrared light. Melanin, the dark pigment in the iris, absorbs most visible wavelengths of light but reflects longer wavelengths, including far-red light. By imaging the eye with far-red light while blocking out other wavelengths, the iris appears significantly lighter, making it easier to see the pupil with a regular camera.

"There has been a large issue with medical device design that depends on optical measurements ultimately working only for those with light skin and eye colors, while failing to perform well for those with dark skin and eyes," said study senior author Edward Wang, an electrical and computer engineering professor in The Design Lab at UC San Diego, where he is the director of the Digital Health Technologies Lab. "By focusing on how we can make this work for all people while keeping the solution simple and low cost, we aim to pave the way to a future of fair access to remote, affordable healthcare."

Another feature of this technology that makes it more accessible is that it is designed to work on all smartphones. Traditionally, pupil measurements have been performed using infrared cameras, which are only available in high-end smartphone models. Since regular cameras cannot detect infrared light, this traditional approach limits accessibility to those who can afford more expensive smartphones. By using far-red light, which is still part of the visible spectrum and can be captured by regular smartphone cameras, this technology levels the playing field.

"The issue with relying on specialized sensors like an infrared camera is that not all phones have it," said study first author Colin Barry, an electrical and computer engineering Ph.D. student in Wang's lab. "We created an inexpensive and fair solution to provide these kinds of emerging neurological screenings regardless of the smartphone price, make or model."

To use the attachment, a person clips it over a smartphone's camera and places it over their eye. Then, the smartphone administers a pupil response test by providing a flash of bright light and recording video of the eye during the test. A machine learning model uses the recorded video of the eye to track pupil size.
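
As a rough illustration of the kind of video analysis involved, the sketch below estimates pupil size from a single frame by thresholding the dark pupil against the lightened iris. It is a simplified stand-in: the published system relies on a trained machine learning model, and the intensity cutoff here is an arbitrary assumption.

    import cv2

    def pupil_diameter_px(frame_bgr):
        """Return an approximate pupil diameter in pixels, or None if no dark blob is found."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (7, 7), 0)
        # Under far-red illumination the iris appears light, so the pupil should be
        # the darkest connected region; the cutoff of 40 is an illustrative guess.
        _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)   # largest dark blob
        (_x, _y), radius = cv2.minEnclosingCircle(pupil)
        return 2 * radius

Applied frame by frame to the recorded video, a routine like this would yield the pupil-size time series that the screening test analyzes after the light flash.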

The researchers tested their smartphone attachment on a diverse group of 12 volunteers with a wide range of eye colors, from light blue to dark brown. The smartphone measurements were validated against a pupillometer, the gold standard device used in the clinic for measuring pupil size.

The next phase of this project involves taking steps towards deploying the technology for large-scale neurological screenings in at-home environments. To reach that stage, the researchers are working on optimizing the design for mass manufacturing. They are also making the technology more user friendly, especially for older adults given their elevated risk for developing neurological conditions.

Wang and Barry have co-founded a company, Billion Labs Inc., to refine and commercialize the technology.

Paper: "Racially fair pupillometry measurements for RGB smartphone cameras using the far red spectrum."

This work is supported by the National Institute on Aging.

Disclosures: Edward Wang and Colin Barry are co-founders of and have a financial interest in Billion Labs Inc. Wang is also the CEO of Billion Labs. The terms of this arrangement have been reviewed and approved by the University of California San Diego in accordance with its conflict-of-interest policies.

Certain per- and polyfluoroalkyl 'forever chemicals' identified as potential risk factor for thyroid cancer

 Mount Sinai researchers have discovered a link between certain per- and polyfluoroalkyl substances (PFAS) and an increased risk for thyroid cancer, according to a study published in eBioMedicine today.

PFAS, also known as "forever chemicals," are a large, complex group of synthetic chemicals that can migrate into the soil, water, and air. Due to their strong carbon-fluorine bonds, these chemicals do not degrade easily in the environment. Forever chemicals have been used in consumer products around the world since the 1940s, including nonstick cookware, water-repellent clothing, stain-resistant fabrics, and other products that resist grease, water, and oil.

Multiple national and international institutions, including the European Parliament and the U.S. Environmental Protection Agency (EPA), have declared PFAS exposure a health crisis. This study supports the actions needed to regulate and remove these chemicals from potential exposure routes. Although PFAS exposure has been identified as a potential contributor to recent increases in thyroid cancer, limited studies have investigated the association between PFAS exposure and thyroid cancer in human populations.

"With the substantial increase of thyroid cancer worldwide over recent decades, we wanted to dive into the potential environmental factors that could be the cause for this rise. This led us to the finding that PFAS, 'forever chemicals,' may at least partially explain the rise of thyroid cancer and are an area we should continue to study further," said co-corresponding author Maaike van Gerwen, MD, PhD, Assistant Professor and Director of Research for the Department of Otolaryngology -- Head and Neck Surgery, Icahn School of Medicine at Mount Sinai. "Thyroid cancer risk from PFAS exposure is a global concern given the prevalence of PFAS exposure in our world. This study provides critical evidence to support large-scale studies further exploring the effect of PFAS exposure on the thyroid gland."

The researchers investigated associations between plasma PFAS levels and thyroid cancer diagnosis using BioMe, a medical record-linked biobank at Icahn Mount Sinai. They studied 88 thyroid cancer patients with plasma samples collected either at or before cancer diagnosis and 88 non-cancer controls -- people who did not develop any form of cancer -- matched on sex, race/ethnicity, age (within five years), body mass index, smoking status, and the year of sample collection. The researchers measured levels of eight PFAS in blood samples from the BioMe participants using untargeted metabolomics, then compared the levels of individual PFAS between the participants who developed thyroid cancer and the cancer-free controls using several statistical models to gauge the robustness of the associations.
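
The study's exact modeling choices are not described here, but a standard way to analyze a 1:1 matched case-control design of this kind is conditional logistic regression, sketched below with simulated data; the variable names and exposure values are purely illustrative.

    import numpy as np
    import pandas as pd
    from statsmodels.discrete.conditional_models import ConditionalLogit

    rng = np.random.default_rng(0)
    n_pairs = 88
    df = pd.DataFrame({
        "pair_id": np.repeat(np.arange(n_pairs), 2),   # one case and one control per matched pair
        "case": np.tile([1, 0], n_pairs),              # 1 = thyroid cancer, 0 = matched control
        "log_pfos": rng.normal(size=2 * n_pairs),      # simulated exposure values for illustration
    })

    model = ConditionalLogit(df["case"], df[["log_pfos"]], groups=df["pair_id"])
    result = model.fit()
    print(np.exp(result.params))   # exponentiated coefficient = odds ratio per unit of exposure

Exponentiating the fitted coefficient gives an odds ratio; an odds ratio of roughly 1.56, for example, corresponds to a 56 percent increase in the odds of diagnosis.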

The results showed that exposure to linear perfluorooctanesulfonic acid (n-PFOS), one of the chemicals under the PFAS umbrella, was associated with a 56 percent higher risk of thyroid cancer diagnosis. Additionally, the researchers repeated the analysis in a subgroup of 31 patients who had at least a year between their enrollment in BioMe and their thyroid cancer diagnosis, to account for the time lag between exposure to PFAS chemicals and the development of disease. This second analysis also showed a positive association between n-PFOS exposure and the risk of thyroid cancer, as well as positive associations with a few additional PFAS chemicals, including branched perfluorooctanesulfonic acid, perfluorononanoic acid, perfluorooctylphosphonic acid, and linear perfluorohexanesulfonic acid.

"The results of this study provide further confirmation for the PFAS health crisis and underline the need to reduce, and hopefully one day eliminate, PFAS exposure," said co-corresponding author Lauren Petrick, PhD, Associate Professor of Environmental Medicine and Public Health, Icahn Mount Sinai. "Today, it's nearly impossible to avoid PFAS in our daily activities. We hope these findings bring awareness of the severity of these forever chemicals. Everyone should discuss their PFAS exposure with their treating physician to determine their risk and get screened if appropriate. In addition, we need continued industry changes to eliminate PFAS altogether."

This study was funded with pilot funding through the Department of Environmental Medicine and Public Health and the Institute for Exposomic Research's National Institute of Environmental Health Sciences-funded Center on Health and Environment Across the LifeSpan (HEALS), which supports research on environmental exposures and their effects on health across the life course.

Monday, 23 October 2023

To excel at engineering design, generative AI must learn to innovate, study finds

 ChatGPT and other deep generative models are proving to be uncanny mimics. These AI supermodels can churn out poems, finish symphonies, and create new videos and images by automatically learning from millions of examples of previous works. These enormously powerful and versatile tools excel at generating new content that resembles everything they've seen before.

But as MIT engineers say in a new study, similarity isn't enough if you want to truly innovate in engineering tasks.

"Deep generative models (DGMs) are very promising, but also inherently flawed," says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. "The objective of these models is to mimic a dataset. But as engineers and designers, we often don't want to create a design that's already out there."

He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond "statistical similarity."

"The performance of a lot of these models is explicitly tied to how statistically similar a generated sample is to what the model has already seen," says co-author Faez Ahmed, assistant professor of mechanical engineering at MIT. "But in design, being different could be important if you want to innovate."

In their study, Ahmed and Regenwetter reveal the pitfalls of deep generative models when they are tasked with solving engineering design problems. In a case study of bicycle frame design, the team shows that these models end up generating new frames that mimic previous designs but falter on engineering performance and requirements.

When the researchers presented the same bicycle frame problem to DGMs that they specifically designed with engineering-focused objectives, rather than only statistical similarity, these models produced more innovative, higher-performing frames.

The team's results show that similarity-focused AI models don't quite translate when applied to engineering problems. But, as the researchers also highlight in their study, with some careful planning of task-appropriate metrics, AI models could be an effective design "co-pilot."

"This is about how AI can help engineers be better and faster at creating innovative products," Ahmed says. "To do that, we have to first understand the requirements. This is one step in that direction."

The team's new study appeared recently online, and will be in the December print edition of the journal Computer-Aided Design. The research is a collaboration between computer scientists at the MIT-IBM Watson AI Lab and mechanical engineers in MIT's DeCoDe Lab. The study's co-authors include Akash Srivastava and Dan Gutfreund at the MIT-IBM Watson AI Lab.

Framing a problem

As Ahmed and Regenwetter write, DGMs are "powerful learners, boasting unparalleled ability" to process huge amounts of data. DGM is a broad term for any machine-learning model that is trained to learn the distribution of a dataset and then use it to generate new, statistically similar content. The enormously popular ChatGPT is one type of deep generative model known as a large language model, or LLM, which incorporates natural language processing capabilities to enable the app to generate realistic text in response to conversational queries. Other popular models for image generation include DALL-E and Stable Diffusion.

Because of their ability to learn from data and generate realistic samples, DGMs have been increasingly applied in multiple engineering domains. Designers have used deep generative models to draft new aircraft frames, metamaterial designs, and optimal geometries for bridges and cars. But for the most part, the models have mimicked existing designs without improving on their performance.

"Designers who are working with DGMs are sort of missing this cherry on top, which is adjusting the model's training objective to focus on the design requirements," Regenwetter says. "So, people end up generating designs that are very similar to the dataset."

In the new study, he outlines the main pitfalls in applying DGMs to engineering tasks, and shows that the fundamental objective of standard DGMs does not take into account specific design requirements. To illustrate this, the team invokes a simple case of bicycle frame design and demonstrates that problems can crop up as early as the initial learning phase. As a model learns from thousands of existing bike frames of various sizes and shapes, it might consider two frames of similar dimensions to have similar performance, when in fact a small disconnect in one frame -- too small to register as a significant difference in statistical similarity metrics -- makes the frame much weaker than the other, visually similar frame.

Beyond "vanilla"

The researchers carried the bicycle example forward to see what designs a DGM would actually generate after having learned from existing designs. They first tested a conventional "vanilla" generative adversarial network, or GAN -- a model that has widely been used in image and text synthesis, and is tuned simply to generate statistically similar content. They trained the model on a dataset of thousands of bicycle frames, including commercially manufactured designs and less conventional, one-off frames designed by hobbyists.
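
For readers who want to see what "tuned simply to generate statistically similar content" means in practice, the sketch below shows one training step of a toy vanilla GAN in PyTorch; the network sizes and random "frame" features are stand-ins, not the team's actual model or data.

    import torch
    import torch.nn as nn

    latent_dim, design_dim = 16, 8
    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, design_dim))
    D = nn.Sequential(nn.Linear(design_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

    real = torch.randn(32, design_dim)            # stand-in for a batch of real frame features
    fake = G(torch.randn(32, latent_dim))         # generated frames

    # Discriminator step: label real frames 1 and generated frames 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: the only objective is to fool the discriminator, i.e. to look
    # statistically like the dataset -- nothing rewards lighter or stronger frames.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()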

Once the model learned from the data, the researchers asked it to generate hundreds of new bike frames. The model produced realistic designs that resembled existing frames. But none of the designs showed significant improvement in performance, and some were even a bit inferior, with heavier, less structurally sound frames.

The team then carried out the same test with two other DGMs that were specifically designed for engineering tasks. The first model is one that Ahmed previously developed to generate high-performing airfoil designs. He built this model to prioritize statistical similarity as well as functional performance. When applied to the bike frame task, this model generated realistic designs that also were lighter and stronger than existing designs. But it also produced physically "invalid" frames, with components that didn't quite fit or overlapped in physically impossible ways.

"We saw designs that were significantly better than the dataset, but also designs that were geometrically incompatible because the model wasn't focused on meeting design constraints," Regenwetter says.

The last model the team tested was one that Regenwetter built to generate new geometric structures. This model was designed with the same priorities as the previous models, with the added ingredient of design constraints: it prioritized physically viable frames, for instance with no disconnections or overlapping bars. This last model produced the highest-performing designs that were also physically feasible.
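
A hypothetical sketch of how such engineering objectives can be folded into a generator's loss is shown below, reusing the same toy generator and discriminator as the earlier sketch (redefined here so the snippet runs on its own). The surrogate performance model, the feasibility check, and the weighting factors are all invented for illustration, not taken from the paper.

    import torch
    import torch.nn as nn

    latent_dim, design_dim = 16, 8
    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, design_dim))
    D = nn.Sequential(nn.Linear(design_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

    def predicted_performance(x):
        # Stand-in for a surrogate model of, say, frame stiffness or weight.
        return -x.pow(2).sum(dim=1)

    def constraint_violation(x):
        # Stand-in for geometric feasibility checks (e.g., bars must connect).
        return torch.relu(x[:, 0] - x[:, 1]).pow(2)

    fake = G(torch.randn(32, latent_dim))
    adv = bce(D(fake), torch.ones(32, 1))                 # similarity: fool the discriminator
    loss_g = (adv
              - 0.1 * predicted_performance(fake).mean()  # reward predicted performance
              + 1.0 * constraint_violation(fake).mean())  # penalize infeasible geometry
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

The added terms change what "good" means for the generator: instead of only resembling the dataset, samples are pushed toward higher predicted performance while being penalized for violating design constraints.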

"We found that when a model goes beyond statistical similarity, it can come up with designs that are better than the ones that are already out there," Ahmed says. "It's a proof of what AI can do, if it is explicitly trained on a design task."

For instance, if DGMs can be built with other priorities, such as performance, design constraints, and novelty, Ahmed foresees "numerous engineering fields, such as molecular design and civil infrastructure, would greatly benefit. By shedding light on the potential pitfalls of relying solely on statistical similarity, we hope to inspire new pathways and strategies in generative AI applications."

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...