Monday 27 March 2023

Hunting Venus 2.0: Scientists sharpen their sights

 With the first paper compiling all known information about planets like Venus beyond our solar system, scientists are the closest they've ever been to finding an analog of Earth's "twin."

If they succeed in locating one, it could reveal valuable insights into Earth's future, and our risk of developing a runaway greenhouse climate as Venus did.

Scientists who wrote the paper began with more than 300 known terrestrial planets orbiting other stars, called exoplanets. They whittled the list down to the five most likely to resemble Venus in terms of their radii, masses, densities, the shapes of their orbits, and perhaps most significantly, distances from their stars.

The paper, published in The Astronomical Journal, also ranked the most Venus-like planets in terms of the brightness of the stars they orbit, which increases the likelihood that the James Webb Space Telescope would get more informative signals regarding the composition of their atmospheres.

Today's Venus floats in a nest of sulfuric acid clouds, has no water, and features surface temperatures of up to 900 degrees Fahrenheit -- hot enough to melt lead. Using the Webb telescope to observe these possible Venus analogs, or "exoVenuses," scientists hope to learn if things were ever different for our Venus.

"One thing we wonder is if Venus could once have been habitable," said Colby Ostberg, lead study author and UC Riverside Ph.D. student. 'To confirm this, we want to look at the coolest of the planets in the outer edge of the Venus zone, where they get less energy from their stars."

The Venus Zone is a concept proposed by UCR astrophysicist Stephen Kane in 2014. It is similar to the concept of a habitable zone, which is a region around a star where liquid surface water could exist.

"The Venus Zone is where it would be too hot to have water, but not hot enough that the planet's atmosphere gets stripped away," Ostberg explained. "We want to find planets that still have significant atmospheres."

Finding a planet similar to Venus in terms of planet mass is also important because mass affects how long a planet is able to maintain an active interior, with the movement of rocky plates across its outer shell known as plate tectonics.

"Venus has 20% less mass than Earth, and as a result, scientists believe there may not be any tectonic activity there. Therefore, Venus has a hard time taking carbon out of its atmosphere," Ostberg explained. "The planet just can't get rid of it."

Another aspect of an active planet interior is volcanic activity, and evidence uncovered just this month suggests Venus still has active volcanoes. "The large number of Venus analogs identified in our paper will allow us to test if such volcanic activity is the norm amongst similar planets, or not," said Kane, who co-authored the study.

The research team is proposing the planets identified in the paper as targets for the Webb telescope in 2024. Webb is the most expensive and advanced observation tool ever created and will enable scientists not only to see whether the exoVenuses have atmospheres, but also what they're made of.

The Webb observations may reveal biosignature gases in the atmosphere of an exoVenus, such as methane, methyl bromide or nitrous oxide, which could signal the presence of life.

"Detecting those molecules on an exoVenus would show that habitable worlds can exist in the Venus Zone and strengthen the possibility of a temperate period in Venus' past," Ostberg said.

These observations will be complemented by NASA's two upcoming missions to Venus, in which Kane will play an active role. The DAVINCI mission will also measure gases in the Venusian atmosphere, while the VERITAS mission will enable 3D reconstructions of the landscape.

All of these observations are leading toward the ultimate question that Kane poses in much of his work, which attempts to understand the Earth-Venus divergence in climate: "Is Earth weird or is Venus the weird one?"

"It could be that one or the other evolved in an unusual way, but it's hard to answer that when we only have two planets to analyze in our solar system, Venus and Earth. The exoplanet explorations will give us the statistical power to explain the differences we see," Kane said.

If the planets on the new list turn out to indeed be much like Venus, that would show the outcome of Venus' evolution is common.

"That would be a warning for us here on Earth because the danger is real. We need to understand what happened there to make sure it doesn't happen here," Kane said.

Searching for life with space dust

 Following enormous collisions, such as asteroid impacts, some amount of material from an impacted world may be ejected into space. This material can travel vast distances and for extremely long periods of time. In theory this material could contain direct or indirect signs of life from the host world, such as fossils of microorganisms. And this material could be detectable by humans in the near future, or even now.

When you hear the words vacuum and dust in a sentence, you may groan at the thought of having to do the housework. But in astronomy, these words have different connotations. Vacuum of course refers to the void of space. Dust, however, means diffuse solid material floating through space. It can be an annoyance to some astronomers as it may hinder their views of some distant object. Or dust could be a useful tool to help other astronomers learn about something distant without having to leave the safety of our own planet. Professor Tomonori Totani from the University of Tokyo's Department of Astronomy has an idea for space dust that might sound like science fiction but actually warrants serious consideration.

"I propose we study well-preserved grains ejected from other worlds for potential signs of life," said Totani. "The search for life outside our solar system typically means a search for signs of communication, which would indicate intelligent life but precludes any pre-technological life. Or the search is for atmospheric signatures that might hint at life, but without direct confirmation there could always be an explanation that does not require life. However, if there are signs of life in dust grains, not only could we be certain, but we could also find out soon."

The basic idea is that large asteroid strikes can eject ground material into space. There is a chance that recently deceased or even fossilized microorganisms could be contained in some rocky material in this ejecta. This material will vary in size greatly, with different-sized pieces behaving differently once in space. Some larger pieces might fall back down or enter permanent orbits around a local planet or star. And some much smaller pieces might be too small to contain any verifiable signs of life. But grains in the region of 1 micrometer (one-thousandth of a millimeter) could not only host a specimen of a single-celled organism, but they could also potentially escape their host solar system altogether, and under the right circumstances, maybe even venture to ours.

"My paper explores this idea using available data on the different aspects of this scenario," said Totani. "The distances and times involved can be vast, and both reduce the chance any ejecta containing life signs from another world could even reach us. Add to that the number of phenomena in space that can destroy small objects due to heat or radiation, and the chances get even lower. Despite that, I calculate around 100,000 such grains could be landing on Earth every year. Given there are many unknowns involved, this estimate could be too high or too low, but the means to explore it already exist so it seems like a worthwhile pursuit."

There may be such grains already on Earth, and in plentiful amounts, preserved in places such as the Antarctic ice, or under the seafloor. Space dust in these places could be retrieved relatively easily, but discerning extrasolar material from material originating in our own solar system is still a complex matter. If the search is extended to space itself, however, there are already missions that capture dust in the vacuum using ultralight materials called aerogels.

"I hope that researchers in different fields are interested in this idea and start to examine the feasibility of this new search for extrasolar life in more detail." said Totani.

Neutrinos made by a particle collider detected

 In a scientific first, a team led by physicists at the University of California, Irvine has detected neutrinos created by a particle collider. The discovery promises to deepen scientists' understanding of the subatomic particles, which were first spotted in 1956 and play a key role in the process that makes stars burn.

The work could also shed light on cosmic neutrinos that travel large distances and collide with the Earth, providing a window on distant parts of the universe.

It's the latest result from the Forward Search Experiment, or FASER, a particle detector designed and built by an international group of physicists and installed at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. There, FASER detects particles produced by CERN's Large Hadron Collider.

"We've discovered neutrinos from a brand-new source particle colliders where you have two beams of particles smash together at extremely high energy," said UC Irvine particle physicist and FASER Collaboration Co-Spokesman Jonathan Feng, who initiated the project, which involves over 80 researchers at UCI and 21 partner institutions.

Brian Petersen, a particle physicist at CERN, announced the results Sunday on behalf of FASER at the 57th Rencontres de Moriond Electroweak Interactions and Unified Theories conference in Italy.

Neutrinos, which were co-discovered nearly 70 years ago by the late UCI physicist and Nobel laureate Frederick Reines, are the most abundant particle in the cosmos and "were very important for establishing the standard model of particle physics," said Jamie Boyd, a particle physicist at CERN and co-spokesman for FASER. "But no neutrino produced at a collider had ever been detected by an experiment."

Since the groundbreaking work of Reines and others like Hank Sobel, UCI professor of physics & astronomy, the majority of neutrinos studied by physicists have been low-energy neutrinos. But the neutrinos detected by FASER are the highest energy ever produced in a lab and are similar to the neutrinos found when deep-space particles trigger dramatic particle showers in our atmosphere.

"They can tell us about deep space in ways we can't learn otherwise," said Boyd. "These very high-energy neutrinos in the LHC are important for understanding really exciting observations in particle astrophysics."

FASER itself is new and unique among particle-detecting experiments. In contrast to other detectors at CERN, such as ATLAS, which stands several stories tall and weighs thousands of tons, FASER is about one ton and fits neatly inside a small side tunnel at CERN. And it took only a few years to design and construct using spare parts from other experiments.

"Neutrinos are the only known particles that the much larger experiments at the Large Hadron Collider are unable to directly detect, so FASER's successful observation means the collider's full physics potential is finally being exploited," said UCI experimental physicist Dave Casper.

Beyond neutrinos, one of FASER's other chief objectives is to help identify the particles that make up dark matter, which physicists think comprises most of the matter in the universe, but which they've never directly observed.

FASER has yet to find signs of dark matter, but with the LHC set to begin a new round of particle collisions in a few months, the detector stands ready to record any that appear.

"We're hoping to see some exciting signals," said Boyd.

Artificial intelligence discovers secret equation for 'weighing' galaxy clusters

 Astrophysicists at the Institute for Advanced Study, the Flatiron Institute and their colleagues have leveraged artificial intelligence to uncover a better way to estimate the mass of colossal clusters of galaxies. The AI discovered that by just adding a simple term to an existing equation, scientists can produce far better mass estimates than they previously had.

The improved estimates will enable scientists to calculate the fundamental properties of the universe more accurately, the astrophysicists reported March 17, 2023, in the Proceedings of the National Academy of Sciences.

"It's such a simple thing; that's the beauty of this," says study co-author Francisco Villaescusa-Navarro, a research scientist at the Flatiron Institute's Center for Computational Astrophysics (CCA) in New York City. "Even though it's so simple, nobody before found this term. People have been working on this for decades, and still they were not able to find this."

The work was led by Digvijay Wadekar of the Institute for Advanced Study in Princeton, New Jersey, along with researchers from the CCA, Princeton University, Cornell University and the Center for Astrophysics | Harvard & Smithsonian.

Understanding the universe requires knowing where and how much stuff there is. Galaxy clusters are the most massive objects in the universe: A single cluster can contain anything from hundreds to thousands of galaxies, along with plasma, hot gas and dark matter. The cluster's gravity holds these components together. Understanding such galaxy clusters is crucial to pinning down the origin and continuing evolution of the universe.

Perhaps the most crucial quantity determining the properties of a galaxy cluster is its total mass. But measuring this quantity is difficult -- galaxies cannot be 'weighed' by placing them on a scale. The problem is further complicated because the dark matter that makes up much of a cluster's mass is invisible. Instead, scientists deduce the mass of a cluster from other observable quantities.

In the early 1970s, Rashid Sunyaev, current distinguished visiting professor at the Institute for Advanced Study's School of Natural Sciences, and his collaborator Yakov B. Zel'dovich developed a new way to estimate galaxy cluster masses. Their method relies on the fact that as gravity squashes matter together, the matter's electrons push back. That electron pressure alters how the electrons interact with particles of light called photons. As photons left over from the Big Bang's afterglow hit the squeezed material, the interaction creates new photons. The properties of those photons depend on how strongly gravity is compressing the material, which in turn depends on the galaxy cluster's heft. By measuring the photons, astrophysicists can estimate the cluster's mass.
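
In standard notation (a textbook summary added here for context, not an equation quoted from the paper), the observable is the integrated Compton parameter, proportional to the electron pressure integrated over the cluster volume, which under idealized self-similar assumptions scales with cluster mass:

$$ Y_{\mathrm{SZ}} \;\propto\; \frac{\sigma_T}{m_e c^2} \int P_e \, dV, \qquad Y_{\mathrm{SZ}} \;\propto\; M^{5/3}\, E(z)^{2/3}, $$

where $\sigma_T$ is the Thomson cross-section, $m_e c^2$ the electron rest energy, $P_e$ the electron pressure, and $E(z)$ the dimensionless Hubble rate. Deviations from this idealized scaling are what the corrections discussed below aim to capture.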

However, this 'integrated electron pressure' is not a perfect proxy for mass, because the changes in the photon properties vary depending on the galaxy cluster. Wadekar and his colleagues thought an artificial intelligence tool called 'symbolic regression' might find a better approach. The tool essentially tries out different combinations of mathematical operators -- such as addition and subtraction -- with various variables, to see what equation best matches the data.

Wadekar and his collaborators 'fed' their AI program a state-of-the-art universe simulation containing many galaxy clusters. Next, their program, written by CCA research fellow Miles Cranmer, searched for and identified additional variables that might make the mass estimates more accurate.
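
The article does not name the software, so the sketch below uses PySR, an open-source symbolic regression package, purely to illustrate the workflow described above; the data, variables and settings are made up for the example.

```python
import numpy as np
from pysr import PySRRegressor  # open-source symbolic regression package (illustrative choice)

# Hypothetical training data standing in for a simulated cluster catalog.
# Column 0: an integrated-pressure proxy; column 1: a gas-concentration measure.
# y is the "true" cluster mass from the simulation. All of this is toy data.
X = np.random.rand(1000, 2)
y = X[:, 0] ** 0.6 * (1 - 0.2 * X[:, 1])

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["log", "exp"],
    maxsize=20,                 # keep candidate equations short and human-readable
)
model.fit(X, y)                 # searches operator/variable combinations against the data
print(model)                    # best-fitting symbolic expressions found by the search
```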

AI is useful for identifying new parameter combinations that human analysts might overlook. For example, while it is easy for human analysts to identify two significant parameters in a dataset, AI can better parse through high volumes of data, often revealing unexpected influencing factors.

"Right now, a lot of the machine-learning community focuses on deep neural networks," Wadekar explained. "These are very powerful, but the drawback is that they are almost like a black box. We cannot understand what goes on in them. In physics, if something is giving good results, we want to know why it is doing so. Symbolic regression is beneficial because it searches a given dataset and generates simple mathematical expressions in the form of simple equations that you can understand. It provides an easily interpretable model."

The researchers' symbolic regression program handed them a new equation, which was able to better predict the mass of the galaxy cluster by adding a single new term to the existing equation. Wadekar and his collaborators then worked backward from this AI-generated equation and found a physical explanation. They realized that gas concentration correlates with the regions of galaxy clusters where mass inferences are less reliable, such as the cores of galaxies where supermassive black holes lurk. Their new equation improved mass inferences by downplaying the importance of those complex cores in the calculations. In a sense, the galaxy cluster is like a spherical doughnut. The new equation extracts the jelly at the center of the doughnut that can introduce larger errors, and instead concentrates on the doughy outskirts for more reliable mass inferences.

The researchers tested the AI-discovered equation on thousands of simulated universes from the CCA's CAMELS suite. They found that the equation reduced the variability in galaxy cluster mass estimates by around 20 to 30 percent for large clusters compared with the currently used equation.

The new equation can provide observational astronomers engaged in upcoming galaxy cluster surveys with better insights into the mass of the objects they observe. "There are quite a few surveys targeting galaxy clusters [that] are planned in the near future," Wadekar noted. "Examples include the Simons Observatory, the Stage 4 CMB experiment and an X-ray survey called eROSITA. The new equations can help us in maximizing the scientific return from these surveys."

Wadekar also hopes that this publication will be just the tip of the iceberg when it comes to using symbolic regression in astrophysics. "We think that symbolic regression is highly applicable to answering many astrophysical questions," he said. "In a lot of cases in astronomy, people make a linear fit between two parameters and ignore everything else. But nowadays, with these tools, you can go further. Symbolic regression and other artificial intelligence tools can help us go beyond existing two-parameter power laws in a variety of different ways, ranging from investigating small astrophysical systems like exoplanets, to galaxy clusters, the biggest things in the universe."

Global experts propose a path forward in generating clean power from waste energy

 Simon Fraser University professor Vincenzo Pecunia has led a team of more than 100 internationally-recognized scientists in creating a comprehensive "roadmap" to guide global efforts to convert waste energy into clean power.

"With the rising global energy demand and the challenges posed by climate change, it is more urgent than ever to generate green energy to preserve our planet and sustain human development," says Pecunia, from the School of Sustainable Energy Engineering, where he leads the Sustainable Optoelectronics Research Group.

"Energy harvesting materials present a promising opportunity to generate clean electricity, ultimately enhancing the energy efficiency of our daily lives and supporting our efforts to combat climate change. These materials have the ability to convert ambient energy from various sources including light, heat, radiofrequency waves (like those from Wi-Fi and mobile signals), and mechanical vibrations."

To realize the full potential of energy harvesting technology, Pecunia and 116 leading experts from around the world have published their Roadmap on Energy Harvesting Materials in the Journal of Physics: Materials.

The roadmap pools expert perspectives on various types of energy harvesting, recent advances and challenges and also analyzes key performance metrics of these technologies in relation to their ultimate energy conversion limits. Building on these insights, it outlines strategies for future research to fully harness the potential of energy harvesting materials.

"This roadmap is the result of an unprecedented endeavour, marking the first time that such a large and diverse international network of energy harvesting experts -- from America, Asia, Europe, and Oceania -- have worked together to chart a course for the advancement of these technologies towards seamless integration into everyday objects and environments," says Pecunia, lead author.

Smart systems are developing rapidly, making smart homes, smart cities, smart manufacturing and smart healthcare achievable. Sensors and systems are embedded in our daily lives through devices such as smartphones, fitness trackers and smart home assistant technologies. All of these operate as part of a wide network, known as the Internet of Things (IoT), that is constantly communicating and exchanging data.

"An area of tremendous potential involves using ambient energy harvesters to sustainably power the billions of sensor nodes being deployed for IoT," explains Pecunia. "By providing an eco-friendly alternative to batteries (which face materials scarcity, toxicity, and waste issues), energy harvesters could sustainably power IoT sensors." Pecunia's research group has made major contributions in this area, spearheading the generation of clean electricity from indoor light using printable semiconductors and their integration with printed electronics towards eco-friendly IoT sensors.

Collecting energy from ambient light, vibrations and radiofrequency waves is challenging due to their limited power density. "It's essential to develop energy harvesting materials that can efficiently capture this energy and convert it to electricity," he explains. "Another important priority is to develop energy harvesters that can be applied on all types of surfaces and objects, which requires energy harvesting materials that are mechanically flexible."

The roadmap on energy harvesting technology is a united, global effort to pave a path forward for researchers and leaders and to expedite the advancement of this research area.

"Our hope is to catalyze research efforts in energy harvesting research across multiple disciplines to ultimately deliver clean energy anywhere, anytime."

Optical switching at record speeds opens door for ultrafast, light-based electronics and computers

 Imagine a home computer operating 1 million times faster than the most expensive hardware on the market. Now, imagine that level of computing power as the industry standard. University of Arizona researchers hope to pave the way for that reality using light-based optical computing, a marked improvement from the semiconductor-based transistors that currently run the world.

"Semiconductor-based transistors are in all of the electronics that we use today," said Mohammed Hassan, assistant professor of physics and optical sciences. "They're part of every industry -- from kids' toys to rockets -- and are the main building blocks of electronics."

Hassan led an international team of researchers that published the research article "Ultrafast optical switching and data encoding on synthesized light fields" in Science Advances in February. UArizona physics postdoctoral research associate Dandan Hui and physics graduate student Husain Alqattan also contributed to the article, in addition to researchers from Ohio State University and the Ludwig Maximilian University of Munich.

Semiconductors in electronics rely on electrical signals transmitted via microwaves to switch -- either allow or prevent -- the flow of electricity and data, represented as either "on" or "off." Hassan said the future of electronics will be based instead on using laser light to control electrical signals, opening the door for the establishment of "optical transistors" and the development of ultrafast optical electronics.

Since the invention of semiconductor transistors in the 1940s, technological advancement has centered on increasing the speed at which electric signals can be generated -- measured in hertz. According to Hassan, the fastest semiconductor transistors in the world can operate at a speed of more than 800 gigahertz. Data transfer at that frequency is measured at a scale of picoseconds, or one trillionth of a second.

Computer processing power has increased steadily since the introduction of the semiconductor transistor, though Hassan said one of the primary concerns in developing faster technology is that the heat generated by continuing to add transistors to a microchip would eventually require more energy to cool than can pass through the chip.

In their article, Hassan and his collaborators discuss using all-optical switching of a light signal on and off to reach data transfer speeds exceeding a petahertz, measured at the attosecond time scale. An attosecond is one quintillionth of a second, meaning the transfer of data 1 million times faster than the fastest semiconductor transistors.
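
These frequency and time scales are related by $T = 1/f$ (a quick consistency check, not a calculation from the article):

$$ f = 800\ \mathrm{GHz} \;\Rightarrow\; T = \frac{1}{8\times 10^{11}\ \mathrm{Hz}} \approx 1.25\ \mathrm{ps}, \qquad f > 1\ \mathrm{PHz} \;\Rightarrow\; T < 1\ \mathrm{fs}, $$

and the jump from the picosecond scale ($10^{-12}$ s) to the attosecond scale ($10^{-18}$ s) is the factor of $10^{6}$ behind the "1 million times faster" comparison.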

While optical switches had already been shown to achieve information processing speeds faster than that of semiconductor transistor-based technology, Hassan and his co-authors were able to register the on and off signals from a light source happening at the scale of billionths of a second. This was accomplished by taking advantage of a characteristic of fused silica, a glass often used in optics. Fused silica can instantaneously change its reflectivity, and by using ultrafast lasers, Hassan and his team were able to register changes in a light's signal at the attosecond time scale. The work also demonstrated the possibility of sending data in the form of "one" and "zero" representing on and off via light at previously impossible speeds.

"This new advancement would also allow the encoding of data on ultrafast laser pulses, which would increase the data transfer speed and could be used in long-distance communications from Earth into deep space," Hassan said. "This promises to increase the limiting speed of data processing and information encoding and open a new realm of information technology."

The project was funded by a $1.4 million grant awarded to Hassan in 2018 by the Gordon and Betty Moore Foundation, an organization that aims "to create positive outcomes for future generations" by supporting research into scientific discovery, environmental conservation and patient care. The article was also based on work supported by the United States Air Force Office of Scientific Research's Young Investigator Research Program.

Ultra-lightweight multifunctional space skin created to withstand extreme conditions in space

 A new nano-barrier coating could help protect ultra-lightweight carbon composite materials from extreme conditions in space, according to a study from the University of Surrey and Airbus Defence and Space.

The new functionality added to previously developed 'space skin' structures adds a layer of protection to help maintain space payloads while travelling in space, similar to having its very own robust ultralight protective jacket.

The research team has shown that their innovative nano-barrier would help drastically increase the stability of carbon fibre materials, while reducing radiation damage.

Professor Ravi Silva, corresponding author of the study and Director of the Advanced Technology Institute (ATI) at the University of Surrey, said:

"Current aluminium shielding is not thermally stable or fully conformal, and therefore usually undesired for stable structures. Not to mention that aluminium shielding contributes to the mass and cost of satellites. Our nano-barrier addresses these issues and is a promising upgrade to the industry standard which could become a key accessory to all space and aircraft structures that are both mobile and static."

The coating is a highly dense superlattice structure that is applied to carbon fibre materials at room temperature and adds less than 1 μm of thickness, keeping the materials lightweight.

Visualizing spatial distribution of electric properties at microscales with liquid crystal droplets

 Microelectromechanical systems (MEMS) involve the use and development of micron-sized electrical devices such as microelectrodes, sensors, and actuators that are integrated into computer and smartphone chips. Fabricating such integrated MEMS devices is usually a challenging task as these devices often deviate from their original design owing to the defects introduced during their fabrication and operation. This, in turn, limits their performance. Therefore, it is crucial to identify and rectify these defects.

One way to identify and rectify these defects is by measuring the spatial distribution of electric properties of these devices. However, standard sensor probes do not offer the required spatial resolution, and can only determine the spatially averaged-out electric properties. Due to this, it is possible to detect only the presence of defects, not their location.

Fortunately, liquid crystal droplets (LCDs) -- micron-sized droplets of soft matter with molecular orientational order -- offer hope on this front. LCDs respond strongly to external stimuli such as an electric field, and can thus act as a high-resolution probe.

Capitalizing on this promise, Dr. Shinji Bono and Prof. Satoshi Konishi from Ritsumeikan University, Japan, have now utilized LCDs for visualizing the electric properties of microstructured electrodes via a technique called particle imaging electrometry. Their findings were published in Volume 13 of the journal Scientific Reports on 16 March 2023.

Dr. Bono explains the research methodology. "The LCDs were dispersed on microelectrodes arranged in a comb-like structure atop a glass slab. Their molecular orientations, determined using polarized optical microscopy, were randomly distributed when the electric field was absent. Then, a voltage was applied across the electrodes." Because of this, the LCDs between the electrodes and in front of the electrode ends underwent rotation, their molecular orientations lining up perpendicular and parallel to the electrodes, respectively. This alignment, revealed by COMSOL simulations performed by the researchers, corresponded to the direction of the electric field, and occurred faster with increasing voltage. The relaxation frequency of rotation was found to vary as the square of the applied voltage.
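
That quadratic dependence is what a standard dielectric-torque argument predicts (a back-of-the-envelope scaling added here for context, not a derivation from the paper): the electric torque reorienting the droplet's director grows as $\varepsilon_0\,\Delta\varepsilon\,E^2$ and is opposed by the rotational viscosity $\gamma_1$, giving

$$ f_{\mathrm{relax}} \;\sim\; \frac{\varepsilon_0\,\Delta\varepsilon\,E^2}{\gamma_1} \;\propto\; V^2, $$

since the field $E$ between the electrodes is proportional to the applied voltage $V$.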

Further, at high voltages, the LCDs showed translation (linear motion) towards the electrodes, especially their endpoints, the regions with maximum electrostatic energy density. Based on this behavior, the researchers could produce an array of LCDs via periodic modulation of the energy density in a micro-capacitive MEMS device. The LCD array, in turn, served as a periodic modulator of the refractive index, a number characterizing the light bending ability of a material.

These results thus demonstrate that the electric properties of microelectrodes and microelectric devices can be visualized simply by observing the rotational and translational behavior of LCDs under an electric field. Moreover, the technique provides a high spatial resolution (10 μm) as well as high detection accuracy (5 μV/μm). In light of these features, Prof. Konishi has high hopes for its applications. "It will help improve the design and fabrication of integrated microelectrical devices by providing information on the defect location, which so far has remained unavailable. In turn, more sophisticated MEMS technology may become available soon," he concludes.

Sunday 26 March 2023

Vocal tract size, shape dictate speech sounds

 In JASA, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from University Hospital and Medical Faculty of the RWTH Aachen University explored how anatomical variations in a speaker's vocal tract affect speech production.

The vocal tract looks like an air duct, starting at the vocal cords and moving vertically through the larynx before bending at the back of the mouth and running horizontally through the lips. However, surrounding organs, such as the lips, tongue, cheeks, and teeth, can change the shape of the duct and the resulting sound.

"Speaking is like playing a music instrument," said author Antoine Serrurier. "For vowels, the vocal cords are the sound source, and the vocal tract is the instrument."

Using MRI, the team recorded the shape of the vocal tract for 41 speakers as the subjects produced a series of representative speech sounds. They averaged these shapes to establish a sound-independent model of the vocal tract. Then they used statistical analysis to extract the main variations between speakers.

A handful of factors explained nearly 90% of the differences between speakers. Most important were the horizontal and vertical length of the vocal tract. The latter captures the difference between men and women: Females have higher larynxes and therefore shorter vocal tracts. The inclination of the head and the shape of the hard palate were also important.

Increasing the vocal tract length by 1 cm (in the horizontal or vertical direction) changed the important frequencies that distinguish vowels by 7%-8%. The other main factors have smaller acoustic influence on average but could influence particular resonances for certain types of sounds.
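
That size of shift is roughly what a simple quarter-wavelength resonator model of the vocal tract predicts (an illustrative textbook estimate, not the statistical model used in the study). For a uniform tube closed at the glottis and open at the lips, the resonance frequencies are

$$ F_n = \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, \ldots $$

with $c$ the speed of sound and $L$ the tract length, so each resonance scales as $1/L$; lengthening a 13-17 cm tract by 1 cm lowers the resonances by roughly 5-8%, the same order as the 7%-8% figure above.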

"In our view, anatomy is what forms the basis to produce speech and deserves to be well analyzed and understood," said Serrurier. "Our study proposes a method and a model to disentangle the contribution of the morphology from the pure strategy of a speaker."

The researchers plan to increase the number of speakers to make their model more accurate. They also aim to remove the vocal tract size variations to explore the other, less pronounced factors in more detail.

Lower energy consumption thanks to daylight-saving time

 With the start of daylight-saving time, discussions break out -- as they do every year -- about whether or not we should eliminate the time change -- both in politics and in the wider society. Opponents argue that the time change impacts our health, for instance through sleep disturbances. Proponents, on the other hand, often bring forward the argument of saving electricity because of longer days, which means that less artificial light is needed. "That was the original intention behind the introduction of daylight saving. From our point of view, however, it makes sense to look not only at the impact on electricity savings in lighting, but on the overall energy consumption of a building," explains Sven Eggimann. Together with his colleague Massimo Fiorentini and other colleagues at Empa's Urban Energy Systems Lab, he has therefore determined whether and how the time change affects heating and cooling energy consumption.

Going home earlier saves energy

The scientists' hypothesis was that employees start their work an hour earlier in summer due to the time change, and thus leave the office earlier in the afternoon. Since most of the cooling happens later in the afternoon, this can save energy. The assumption behind this is that in an empty office the cooling can be reduced or even turned off completely. As buildings become more intelligent, this would be relatively easy to accomplish in the future.

To test the hypothesis, the researchers simulated the heating and cooling energy used with and without daylight-saving time for different climatic regions based on data from various office buildings in 15 US cities. In order to include the influence of climate change, they took into account not only the current climate, but also future climate scenarios up to the year 2050. This is crucial, as climate change has an enormous impact on a building's energy consumption. In another study, for example, Empa researchers found that in future Switzerland's demand for cooling could match its demand for heating due to climate change.
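
The mechanism can be illustrated with a deliberately simplified toy calculation (not the researchers' building model): shift a fixed occupancy window one hour earlier against an afternoon-peaking temperature profile and compare how much heat must be removed during occupied hours.

```python
import numpy as np

# Toy hourly outdoor temperatures for one summer day (degrees C), peaking in late afternoon.
hours = np.arange(24)
temperature = 22 + 8 * np.exp(-((hours - 16) ** 2) / 18.0)  # peaks near 30 C at 16:00

COOLING_SETPOINT = 24.0  # cool only while the office is occupied and above this temperature

def cooling_degree_hours(start_hour: int, end_hour: int) -> float:
    """Crude proxy for cooling demand: sum of (T - setpoint) over occupied hours."""
    occupied = (hours >= start_hour) & (hours < end_hour)
    excess = np.clip(temperature - COOLING_SETPOINT, 0, None)
    return float(excess[occupied].sum())

standard_time = cooling_degree_hours(9, 18)    # work 09:00-18:00 in solar time
daylight_saving = cooling_degree_hours(8, 17)  # same workday shifted one hour earlier

print(standard_time, daylight_saving)  # the earlier schedule avoids part of the afternoon peak
```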

The results of the current study should delight the proponents of daylight-saving time. "Switching to daylight-saving time can reduce an office building's cooling energy by up to almost six percent. At the same time, heating demand can increase by up to 4.4 percent due to the earlier start of work in the morning. However, since much more cooling than heating energy is needed in summer, the time change has a positive overall effect on the energy balance of a building," summarizes Massimo Fiorentini. Across the different climate zones and scenarios, the overall energy savings varied -- peaking at around 3 percent -- but they were evident everywhere. Although this result only relates to office buildings in the US, it also provides valuable insights for Switzerland, as the climatic conditions are comparable for several of the simulated climate zones.

Contribution to climate protection

"Our study shows that the time change can contribute to climate protection. In the discussion about eliminating daylight-saving time, policy makers should therefore not only consider the electricity savings in artificial lighting, but also the impact on the energy balance of office buildings as a whole," says Eggimann. At the same time, the researchers emphasize that the time change is only one of many ways to influence the energy consumption of a building. Technical improvements of the buildings, behavioral changes and a general adjustment of our working hours can also contribute to energy savings and thus CO2 reduction -- regardless of whether or not we change the time every six months.

Turn up your favorite song to improve medication efficacy

 While listening to a favorite song is a known mood booster, researchers at Michigan State University have discovered that music-listening interventions also can make medicines more effective.

"Music-listening interventions are like over-the-counter medications," said Jason Kiernan, an assistant professor in the College of Nursing. "You don't need a doctor to prescribe them."

While previous research studies have used music-listening interventions as a tool to treat pain and anxiety, Kiernan took a novel approach by studying the effects of music-listening interventions on chemotherapy-induced nausea.

"Pain and anxiety are both neurological phenomena and are interpreted in the brain as a state," Kiernan said. "Chemotherapy-induced nausea is not a stomach condition; it is a neurological one."

The small pilot study included 12 patients undergoing chemotherapy treatment who agreed to listen to their favorite music for 30 minutes each time they needed to take their as-needed anti-nausea medication. They repeated the music intervention anytime nausea occurred over the five days beyond their chemotherapy treatment. Across the study, the patients reported a total of 64 nausea events.

"When we listen to music, our brains fire all kinds of neurons," Kiernan said.

While Kiernan did see a reduction in the ratings of patients' nausea severity and their distress (how much it bothered them to be nauseous), he cautions that it is difficult to isolate whether it was the gradual release of the medication doing its job or the increased benefit of the music. For future studies, Kiernan is drawing inspiration from another previously published study that measured the amount of serotonin, a neurotransmitter, that was released by platelets in the blood after listening to unpleasant and pleasant music.

"Serotonin is the major neurotransmitter that causes chemotherapy-induced nausea," Kiernan said. "Cancer patients take medications to block serotonin's effects."

During that previous study, researchers found that patients who listened to pleasant music experienced the lowest levels of serotonin release, indicating that the serotonin stayed in the blood platelets and was not released to circulate throughout the body. Results also showed that after listening to music they found unpleasant, patients experienced greater stress and increased levels of serotonin release.

"This was intriguing because it provides a neurochemical explanation and a possible way to measure serotonin and the blood platelet release of serotonin in my study," Kiernan said. "In 10 to 20 years, wouldn't it be neat if you could use a nonpharmacological intervention like listening to 10 minutes of your favorite music to complement a medicine?"

The challenge of keeping an audience engaged: How language shapes attention

 Researchers from University of Pennsylvania, University of Maryland, and Emory University published a new Journal of Marketing article that examines how and why the language used in content engages readers.

The study, forthcoming in the Journal of Marketing, is titled "What Holds Attention? Linguistic Drivers of Engagement" and is authored by Jonah Berger, Wendy W. Moe, and David A. Schweidel.

Everyone wants to hold an audience's attention. Brands want consumers to watch their ads, leaders want employees to read their emails, and teachers want students to listen to their lectures.

Similarly, media companies want readers to consume more content. The reason is simple: The further down a news story readers read, the more advertising revenue that article generates; the longer audiences spend watching videos, the higher the rate brands can command. And the more a piece of content holds attention, the more consumers learn about the product, service, or issue discussed.

Why do some articles captivate readers and encourage them to keep reading, while others make them lose interest after just a few sentences? And how does the content (i.e., the language used) shape whether audiences stay engaged? This study addresses these questions by utilizing natural language processing of over 600,000 reading sessions from 35,000 pieces of content, combined with controlled experiments.

Sustained Attention vs. Clickbait

It is important to distinguish sustained attention from other types of engagement. One way of measuring engagement is clicks, views, or other such metrics that measure how many people were exposed to a piece of content. As Berger explains, "While prior research has examined how headlines or advertisements attract attention, we wish to explore how the content is able to hold a reader's attention. Focusing on short-term metrics like views and clicks can lead to clickbait or headlines that attract attention, but it does not necessarily lead to content being consumed."

Companies such as YouTube and Facebook use measures like "dwell time," or how long users spend consuming a piece of content, to better measure engagement, estimate relevance, and improve rankings and recommendations. A catchy headline might lead readers to click on a link, for example, but once they open the article, how much of it do they actually read? Do they stop after the first few sentences? Do they persist for most of the article? Holding attention refers to whether content retains the attracted attention, keeping audiences engaged.

"Our study shows that language that is easier to process encourages continued reading, as does language that evokes emotion," says Moe. But not all emotional language has the same impact. Instead, these effects are driven by the degree to which different discrete emotions evoke arousal and uncertainty. "Consistent with this, language that is anxious, exciting, and hopeful encourages reading while language that is sad discourages it," adds Schweidel. A simulation highlights the implications of these findings for content recommendation algorithms trained to sustain attention.

Managerial Implications and Lessons

The study offers four main lessons for chief marketing officers:

  • It deepens understanding around what holds attention. While some research has examined what attracts attention or what drives word of mouth, there has been less focus on how language sustains attention or makes people consume more content once they have started. This study demonstrates the important role of emotional language and shows how different linguistic features shape content consumption.
  • The findings help improve content design for advertisers, marketers, publishers, and presenters. Since content creators do not just want clicks, the researchers show how simple shifts in language can encourage sustained attention. Further, while it is often assumed that certain topics are just better at keeping people engaged (e.g., celebrity gossip rather than financial literacy), they show how writing in certain ways can increase sustained attention, even for "less engaging" topics.
  • The results highlight that what holds attention is not always the same as what grabs attention or encourages word of mouth. While more certain language can increase likes and shares, emotions that make people feel certain are actually detrimental when it comes to sustaining attention. While some have argued that content that requires more cognitive processing should increase clicks, the study shows that content that requires more processing has the opposite effect when it comes to holding attention. Retaining attention is a different type of engagement, and findings from one type of engagement may not necessarily carry over to others. Consequently, when developing content, managers should think carefully about which outcomes they care most about and design the content with that in mind.
  • Because online content consumption has become a critical social issue, the findings have important social implications. Disinformation and hate speech have been linked to negative outcomes for individuals as well as society and our results highlight language's critical role in this process. If angry and anxious content holds attention, as the simulation shows, training algorithms to maximize sustained attention may lead this content to be recommended, with potentially negative implications for consumer welfare.

Saturday 25 March 2023

Telomere shortening -- a sign of cellular aging -- linked to signs of Alzheimer's in brain scans

 Changes in the brain caused by Alzheimer's disease are associated with shortening of the telomeres -- the protective caps on the ends of chromosomes that shorten as cells age -- according to a new study led by Anya Topiwala of Oxford Population Health, part of the University of Oxford, UK, published March 22 in the open-access journal PLOS ONE.

Telomeres on chromosomes protect DNA from degrading, but every time a cell divides, the telomeres lose some of their length. Short telomeres are a sign of stress and cellular aging, and are also associated with a higher risk of neurological and psychiatric disorders. Currently, little is known about the links between telomere length and changes that occur in the brains of people with neurological conditions. Understanding those relationships could offer insights into the biological mechanisms that cause neurodegenerative disorders.

In the new study, researchers compared telomere length in white blood cells to results from brain MRIs and electronic health records from more than 31,000 participants in the UK Biobank, a large-scale biomedical database and research resource containing anonymized genetic, lifestyle and health information from half a million UK participants. The analysis revealed that patients with longer telomeres also tended to have better brain health. They had a larger volume of grey matter in their brains overall and a larger hippocampus, both of which shrink in patients with Alzheimer's disease. Longer telomeres were also associated with a thicker cerebral cortex -- the outer, folded layer of grey matter -- which thins as Alzheimer's disease progresses. The researchers speculate that longer telomeres might therefore help protect patients from developing dementia, though there was no association with stroke or Parkinson's disease.

Overall, the findings show that shorter telomeres can be linked to multiple changes in the brain associated with dementia. To date, this is the largest and richest study of the relationships between telomere length and MRI markers in the brain. The associations suggest that accelerated aging in the brain, as indicated by telomere length, could represent a biological pathway that leads to neurodegenerative disease.

The authors add: "We found associations between telomere length, a marker of biological ageing, and multiple aspects of brain structure. This may explain why individuals with longer telomeres have a lower risk of dementia."

Study finds worrying about election stress can harm your health -- and what you can do about it

 New research from North Carolina State University finds that simply anticipating stress related to political elections causes adverse physical health effects. However, the study also finds there is something people can do to mitigate those negative health effects.

"This is the first study to show that anticipatory stress related to elections can harm our health," says Shevaun Neupert, corresponding author of the study and a professor of psychology at NC State. "It's well established that stress can adversely affect our health. This study tells us that thinking we're going to feel stress in the near future can also adversely affect our health."

The study draws on data collected from 140 adults from across the United States. Study participants were asked to fill out an online survey every day for 30 days, from Oct. 15 to Nov. 13, 2018 -- the weeks immediately before and after the 2018 midterm elections.

"We found that study participants reported worse physical health on days when they also reported having high levels of anticipatory stress -- meaning they expected to experience election-related stress within the next 24 hours," Neupert says. "In other words, simply anticipating possible stress was enough to make them feel worse."

"This study relies on study participants self-reporting about their health, but this is a well-established and widely used approach that has consistently proven to be an objective indicator of physical health and well-being."

The good news is that the researchers found there is a strategy people can use to help preserve their health, even when anticipating stress. It's called problem analysis.

"Problem analysis, in this case, is when people think critically about why they believe they'll experience election-related stress over the next 24 hours," Neupert says. "For example, if they think they're going to have an argument about the election with an acquaintance in the next 24 hours, they might think about why they're going to have that argument or what that argument will be about. Basically, problem analysis is all about mentally engaging with whatever problem they're anticipating."

Here's how effective problem analysis was: on days when study participants anticipated stress, but were also actively engaging in problem analysis, participants reported no decline in physical health.

"One reason we think problem analysis is so important is that it's a necessary first step for many additional coping strategies," Neupert explains. "For example, problem analysis may help people think of ways to avoid having an argument they're anticipating, or help them think of ways to make the argument less heated."

And these findings were true across the board.

"We controlled for the political orientation and age of the study participants," says Brittany Johnson, first author of the study and a former undergraduate at NC State. "We controlled for whether they actually experienced election-related stress on the days when they anticipated it. We controlled for the presence of other types of stress.

"No matter how you slice it, anticipating election-related stress adversely affected health -- with the exception of when people were engaged in problem analysis."

The paper, "Combatting Election Stress: Anticipatory Coping and Daily Self-Reported Physical Health," is published in the journal Psychological Reports.

Is bone health linked to brain health?

"Low bone density and dementia are two conditions that commonly affect older people simultaneously, especially as bone loss often increases due to physical inactivity and poor nutrition during dementia," said study author Mohammad Arfan Ikram, MD, PhD, of the Erasmus University Medical Center in Rotterdam, Netherlands. "However, little is known about bone loss that occurs in the period leading up to dementia. Our study found that bone loss indeed already occurs before dementia and thus is linked to a higher risk of dementia."

The study involved 3,651 people in the Netherlands with an average age of 72 who did not have dementia at the start of the study.

Over an average of 11 years, 688 people, or 19%, developed dementia.

Researchers looked at X-rays to identify bone density. Participants were interviewed every four to five years and completed physical tests such as bone scans and tests for dementia.

Of the 1,211 people with the lowest total body bone density, 90 people developed dementia within 10 years, compared to 57 of the 1,211 people with the highest bone density.

After adjusting for factors such as age, sex, education, other illnesses and medication use, and a family history of dementia, researchers found that within 10 years, people with the lowest total body bone density were 42% more likely to develop dementia than people in the highest group.
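
Adjustment of this kind is typically done with a proportional hazards model. The sketch below shows the general pattern using the lifelines library, with entirely synthetic data and placeholder column names rather than the study's own variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200  # synthetic cohort; every value below is made up for illustration

df = pd.DataFrame({
    "bone_density":    rng.normal(1.0, 0.1, n),   # toy total-body bone mineral density
    "age":             rng.normal(72, 5, n),
    "sex":             rng.integers(0, 2, n),
    "education_years": rng.integers(8, 18, n),
})
# Toy follow-up times and dementia events, loosely tied to bone density.
df["followup_years"] = rng.exponential(10, n).clip(1, 11)
df["dementia"] = (rng.random(n) < 0.2 + 0.3 * (df["bone_density"] < 0.95)).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="dementia")
cph.print_summary()  # hazard ratio for bone density, adjusted for the other covariates
```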

"Previous research has found factors like diet and exercise may impact bones differently as well as the risk of dementia," Ikram added. "Our research has found a link between bone loss and dementia, but further studies are needed to better understand this connection between bone density and memory loss. It's possible that bone loss may occur already in the earliest phases of dementia, years before any clinical symptoms manifest themselves. If that were the case, bone loss could be an indicator of risk for dementia and people with bone loss could be targeted for screening and improved care."

A limitation of the study is that participants were primarily of European origin and age 70 or older at the start of the study, so these findings may vary in different races, ethnicities, and younger age groups.

How the brain's 'internal compass' works

 Scientists have gained new insights into the part of the brain that gives us a sense of direction, by tracking neural activity with the latest advances in brain imaging techniques. The findings shed light on how the brain orients itself in changing environments -- and even the processes that can go wrong with degenerative diseases like dementia, that leave people feeling lost and confused.

"Neuroscience research has witnessed a technology revolution in the last decade allowing us to ask and answer questions that could only be dreamed of just years ago," says Mark Brandon, an Associate Professor of psychiatry at McGill University and researcher at the Douglas Research Centre, who co-led the research with Zaki Ajabi, a former student at McGill University and now a postdoctoral research fellow at Harvard University.

Reading the brain's internal compass

To understand how visual information impacts the brain's internal compass, the researchers exposed mice to a disorienting virtual world while recording the brain's neural activity. The team recorded the brain's internal compass with unprecedented precision using the latest advances in neuronal recording technology.

This ability to accurately decode the animal's internal head direction allowed the researchers to explore how the Head-Direction cells, which make up the brain's internal compass, support the brain's ability to re-orient itself in changing surroundings. Specifically, the research team identified a phenomenon they term 'network gain' that allowed the brain's internal compass to reorient after the mice were disoriented. "It's as if the brain has a mechanism to implement a 'reset button' allowing for rapid reorientation of its internal compass in confusing situations," says Ajabi.
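
Decoding a heading from a population of head-direction cells is often done with a population-vector average (a standard approach shown here for illustration; the study's own decoder may differ).

```python
import numpy as np

def decode_head_direction(rates: np.ndarray, preferred_dirs: np.ndarray) -> float:
    """Population-vector decoding of head direction.

    rates          : firing rate of each head-direction cell (spikes/s)
    preferred_dirs : each cell's preferred direction (radians)
    Returns the decoded heading in radians: the circular mean of the cells'
    preferred directions, weighted by their firing rates.
    """
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return float(np.arctan2(y, x))

# Toy example: 8 cells with evenly spaced preferred directions,
# with activity bumped around 90 degrees.
prefs = np.linspace(0, 2 * np.pi, 8, endpoint=False)
rates = np.exp(np.cos(prefs - np.pi / 2))          # tuning bump centred at pi/2
print(np.degrees(decode_head_direction(rates, prefs)))  # ~90 degrees
```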

Although the animals in this study were exposed to unnatural visual experiences, the authors argue that such scenarios are already relevant to the modern human experience, especially with the rapid spread of virtual reality technology. These findings "may eventually explain how virtual reality systems can easily take control over our sense of orientation," adds Ajabi.

The results inspired the research team to develop new models to better understand the underlying mechanisms. "This work is a beautiful example of how experimental and computational approaches together can advance our understanding of brain activity that drives behaviour," says co-author Xue-Xin Wei, a computational neuroscientist and an Assistant Professor at The University of Texas at Austin.

Degenerative diseases

The findings also have significant implications for Alzheimer's disease. "One of the first self-reported cognitive symptoms of Alzheimer's is that people become disoriented and lost, even in familiar settings," says Brandon. The researchers expect that a better understanding of how the brain's internal compass and navigation system works will lead to earlier detection and better assessment of treatments for Alzheimer's disease.

From mutation to arrhythmia: Desmosomal protein breakdown as an underlying mechanism of cardiac disease

 Mutations in genes that form the desmosome are the most common cause of the cardiac disease arrhythmogenic cardiomyopathy (ACM), which affects one in 2000 to 5000 people worldwide. Researchers from the group of Eva van Rooij have now discovered how a mutation in the desmosomal gene plakophilin-2 leads to ACM. They found that the structural and functional changes in ACM hearts caused by a plakophilin-2 mutation are the result of increased desmosomal protein degradation. The results of this study, published in Science Translational Medicine on March 22nd 2023, further our understanding of ACM and could contribute to the development of new therapies for this disease.

ACM is a progressive and inheritable cardiac disease for which currently no treatments exist to halt its progression. Although patients initially do not experience any symptoms, they are at a higher risk of arrhythmias and resulting sudden cardiac arrest. As the disease progresses, patches of fibrotic and fat tissue form in the heart which can lead to heart failure. At this stage, patients require a heart transplantation as treatment.

Plakophilin-2

Over 50% of all ACM cases are caused by a mutation in one of the desmosomal genes, which together form complex protein structures known as desmosomes. Desmosomes form "bridges" between individual heart muscle cells, allowing the cells to contract in a coordinated manner. Most of the desmosomal mutations that cause ACM occur in a gene called plakophilin-2. Nevertheless, very little is known on how mutations in this gene lead to the disease. To change this, the Van Rooij lab first studied human heart samples from ACM patients carrying mutations in the plakophilin-2 gene. "We saw lower levels of all desmosomal proteins and disorganized desmosomal proteins in fibrotic areas of the ACM hearts," says Jenny (Hoyee) Tsui, first author on the paper. Tsui: "In addition, cultured 3D heart muscle tissue originating from a patient with a plakophilin-2 mutation, was unable to continue beating at higher pacing rates, which resembles arrhythmias seen in the clinic."

ACM in mice

The researchers then used a genetic tool called CRISPR/Cas9 to introduce the human plakophilin-2 mutation in mice to mimic ACM. This allowed them to study progression of the disease in more detail. They observed that old ACM mice carrying this mutation had lower levels of desmosomal proteins and heart relaxation issues, similar to ACM patients. Strikingly, the researchers discovered that the mutation lowered levels of desmosomal proteins even in young, healthy mice of which the heart contracted normally. From this they concluded that a loss of desmosomal proteins could underlie the onset of ACM caused by a plakophilin-2 mutation.

Protein degradation

The researchers then moved on to explain the loss of desmosomal proteins. For this they studied both RNA and protein levels in their ACM mice. "The levels of desmosomal proteins were lower in our ACM mice compared to healthy control mice. However, the RNA levels of these genes were unchanged. We discovered that these surprising findings are the result of increased protein degradation in ACM hearts," explains Sebastiaan van Kampen, co-first author of the paper. Tsui adds: "When we treated our ACM mice with a drug that prevents protein degradation, the levels of desmosomal proteins were restored. More importantly, the restored levels of desmosomal proteins improved calcium handling of heart muscle cells, which is vital for their normal function."

Towards new therapies

The results of this study, published in Science Translational Medicine, offer new insights into ACM development and indicate that protein degradation could be an interesting target for future therapies. "Protein degradation occurs in every cell of our body and is crucial for the function of these cells. To overcome side-effects of future therapies we will need to develop drugs that prevent degradation of desmosomal proteins in heart muscle cells specifically," explains Eva van Rooij, group leader at the Hubrecht Institute and last author of the study. More research is thus needed to realize this. In the future, these new specific drugs could potentially be used to halt the onset and prevent progression of ACM.

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...