
Wednesday, 15 November 2023

Ammonia for fertilizers without the giant carbon footprint

 The production of ammonia for fertilisers -- which has one of the largest carbon footprints among industrial processes -- will soon be possible on farms using low-cost, low-energy and environmentally friendly technology.

This is thanks to researchers at UNSW Sydney and their collaborators who have developed an innovative technique for sustainable ammonia production at scale.

Until now, the production of ammonia has relied on high-energy processes that leave a massive global carbon footprint -- temperatures of more than 400°C and pressures exceeding 200 atmospheres that account for 2 per cent of the world's energy use and 1.8 per cent of its CO2 emissions.

But the researchers have come up with a method that significantly enhances energy efficiency while making environmentally friendly ammonia economically feasible. The new technique eliminates the requirement for high temperatures, high pressure, and extensive infrastructure in ammonia production.

In a paper published recently in the journal Applied Catalysis B: Environmental, the authors show that the process they developed has enabled the large-scale synthesis of green ammonia by increasing its energy efficiency and production rate.

The foundation of this research, previously published by the same research group, has already been licensed to an Australian industry partner, PlasmaLeap Technologies, through the UNSW Knowledge Exchange program. It is set to be translated into the Australian agriculture industry, with a prototype already scaled up and ready for deployment.

The latest study follows on from proof-of-concept research performed by the same UNSW research group three years ago, with significant advances in the energy efficiency and production rate of the process, thus improving commercial profitability.

The research also represents an opportunity to use green ammonia in the hydrogen transport market, as liquid ammonia (NH3) can store more hydrogen in a smaller space than liquefied hydrogen (H2), making the transportation of hydrogen energy more economical.
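
As a rough sense of why ammonia is attractive as a hydrogen carrier, the back-of-envelope comparison below uses commonly cited densities for liquid ammonia and liquefied hydrogen (textbook values, not figures from the study) to compare how much hydrogen each stores per cubic metre:

```python
# Back-of-envelope comparison of volumetric hydrogen density:
# liquid ammonia vs. liquefied hydrogen. Densities are commonly
# cited reference values, not numbers from the UNSW study.

RHO_LIQUID_NH3 = 682.0   # kg/m^3, liquid ammonia at about -33 C, 1 atm
RHO_LIQUID_H2 = 70.8     # kg/m^3, liquid hydrogen at about -253 C, 1 atm

M_NH3 = 17.03            # g/mol
M_H = 1.008              # g/mol
h_mass_fraction = 3 * M_H / M_NH3   # ~0.178, three H atoms per NH3

h_per_m3_ammonia = RHO_LIQUID_NH3 * h_mass_fraction
h_per_m3_liquid_h2 = RHO_LIQUID_H2

print(f"H2 stored per m^3 of liquid NH3: {h_per_m3_ammonia:.0f} kg")
print(f"H2 stored per m^3 of liquid H2:  {h_per_m3_liquid_h2:.0f} kg")
print(f"Ratio: {h_per_m3_ammonia / h_per_m3_liquid_h2:.2f}x")
# -> roughly 121 kg vs 71 kg, i.e. ~1.7x more hydrogen per unit volume,
#    and at a far milder cryogenic temperature.
```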

Net zero objectives

While the conventional process used for ammonia production is notably energy-intensive -- relying heavily on fossil fuels as its primary energy and hydrogen sources -- it has been instrumental in increasing crop yields and sustaining a growing global population.

Dr. Ali Jalili, the study's leader and a former Australian Research Council DECRA Fellow at UNSW, says adopting a sustainable approach to ammonia production is crucial for global net zero objectives.

"Currently, the traditional method of producing ammonia -- known as the Haber-Bosch process -- accounts for 2.4 tonnes of CO2 per tonne of ammonia, equivalent to approximately 2 per cent of global carbon emissions. Additionally, Haber-Bosch is economically viable only in large-scale and centralised facilities. Consequently, the transportation from these facilities to farms will increase the CO2 emission by 50 per cent," he says.

"Ammonia-based fertilisers are in critically short supply due to international supply chain disruptions and geopolitical issues, which impact our food security and production costs.

"This, together with its potential for hydrogen energy storage and transportation, makes ammonia key to Australia's renewable energy initiatives, positioning the country among the leaders in renewable energy exports and utilisation."

As well as addressing the economic and logistical challenges associated with intermittent energy sources for cities or farms, Dr Jalili says that to fully unlock ammonia's potential, it is "essential to establish a decentralised and energy-efficient production method that can effectively use surplus renewable electricity."

Researchers develop gel to deliver cancer drugs for solid tumors

 Intratumoral therapy -- in which cancer drugs are injected directly into tumors -- is a promising treatment option for solid cancers but has shown limited success in clinical trials due to an inability to precisely deliver the drug and because most immunotherapies quickly dissipate from the site of injection. A team of researchers from Mass General Brigham, in collaboration with colleagues at the Koch Institute for Integrative Cancer Research, has developed a gel delivery system that overcomes these challenges. The gel is injectable but solidifies upon delivery; contains an imaging agent for visualization under CT scan; and can hold a high concentration of drug for slow, controlled release.

In a paper published in Advanced Healthcare Materials, the team reports that using gel-delivered imiquimod (an immune stimulating drug) in combination with checkpoint inhibitor therapy induced tumor regression and increased survival in mouse models of colon and breast cancer that are usually resistant to checkpoint inhibitor therapy. The treatment also appeared to train the immune system to detect and attack distant tumors that were not directly treated, suggesting that it might be a helpful therapy for metastatic cancers.

"This gel tackles the two problems with existing attempts at making intratumoral cancer immunotherapy: making the therapy visible and practical so that interventional radiologists can confirm delivery, and making sure the drug actually stays in the region of interest," said Avik Som, MD, PhD, of the Department of Radiology at Massachusetts General Hospital, a founding member of the Mass General Brigham healthcare system. "When we inject this gel into a tumor, we're able to teach the immune system to recognize the cancer and trigger it to attack not only the site where the gel was injected, but also other areas in the body where the same cancer may be hiding."

The research team, which consisted of both engineers and medical professionals, first developed and optimized the gel-delivery system in the lab by tweaking the gel's chemical structure. One key aspect of the gel's design was that it needed to be liquid at room temperature, making it injectable, but turn solid at body temperature inside the tumor to form a drug-releasing depot, all while retaining its drug encapsulation and delivery capability and carrying sufficient imaging agent.
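
As a loose illustration of those two design requirements, the toy model below pairs a temperature threshold for the sol-gel transition with a first-order release curve; the transition temperature and rate constant are invented placeholders, not values from the paper:

```python
# A toy model of the two behaviours described above: a gel that is
# liquid at room temperature but solid at body temperature, and a depot
# that releases drug slowly. The transition temperature and rate
# constant are illustrative placeholders, not values from the paper.

import math

T_GEL = 32.0  # hypothetical sol-gel transition temperature, deg C

def gel_state(temperature_c: float) -> str:
    """Injectable (liquid) below the transition, a solid depot above it."""
    return "solid depot" if temperature_c >= T_GEL else "injectable liquid"

def fraction_released(t_hours: float, k_per_hour: float = 0.01) -> float:
    """First-order release profile: slow, controlled escape from the depot."""
    return 1.0 - math.exp(-k_per_hour * t_hours)

print(gel_state(22.0))   # room temperature -> injectable liquid
print(gel_state(37.0))   # body temperature -> solid depot
print(f"{fraction_released(72):.0%} released after 72 h")  # ~51%
```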

After optimizing the gel in the lab, the team tested its ability to treat mouse models of colon and breast cancer that are usually resistant to immunotherapy. To do this, they used the gel to deliver imiquimod, an FDA-approved immune stimulating drug, in combination with checkpoint inhibitor therapy. Each mouse had two tumors of the same type, but the researchers only treated one tumor per mouse, which allowed them to test the gel's ability to stimulate both local and systemic immunity.

They showed that treating with gel-delivered imiquimod in combination with checkpoint inhibitor therapy improved survival in both cancer models. The treatment resulted in an all-or-nothing response -- mice that responded to the treatment showed complete regression of both the treated tumor and a distantly located tumor (a model for metastasis), while non-responders showed no regression at either site. For the colon cancer model, 46% (6/13) survived when the checkpoint inhibitor therapy was combined with gel-delivered imiquimod. For the breast cancer model, 20% (3/15) survived when treated with the combined therapies.

"These two tumors remain challenging to treat today, even though immunotherapies are transforming how we think about treatment," said co-corresponding author Giovanni Traverso, MB, PhD, MBBCH, Department of Medicine at Brigham and Women's Hospital, a founding member of the Mass General Brigham healthcare system and an associate professor in the Department of Mechanical Engineering at MIT. "The fact that we were able to induce responses in distant tumors in these colon and breast cancer models was a big win."

How human faces can teach androids to smile

 Robots able to display human emotion have long been a mainstay of science fiction stories. Now, Japanese researchers have been studying the mechanical details of real human facial expressions to bring those stories closer to reality.

In a recent study published in the Mechanical Engineering Journal, a multi-institutional research team led by Osaka University has begun mapping out the intricacies of human facial movements. The researchers used 125 tracking markers attached to a person's face to closely examine 44 different, singular facial actions, such as blinking or raising the corner of the mouth.

Every facial expression comes with a variety of local deformations as muscles stretch and compress the skin. Even the simplest motions can be surprisingly complex. Our faces contain a collection of different tissues below the skin, from muscle fibers to fatty adipose, all working in concert to convey how we're feeling. This includes everything from a big smile to a slight raise of the corner of the mouth. This level of detail is what makes facial expressions so subtle and nuanced, in turn making them challenging to replicate artificially. Until now, such replication has relied on much simpler measurements of overall face shape, and of the motion of points chosen on the skin before and after movements.
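
To make the measurement idea concrete, here is a minimal sketch of the kind of analysis such marker data supports: per-marker displacement plus the stretch or compression of skin between marker pairs. The coordinates and marker pairs are fabricated for illustration and are not the study's data:

```python
# Minimal sketch of marker-based deformation analysis: given 3D marker
# positions for a neutral face and during a facial action, compute
# per-marker displacement and the stretch/compression of skin between
# marker pairs. All coordinates here are made up for illustration.

import numpy as np

# positions of a few markers (metres), neutral vs. "raise mouth corner"
neutral = np.array([[0.00, 0.00, 0.00],
                    [0.01, 0.00, 0.00],
                    [0.00, 0.01, 0.00]])
action = np.array([[0.000, 0.000, 0.000],
                   [0.012, 0.001, 0.000],
                   [0.000, 0.009, 0.001]])

displacement = action - neutral                 # per-marker motion
magnitudes = np.linalg.norm(displacement, axis=1)

def stretch(i: int, j: int) -> float:
    """Ratio of deformed to neutral distance between markers i and j:
    >1 means the skin between them stretched, <1 means it compressed."""
    d0 = np.linalg.norm(neutral[j] - neutral[i])
    d1 = np.linalg.norm(action[j] - action[i])
    return d1 / d0

print(magnitudes)          # how far each marker moved
print(stretch(0, 1))       # ~1.2 -> skin stretched between markers 0 and 1
print(stretch(0, 2))       # ~0.9 -> skin compressed between markers 0 and 2
```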

"Our faces are so familiar to us that we don't notice the fine details," explains Hisashi Ishihara, main author of the study. "But from an engineering perspective, they are amazing information display devices. By looking at people's facial expressions, we can tell when a smile is hiding sadness, or whether someone's feeling tired or nervous."

Information gathered by this study can help researchers working with artificial faces, both created digitally on screens and, ultimately, the physical faces of android robots. Precise measurements of human faces, to understand all the tensions and compressions in facial structure, will allow these artificial expressions to appear both more accurate and natural.

"The facial structure beneath our skin is complex," says Akihiro Nakatani, senior author. "The deformation analysis in this study could explain how sophisticated expressions, which comprise both stretched and compressed skin, can result from deceivingly simple facial actions."

This work has applications beyond robotics as well, for example in improved facial recognition or medical diagnosis, which currently relies on doctors' intuition to notice abnormalities in facial movement.

So far, this study has only examined the face of one person, but the researchers hope to use their work as a jumping off point to gain a fuller understanding of human facial motions. As well as helping robots to both recognize and convey emotion, this research could also help to improve facial movements in computer graphics, like those used in movies and video games, helping to avoid the dreaded 'uncanny valley' effect.

Thursday, 9 November 2023

Machine learning gives users 'superhuman' ability to open and control tools in virtual reality

 Researchers have developed a virtual reality application where a range of 3D modelling tools can be opened and controlled using just the movement of a user's hand.

The researchers, from the University of Cambridge, used machine learning to develop 'HotGestures' -- analogous to the hot keys used in many desktop applications.

HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.

The idea of being able to open and control tools in virtual reality has been a movie trope for decades, but the researchers say that this is the first time such a 'superhuman' ability has been made possible. The results are reported in the journal IEEE Transactions on Visualization and Computer Graphics.

Virtual reality (VR) and related applications have been touted as game-changers for years, but outside of gaming, their promise has not fully materialised. "Users gain some qualities when using VR, but very few people want to use it for an extended period of time," said Professor Per Ola Kristensson from Cambridge's Department of Engineering, who led the research. "Beyond the visual fatigue and ergonomic issues, VR isn't really offering anything you can't get in the real world."

Most users of desktop software will be familiar with the concept of hot keys -- command shortcuts such as ctrl-c to copy and ctrl-v to paste. While these shortcuts eliminate the need to open a menu to find the right tool or command, they rely on the user having the correct command memorised.

"We wanted to take the concept of hot keys and turn it into something more meaningful for virtual reality -- something that wouldn't rely on the user having a shortcut in their head already," said Kristensson, who is also co-Director of the Centre for Human-Inspired Artificial Intelligence.

Instead of hot keys, Kristensson and his colleagues developed 'HotGestures', where users perform a gesture with their hand to open and control the tool they need in 3D virtual reality environments.

For example, performing a cutting motion opens the scissor tool, and the spray motion opens the spray can tool. There is no need for the user to open a menu to find the tool they need, or to remember a specific shortcut. Users can seamlessly switch between different tools by performing different gestures during a task, without having to pause their work to browse a menu or to press a button on a controller or keyboard.

"We all communicate using our hands in the real world, so it made sense to extend this form of communication to the virtual world," said Kristensson.

For the study, the researchers built a neural network gesture recognition system that can recognise gestures by performing predictions on an incoming hand joint data stream. The system was built to recognise ten different gestures associated with building 3D models: pen, cube, cylinder, sphere, palette, spray, cut, scale, duplicate and delete.
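
The sketch below shows the shape of such a streaming recognition loop over the ten gestures. The actual system is a neural network; a nearest-centroid classifier with fabricated "training" centroids stands in here so the example stays self-contained:

```python
# Sketch of a streaming gesture recogniser: classify a sliding window of
# hand-joint coordinates into one of the ten HotGestures tools. The
# paper's model is a neural network; nearest-centroid is a simpler
# stand-in, and the centroids, joint count, and window length below are
# all invented for illustration.

import numpy as np

GESTURES = ["pen", "cube", "cylinder", "sphere", "palette",
            "spray", "cut", "scale", "duplicate", "delete"]

N_JOINTS, WINDOW = 26, 30            # illustrative: 26 joints, 30 frames
FEAT = N_JOINTS * 3 * WINDOW

rng = np.random.default_rng(0)
centroids = rng.normal(size=(len(GESTURES), FEAT))   # stand-in for training

def classify(window: np.ndarray, threshold: float = 80.0):
    """Return a gesture label, or None if the motion looks like ordinary
    hand movement (the guard against false activations); the cutoff is
    arbitrary here."""
    x = window.reshape(-1)
    dists = np.linalg.norm(centroids - x, axis=1)
    best = int(np.argmin(dists))
    return GESTURES[best] if dists[best] < threshold else None

# feed it a fake window of joint positions from the incoming stream
frame_stream = rng.normal(size=(WINDOW, N_JOINTS, 3))
print(classify(frame_stream))   # a tool name, or None
```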

The team carried out two small studies where participants used HotGestures, menu commands or a combination. The gesture-based technique provided fast and effective shortcuts for tool selection and usage. Participants found HotGestures to be distinctive, fast, and easy to use while also complementing conventional menu-based interaction. The researchers designed the system so that there were no false activations -- the gesture-based system was able to correctly recognise what was a command and what was normal hand movement. Overall, the gesture-based system was faster than a menu-based system.

"There is no VR system currently available that can do this," said Kristensson. "If using VR is just like using a keyboard and a mouse, then what's the point of using it? It needs to give you almost superhuman powers that you can't get elsewhere."

The researchers have made the source code and dataset publicly available so that designers of VR applications can incorporate it into their products.

"We want this to be a standard way of interacting with VR," said Kristensson. "We've had the tired old metaphor of the filing cabinet for decades. We need new ways of interacting with technology, and we think this is a step in that direction. When done right, VR can be like magic."

The research was supported in part by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

Deep decarbonization scenarios reveal importance of accelerating zero-emission vehicle adoption

 The rapid adoption of zero-emission electric vehicles can bring the nation close to an 80% or greater drop in transportation greenhouse gas emissions by 2050 from the 2019 level, according to researchers from the U.S. Department of Energy's National Renewable Energy Laboratory (NREL).

The researchers came to that conclusion after running thousands of computer simulations on the steps needed to decarbonize passenger and freight travel, which together are the largest contributor to U.S. greenhouse gas emissions. While they advised that "no single technology, policy, or behavioral change" is enough by itself to reach the target, eliminating tailpipe emissions would be a major factor.

"There are reasons to be optimistic and several remaining areas to explore," said Chris Hoehne, a mobility systems research scientist at NREL and lead author of a new paper detailing the routes that could be taken. "In the scientific community, there is a lot of agreement around what needs to happen to slash transportation-related greenhouse gas emissions, especially when it comes to electrification. But there is high uncertainty for future transportation emissions and electricity needs, and this unique analysis helps shed light on the conditions that drive these uncertainties."

The paper, "Exploring decarbonization pathways for USA passenger and freight mobility," appears in the journal Nature Communications. Hoehne's co-authors from NREL are Matteo Muratori, Paige Jadun, Brian Bush, Artur Yip, Catherine Ledna, and Laura Vimmerstedt. Two other co-authors are from the U.S. Department of Energy.

While most vehicles today burn fossil fuels, a zero-emission vehicle (ZEV) relies on alternate sources of power, such as batteries or hydrogen. Transportation ranks as the largest source of greenhouse gas emissions in the United States and the fastest-growing source of emissions in other parts of the world.

The researchers analyzed 50 deep decarbonization scenarios in detail, showing that rapid adoption of ZEVs is essential alongside a simultaneous transition to a clean electric grid. Equally important is managing travel demand growth, which would reduce the amount of clean electricity supply needed. The researchers found that the most dynamic variable in reducing total transportation-related emissions is the set of measures supporting the transition to ZEVs.

Using a model called Transportation Energy & Mobility Pathway Options (TEMPO), the researchers performed more than 2,000 simulations to determine what will be needed to decarbonize passenger and freight travel. The study explores changes in technology, behavior, and policies to envision how passenger and freight systems can successfully transition to a sustainable future. Policy changes may require new regulations that drive the adoption of electric vehicles, for example. Technology solutions will call for continued advancements in batteries, fuel cells, and sustainable biofuels, among others. Behavior comes into play in considering shifts in population and travel needs. Someone moving away from an urban core, for example, might have to travel longer distances to work.

"The transportation sector accounts for about a quarter of greenhouse gas emissions in the United States, and about two-thirds of all that is from personal vehicle travel," Hoehne said.

By employing a combination of strategies, the study shows that the maximum potential for 2050 decarbonization across the simulated scenarios is a staggering 89% reduction in greenhouse gases relative to 2019, equivalent to an 85% reduction from the 2005 baseline.
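
In absolute terms (taking roughly 1.9 gigatonnes of CO2-equivalent as the approximate 2019 US transportation figure, an external estimate rather than a number from the paper), those percentages work out as follows:

```python
# What the headline percentages mean in absolute terms. The 2019
# transport-sector figure (~1.9 Gt CO2e, roughly a quarter of US
# emissions, as the article notes) is an approximate public estimate
# used for illustration; the percentages are from the NREL study.

E_2019 = 1.9   # Gt CO2e, approximate US transportation emissions in 2019

for label, cut in [("80% scenario", 0.80), ("89% max-potential scenario", 0.89)]:
    remaining = E_2019 * (1.0 - cut)
    print(f"{label}: {remaining:.2f} Gt CO2e remaining in 2050")
# -> about 0.38 Gt and 0.21 Gt, versus roughly 1.9 Gt today
```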

"Recent progress in technology coupled with the pressing need to address both the climate crisis and air quality issues have elevated the importance of clean transportation solutions," said Muratori, manager of the Transportation Energy Transition Analysis group and architect of the TEMPO model. "This shift has made transitioning the entire sector towards sustainability an achievable goal and a top priority in the United States and worldwide."

Experiment shows biological interactions of microplastics in watery environment

 Scientists have learned over the years that when aquatic organisms such as zooplankton become exposed to microplastics, they eat poorly. Research at Purdue University now shows that their plastic-induced eating difficulties also limit the ability of zooplankton to control algal proliferation.

"If the control of algae by zooplankton is confounded by the presence of microplastics, that could be a cause for concern," said Tomas Höök, professor of forestry and natural resources at Purdue.

When algae bloom out of control, this presents a problem because some species produce toxins. Also, algal blooms can be associated with pea-soupy, unattractive bodies of water and contribute to hypoxia, a low-oxygen condition that may lead to fish kills.

Zooplankton are tiny creatures that live in watery environments and form the base of the food web in many aquatic environments. The organisms examined for the study were two common types of crustaceous zooplankton that differ in size and feeding behavior.

The study highlights how rife plastic has become in the environment. "There's plastic dust in the air. We're all potentially breathing plastic now," said Höök, who also directs the Illinois-Indiana Sea Grant College Program. Plastics are everywhere, he added, including in a lot of the food we eat.

Chris Malinowski, director of research and conservation at the Ocean First Institute, said, "The flow of plastics through the environment is reaching every part of the world." Plastics are found atop snowcapped mountain peaks and on the ocean floor. The rivers in between serve as the vessels that help spread microplastics.

Höök, Malinowski and two co-authors presented their findings in the journal Science of the Total Environment. The study was among the first to examine the effects of microplastics in a simple food web design. This involved investigating impacts on how zooplankton feed on algae in the presence of different environmentally realistic microplastic concentrations and when faced with risk of predation from fish.


"Microplastics aren't just having an effect on consumer organisms. They also have the potential to release algae from predatory control," Höök said.

When the researchers noticed increased algal densities in their laboratory experiment after adding higher microplastic concentrations, they were uncertain about its cause. Either the microplastics were getting in the way of zooplankton and preventing normal consumption rates of algae, or they served as better surfaces for algal growth.

Follow-up tests showed that adding microplastics without the zooplankton failed to increase algae production. The microplastics were somehow affecting predation on algae. "That was somewhat surprising," noted Malinowski, a former Purdue postdoctoral scholar.

Plastics can accumulate in biological tissue, similar to mercury and other heavy metals. But plastics also cause gut blockage and related effects that impact feeding, he said. And even though plastics break down in the environment into smaller and smaller fragments, which is not necessarily a good thing, the process plays out over many years.

"Different plastic products that we use every day, like cups, straws and bags, don't truly go away," Malinowski noted. Eventually, they degrade into microplastic particles, which by definition measure less than 5 millimeters, the approximate size of a pencil lead. Scientists find it difficult to sample particles of that size in the environment.

"In terms of the impact that microplastics have in the environment, there's a level of uncertainty with these very small particles, in part simply because they are just very small, and also because they take on different shapes, sizes, configurations and surface properties," Malinowski noted. "All of the research that has gone into this already and all that needs to be done is happening at too slow of a rate relative to the amount of plastic being produced, and this is alarming because we don't truly understand all of the consequences."

Co-authors of the paper include Catherine Searle, associate professor of biological sciences at Purdue, and James Schaber, formerly of Purdue's Bindley Bioscience Center. The work was funded by Purdue University's College of Agriculture and the Department of Forestry and Natural Resources and by the U.S. Department of Agriculture.

Scaling up nano for sustainable manufacturing

 A new self-assembling nanosheet could radically accelerate the development of functional and sustainable nanomaterials for electronics, energy storage, health and safety, and more.

Developed by a team led by Lawrence Berkeley National Laboratory (Berkeley Lab), the new self-assembling nanosheet could significantly extend the shelf life of consumer products. And because the new material is recyclable, it could also enable a sustainable manufacturing approach that keeps single-use packaging and electronics out of landfills.

The team is the first to successfully develop a multipurpose, high-performance barrier material from self-assembling nanosheets. The breakthrough was reported online in the Nov. 8 issue of the journal Nature.

"Our work overcomes a longstanding hurdle in nanoscience -- scaling up nanomaterial synthesis into useful materials for manufacturing and commercial applications," said Ting Xu, the principal investigator who led the study. "It's really exciting because this has been decades in the making."

Xu is a faculty senior scientist in Berkeley Lab's Materials Sciences Division, and professor of chemistry and materials science and engineering at UC Berkeley.

One challenge in harnessing nanoscience to create functional materials is that many small pieces need to come together so that the nanomaterial can grow large enough to be useful. And while stacking nanosheets is one of the simplest ways to grow nanomaterials into a product, "stacking defects" -- gaps between the nanosheets -- are unavoidable when working with existing nanosheets or nanoplatelets.

"If you visualize building a 3D structure from thin, flat tiles, you'll have layers up the height of the structure, but you'll also have gaps throughout each layer wherever two tiles meet," said first author Emma Vargo, a former graduate student researcher in the Xu group and now a postdoctoral scholar in Berkeley Lab's Materials Sciences Division. "It's tempting to reduce the number of gaps by making the tiles bigger, but they become harder to work with," Vargo said.

The new nanosheet material overcomes the problem of stacking defects by skipping the serial stacked sheet approach altogether. Instead, the team mixed blends of materials that are known to self-assemble into small particles with alternating layers of the component materials, suspended in a solvent. To design the system, the researchers used complex blends of nanoparticles, small molecules, and block copolymer-based supramolecules, all of which are commercially available.

Experiments at Oak Ridge National Laboratory's Spallation Neutron Source helped the researchers understand the early, coarse stages of the blends' self-assembly. As the solvent evaporates, the small particles coalesce and spontaneously organize, coarsely templating layers, and then solidify into dense nanosheets. In this way, the ordered layers form simultaneously rather than being stacked one by one in a serial process. The small pieces only need to move short distances to get organized and close gaps, avoiding the problems of moving larger "tiles" and the inevitable gaps between them.

From a previous study led by Xu, the researchers knew that combining nanocomposite blends containing multiple "building blocks" of various sizes and chemistries, including complex polymers and nanoparticles, would not only adapt to impurities but also unlock a system's entropy, the inherent disorder in mixtures of materials that Xu's group harnessed to distribute the material's building blocks.

The new study builds on this earlier work. The researchers predicted that the complex blend used for the current study would have two ideal properties: In addition to having high entropy to drive the self-assembly of a stack of hundreds of nanosheets formed simultaneously, they also expected that the new nanosheet system would be minimally affected by different surface chemistries. This, they reasoned, would allow the same blend to form a protective barrier on a variety of surfaces, such as the glass screen of an electronic device, or a polyester mask.

Demonstrating a new 2D nanosheet's ease of self-assembly and high performance

To test the performance of the material as a barrier coating in several different applications, the researchers enlisted the help of some of the nation's best research facilities.

During experiments at Argonne National Laboratory's Advanced Photon Source, the researchers mapped out how each component comes together, and quantified their mobilities and the manner in which each component moves around to grow a functional material.

Based on these quantitative studies, the researchers fabricated barrier coatings by applying a dilute solution of polymers, organic small molecules, and nanoparticles to various substrates -- a Teflon beaker and membrane, polyester film, thick and thin silicon films, glass, and even a prototype of a microelectronic device -- and then controlling the rate of film formation.

Transmission electron microscope experiments at Berkeley Lab's Molecular Foundry show that by the time the solvent had evaporated, a highly ordered layered structure of more than 200 stacked nanosheets with very low defect density had self-assembled on the substrates. The researchers also succeeded in making each nanosheet 100 nanometers thick with few holes and gaps, which makes the material particularly effective at preventing the passage of water vapor, volatile organic compounds, and electrons, Vargo said.

Other experiments at the Molecular Foundry showed that the material has great potential as a dielectric, an insulating "electron barrier" material commonly used in capacitors for energy storage and computing applications.

In collaboration with researchers in Berkeley Lab's Energy Technologies Area, Xu and team demonstrated that when the material is used to coat porous Teflon membranes (a common material used to make protective face masks), it is highly effective in filtering out volatile organic compounds that can compromise indoor air quality.

And in a final experiment in the Xu lab, the researchers showed that the material can be redissolved and recast to produce a fresh barrier coating.

Now that they've successfully demonstrated how to easily synthesize a versatile functional material for various industrial applications from a single nanomaterial, the researchers plan to fine-tune the material's recyclability and add color tunability (it currently comes in blue) to its repertoire.

Monday, 23 October 2023

Pivotal breakthrough in adapting perovskite solar cells for renewable energy

 A huge step forward in the evolution of perovskite solar cells recorded by researchers at City University of Hong Kong (CityU) will have significant implications for renewable energy development.

The CityU innovation paves the way for commercialising perovskite solar cells, bringing us closer to an energy-efficient future powered by sustainable sources.

"The implications of this research are far-reaching, and its potential applications could revolutionise the solar energy industry," said Professor Zhu Zonglong of the Department of Chemistry at CityU, who collaborated with Professor Li Zhong'an at Huazhong University of Science and Technology.

New approach

Perovskite solar cells are a promising frontier in the solar energy landscape, known for their impressive power conversion efficiency. However, they have one significant drawback: thermal instability, i.e. they don't tend to perform well when exposed to high temperatures.

The team at CityU has engineered a unique type of self-assembled monolayer, or SAM for short, and anchored it on a nickel oxide surface as a charge extraction layer.

"Our approach has dramatically enhanced the thermal robustness of the cells," said Professor Zhu, adding that thermal stability is a significant barrier to the commercial deployment of perovskite solar cells.

"With this SAM-based charge extraction layer, our improved cells retain over 90% of their efficiency, boasting an impressive efficiency rate of 25.6%, even after operating at high temperatures of around 65 degrees Celsius for over 1,000 hours. This is a milestone achievement," said Professor Zhu.

Raising the heat shield

The motivation for this research was born from a specific challenge in the solar energy sector: the thermal instability of perovskite solar cells.

"Despite their high power conversion efficiency, these solar cells are like a sports car that runs exceptionally well in cool weather but tends to overheat and underperform on a hot day. This was a significant roadblock preventing their widespread use," said Professor Zhu.

The CityU team has focused on the self-assembled monolayer (SAM), an essential part of these cells, and envisioned it as a heat-sensitive shield that needed reinforcement.

"We discovered that high-temperature exposure can cause the chemical bonds within SAM molecules to fracture, negatively impacting device performance. So our solution was akin to adding a heat-resistant armour -- a layer of nickel oxide nanoparticles, topped by a SAM, achieved through an integration of various experimental approaches and theoretical calculations," Professor Zhu said.

To counteract this issue, the CityU team introduced an innovative solution: anchoring the SAM onto an inherently stable nickel oxide surface, thereby enhancing the SAM's binding energy on the substrate. They also synthesised a new SAM molecule of their own, one that promotes more efficient charge extraction in perovskite devices.

Better efficiency in higher temperatures

The primary outcome of the research is the potential transformation of the solar energy landscape. By improving the thermal stability of perovskite solar cells through the innovatively designed SAMs, the team has laid the foundation for these cells to perform efficiently even in high-temperature conditions.

"This breakthrough is pivotal as it addresses a major obstacle that previously impeded wider adoption of perovskite solar cells. Our findings could significantly broaden the utilisation of these cells, pushing their application boundaries to environments and climates where high temperatures were a deterrent," said Professor Zhu.

The importance of these findings cannot be overstated. By bolstering the commercial viability of perovskite solar cells, CityU is not merely introducing a new player in the renewable energy market; it is setting the stage for a potential game-changer that could play a vital role in the global shift towards sustainable and energy-efficient sources.

"This technology, once fully commercialised, could help decrease our dependence on fossil fuels and contribute substantially to combating the global climate crisis," he added.

Researchers urge: It's high time for alliances to ensure supply chain security

 Understanding supply networks would have a significant impact: improving supply security, enabling objective monitoring of the green transition, strengthening human rights compliance, and reducing tax evasion. International alliances are needed for such an understanding, as emphasized by a research team led by the Complexity Science Hub in a recent commentary in Science.

Even though many companies only know their immediate trading partners, they depend on countless other supply relations up and down the supply chain. A supply shortage anywhere in this supply network may affect suppliers, suppliers of suppliers, and so on, as well as customers and their customers' customers. "Such supply disruptions caused an estimated loss of 2% of global GDP in 2021 -- approximately $1.9 trillion -- and significantly contributed to the current high inflation," explains CSH researcher Anton Pichler.

UNIMAGINABLE OPPORTUNITIES FOR THE FIRST TIME

"For a long time, it was unthinkable to analyze the global economy at the company level, let alone its complex network of supply interconnections," says Pichler. That is changing now.

For almost a century, only aggregated data could be analyzed, such as the average values of entire industry sectors, for example, the automotive industry. Therefore, predicting how individual company failures will affect the system was simply not possible. What happens to the economy when a specific company stops its production? What if an earthquake paralyzes an entire region?

13 BILLION SUPPLY CONNECTIONS

Thanks to a new generation of data on the company level and a set of new analysis methods, we are entering a new era. Despite the vast amount of data -- there are approximately 300 million companies worldwide, each with an average of 40 domestic suppliers, resulting in up to 13 billion supply connections -- researchers can now map the connections between individual companies.
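
The arithmetic behind that scale estimate is simple enough to spell out:

```python
# The scale claim, spelled out: ~300 million firms with ~40 domestic
# suppliers each implies on the order of 10^10 supply links, consistent
# with the article's "up to 13 billion".

n_companies = 300e6
avg_suppliers = 40

links = n_companies * avg_suppliers
print(f"{links:.1e} supplier links")   # 1.2e+10
```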

OPTIMAL APPROACH: VALUE ADDED TAX DATA

Currently, value-added tax (VAT) data is the most promising option for reconstructing reliable large-scale supply networks. Several countries, such as Spain, Hungary, and Belgium, use a standardized VAT collection that practically records all domestic business-to-business (b2b) transactions. With these, one can map the entire national trade of a country.
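
A minimal sketch of the reconstruction idea: each b2b transaction record becomes a weighted directed edge between firms, after which network questions (say, who sits downstream of a given supplier) become graph traversals. The records below are invented for illustration:

```python
# Sketch of how standardized b2b VAT records map onto a supply network:
# each transaction (seller, buyer, amount) becomes a weighted directed
# edge between firms. The records and firm names are invented.

from collections import defaultdict

# (seller_id, buyer_id, invoiced amount in EUR)
vat_records = [
    ("steel_mill", "car_parts_co", 120_000),
    ("car_parts_co", "assembler", 300_000),
    ("rubber_importer", "car_parts_co", 45_000),
    ("assembler", "dealer", 900_000),
]

supply_network = defaultdict(float)   # edge -> aggregated flow
for seller, buyer, amount in vat_records:
    supply_network[(seller, buyer)] += amount

def downstream(firm: str) -> set:
    """All firms that directly or indirectly buy from `firm`."""
    reached, frontier = set(), {firm}
    while frontier:
        nxt = {b for (s, b) in supply_network if s in frontier} - reached
        reached |= nxt
        frontier = nxt
    return reached

print(downstream("steel_mill"))   # {'car_parts_co', 'assembler', 'dealer'}
```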

TAX EVASION -- €130 BILLION

In most countries, such as Germany, Austria, and France, where VAT is not collected for individual b2b transactions but only accumulated over a reporting period, such a mapping is currently not possible. "The standardized b2b collection could reduce administrative overheads for companies and would contribute substantially to tax compliance," says CSH researcher Christian Diem, co-author of the study. Estimates suggest that VAT-related fraudulent activities in the European Union (EU) amount to €130 billion annually -- a tax gap that could be massively reduced.

CLIMATE, HUMAN RIGHTS, AND SUPPLY SECURITY

The researchers stress that not only tax evasion but also other major challenges of our time depend on detailed knowledge of supply networks -- ideally on a global scale. "For individual companies, it's nearly impossible to ensure that all trading partners, their suppliers, and their suppliers' suppliers operate environmentally friendly and in compliance with human rights. If this were centrally documented in a gigantic network, it could be more easily ensured," emphasizes Pichler.

ONE-FIFTH OF THE GLOBAL ECONOMY ON A MAP

The next step is to link trade data from different countries. Currently, the EU records trade in goods between its member states at the company level. If they also included services and linked them with VAT data, this could lead to a comprehensive cross-border company-level network. According to the authors, this would represent almost 20% of global GDP. The European Commission laid the legal foundation by proposing "VAT in the Digital Age." "Unfortunately, this is far from being realized," says Stefan Thurner, an author of the commentary and President of the Complexity Science Hub. "So far, we do not have a single situation where the supply chain networks of any two countries have been joined and merged. This would be an essential next step."

INTERNATIONAL ALLIANCE

To create a truly international picture of supply interconnections, hundreds of datasets must be joined, analytical tools must be developed, and an institutional framework must be created, together with secure infrastructure for storing and processing enormous amounts of sensitive data.

"To advance this endeavor, a strong international alliance of various interest groups is required, including national governments, statistical offices, international organizations, central banks, the private sector, and academia," explains Thurner. The first collaboration in science, involving authors in macroeconomics, supply chain research, and statistics, now aims to establish a foundation. The researchers hope to inspire others to join their efforts.

FIRST STEPS

Diem, Pichler, Thurner, and colleagues from the University of Cambridge and the University of Oxford hosted representatives of European ministries, national banks, statistical offices, and researchers at a workshop in Vienna on June 5 and 6, 2023.

Researchers demonstrate a high-speed electrical readout method for graphene nanodevices

 The 'wonder material' graphene is well-known for its high electrical conductivity, mechanical strength, and flexibility. Stacking two layers of graphene with atomic layer thickness produces bilayer graphene, which possesses excellent electrical, mechanical, and optical properties. As such, bilayer graphene has attracted significant attention and is being utilized in a host of next-generation devices, including quantum computers.

But a complication in applying bilayer graphene to quantum computing is obtaining accurate measurements of the quantum bit states. Most research to date has used low-frequency electronics for this. However, for applications that demand faster electronic measurements and insights into the rapid dynamics of electronic states, the need for quicker and more sensitive measurement tools has become evident.

Now, a group of researchers from Tohoku University have outlined improvements to radio-frequency (rf) reflectometry to achieve a high-speed readout technique. Remarkably, the breakthrough involves the use of graphene itself.

Rf reflectometry works by sending radio frequency signals into a transmission line and then measuring the reflected signals to obtain information about samples. But in devices employing bilayer graphene, the presence of significant stray capacitance in the measurement circuit leads to rf leakage and less-than-optimal resonator properties. Whilst various techniques have been explored to mitigate this, clear device design guidelines are still awaited.
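
The quantity at stake is the reflection coefficient Gamma = (Z - Z0)/(Z + Z0). The sketch below, with illustrative component values rather than the paper's, shows how a larger parasitic capacitance pulls a standard matching circuit away from the 50-ohm line and shallows the resonance dip:

```python
# The quantity rf reflectometry measures: the reflection coefficient
# Gamma = (Z - Z0)/(Z + Z0) of a matching circuit attached to the
# device. A standard tank circuit (series inductor, parasitic
# capacitance to ground, device resistance) with illustrative values;
# the point is that more stray capacitance degrades the matching.

import numpy as np

Z0 = 50.0          # transmission-line impedance, ohms
L = 820e-9         # matching inductor, henries (illustrative)
R = 50e3           # device resistance, ohms (illustrative)

def min_reflection(c_parasitic: float) -> float:
    """Smallest |Gamma| over a frequency sweep; lower = better matched."""
    f = np.linspace(50e6, 400e6, 20000)
    w = 2 * np.pi * f
    z = 1j * w * L + 1.0 / (1j * w * c_parasitic + 1.0 / R)
    gamma = (z - Z0) / (z + Z0)
    return np.abs(gamma).min()

print(min_reflection(0.4e-12))  # small stray capacitance: deep dip
print(min_reflection(2.0e-12))  # larger stray capacitance: worse matching
```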

"To circumvent this common shortfall of rf reflectometry in bilayer graphene, we employed a microscale graphite back-gate and an undoped silicon substrate," says Tomohiro Otsuka, corresponding author of the paper and associate professor at Tohoku University's Advanced Institute for Materials Research (WPI-AIMR). "We successfully realized good rf matching conditions, calculated the readout accuracy numerically, and compared these measurements with direct current measurements to confirm its consistency. This allowed us to observe Coulomb diamonds through rf reflectometry, a phenomenon indicating the formation of quantum dots in the conduction channel, driven by potential fluctuations caused by bubbles."

Otsuka and his team's proposed improvements to rf reflectometry provide important contributions to the development of next-generation devices such as quantum computers, and the exploration of physical properties using two-dimensional materials, such as graphene.


Accelerating waves shed light on major problems in physics

 Whenever light interacts with matter, light appears to slow down. This is not a new observation and standard wave mechanics can describe most of these daily phenomena.

For example, when light is incident on an interface, the standard wave equation is satisfied on both sides. To analytically solve such a problem, one would first find what the wave looks like at either side of the interface, and then employ electromagnetic boundary conditions to link the two sides together. This is called a piecewise continuous solution.

However, at the boundary, the incident light must experience an acceleration. So far, this has not been accounted for.

"Basically, I found a very neat way to derive the standard wave equation in 1+1 dimensions. The only assumption I needed was that the speed of the wave is constant. Then I thought to myself: what if it's not always constant? This turned out to be a really good question," says Assistant Professor Matias Koivurova from the University of Eastern Finland.

By assuming that the speed of a wave can vary with time, the researchers were able to write down what they call an accelerating wave equation. While writing down the equation was simple, solving it was another matter.
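
The paper's formalism is analytic, but the underlying question is easy to pose numerically: take the standard 1+1D wave equation and let the speed vary in time. The finite-difference toy below (not the authors' method) does exactly that for a decelerating pulse:

```python
# A finite-difference toy of the question behind the paper: the standard
# 1+1D wave equation u_tt = c^2 u_xx, but with a time-varying speed.
# This is NOT the authors' accelerating-wave formalism, which is
# analytic; it just illustrates the setting. All numbers are arbitrary.

import numpy as np

nx, dx, dt, steps = 400, 1.0, 0.4, 600
x = np.arange(nx) * dx

def c_of_t(t: float) -> float:
    """Speed that smoothly decelerates from 1.0 to 0.5 (illustrative)."""
    return 0.75 + 0.25 * np.cos(min(t / 150.0, 1.0) * np.pi)

# Gaussian pulse, seeded to travel to the right
u_prev = np.exp(-((x - 80) / 10.0) ** 2)
u = np.exp(-((x - 80 - c_of_t(0) * dt) / 10.0) ** 2)

for n in range(steps):
    r2 = (c_of_t(n * dt) * dt / dx) ** 2        # CFL number squared (< 1)
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)  # periodic Laplacian
    u_next = 2 * u - u_prev + r2 * lap
    u_prev, u = u, u_next

print("pulse peak now near x =", x[np.argmax(u)])
```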

"The solution didn't seem to make any sense. Then it dawned on me that it behaves in ways that are reminiscent of relativistic effects," Koivurova recounts.

Working together with the Theoretical Optics and Photonics group, led by Associate Professor Marco Ornigotti from Tampere University, the researchers finally made progress. To obtain solutions that behave as expected, they needed a constant reference speed -- the vacuum speed of light. According to Koivurova, everything started to make sense after realising that. What followed was an investigation of the surprisingly far-reaching consequences of the formalism.

No hope for a time machine?

In a breakthrough result, the researchers showed that in terms of accelerating waves, there is a well-defined direction of time, a so-called 'arrow of time'. This is because the accelerating wave equation only allows solutions where time flows forward, but never backward.

"Usually, the direction of time comes from thermodynamics, where an increasing entropy shows which way time is moving," Koivurova says.

However, if the flow of time were to reverse, then entropy would start to decrease until the system reached its lowest entropy state. Then entropy would be free to increase again.

This is the difference between 'macroscopic' and 'microscopic' arrows of time: while entropy defines the direction of time for large systems unambiguously, nothing fixes the direction of time for single particles.

"Yet, we expect single particles to behave as if they have a fixed direction of time!" Koivurova says.

Since the accelerating wave equation can be derived from geometrical considerations, it is general, accounting for all wave behavior in the world. This in turn means that the fixed direction of time is also a rather general property of nature.

Relativity triumphs over the controversy

Another property of the framework is that it can be used to analytically model waves that are continuous everywhere, even across interfaces. This in turn has some important implications for the conservation of energy and momentum.

"There is this very famous debate in physics, which is called the Abraham-Minkowski controversy. The controversy is that when light enters a medium, what happens to its momentum? Minkowski said that the momentum increases, while Abraham insisted that it decreases," Ornigotti explains.

Notably, there is experimental evidence supporting both sides.

"What we have shown, is that from the point of view of the wave, nothing happens to its momentum. In other words, the momentum of the wave is conserved," Koivurova continues.

What allows the conservation of momentum are relativistic effects. "We found that we can ascribe a 'proper time' to the wave, which is entirely analogous to the proper time in the general theory of relativity," Ornigotti says.

Since the wave experiences a time that is different from the laboratory time, the researchers found that accelerating waves also experience time dilation and length contraction. Koivurova notes that it is precisely length contraction that makes it seem like the momentum of the wave is not conserved inside a material medium.

Exotic applications

The new approach is equivalent to the standard formulation in most problems, but it has an important extension: time-varying materials. Inside time-varying media light will experience sudden and uniform changes in the material properties. The waves inside such materials are not solutions to the standard wave equation.

This is where the accelerating wave equation comes into the picture. It allows the researchers to analytically model situations which were only numerically accessible before.

Such situations include an exotic hypothetical material called a disordered photonic time crystal. Recent theoretical investigations have shown that a wave propagating inside such a material will slow down exponentially, while also increasing exponentially in energy.

"Our formalism shows that the observed change in the energy of the pulse is due to a curved space-time the pulse experiences. In such cases, energy conservation is locally violated," Ornigotti says.

Tuesday, 17 October 2023

Photonic crystals bend light as though it were under the influence of gravity

 A collaborative group of researchers has manipulated the behavior of light as if it were under the influence of gravity. The findings, which were published in the journal Physical Review A on September 28, 2023, have far-reaching implications for the world of optics and materials science, and bear significance for the development of 6G communications.

Albert Einstein's theory of relativity has long established that the trajectory of electromagnetic waves -- including light and terahertz electromagnetic waves -- can be deflected by gravitational fields.

Scientists have recently theoretically predicted that replicating the effects of gravity -- i.e., pseudogravity -- is possible by deforming crystals in the lower normalized energy (or frequency) region.

"We set out to explore whether lattice distortion in photonic crystals can produce pseudogravity effects," said Professor Kyoko Kitamura from Tohoku University's Graduate School of Engineering.

Photonic crystals possess unique properties that enable scientists to manipulate and control the behavior of light, serving as 'traffic controllers' for light within crystals. They are constructed by periodically arranging two or more different materials with varying abilities to interact with and slow down light in a regular, repeating pattern. Furthermore, pseudogravity effects due to adiabatic changes have been observed in photonic crystals.

Kitamura and her colleagues modified photonic crystals by introducing lattice distortion: gradual deformation of the regular spacing of elements, which disrupted the grid-like pattern of the photonic crystals. This manipulated the photonic band structure of the crystals, resulting in a curved beam trajectory in-medium -- just like a light-ray passing by a massive celestial body such as a black hole.

Specifically, they employed a silicon distorted photonic crystal with a primal lattice constant of 200 micrometers and terahertz waves. Experiments successfully demonstrated the deflection of these waves.
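
One way to build intuition for the effect, though it is only an analogy and not the paper's calculation, is to treat the gradual lattice distortion as a graded effective index and integrate the standard ray equation through it; all numbers below are illustrative:

```python
# A toy analogy for the bending mechanism: a gradual lattice distortion
# acts like a graded effective index, and a graded index bends a beam,
# much as gravity deflects a trajectory. The ray equation
# d/ds(n dr/ds) = grad(n) is integrated for a linear index profile.
# The profile and numbers are illustrative, not from the experiment.

import numpy as np

def n(pos: np.ndarray) -> float:
    """Effective index rising with y, mimicking a distorted lattice."""
    return 1.5 + 0.01 * pos[1]

def grad_n(pos: np.ndarray) -> np.ndarray:
    return np.array([0.0, 0.01])

pos = np.array([0.0, 0.0])
direction = np.array([1.0, 0.0])        # beam launched along x
ds = 0.01

for _ in range(10_000):
    # advance the ray, then bend its direction toward higher index
    pos = pos + direction * ds
    direction = direction + (grad_n(pos) / n(pos)) * ds
    direction /= np.linalg.norm(direction)

print(f"after 100 length units, the ray has drifted to y = {pos[1]:.1f}")
```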

"Much like gravity bends the trajectory of objects, we came up with a means to bend light within certain materials," adds Kitamura. "Such in-plane beam steering within the terahertz range could be harnessed in 6G communication. Academically, the findings show that photonic crystals could harness gravitational effects, opening new pathways within the field of graviton physics," said Associate Professor Masayuki Fujita from Osaka University.

Neutrons see stress in 3D-printed parts, advancing additive manufacturing

 Using neutrons to see the additive manufacturing process at the atomic level, scientists have shown that they can measure strain in a material as it evolves and track how atoms move in response to stress.


"The automotive, aerospace, clean energy and tool-and-die industries -- any industry that needs complex and high-performance parts -- could use additive manufacturing," said Alex Plotkowski, materials scientist in ORNL's Materials Science and Technology Division and the lead scientist of the experiment. Plotkowski and his colleagues reported their findings in Nature Communications.


ORNL scientists have developed OpeN-AM, a 3D printing platform that can measure evolving residual stress during manufacturing using the VULCAN beamline at ORNL's Spallation Neutron Source, or SNS, a Department of Energy Office of Science user facility. When combined with infrared imaging and computer modeling, this system enables unprecedented insight into material behavior during manufacturing.


In this case, they used low-temperature transformation, or LTT, steel, physically measuring how atoms move in response to stress, whether from temperature or load, using the OpeN-AM platform.


Residual stresses are stresses that remain even after a load or the cause of the stress is removed; they can deform a material or, worse, cause it to fail prematurely. Such stresses are a major challenge for fabricating accurate components with desirable properties and performance.


The scientists conceived the experiment and, over the course of two years, built a system that can measure strain in the material as it evolves, which determines how stresses will be distributed.
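
The core of a diffraction-based strain measurement can be stated in a few lines: Bragg's law converts a peak position into a lattice spacing d, and strain is the fractional shift of d against a stress-free reference. The wavelength and angles below are illustrative, not values from the experiment:

```python
# How diffraction yields strain: Bragg's law (lambda = 2 d sin(theta))
# gives the lattice spacing d from a measured peak, and lattice strain
# is the shift of d relative to a stress-free reference d0.
# The wavelength and angles below are illustrative.

import math

def d_spacing(wavelength_A: float, two_theta_deg: float) -> float:
    """Lattice spacing (angstroms) from Bragg's law, first order."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_A / (2.0 * math.sin(theta))

d0 = d_spacing(2.0664, 90.0)      # stress-free reference peak
d_str = d_spacing(2.0664, 89.7)   # same reflection, shifted by stress

strain = (d_str - d0) / d0
print(f"lattice strain: {strain:.2e}")  # a few 1e-3, typical of residual stress
```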


"Manufacturers will be able to tailor residual stress in their components, increasing their strength, making them lighter and in more complex shapes. The technology can be applied to anything you want to manufacture," Plotkowski said.




"We have successfully shown that there is a way to do that," he said. "We are demonstrating we understand connections in one case to anticipate other cases."


The scientists recently earned a 2023 R&D 100 Award for this technology. R&D World magazine announced the winners in August. Plotkowski and other winners will be recognized at the organization's award ceremony Nov. 16 in San Diego.


The scientists used a custom wire-arc additive manufacturing platform to perform what's called operando neutron diffraction of an LTT metal at SNS. Using SNS's VULCAN beamline, they processed the steel and recorded data at various stages during manufacturing and after cooling to room temperature. They combined diffraction data with infrared imaging to confirm results. The system was designed and built at the Manufacturing Demonstration Facility, or MDF, a DOE Advanced Materials and Manufacturing Technologies Office user consortium, where a replicate system of the platform was also constructed to plan and test experiments before executing at the beamline.


SNS operates a linear particle accelerator that produces beams of neutrons to study and analyze materials at the atomic scale. The research tool they developed allows scientists to peer inside a material as it's being produced, literally observing the mechanisms at work in real time.


The LTT steel was melted and deposited in layers. As the metal solidified and cooled, its structure transformed in what is called a phase transformation. When that happens, atoms rearrange and take up different space, and the material behaves differently.


Normally, transformations that happen at high temperatures are hard to understand when looking at a material only after processing. By observing the LTT steel during processing, the scientists' experiment shows that they can understand and manipulate the phase transformation.




"We want to understand what these stresses are, explain how they got there, and figure out how to control them," Plotkowski said.


"These results provide a new pathway to design desirable residual stress states and property distributions within additive manufacturing components by using process controls to improve nonuniform spatial and temporal variations of thermal gradients around key phase transformation temperatures," the authors write.


Plotkowski hopes scientists from around the world come to ORNL to do similar experiments on metals they would like to use in manufacturing.


This research was funded by ORNL's Laboratory Directed Research and Development program, which supports high-risk research and development in areas of potential high value to national programs.

Solar design would harness 40% of the sun's heat to produce clean hydrogen fuel

MIT engineers aim to produce totally green, carbon-free hydrogen fuel with a new, train-like system of reactors that is driven solely by the sun.

In a study appearing today in Solar Energy Journal, the engineers lay out the conceptual design for a system that can efficiently produce "solar thermochemical hydrogen." The system harnesses the sun's heat to directly split water and generate hydrogen -- a clean fuel that can power long-distance trucks, ships, and planes, while in the process emitting no greenhouse gas emissions.

Today, hydrogen is largely produced through processes that involve natural gas and other fossil fuels, making the otherwise green fuel more of a "grey" energy source when considered from the start of its production to its end use. In contrast, solar thermochemical hydrogen, or STCH, offers a totally emissions-free alternative, as it relies entirely on renewable solar energy to drive hydrogen production. But so far, existing STCH designs have limited efficiency: Only about 7 percent of incoming sunlight is used to make hydrogen. The results so far have been low-yield and high-cost.

In a big step toward realizing solar-made fuels, the MIT team estimates its new design could harness up to 40 percent of the sun's heat to generate that much more hydrogen. The increase in efficiency could drive down the system's overall cost, making STCH a potentially scalable, affordable option to help decarbonize the transportation industry.
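
To put those efficiency figures in fuel terms, the sketch below converts solar heat into hydrogen mass via hydrogen's lower heating value; the 100 MW plant size is arbitrary and the input is treated as continuous for simplicity:

```python
# What the jump from ~7% to ~40% solar-to-hydrogen efficiency means in
# fuel terms. Efficiency is taken as (heating value of H2 out) /
# (solar heat in), using hydrogen's lower heating value of ~120 MJ/kg.
# The plant size is an arbitrary example, and the solar input is
# treated as continuous for simplicity.

SOLAR_INPUT_MW = 100          # illustrative concentrated-solar plant
LHV_H2 = 120e6                # J per kg of hydrogen

def h2_per_day(efficiency: float) -> float:
    """Kilograms of hydrogen per day at the given solar-to-H2 efficiency."""
    joules_per_day = SOLAR_INPUT_MW * 1e6 * 86_400
    return efficiency * joules_per_day / LHV_H2

print(f"at 7%:  {h2_per_day(0.07):,.0f} kg H2/day")
print(f"at 40%: {h2_per_day(0.40):,.0f} kg H2/day")
# -> roughly 5,000 vs 29,000 kg/day from the same field of mirrors
```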

"We're thinking of hydrogen as the fuel of the future, and there's a need to generate it cheaply and at scale," says the study's lead author, Ahmed Ghoniem, the Ronald C. Crane Professor of Mechanical Engineering at MIT. "We're trying to achieve the Department of Energy's goal, which is to make green hydrogen by 2030, at $1 per kilogram. To improve the economics, we have to improve the efficiency and make sure most of the solar energy we collect is used in the production of hydrogen."

Ghoniem's study co-authors are Aniket Patankar, first author and MIT postdoc; Harry Tuller, MIT professor of materials science and engineering; Xiao-Yu Wu of the University of Waterloo; and Wonjae Choi at Ewha Womans University in South Korea.

Solar stations

Similar to other proposed designs, the MIT system would be paired with an existing source of solar heat, such as a concentrated solar plant (CSP) -- a circular array of hundreds of mirrors that collect and reflect sunlight to a central receiving tower. An STCH system then absorbs the receiver's heat and directs it to split water and produce hydrogen. This process is very different from electrolysis, which uses electricity instead of heat to split water.

At the heart of a conceptual STCH system is a two-step thermochemical reaction. In the first step, water in the form of steam is exposed to a metal. This causes the metal to grab oxygen from steam, leaving hydrogen behind. This metal "oxidation" is similar to the rusting of iron in the presence of water, but it occurs much faster. Once hydrogen is separated, the oxidized (or rusted) metal is reheated in a vacuum, which acts to reverse the rusting process and regenerate the metal. With the oxygen removed, the metal can be cooled and exposed to steam again to produce more hydrogen. This process can be repeated hundreds of times.
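The article speaks of a "metal," but two-step STCH cycles are usually run on redox-active metal oxides; ceria is the best-known example. Writing the working material generically as an oxide MO_x, the cycle sketched above is, in standard notation:

\[
\text{Reduction (hot, low oxygen pressure):}\qquad \mathrm{MO}_x \;\longrightarrow\; \mathrm{MO}_{x-\delta} + \tfrac{\delta}{2}\,\mathrm{O}_2
\]
\[
\text{Oxidation (cooler, steam):}\qquad \mathrm{MO}_{x-\delta} + \delta\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{MO}_x + \delta\,\mathrm{H_2}
\]

The net effect of one full loop is simply water splitting, H2O -> H2 + 1/2 O2, with the oxide returned to its starting state and ready for the next pass.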

The MIT system is designed to optimize this process. The system as a whole resembles a train of box-shaped reactors running on a circular track. In practice, this track would be set around a solar thermal source, such as a CSP tower. Each reactor in the train would house the metal that undergoes the redox, or reversible rusting, process.

Each reactor would first pass through a hot station, where it would be exposed to the sun's heat at temperatures of up to 1,500 degrees Celsius. This extreme heat would effectively pull oxygen out of a reactor's metal. That metal would then be in a "reduced" state -- ready to grab oxygen from steam. For this to happen, the reactor would move to a cooler station at temperatures around 1,000 degrees Celsius, where it would be exposed to steam to produce hydrogen.

Rust and rails

Other similar STCH concepts have run up against a common obstacle: what to do with the heat released by the reduced reactor as it is cooled. Without recovering and reusing this heat, the system's efficiency is too low to be practical.

A second challenge has to do with creating an energy-efficient vacuum where metal can de-rust. Some prototypes generate a vacuum using mechanical pumps, though the pumps are too energy-intensive and costly for large-scale hydrogen production.

To address these challenges, the MIT design incorporates several energy-saving workarounds. To recover most of the heat that would otherwise escape from the system, reactors on opposite sides of the circular track are allowed to exchange heat through thermal radiation; hot reactors get cooled while cool reactors get heated. This keeps the heat within the system. The researchers also added a second set of reactors that would circle around the first train, moving in the opposite direction. This outer train of reactors would operate at generally cooler temperatures and would be used to evacuate oxygen from the hotter inner train, without the need for energy-consuming mechanical pumps.
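The radiative heat-recovery idea is easy to put rough numbers on. A minimal sketch, assuming two large parallel gray surfaces and illustrative temperatures and emissivity (none of these values come from the paper):

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiative_flux(t_hot_k, t_cold_k, emissivity=0.8):
    """Net radiative exchange between two large parallel gray plates, W/m^2."""
    exchange_factor = 1.0 / (2.0 / emissivity - 1.0)
    return exchange_factor * SIGMA * (t_hot_k**4 - t_cold_k**4)

# A reactor leaving the 1,500 C station facing one near the 1,000 C station:
q = net_radiative_flux(1773.0, 1273.0)
print(f"net exchange: about {q / 1000:.0f} kW per square metre of facing area")

Even at these rough numbers, hundreds of kilowatts per square metre can flow from hot to cold reactors by radiation alone, which is why the design can recapture so much heat without any pumps or heat-transfer fluid.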

These outer reactors would carry a second type of metal that also oxidizes easily. As they circle around, the outer reactors would absorb oxygen from the inner reactors, effectively de-rusting the original metal without the use of energy-intensive vacuum pumps. Both reactor trains would run continuously and would generate separate streams of pure hydrogen and oxygen.

The researchers carried out detailed simulations of the conceptual design, and found that it would significantly boost the efficiency of solar thermochemical hydrogen production, from 7 percent, as previous designs have demonstrated, to 40 percent.

"We have to think of every bit of energy in the system, and how to use it, to minimize the cost," Ghoniem says. "And with this design, we found that everything can be powered by heat coming from the sun. It is able to use 40 percent of the sun's heat to produce hydrogen."

In the next year, the team will be building a prototype of the system that they plan to test in concentrated solar power facilities at laboratories of the Department of Energy, which is currently funding the project.

"When fully implemented, this system would be housed in a little building in the middle of a solar field," Patankar explains. "Inside the building, there could be one or more trains each having about 50 reactors. And we think this could be a modular system, where you can add reactors to a conveyor belt, to scale up hydrogen production." 

This work was supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

New polymer membranes, AI predictions could dramatically reduce energy, water use in oil refining

 A new kind of polymer membrane created by researchers at Georgia Tech could reshape how refineries process crude oil, dramatically reducing the energy and water required while extracting even more useful materials.

The so-called DUCKY polymers -- more on the unusual name in a minute -- are reported Oct. 16 in Nature Materials. And they're just the beginning for the team of Georgia Tech chemists, chemical engineers, and materials scientists. They also have created artificial intelligence tools to predict the performance of these kinds of polymer membranes, which could accelerate development of new ones.

The implications are stark: the initial separation of crude oil components is responsible for roughly 1% of energy used across the globe. What's more, the membrane separation technology the researchers are developing could have several uses, from biofuels and biodegradable plastics to pulp and paper products.

"We're establishing concepts here that we can then use with different molecules or polymers, but we apply them to crude oil because that's the most challenging target right now," said M.G. Finn, professor and James A. Carlos Family Chair in the School of Chemistry and Biochemistry.

Crude oil in its raw state includes thousands of compounds that have to be processed and refined to produce useful materials -- gas and other fuels, as well as plastics, textiles, food additives, medical products, and more. Squeezing out the valuable stuff involves dozens of steps, but it starts with distillation, a water- and energy-intensive process.

Researchers have been trying to develop membranes to do that work instead, filtering out the desirable molecules and skipping all the boiling and cooling.

"Crude oil is an enormously important feedstock for almost all aspects of life, and most people don't think about how it's processed," said Ryan Lively, Thomas C. DeLoach Jr. Professor in the School of Chemical and Biomolecular Engineering. "These distillation systems are massive water consumers, and the membranes simply are not. They're not using heat or combustion. They just use electricity. You could ostensibly run it off of a wind turbine, if you wanted. It's just a fundamentally different way of doing a separation."

What makes the team's new membrane formula so powerful is a new family of polymers. The researchers used building blocks called spirocyclic monomers that assemble together in chains with lots of 90-degree turns, forming a kinky material that doesn't compress easily and forms pores that selectively bind and permit desirable molecules to pass through. The polymers are not rigid, which means they're easier to make in large quantities. They also have a well-controlled flexibility or mobility that allows pores of the right filtering structure to come and go over time.
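For readers who want the textbook picture: membrane separations of this kind are often summarized with the solution-diffusion model, in which each species' permeability is the product of how fast it diffuses through the polymer and how readily it sorbs into it. This is a standard framework, not a formula taken from the Nature Materials paper:

\[
P_i = D_i \, S_i, \qquad \alpha_{ij} = \frac{P_i}{P_j} = \frac{D_i}{D_j} \cdot \frac{S_i}{S_j}
\]

Here \(P_i\) is the permeability of species \(i\), \(D_i\) its diffusivity, \(S_i\) its solubility in the membrane, and \(\alpha_{ij}\) the selectivity between two species. A good membrane maximizes selectivity without giving up throughput, which is what the controlled flexibility of the DUCKY pores is meant to achieve.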


The DUCKY polymers are created through a chemical reaction that's easy to produce at a scale that would be useful for industrial purposes. It's a flavor of a Nobel Prize-winning family of reactions called click chemistry, and that's what gives the polymers their name. The reaction is called copper-catalyzed azide-alkyne cycloaddition -- abbreviated CuAAC and pronounced "quack." Thus: DUCKY polymers.

In isolation, the three key characteristics of the polymer membranes aren't new; it's their combination in one material that makes them novel and effective, Finn said.

The research team included scientists at ExxonMobil, who discovered just how effective the membranes could be. The company's scientists took the crudest of the crude oil components -- the sludge left at the bottom after the distillation process -- and pushed it through one of the membranes. The process extracted even more valuable materials.

"That's actually the business case for a lot of the people who process crude oils. They want to know what they can do that's new. Can a membrane make something new that the distillation column can't?" Lively said. "Of course, our secret motivation is to reduce energy, carbon, and water footprints, but if we can help them make new products at the same time, that's a win-win."

Predicting such outcomes is one way the team's AI models can come into play. In a related study recently published in Nature Communications, Lively, Finn, and researchers in Rampi Ramprasad's Georgia Tech lab described using machine learning algorithms and mass transport simulations to predict the performance of polymer membranes in complex separations.

"This entire pipeline, I think, is a significant development. And it's also the first step toward actual materials design," said Ramprasad, professor and Michael E. Tennenbaum Family Chair in the School of Materials Science and Engineering. "We call this a 'forward problem,' meaning you have a material and a mixture that goes in -- what comes out? That's a prediction problem. What we want to do eventually is to design new polymers that achieve a certain target permeation performance."

Complex mixtures like crude oil might have hundreds or thousands of components, so accurately describing each compound in mathematical terms, how it interacts with the membrane, and extrapolating the outcome is "non-trivial," as Ramprasad put it.
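As a toy illustration of that forward problem, the snippet below fits a model that maps a few made-up descriptors of a solute/membrane pair onto a permeation outcome. The features, data, and model choice are all hypothetical stand-ins; the published pipeline combines machine learning with physics-based mass-transport simulations and is far richer than this.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical descriptors: [solute molar mass, solute polarity, polymer free volume]
X = rng.uniform([50, 0.0, 0.05], [600, 1.0, 0.30], size=(200, 3))
# Toy target: log-permeance falls with molar mass and polarity, rises with free volume.
y = -0.01 * X[:, 0] - 1.5 * X[:, 1] + 5.0 * X[:, 2] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# The "forward problem": a mixture component goes in -- what comes out?
print(model.predict([[300, 0.4, 0.20]]))

The eventual goal Ramprasad describes is the inverse of this: starting from a target permeation performance and searching for the polymer that delivers it.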

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...