Thursday, 31 January 2019

Rules On Living A Contented And Successful Life!


Successful living is a state where the mind and body are in perfect sync. In this state, one is able to make use of all of the available resources to live happily and with satisfactory results as far as life, work and relationships go. This does not mean that there are absolutely no problems. It simply means that you look at the problems and challenges as opportunities for growth, and solve them to live to the best of your abilities. So, what are the rules for successful living? Here is our list!

1. Believe and understand: 

Believing in your abilities is one side of the coin. Understanding your limitations is the other side. So, once you have both things in place, it becomes easier to plan your actions in a realistic manner. When you believe in your competence and understand your limitations, you will either take on those tasks that will be commensurate with your skills, or you will equip yourself with higher skills so as to take on even more varied activities and tasks.

2. Simplify: This is an often repeated and extremely underrated term. To be more organised, you do not merely need the latest modular fittings in your home and office. One of the key aspects of simplifying is decluttering. When you declutter, you effectively remove all those things that do not serve you. If these things were to remain in front of you, they would only drain your energy with thoughts of waste, and waste your time as well, since you would be working your way through chaos to get to your core.

3. Moderation: Simplification and moderation go hand in hand. Superfluous acts may give instant gratification, but they do not serve you in the long run. They strip you of self-control and can even alienate you from your relationships as you get closer to things rather than people. So, it is good to have a healthy dose of everything in your life for true balance and successful living.

4. Perspective: If problems are bogging you down, then there are chances that your perspective is all wrong. Being more open and looking at the big picture are two surefire ways of ensuring that the problems come and go without affecting your equilibrium.

Being in the moment and putting your family first is a part of creating a balanced situation in life where judgements, material wants and egos will not matter.
🍂🍂🍂🍂🍂🍂

Identifying 'friends' in an objective manner

In recent years, the behavioral patterns of social creatures such as humans, cattle and ants have been studied using wearable sensors called Radio Frequency Identification (RFID) devices.
The SocioPatterns project led by Dr. Alain Barrat and colleagues has made public a dataset of contact records between pairs of individuals collected by RFID devices. However, since the RFID datasets record every kind of contact between individuals, they can include non-essential contacts that occur merely by chance, as opposed to intentional events such as conversations among close friends.
Dr. Teruyoshi Kobayashi of Kobe University and his team developed a new method for identifying individuals that have essential connections between them -- what they call "significant ties." Dr. Kobayashi says: "The point is that we need to distinguish between the contact events that could happen by chance and the events that would not happen without a significant relationship between two individuals." Their findings were published in Nature Communications on January 15.
Naturally, the total number of contacts recorded will be larger for those who are socially very active than for those who are shy. This means that counting the numbers of bilateral interactions is not enough to find "friends" in social networks. The new method proposed by Dr. Kobayashi and his team allows one to control for the difference in individuals' activity levels. Interestingly, the extracted significant ties based on face-to-face networks collected in a primary school in Lyon, France form several clusters, each of which accurately mimics an actual school class. Dr. Kobayashi comments: "It is quite natural that contacts within each class explain most of the significant ties, but this phenomenon is not well captured by the existing methods that were originally developed for static networks."
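To make the idea concrete, here is a minimal sketch of how a chance-level null model can separate significant ties from incidental contacts. The contact counts are invented, and the binomial test against an activity-based null illustrates the principle only, not the team's exact method:

```python
from collections import defaultdict
from itertools import combinations
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def significant_ties(pair_counts, alpha=0.01):
    """pair_counts maps unordered pairs (i, j) to observed numbers of contacts."""
    n = sum(pair_counts.values())                  # total contact events
    activity = defaultdict(int)                    # contacts per individual
    for (i, j), k in pair_counts.items():
        activity[i] += k
        activity[j] += k
    # Null model: a contact event lands on pair (i, j) with probability
    # proportional to the product of the two individuals' activity levels.
    weight = {frozenset(pair): activity[pair[0]] * activity[pair[1]]
              for pair in combinations(list(activity), 2)}
    z = sum(weight.values())
    return [pair for pair, k in pair_counts.items()
            if binomial_tail(k, n, weight[frozenset(pair)] / z) < alpha]

# Toy data: A and B meet far more often than their activity alone predicts.
counts = {("A", "B"): 10, ("A", "C"): 1, ("A", "D"): 1, ("B", "C"): 1,
          ("B", "E"): 1, ("C", "D"): 3, ("C", "E"): 3, ("D", "E"): 4}
print(significant_ties(counts))   # -> [('A', 'B')]
```

In this toy network everyone has some contacts, but only the A-B pair meets more often than random mixing of equally active individuals would explain, so only that tie is flagged.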
An advantage of this method is that it can be applied to any kind of dynamic network formed by bilateral temporal interactions. For example, Dr. Kobayashi and Dr. Taro Takaguchi (one of the coauthors) investigated the interbank market in Italy and confirmed that the fraction of banks regarded as being connected by significant ties increased particularly at the time of the global financial crisis in 2008-2009.
On the possibility of future application, Dr. Kobayashi adds: "This method is expected to capture the evolution of various complex networks from interbank markets to a flock of cows. If it's implemented on a face-to-face network of students, for instance, one may be able to detect signs of bullying and/or ostracism."

Nudging does not necessarily improve decisions

Nudging, the concept of influencing people's behavior without imposing rules, bans or coercion, is an idea that government officials and marketing specialists alike are keen to harness, and it is often viewed as a one-size-fits-all solution. Now, a study by researchers from the University of Zurich puts things into perspective: Whether a nudge really does improve decisions depends on a person's underlying decision-making process.
Nudging is a well-known and popular concept in behavioral economics. It refers to non-coercive interventions that influence the choices people make by changing the way a situation is presented. A well-known example of this is placing the salad bar near the cafeteria entrance to promote a healthy diet. It has been shown that this simple change has an effect on the food people choose to eat for lunch. However, is a light salad really the best option from the employee's perspective, or is it their employer who will benefit from staff who perform better in the afternoon? And, is improving the decisions we make really that simple?
Measuring the quality of a decision
Whether a nudge ultimately results in a person making decisions that are better suited to their needs is an important factor in assessing the effectiveness of nudges. This is the starting point of the research work of Nick Netzer and Jean-Michel Benkert from the Department of Economics at the University of Zurich. How do you measure whether a nudge improves a decision in the eyes of the person being nudged? "We can't determine whether a nudge improves the choices a person makes until we understand how they reach their decisions," says Nick Netzer, putting the hype surrounding nudging into perspective. "Depending on which behavioral model we take as a starting point, it is possible to measure the effectiveness of nudges -- or not."
Traditional economics assumes that a person's preferences can be inferred from their decisions and behavior. According to the rational behavior model, a person's decision to have a salad or a steak for lunch is based on which meal meets their needs. When it comes to assessing nudges, however, this model is problematic, since nudging manipulates precisely the behavior that is supposed to shed light on a person's preferences. The researchers therefore looked to alternative behavioral models to determine the assumptions under which a nudge can be assessed in a meaningful way.
First-best choice
According to the "satisficing" model, a person considers their alternatives sequentially and chooses the first one that meets their needs in a satisfactory way. The person will order the salad because it is the first option that adequately fulfills their requirements. Although they might have enjoyed the steak more, they will not consider that option, since they have already made up their mind. In this model, hardly any conclusions can be drawn about the true preferences of a person, and their decisions cannot be improved through nudging either.
Limited attention
If we assume decisions are made according to the limited attention model, however, the situation changes: This model is based on the idea that a person will only ever consider a certain number of possibilities -- for example, only the first three meals on a menu that features five options. The person will then ponder these options and choose the best meal out of this selection. Unlike with the satisficing model, conclusions can be drawn about a person's preferences, as the UZH researchers have now shown. Decisions that are based on such a decision-making process can be improved by nudging. Therefore, if you know that a salad is indeed an ideal meal, then placing it among the first three items on the menu will ensure that a person will at least consider this meal and maybe also choose it.
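This logic is easy to simulate. In the toy sketch below (menu, utilities and attention-window size all invented for illustration, not the researchers' formal model), a chooser only ever considers the first three options and picks their favourite among them; repeated observations reveal the top of their ranking, and a nudge that places the inferred favourite inside the attention window raises the average outcome:

```python
import random
from collections import defaultdict

random.seed(1)
TRUE_UTILITY = {"salad": 5, "steak": 4, "pasta": 3, "soup": 2, "burger": 1}
K = 3  # attention window: only the first K menu items are ever considered

def choose(menu_order):
    considered = menu_order[:K]
    return max(considered, key=TRUE_UTILITY.get)

# Observe many randomly ordered menus and record pairwise revealed preferences.
wins = defaultdict(int)
for _ in range(5000):
    order = random.sample(list(TRUE_UTILITY), len(TRUE_UTILITY))
    pick = choose(order)
    for other in order[:K]:
        if other != pick:
            wins[pick] += 1   # the pick is revealed-preferred to 'other'

# Sorting by wins recovers the top of the true ranking (items that are never
# chosen stay tied at zero and cannot be distinguished).
inferred = sorted(TRUE_UTILITY, key=lambda m: wins[m], reverse=True)
print(inferred[:3])   # -> ['salad', 'steak', 'pasta']

# Nudge: always place the inferred favourite inside the attention window.
def average_utility(nudge):
    total = 0
    for _ in range(5000):
        order = random.sample(list(TRUE_UTILITY), len(TRUE_UTILITY))
        if nudge:
            order.remove(inferred[0])
            order.insert(0, inferred[0])
        total += TRUE_UTILITY[choose(order)]
    return total / 5000

print(f"average utility: {average_utility(False):.2f} without the nudge, "
      f"{average_utility(True):.2f} with it")
```

Under the satisficing model, by contrast, a choice only reveals that the chosen option cleared some threshold, which is why so little about true preferences can be inferred there.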
Success of nudges depends on decision-making process
It is therefore necessary to know what a person's true needs and preferences are in order to assess the success of nudges when it comes to improving decisions. If we do not have this information, any nudging that takes place is done without knowing what is in a person's best interests. "Our findings show that the success of nudging greatly depends on how we view the human decision-making process," says Nick Netzer. "We can't conclusively determine whether nudging makes sense as long as current scientific knowledge in economics, psychology and neuroscience doesn't allow nudging to be assessed in a consistent manner."

Women, your inner circle may be key to gaining leadership roles

Women who communicate regularly with a female-dominated inner circle are more likely to attain high-ranking leadership positions, according to a new study by the University of Notre Dame and Northwestern University.
Published in the Proceedings of the National Academy of Sciences, the study showed that more than 75 percent of high-ranking women maintained a female-dominated inner circle, or strong ties to two or three women whom they communicated with frequently within their network. For men, the larger their network -- regardless of gender makeup -- the more likely they are to earn a high-ranking position. Unfortunately, when women have social networks that resemble their male counterparts', they are more likely to hold low-ranking positions.
"Although both genders benefit from developing large social networks after graduate school, women's communication patterns, as well as the gender composition of their network, significantly predict their job placement level," said Nitesh V. Chawla, Frank M. Freimann Professor of Computer Science and Engineering at Notre Dame, director of the Interdisciplinary Center for Network Science and Applications and co-author of the study. "The same factors -- communication patterns and gender composition of a social network -- have no significant effect for men landing high-ranking positions."
For the study, researchers reviewed the social and communication networks of more than 700 former graduate students from a top-ranked business school in the United States. Each student in the study had accepted a leadership-level position, and placements were normalized for industry- and region-specific salaries. Researchers then compared three variables of each student's social network: network centrality, or the size of the social network; gender homophily, or the proportion of same-sex contacts; and communication equality, or the balance of strong versus weak network ties.
Women with a high network centrality and a female-dominated inner circle have an expected job placement level that is 2.5 times greater than women with low network centrality and a male-dominated inner circle. When it comes to attaining leadership positions, women are not likely to benefit from adding the best-connected person to their network. While those connections may improve access to public information important to job search and negotiations, female-dominated inner circles can help women gain gender-specific information that would be more important in a male-dominated job market.
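As a rough illustration of the measures involved, the sketch below computes a person's network size and the gender makeup of her inner circle, taken here as the top three strongest ties. The communication log and the inner-circle cutoff are hypothetical, not the study's data or definitions:

```python
from collections import Counter

GENDER = {"ana": "F", "bea": "F", "cam": "F", "dan": "M", "eli": "M", "fay": "F"}

# (person, contact, messages exchanged) -- a hypothetical communication log
LOG = [("ana", "bea", 40), ("ana", "cam", 35), ("ana", "dan", 5),
       ("ana", "eli", 3), ("ana", "fay", 2)]

def network_profile(person, log, inner_k=3):
    contacts = [(other, n) for p, other, n in log if p == person]
    centrality = len(contacts)                       # size of the network
    inner = sorted(contacts, key=lambda c: c[1], reverse=True)[:inner_k]
    makeup = Counter(GENDER[other] for other, _ in inner)
    return centrality, makeup

size, makeup = network_profile("ana", LOG)
print(f"network size: {size}, inner-circle gender makeup: {dict(makeup)}")
# -> network size: 5, inner-circle gender makeup: {'F': 2, 'M': 1}
```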
"We also saw that inner circles benefit from each other, suggesting that women gain gender-specific private information and support from their inner circle, while non-overlapping connections provide other job market details," said Chawla.

How do children draw themselves? It depends who's looking

It's the archetypal child's drawing -- family, pet, maybe a house and garden, and the child themselves. Yet how do children represent themselves in their drawings, and does this representation alter according to who will look at the picture?
A research team led by academics from the University of Chichester has examined this issue and found that children's expressive drawings of themselves vary according to the authority of and familiarity with the adult who will view the picture. The study is published today, 25th January 2019, in the British Journal of Developmental Psychology.
The results of the study are significant, because it is important to understand children's drawings for different audiences. Drawings are often used in clinical, forensic, educational and therapeutic situations to garner information about how a child feels and to supplement verbal communication.
The research team worked with 175 children aged eight and nine, 85 boys and 90 girls. The children were arranged in seven groups -- one where no audience was specified and six audience groups varying by audience type. These audiences were professionals (a policeman, a teacher) and men in general, in each case either familiar or unfamiliar to the children.
The children were invited to draw three pictures of themselves -- one as a baseline, one happy and one sad.
The results of the study show that children's drawings of themselves are more expressive if the audience for those drawings is familiar to the child. Girls drew themselves more expressively than boys.
Some anomalies appeared in the results. For example, boys and girls performed differently in happy and sad drawings for the familiar and unfamiliar policeman groups. Girls showed more expressivity than boys in their happy drawings when the audience was a policeman they knew, whereas boys' sad drawings showed more expressivity than girls' in the unfamiliar policeman group. While the authors of the study suggest reasons for this, they see merit in further study.
They also suggest that this current study could be used as the basis for future studies investigating other professional and personal interactions, such as between a doctor and their patient.
The study was led by Dr Esther Burkitt, Reader in Developmental Psychology at the University of Chichester. She commented: "This current study builds on the findings of previous studies carried out by our team. Its findings have implications for the use of children's drawings by professionals as a means to supplement and improve verbal communication. Being aware that children may draw emotions differently for different professional groups may help practitioners to better understand what a child feels about the topics being drawn. This awareness could provide the basis of a discussion with the child about why they drew certain information for certain people. Our findings indicate that it matters for which profession children think they are drawing themselves, and whether they are familiar with a member of that profession."

Youth with disabilities have increased risk for technology-involved peer harassment

New research from the University of New Hampshire finds that while youths with disabilities, mental health diagnoses and special education services experience peer harassment or bullying at similar rates as other youth, understanding differences in how they experience it may lead to solutions that minimize risk to all youth.
According to the researchers, 30 percent of youth ages 10-20 surveyed reported experiencing some form of harassment victimization. Youths with a learning disability were more likely to experience harassment in person, while youths with a physical disability were more likely to experience harassment via technology. Depression was associated with peer harassment both in person and via technology.
"We hope these findings help schools consider the context in which these events occur and possible ways to minimize risk to all youths, including those with disabilities or those receiving special services in schools," the researchers said. In particular, the researchers believe that peer-to-peer programs that give youth leadership skills and opportunities to partner with school staff will be most successful.
"One of the most interesting things to come out of this research is an increased understanding of just how integrated technology is in the lives of youths," the researchers said. "We need to focus on helping youths learn how to take care of each other and feeling safe talking to trusted adults."

Train the brain to form good habits through repetition

You can hack your brain to form good habits -- like going to the gym and eating healthily -- simply by repeating actions until they stick, according to new psychological research involving the University of Warwick.
Dr Elliot Ludvig from Warwick's Department of Psychology, with colleagues at Princeton and Brown Universities, has created a model which shows that forming good (and bad) habits depends more on how often you perform an action than on how much satisfaction you get from it.
The new study is published in Psychological Review.
The researchers developed a computer simulation, in which digital rodents were given a choice of two levers, one of which was associated with the chance of getting a reward. The lever with the reward was the 'correct' one, and the lever without was the 'wrong' one.
The chance of getting a reward was swapped between the two levers, and the simulated rodents were trained to choose the 'correct' one.
When the digital rodents were trained for a short time, they managed to choose the new, 'correct' lever when the chance of reward was swapped. However, when they were trained extensively on one lever, the digital rats stuck to the 'wrong' lever stubbornly, even when it no longer had the chance for reward.
The rodents preferred to stick to the repeated action that they were used to, rather than have the chance for a reward.
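A toy version of the experiment can be written in a few lines. In the sketch below (parameters and update rules invented for illustration, in the spirit of the paper's model rather than its exact form), choices are driven by a reward-tracking value system plus a habit system that strengthens with sheer repetition:

```python
import math
import random

def run(training_trials, test_trials=100, alpha=0.2, beta=0.05, temp=0.5):
    value, habit = [0.0, 0.0], [0.0, 0.0]
    def step(rewarded_lever):
        drives = [v + h for v, h in zip(value, habit)]
        p_one = 1.0 / (1.0 + math.exp((drives[0] - drives[1]) / temp))  # softmax
        a = 1 if random.random() < p_one else 0
        reward = 1.0 if a == rewarded_lever else 0.0
        value[a] += alpha * (reward - value[a])            # value tracks reward
        for b in (0, 1):                                   # habit tracks repetition
            habit[b] += beta * ((1.0 if b == a else 0.0) - habit[b])
        return a
    for _ in range(training_trials):                       # phase 1: lever 0 pays
        step(rewarded_lever=0)
    switched = sum(step(rewarded_lever=1) for _ in range(test_trials))  # phase 2
    return switched / test_trials

random.seed(3)
print(f"brief training:     switched on {run(20):.0%} of test trials")
print(f"extensive training: switched on {run(2000):.0%} of test trials")
```

With a brief training phase the value system dominates, so the agent follows the reward when it moves to the other lever; after extensive training the accumulated habit keeps it pressing the old one.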
Dr Elliot Ludvig, Associate Professor in the University of Warwick's Department of Psychology and one of the paper's authors, commented:
"Much of what we do is driven by habits, yet how habits are learned and formed is still somewhat mysterious. Our work sheds new light on this question by building a mathematical model of how simple repetition can lead to the types of habits we see in people and other creatures. "
Dr Amitai Shenhav, Assistant Professor in Brown University's Department of Cognitive, Linguistic, and Psychological Sciences and one of the paper's authors, commented:
"Psychologists have been trying to understand what drives our habits for over a century, and one of the recurring questions is how much habits are a product of what we want versus what we do. Our model helps to answer that by suggesting that habits themselves are a product of our previous actions, but in certain situations those habits can be supplanted by our desire to get the best outcome."
This research opens up a better understanding of conditions like Obsessive Compulsive Disorder and Tic Disorder -- both of which are characterised by repeated behaviours.
The next stage will be to conduct similar experiments in a real-world scenario, observing human behaviour in action-based versus reward-based tests.

Giving high school students the tools to question classic literature

Generations of students have read Shakespeare and Hemingway for high school literature class, and Jeanne Dyches, assistant professor in Iowa State University's School of Education, would like students to question that tradition.
"As a field, we need to think about how our disciplines are advancing certain stories, silencing certain stories and socializing our students to think that what we're teaching them is neutral," Dyches said. "We need to have a conversation around why certain texts are taught year after year."
The titles often at the top of high school reading lists are considered "classics" or required for "cultural literacy," she said. However, the authors -- typically white European men -- do not reflect the diversity of students in the classroom. Dyches says assigning these texts without questioning issues of race or gender may exclude students who do not see themselves in the text, and make them feel their voices are not valued. This lack of questioning also normalizes the experiences of students who belong to dominant groups.
That is why Dyches encourages educators to consider the ideology ingrained in the texts they assign, and give students the tools to question what they are reading. For a new paper, published by Harvard Educational Review, Dyches spent time in a high school literature class teaching students to critically examine and question the discipline of English language arts.
Students reviewed more than a century's worth of national studies on the titles most commonly taught, national and local standards for recommended readings, as well as local and state curriculum policies. The high school was located in a predominantly white, suburban Midwest community.
Her research found the lessons sharpened students' awareness and recognition of messages of power and oppression within classic literature. By the end of the study, 77 percent of students -- a 27 percent increase -- recognized the politicized nature of teaching these traditional texts. Dyches says while most students were uncomfortable talking about oppression and injustice in a specific text, students of color demonstrated more awareness of these issues.
"We all have different experiences and reactions when we're having conversations that challenge us to question and consider race, gender and sexuality and all the messy intersections," Dyches said. "It's OK for students who have never heard these things to still be grappling with their own racial understanding and social-cultural identity. But we must still create opportunities for students to learn, wrestle with and apply new critical lenses to their educational experiences and the world around them."
Bland, yet timeless
Dyches surveyed students at the beginning and end of the study to understand their perceptions and relationships with the texts they were reading in literature class. In their responses, students described the texts as "bland and ineffective," adding that they "can't relate to any of it," yet they still considered the titles to be "timeless" and important "to improve upon their reading and writing skills." Dyches said students read the texts because they believed doing so would prepare them for college.
Their responses illustrate a commonly held belief about the "value" of classic literature, which is based more on tradition than literary standards, Dyches said. The problem is students and educators alike do not think to question why this is the case. In fact, Dyches says until she started researching social justice issues, she was unaware of the historical perspectives and ideologies she promoted through the texts she assigned.
Not only does she want to empower students to question what they're reading in class, Dyches also wants teachers to recognize the political context of their decisions. Educators, like all people, have different biases or beliefs, Dyches said. However, if teachers know this and address those biases in the classroom, she says that is a step in the right direction.
"We're all political beings and whether you recognize it or not, you're always teaching from your belief systems. It's essential to recognize and understand how our ideas or beliefs influence our teaching. I would argue you're being just as political when you assign 'Macbeth' as when you assign 'The Hate U Give,'" Dyches said.
Telling an untold story
After working with students to identify the political structures in literature, Dyches asked students to tell the story from a different perspective and bring marginalized voices into the conversation. She said the assignment challenged students and forced them to think about how the changes affected the story or how they comprehended the story.
For example, students rewrote "Romeo and Juliet" to address issues of race by making Romeo and Juliet an interracial couple. In the paper, Dyches describes how the students incorporated social media into the story as the characters used Twitter to write "love tweets," creating a national social movement documented through the hashtag #lovehasnobounds.
This is one way teachers can continue to teach classic literature -- by questioning how race and patriarchy influence the narrative, Dyches said. While the study focused specifically on literature, she would like to see students apply the same critical thinking skills to other disciplines and aspects of their lives.

Predicting gentrification in order to prevent it

A new research model allows urban planners, policymakers and community leaders to better focus resources to limit gentrification in vulnerable neighborhoods throughout the U.S.
By examining the "people, place and policy" factors that determine whether a neighborhood will gentrify or not, the model offers a better understanding of what fosters gentrification and what limits it. This process reveals the roles that government and policy can proactively play in limiting its most damaging impacts.
"This model is a new way of thinking about what influences gentrification and how to prevent it," said study co-author Jeremy Németh, PhD, associate professor of Urban and Regional Planning at the University of Colorado Denver. "This study debunks the argument that gentrification is an uncontrollable consequence of market forces, and outlines specific strategies where communities have real power to limit it."
Public agencies, nonprofits and city governments with limited resources can use publicly available data to model gentrification likelihood, establish early warning systems and then develop prevention strategies for their communities.
The study, "Toward a socio-ecological model of gentrification: How people, place, and policy shape neighborhood change," is published in the Journal of Urban Affairs.
"We're offering the model as a tool for city governments and anti-gentrification actors to be more proactive in targeting proven interventions in the most vulnerable neighborhoods," said Németh, who co-authored the paper with Alessandro Rigolon, assistant professor of Recreation, Sport, and Tourism at the University of Illinois at Urbana-Champaign.
For this research, gentrification is defined as the influx of middle- and upper-class residents in a spatially concentrated fashion, which often results in the displacement of long-time residents, who disproportionately are poorly educated, lower-income people of color.
The researchers tested the predictive gentrification model in the five most populous U.S. regions: Chicago, Los Angeles, New York City, San Francisco and Washington, D.C.
Three "place" factors -- access to jobs, proximity to transit stations and the quality of housing stock -- emerged as strong predictors of a neighborhood's likelihood to gentrify across all regions. As they heavily influence these place factors, this points to the critical role urban planners play in shaping gentrification forces.
The diversity of a neighborhood is the "people" factor with the strongest predictive value, the study found.
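In practice, an agency could fit such a model with standard tools and publicly available data. The sketch below uses invented tract data and a deliberately simplified feature set, not the authors' dataset or model specification:

```python
from sklearn.linear_model import LogisticRegression

# One row per census tract. Features (all scaled 0-1): job access, transit
# proximity, housing-stock quality, racial/ethnic diversity index.
X = [[0.9, 0.8, 0.7, 0.6], [0.8, 0.9, 0.6, 0.7], [0.7, 0.6, 0.8, 0.8],
     [0.2, 0.1, 0.3, 0.2], [0.3, 0.2, 0.4, 0.1], [0.1, 0.3, 0.2, 0.3]]
y = [1, 1, 1, 0, 0, 0]   # 1 = the tract gentrified over the following decade

model = LogisticRegression().fit(X, y)
new_tract = [[0.8, 0.7, 0.9, 0.7]]   # a tract to screen for early warning
print(f"estimated gentrification risk: {model.predict_proba(new_tract)[0][1]:.2f}")
```

Tracts flagged with a high predicted risk would then be candidates for the kinds of proactive interventions the authors describe.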
"We know from years of research on implicit bias that if a neighborhood has a very high share of Black or Latinx residents, it is much less likely to gentrify than one with a mix of several racial or ethnic groups," said Németh.
He and Rigolon said they weren't surprised by the finding that racial/ethnic diversity is a strong predictor of gentrification.
Although "policy" factors weren't tested in this national-level study, several recent studies in California have shown that local policy strategies such as rent controls, community land trusts and anti-eviction ordinances can slow gentrification.
This first-of-its-kind study offers communities a model to identify the neighborhoods most vulnerable to gentrification and a roadmap to implement proven anti-gentrification strategies before it's too late.

China not 'walking the walk' on methane emissions

Chinese regulations on coal mining have not curbed the nation's growing methane emissions over the past five years as intended, says new research from a team led by Carnegie's Scot Miller and Anna Michalak. Their findings are published in Nature Communications.
China is the world's largest producer and consumer of coal, which is used to generate more than 70 percent of its electricity. It also emits more methane than any other nation, and the coal sector accounts for about 33 percent of this total. These emissions occur when underground pockets of methane gas are released during the mining process.
In the atmosphere, methane acts as a greenhouse gas, trapping heat and contributing to climate change. The detrimental impacts of climate change include increased heat waves, longer droughts, more-severe hurricanes, and a greater number of animal extinctions, leaving many policymakers around the world scrambling to reduce emissions.
In China, regulations to reduce methane emissions from coal mining took full effect in 2010 and required methane to be captured or to be converted into carbon dioxide. The team of researchers set out to use atmospheric modeling and data from Japan's GOSAT satellite to evaluate whether these new rules actually curbed Chinese methane emissions.
"Our study indicates that, at least in terms of methane emissions, China's government is 'talking the talk,' but has not been able to 'walk the walk,'" explained lead author Miller, who is now at Johns Hopkins University.
Although the goal stated in China's 12th Five Year Plan was to remove or convert 5.6 million metric tons (5.6 teragrams) of methane from coal mines by 2015, the team found that methane emissions instead rose by about 1.1 million metric tons (1.1 teragrams) per year between 2010 and 2015. This is in line with the nation's annual increases in methane emissions going back to 2000.
Overall, Chinese methane emissions increased by 50 percent from 2000 to 2015. This could account for as much as 24 percent of the total global increase in methane emissions over the same period.
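A quick back-of-the-envelope check, using only the figures quoted above, shows how the observed rise compares with the removal target:

```python
# Arithmetic on the press release's numbers, not new data.
rise_per_year_tg = 1.1    # observed increase, teragrams of methane per year
years = 5                 # 2010-2015
goal_removed_tg = 5.6     # 12th Five Year Plan removal target by 2015

cumulative_rise = rise_per_year_tg * years
print(f"cumulative rise 2010-2015: ~{cumulative_rise:.1f} Tg "
      f"vs. a planned removal of {goal_removed_tg} Tg")
```

In other words, over those five years emissions rose by roughly as much methane as the plan had aimed to remove.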
"China had an aspiration and an opportunity to reduce its release of coal-mining-related methane, but our analysis of satellite data shows business-as-usual emissions of this harmful greenhouse gas," Michalak said. "It's therefore unlikely that China's ambitious goals for reducing methane emissions from coal mining were met."
Infrastructure and technology challenges may be hampering the nation's ability to achieve its emissions reduction goals, the authors explained.
For example, the lack of pipelines to transport methane harvested from the remote, mountainous mining areas to more populated regions presents a challenge. Likewise, methane capture tools are poorly suited to the conditions where coal seams are found in China, resulting in a low-quality product.

Children looking at screens in darkness before bedtime are at risk of poor sleep

Pre-teens who use a mobile phone or watch TV in the dark an hour before bed are at greater risk of not getting enough sleep than those who use these devices in a lit room or do not use them at all before bedtime.
The study by researchers from the University of Lincoln, Imperial College London, Birkbeck, University of London and the Swiss Tropical and Public Health Institute in Basel, Switzerland is the first to analyse the pre-sleep use of media devices with screens alongside the impact of room lighting conditions on sleep in pre-teens.
It found that night-time use of phones, tablets and laptops is consistently associated with poor sleep quality, insufficient sleep, and poor perceived quality of life. Insufficient sleep has also been shown to be associated with impaired immune responses, depression, anxiety and obesity in children and adolescents.
Data was collected from 6,616 adolescents aged between 11 and 12, and more than 70 per cent reported using at least one screen-based device within one hour of their bedtime. They were asked to self-report a range of factors including their device use in both lit and darkened rooms, their weekday and weekend bedtimes, how difficult they found it to go to sleep, and their wake-up times.
The results showed that those who used a phone or watched television in a room with a light on were 31 per cent more likely to get less sleep than those who didn't use a screen. The likelihood increased to 147 per cent if the same activity took place in the dark.
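Because these figures are relative likelihoods, a short calculation with only the quoted percentages makes the comparison concrete:

```python
# Arithmetic on the percentages quoted in the text, not new data.
baseline = 1.0               # reference: no screen use before bed
lit_room = baseline * 1.31   # 31% more likely to get insufficient sleep
dark_room = baseline * 2.47  # 147% more likely

print(f"screen use in a lit room: {lit_room:.2f}x the baseline risk")
print(f"screen use in darkness:   {dark_room:.2f}x the baseline risk")
print(f"darkness vs lit room:     {dark_room / lit_room:.2f}x")
```

On the study's figures, then, using a screen in darkness carries nearly twice the risk of insufficient sleep as using one in a lit room.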
It has been reported that globally, 90 per cent of adolescents are not sleeping the recommended nine to 11 hours per night, which has coincided with an increase in the use of screen-based media devices. In the UK alone, it is estimated that 98 per cent of 12 to 15 year olds watch television and over 90 per cent use mobile phones at home.
Previous studies have shown that sufficient sleep duration and quality are vital in childhood to maintain physical and mental development. Sleep is also crucial for cognitive processes and a lack of sufficient sleep has been directly related to poor academic performance.
Lead author, Dr Michael Mireku, a researcher at the University of Lincoln's School of Psychology said: "While previous research has shown a link between screen use and the quality and length of young people's sleep, ours is the first study to show how room lighting can further influence this.
"Our findings are significant not only for parents but for teachers, health professionals and adolescents themselves. We would recommend that these groups are made aware of the potential issues surrounding screen use during bedtime including insufficient sleep and poor sleep quality."

Want to squelch fake news? Let the readers take charge

Would you like to rid the internet of false political news stories and misinformation? Then consider using -- yes -- crowdsourcing.
That's right. A new study co-authored by an MIT professor shows that crowdsourced judgments about the quality of news sources may effectively marginalize false news stories and other kinds of online misinformation.
"What we found is that, while there are real disagreements among Democrats and Republicans concerning mainstream news outlets, basically everybody -- Democrats, Republicans, and professional fact-checkers -- agree that the fake and hyperpartisan sites are not to be trusted," says David Rand, an MIT scholar and co-author of a new paper detailing the study's results.
Indeed, using a pair of public-opinion surveys to evaluate 60 news sources, the researchers found that Democrats trusted mainstream media outlets more than Republicans do -- with the exception of Fox News, which Republicans trusted far more than Democrats did. But when it comes to lesser-known sites peddling false information, as well as "hyperpartisan" political websites (the researchers include Breitbart and Daily Kos in this category), both Democrats and Republicans show a similar disregard for such sources. Trust levels for these alternative sites were low overall. For instance, in one survey respondents were asked to rate news outlets on a trust scale from 1 to 5: hyperpartisan websites received a trust rating of only 1.8 from both Republicans and Democrats, while fake news sites received 1.7 from Republicans and 1.9 from Democrats. By contrast, mainstream media outlets received a trust rating of 2.9 from Democrats but only 2.3 from Republicans; Fox News, however, received 3.2 from Republicans, compared to 2.4 from Democrats.
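The mechanics of such a crowdsourced filter are simple to sketch. The ratings below are invented stand-ins for the survey data, and the down-ranking rule is illustrative rather than anything the authors propose:

```python
from collections import defaultdict
from statistics import mean

# (respondent's party, outlet, trust rating on the 1-5 scale) -- invented data
RATINGS = [("D", "mainstream_a", 3), ("R", "mainstream_a", 2),
           ("D", "fake_news_x", 2), ("R", "fake_news_x", 2),
           ("D", "hyperpartisan_y", 2), ("R", "hyperpartisan_y", 1)]

by_outlet_party = defaultdict(list)
for party, outlet, score in RATINGS:
    by_outlet_party[(outlet, party)].append(score)

# Flag outlets that BOTH sides rate poorly -- the cross-party agreement the
# study found for fake and hyperpartisan sites.
for outlet in sorted({o for _, o, _ in RATINGS}):
    d = mean(by_outlet_party[(outlet, "D")])
    r = mean(by_outlet_party[(outlet, "R")])
    verdict = "downrank" if max(d, r) < 2.5 else "keep"
    print(f"{outlet}: D={d:.1f}, R={r:.1f} -> {verdict}")
```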
The study adds a twist to a high-profile issue. False news stories have proliferated online in recent years, and social media sites such as Facebook have received sharp criticism for giving them visibility. Facebook also faced pushback for a January 2018 plan to let readers rate the quality of online news sources. But the current study suggests such a crowdsourcing approach could work well, if implemented correctly.
"If the goal is to remove really bad content, this actually seems quite promising," Rand says.
The paper, "Fighting misinformation on social media using crowdsourced judgments of news source quality," is being published in Proceedings of the National Academy of Sciences. The authors are Gordon Pennycook of the University of Regina, and Rand, an associate professor in the MIT Sloan School of Management.
To promote, or to squelch?
To perform the study, the researchers conducted two online surveys that had roughly 1,000 participants each, one on Amazon's Mechanical Turk platform, and one via the survey tool Lucid. In each case, respondents were asked to rate their trust in 60 news outlets, about a third of which were high-profile, mainstream sources.
The second survey's participants had demographic characteristics resembling those of the country as a whole -- including partisan affiliation. (The researchers weighted Republicans and Democrats equally in the survey to avoid any perception of bias.) That survey also measured the general audience's evaluations against a set of judgments by professional fact-checkers, to see whether the larger audience's judgments were similar to the opinions of experienced researchers.
But while Democrats and Republicans regarded prominent news outlets differently, that party-based mismatch largely vanished when it came to the other kinds of news sites, where, as Rand says, "By and large we did not find that people were really blinded by their partisanship."
In this vein, Republicans trusted MSNBC more than Breitbart, even though many of them regarded it as a left-leaning news channel. Meanwhile, Democrats, although they trusted Fox News less than any other mainstream news source, trusted it more than left-leaning hyperpartisan outlets (such as Daily Kos).
Moreover, because the respondents generally distrusted the more marginal websites, there was significant agreement among the general audience and the professional fact-checkers. (As the authors point out, this also challenges claims about fact-checkers having strong political biases themselves.)
That means the crowdsourcing approach could work especially well in marginalizing false news stories -- for instance by building audience judgments into an algorithm ranking stories by quality. Crowdsourcing would probably be less effective, however, if a social media site were trying to build a consensus about the very best news sources and stories.
Where Facebook failed: Familiarity?
If the new study by Rand and Pennycook rehabilitates the idea of crowdsourcing news source judgments, their approach differs from Facebook's stated 2018 plan in one crucial respect. Facebook was only going to let readers who were familiar with a given news source give trust ratings.
But Rand and Pennycook conclude that this method would indeed build bias into the system, because people are more skeptical of news sources they have less familiarity with -- and there is likely good reason why most people are not acquainted with many sites that run fake or hyperpartisan news.
"The people who are familiar with fake news outlets are, by and large, the people who like fake news," Rand says. "Those are not the people that you want to be asking whether they trust it."
Thus for crowdsourced judgments to be a part of an online ranking algorithm, there might have to be a mechanism for using the judgments of audience members who are unfamiliar with a given source. Or, better yet, Pennycook and Rand suggest, users could be shown sample content from each news outlet before producing trust ratings.
For his part, Rand acknowledges one limit to the overall generalizability of the study: The dynamics could be different in countries that have more limited traditions of freedom of the press.
"Our results pertain to the U.S., and we don't have any sense of how this will generalize to other countries, where the fake news problem is more serious than it is here," Rand says.
All told, Rand says, he also hopes the study will help people look at America's fake news problem with something less than total despair.
"When people talk about fake news and misinformation, they almost always have very grim conversations about how everything is terrible," Rand says. "But a lot of the work Gord [Pennycook] and I have been doing has turned out to produce a much more optimistic take on things."

Extratropical volcanoes influence climate more than assumed

[Image: The eruption of Sarychev Peak in 2009, seen from the ISS. The eruption transported sulfur gases into the stratosphere.]
In recent decades, extratropical eruptions including Kasatochi (Alaska, USA, 2008) and Sarychev Peak (Russia, 2009) have injected sulfur into the lower stratosphere. The climatic forcing of these eruptions has however been weak and short-lived. So far, scientists have largely assumed this to be a reflection of a general rule; that extratropical eruptions lead to weaker forcing than their tropical counterparts. Researchers from the GEOMAR Helmholtz Centre for Ocean Research Kiel, the University of Oslo, the Max Planck Institute for Meteorology in Hamburg together with colleagues from Switzerland, the UK and the USA now contradict this assumption in the international journal Nature Geoscience.
"Our investigations show that many extratropical volcanic eruptions in the past 1250 years have caused pronounced surface cooling over the Northern Hemisphere, and in fact, extratropical eruptions are actually more efficient than tropical eruptions in terms of the amount of hemispheric cooling in relation to the amount of sulfur emitted by the eruptions," says Dr. Matthew Toohey from GEOMAR, first author of the current study.
Large-scale cooling after volcanic eruptions occurs when volcanoes inject large quantities of sulfur gases into the stratosphere, a layer of the atmosphere which starts at about 10-15 kilometers height. There the sulfur gases produce a sulfuric aerosol haze that persists for months or years. The aerosols reflect a portion of incoming solar radiation, which can no longer reach the lower layers of the atmosphere and the Earth's surface.
Until now, the assumption was that aerosols from volcanic eruptions in the tropics have a longer stratospheric lifetime because they have to migrate to mid or high latitudes before they can be removed. As a result they would have a greater effect on the climate. Aerosols from eruptions at higher latitudes would be removed from the atmosphere more quickly.
The recent extratropical eruptions, which had minimal but measurable effects on the climate, fit this picture. However, these eruptions were much weaker than the 1991 tropical eruption of Mount Pinatubo. To quantify the climate impact of extratropical vs. tropical eruptions, Dr. Toohey and his team compared new long-term reconstructions of volcanic stratospheric sulfur injection from ice cores with three reconstructions of Northern Hemisphere summer temperature from tree rings extending back to 750 CE. Surprisingly, the authors found that extratropical explosive eruptions produced much stronger hemispheric cooling in proportion to their estimated sulfur release than tropical eruptions.
To help understand these results, Dr. Toohey and his team performed simulations of volcanic eruptions in the mid to high latitudes with sulfur amounts and injection heights equal to that of Pinatubo. They found that the lifetime of the aerosol from these extratropical explosive eruptions was only marginally smaller than for tropical eruptions. Furthermore, the aerosol was mostly contained within the hemisphere of eruption rather than globally, which enhanced the climate impact within the hemisphere of eruption.
The study goes on to show the importance of injection height within the stratosphere on the climate impact of extratropical eruptions. "Injections into the lowermost extratropical stratosphere lead to short-lived aerosol, while those with stratospheric heights similar to Pinatubo and the other large tropical eruptions can lead to aerosol lifetimes roughly similar to the tropical eruptions," says co-author Prof. Dr. Kirstin Krüger from the University of Oslo.
The results of this study will help researchers better quantify the degree to which volcanic eruptions have impacted past climate variability. It also suggests that future climate will be affected by explosive extratropical eruptions. "There have been relatively few large explosive eruptions recorded in the extratropics compared to the tropics over the last centuries, but they definitely do happen," says Dr. Toohey. The strongest Northern Hemisphere cooling episode of the past 2500 years was initiated by an extratropical eruption in 536 CE. This new study helps explain how the 536 CE eruption could have produced such strong cooling.

Molecular analysis of anchiornis feather gives clues to origin of flight

An international team of researchers has performed molecular analysis on fossil feathers from a small, feathered dinosaur from the Jurassic. Their research could aid scientists in pinpointing when feathers evolved the capacity for flight during the dinosaur-bird transition.
Anchiornis was a small, feathered, four-winged dinosaur that lived in what is now China around 160 million years ago -- almost 10 million years before Archaeopteryx, the first recognized bird. A team of researchers from the Nanjing Institute of Geology and Paleontology, North Carolina State University, and the University of South Carolina analyzed Anchiornis feathers to see how they differed at the molecular level from those of younger fossil birds and modern birds.
"Modern bird feathers are composed primarily of beta-keratin (β-keratin), a protein also found in skin, claws, and beaks of reptiles and birds. Feathers differ from these other β-keratin containing tissues, because the feather protein is modified in a way that makes them more flexible," says Mary Schweitzer, professor of biological sciences at NC State with a joint appointment at the North Carolina Museum of Natural Sciences and co-author of a paper describing the research.
"At some point during the evolution of feathers, one of the β-keratin genes underwent a deletion event, making the resultant protein slightly smaller. This deletion changed the biophysics of the feather to something more flexible -- a requirement for flight. If we can pinpoint when, and in what organisms, that deletion event occurred, we will have a better grasp on when flight evolved during the transition from dinosaurs to birds."
The researchers, led by Yanhong Pan, a visiting researcher from the Nanjing Institute, examined fossilized feathers from Anchiornis, using high-resolution electron microscopy, as well as multiple chemical and immunological techniques to determine the molecular composition of the feathers. They did the same to other feathers from the Mesozoic and Cenozoic eras, as well as other β-keratin tissues not expected to show this deletion, then compared results with modern bird feathers and tissues.
They found that Anchiornis feathers were composed of both β-keratins and alpha-keratins (α-keratins), a protein all terrestrial vertebrates have, including mammals. This was surprising because α-keratin is present in only small amounts in modern feathers. In addition to co-expressing both keratin proteins, the Anchiornis feathers had already undergone the deletion event that sets feathers apart from other tissues.
"Molecular clocks, which scientists use as benchmarks for evolutionary and genetic divergence, predict that the deletion, and thus functional flight feathers, evolved around 145 million years ago," Schweitzer says. "Anchiornis is millions of years older, yet has the shortened protein form. This work shows that we can utilize molecular fossil data to root molecular clocks and improve their accuracy -- we can start to put timing on genetic events in the dinosaur-bird transition via absence or presence of these two keratins. The data also give us more information about how feathers evolved to enable flight."
The work appears in Proceedings of the National Academy of Sciences. Pan is lead author. Wenxia Zheng and Elena Schroeter of NC State and Roger Sawyer from the University of South Carolina also contributed to the work, which was supported in part by the National Science Foundation and the Packard Foundation.

Ancient Mongolian skull is the earliest modern human yet found in the region

A much debated ancient human skull from Mongolia has been dated and genetically analysed, showing that it is the earliest modern human yet found in the region, according to new research from the University of Oxford. Radiocarbon dating and DNA analysis have revealed that the only Pleistocene hominin fossil discovered in Mongolia, initially called Mongolanthropus, is in reality a modern human who lived approximately 34,000 to 35,000 years ago.
The skullcap, found in the Salkhit Valley in northeast Mongolia, is, to date, the only Pleistocene hominin fossil found in the country.
The skullcap is mostly complete and includes the brow ridges and nasal bones. The presence of archaic or ancient features has led in the past to the specimen being linked with uncharacterized archaic hominin species, such as Homo erectus and Neanderthals. Previous research suggested ages for the specimen ranging from the Early Middle Pleistocene to the terminal Late Pleistocene.
The Oxford team re-dated the specimen to between 34,950 and 33,900 years ago. This is around 8,000 years older than the initial radiocarbon dates obtained on the same specimen.
To make this discovery, the Oxford team employed a new optimised technique for radiocarbon dating of heavily contaminated bones. This method relies on extracting just one of the amino acids from the collagen present in the bone. The amino acid hydroxyproline (HYP), which accounts for 13% of the carbon in mammalian collagen, was targeted by the researchers. Dating this amino acid allows for the drastic improvement in the removal of modern contaminants from the specimens.
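A worked example shows why removing contamination can shift a date so much. Using the standard conventional-age relation t = -8033 * ln(F14C), where 8033 years is the Libby mean life, even a few percent of modern carbon drags an age of this magnitude thousands of years too young (the contamination fraction below is illustrative, not a value measured in the study):

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; basis of the conventional radiocarbon age

def radiocarbon_age(f14c):
    """Conventional radiocarbon age from the fraction of modern carbon."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

true_age = 34500
f_true = math.exp(-true_age / LIBBY_MEAN_LIFE)   # ~1.4% of modern carbon left
f_contaminated = 0.97 * f_true + 0.03 * 1.0      # add 3% modern contamination

print(f"clean sample:           {radiocarbon_age(f_true):,.0f} years BP")
print(f"with 3% modern carbon:  {radiocarbon_age(f_contaminated):,.0f} years BP")
```

Because so little radiocarbon survives in a sample this old, a small modern admixture dominates the measurement, which is why isolating a single amino acid such as hydroxyproline makes such a difference.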
The new and reliable radiocarbon date obtained for the specimen shows that this individual dates to the same period as the Early Upper Palaeolithic stone tool industry in Mongolia, which is usually associated with modern humans. The age is later than the earliest evidence for anatomically modern humans in greater Eurasia, which could be in excess of 100,000 years in China according to some researchers.
This new result also suggests that there was still a significant amount of unremoved contamination in the sample during the original radiocarbon measurements. Additional analyses performed in collaboration with scientists at the University of Pisa (Italy) confirmed that the sample was heavily contaminated by the resin that had been used to cast the specimen after its discovery.
"The research we have conducted shows again the great benefits of developing improved chemical methods for dating prehistoric material that has been contaminated, either in the site after burial, or in the museum or laboratory for conservation purposes." said Dr Thibaut Devièse first author on the new paper and leading the method developments in compound specific analysis at the University of Oxford. "Robust sample pretreatment is crucial in order to build reliable chronologies in archaeology."
DNA analyses were also performed on the hominin bones by Professor Svante Pääbo's team at the Max-Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Diyendo Massiliani and colleagues reconstructed the complete mitochondrial genome of the specimen. It falls within a group of modern human mtDNAs (haplogroup N) that is widespread in Eurasia today, confirming the view of some researchers that the cranium is indeed a modern human. Further nuclear DNA work is underway to shed further light on the genetics of the cranium.
"This enigmatic cranium has puzzled researchers for some time," said Professor Tom Higham, who leads the PalaeoChron research group at the University of Oxford. "A combination of cutting-edge science, including radiocarbon dating and genetics, has now shown that this is the remains of a modern human, and the results fit perfectly within the archaeological record of Mongolia, which links modern humans to the Early Upper Palaeolithic industry in this part of the world."

Deep history of archaic humans in southern Siberia

Oxford University scientists have played a key role in new research identifying the earliest evidence of two archaic human groups -- Denisovans and Neanderthals -- in southern Siberia.
Professor Tom Higham and his team at the Oxford Radiocarbon Accelerator Unit at the University of Oxford worked in collaboration with a multi-disciplinary team from the UK, Russia, Australia, Canada and Germany, on the detailed investigation over the course of five years, to date the archaeological site of Denisova cave. Situated in the foothills of Siberia's Altai Mountains, it is the only site in the world known to have been occupied by both archaic human groups (hominins) at various times.
The two new studies, published in Nature, now put a timeline on when Neanderthals and their enigmatic cousins, the Denisovans, were present at the site and the environmental conditions they faced before going extinct.
Denisova cave first came to worldwide attention in 2010, with the publication of the genome obtained from the fingerbone of a girl belonging to a group of humans not previously identified in the palaeoanthropological record; the Denisovans. Further revelations followed on the genetic history of Denisovans and Altai Neanderthals, based on analysis of the few and fragmentary hominin remains. Last year, a bone fragment discovered by researchers at Oxford's Research Laboratory for Archaeology and the History of Art and the University of Manchester, yielded the genome of the daughter of Neanderthal and Denisovan parents -- the first direct evidence of interbreeding between two archaic hominin groups. But reliable dates for the hominin fossils recovered from the cave have remained elusive, as have dates for the DNA, artefacts, and animal and plant remains retrieved from the sediments.
Excavations for the past 40 years led by Professors Anatoly Derevianko and Michael Shunkov from the Institute of Archaeology and Ethnography (Siberian Branch of the Russian Academy of Sciences) in Novosibirsk, revealed the longest archaeological sequence of Siberia.
In the new research, the Oxford team obtained fifty radiocarbon ages from bone, tooth and charcoal fragments recovered from the upper layers of the site, as part of the ERC-funded 'PalaeoChron' project. In addition to these, more than 100 optical ages were obtained for the cave sediments, most of which are too old for radiocarbon dating, by researchers at the University of Wollongong in Australia. A minimum age for the bone fragment of mixed Neanderthal/Denisovan ancestry was also obtained by uranium-series dating by another Australian team. "This is the first time we are able to confidently assign an age to the whole archaeological sequence of the cave and its contents," said Professor Higham.
To determine the most probable ages of the archaic hominin fossils, a novel Bayesian model was developed by the Oxford team that combined several of these dates with information on the stratigraphy of the deposits and genetic ages for the Denisovan and Neanderthal fossils relative to each other -- the latter based on the number of substitutions in the mitochondrial DNA sequences, which were analysed by the Max Planck Institute for Evolutionary Anthropology in Germany.
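The core idea of such Bayesian chronological modelling can be sketched simply: treat each layer's date as a noisy measurement and impose the constraint that deeper layers must be older. The toy rejection sampler below uses invented measurements and is far simpler than the team's actual model, but it shows how the ordering constraint pulls a conflicting date back into line:

```python
import random

random.seed(7)
# (measured age in ka, 1-sigma error), ordered from the upper to the lower
# layer; note the middle measurement conflicts with the stratigraphy.
MEASUREMENTS = [(45, 4), (52, 5), (49, 6)]

def sample_posterior(n_draws=200000):
    accepted = []
    for _ in range(n_draws):
        ages = [random.gauss(mu, sigma) for mu, sigma in MEASUREMENTS]
        if ages[0] < ages[1] < ages[2]:   # deeper layers must be older
            accepted.append(ages)
    return accepted

draws = sample_posterior()
for layer, vals in enumerate(zip(*draws)):
    print(f"layer {layer}: posterior mean {sum(vals) / len(vals):.1f} ka")
```

The posterior means respect the stratigraphic order even though the raw measurements do not; the real model additionally folds in the genetic branch-length information described above.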
The novel Bayesian age model used to improve the age estimates for the hominin fossils "incorporates all of the dating evidence available for these small and isolated fossils, which can sometimes be displaced after deposition in a cave sequence," said Dr Katerina Douka (Max Planck Institute for the Science of Human History, Germany), lead author of the study that reports the new radiocarbon dates and the human fossil age estimates.
"This new chronology for Denisova Cave provides a timeline for the wealth of data generated by our Russian colleagues on the archaeological and environmental history of the cave over the past three glacial-interglacial cycles" said lead author of the optical dating study, Professor Zenobia Jacobs of the University of Wollongong in Australia.
The new studies show that the cave was occupied by Denisovans from at least 200,000 years ago, with stone tools in the deepest deposits suggesting human occupation may have begun as early as 300,000 years ago. Neanderthals visited the site between 200,000 and 100,000 years ago, with "Denny," the girl of mixed ancestry, revealing that the two groups of hominins met and interbred around 100,000 years ago.
Most of the evidence for Neanderthals at Denisova Cave falls within the last interglacial period around 120,000 years ago, when the climate was relatively warm, whereas Denisovans survived through much colder periods, too, before disappearing around 50,000 years ago.
Modern humans were present in other parts of Asia by this time, but the nature of any encounters between them and Denisovans remains open to speculation in the absence of any fossil or genetic traces of modern humans at the site.
The Oxford team also identified the earliest evidence thus far in northern Eurasia for the appearance of bone points and pendants made of animal teeth that are usually associated with modern humans and signal the start of the Upper Palaeolithic. These date to between 43,000 and 49,000 years ago.
"While these new studies have lifted the veil on some of the mysteries of Denisova Cave, other intriguing questions remain to be answered by further research and future discoveries," said Professor Richard 'Bert' Roberts, a co-author on the two papers.
Professor Higham commented: "It is an open question as to whether Denisovans or modern humans made these personal ornaments found in the cave. We are hoping that in due course the application of sediment DNA analysis might enable us to identify the makers of these items, which are often associated with symbolic and more complex behaviour in the archaeological record."

Climate change may increase congenital heart defects

Rising temperatures stemming from global climate change may increase the number of infants born with congenital heart defects (CHD) in the United States over the next two decades, resulting in as many as 7,000 additional cases over an 11-year period in eight representative states (Arkansas, Texas, California, Iowa, North Carolina, Georgia, New York and Utah), according to new research in the Journal of the American Heart Association, the Open Access Journal of the American Heart Association/American Stroke Association.
"Our findings underscore the alarming impact of climate change on human health and highlight the need for improved preparedness to deal the anticipated rise in a complex condition that often requires lifelong care and follow-up," said study senior author Shao Lin, M.D., Ph.D., M.P.H., professor in the School of Public Health at University of Albany, New York. "It is important for clinicians to counsel pregnant women and those planning to become pregnant on the importance of avoiding extreme heat, particularly 3-8 weeks post conception, the critical period of pregnancy."
Congenital heart defects are the most common birth defect in the United States, affecting some 40,000 newborns each year, according to the Centers for Disease Control and Prevention.
"Our results highlight the dramatic ways in which climate change can affect human health and suggest that pediatric heart disease stemming from structural heart malformations may become an important consequence of rising temperatures," said the leading author Wangjian Zhang, M.D., Ph.D., a post-doctoral research fellow at the University of Albany.
The projected increase in children with congenital heart disease will place greater demands on the medical community caring for newborns with heart disease in their infancy and well beyond.
While previous research has found a link between maternal heat exposure and the risk for heart defects in the offspring, the precise mechanisms remain unclear. Studies in animals suggest that heat may cause fetal cell death or interfere with several heat-sensitive proteins that play a critical role in fetal development, the researchers say.
The estimates in the current study are based on projections of the number of births between 2025 and 2035 in the United States and the anticipated rise in average maternal heat exposure across different regions as a result of global climate change. The greatest percentage increases in the number of newborns with CHD will occur in the Midwest, followed by the Northeast and South.
In their analysis, the researchers used climate change forecasts obtained from NASA and the Goddard Institute for Space Studies. They improved the spatial and temporal resolutions of the forecasts, simulated changes in daily maximum temperatures by geographic region and then calculated the anticipated maternal heat exposure per region for spring and summer. For each pregnancy and region, they defined three exposure indicators (see the sketch below):
  • the count of excessively hot days (EHD): the number of days exceeding the 90th (EHD90) or 95th (EHD95) percentile for the same season of the baseline period in the same region;
  • the frequency of extreme heat events (EHE): the number of occurrences of at least three consecutive EHD90 days or two consecutive EHD95 days; and
  • the duration of EHE: the number of days in the longest EHE within the 42-day critical period.
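The sketch below is a minimal Python rendering of these three definitions for a single pregnancy and region, as I read them from the description above; it is not the study's code, and the function names, inputs and thresholds are illustrative assumptions.

```python
# Hedged sketch of the three heat-exposure indicators defined above.
# Not the study's code: names, inputs and threshold values are assumed.
import numpy as np

def run_lengths(mask):
    """Lengths of consecutive-True runs in a boolean sequence."""
    lengths, n = [], 0
    for flag in mask:
        if flag:
            n += 1
        elif n:
            lengths.append(n)
            n = 0
    if n:
        lengths.append(n)
    return lengths

def exposure_indicators(tmax, p90, p95):
    """tmax: 42 daily maximum temperatures for the critical window;
    p90/p95: seasonal baseline-period percentiles for the region."""
    hot90 = np.asarray(tmax) > p90
    hot95 = np.asarray(tmax) > p95
    # An event meeting both EHE definitions counts under each in this sketch.
    ehe_runs = ([r for r in run_lengths(hot90) if r >= 3] +
                [r for r in run_lengths(hot95) if r >= 2])
    return {
        "EHD90": int(hot90.sum()),                 # count of excessively hot days
        "EHD95": int(hot95.sum()),
        "EHE_count": len(ehe_runs),                # frequency of extreme heat events
        "EHE_max_days": max(ehe_runs, default=0),  # duration of the longest event
    }

# Toy example: a 42-day window containing one 4-day heat spell.
tmax = [28.0] * 20 + [34.0, 35.0, 36.0, 35.0] + [27.0] * 18
print(exposure_indicators(tmax, p90=32.0, p95=34.5))
```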
To obtain a parameter for congenital heart defect (CHD) burden projections, the investigators used data from an earlier study, also led by Lin, which gauged the risk of congenital heart defects based on maternal heat exposure for births occurring between 1997 and 2007. The researchers then integrated the heat-CHD associations identified during the baseline period with the projected increases in maternal heat exposure over a period between 2025 and 2035 to estimate the potential changes in CHD burden.
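To make the projection step concrete, the back-of-the-envelope arithmetic below shows how a baseline heat-CHD association can be combined with a projected rise in exposure. Apart from the CDC's 40,000-cases-per-year figure cited above, every number here is a placeholder, not a value from the study.

```python
# Illustrative burden projection (assumed numbers, not the study's values).
projected_births = 1_000_000             # hypothetical births in one region, 2025-2035
baseline_chd_risk = 40_000 / 3_800_000   # ~1%: CDC's 40,000 cases over roughly 3.8M annual US births
rr_per_ehd = 1.01                        # assumed relative risk per extra excessively hot day
extra_ehd = 5                            # assumed additional hot days in the critical window

# Risk is assumed to scale multiplicatively with each additional day of exposure.
excess_risk = baseline_chd_risk * (rr_per_ehd ** extra_ehd - 1.0)
excess_cases = projected_births * excess_risk
print(f"projected additional CHD cases: {excess_cases:.0f}")  # ~537 in this toy example
```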
"Although this study is preliminary, it would be prudent for women in the early weeks of pregnancy to avoid heat extremes similar to the advice given to persons with cardiovascular and pulmonary disease during heart spells," said Shao Lin, M.D., Ph.D., M.P.H., associate director of environmental health services, University at Albany, State University of New York.

Prairie strips transform farmland conservation

Modern agriculture's large monoculture fields grow a lot of corn and soybeans, planted annually. The outputs from row crops can be measured both in dollars paid in the market and in non-market costs, known as externalities. Soil, nutrients, groundwater, pollinators, wildlife diversity, and habitat (among other things) can be lost when crop yields are maximized.
Now it appears that prairie strips have an extraordinary power to change this pattern.
A prairie strip is pretty much what it sounds like: a strip of diverse herbaceous vegetation running through a farm's row crops. In the American Midwest, chances are the soil that now supports crops was covered in prairie before cultivation. Prairie plants are a mixture of native grasses, wildflowers, and other stiff-stemmed plants. They have deep roots that draw water and nutrients from far below the surface. They are perennials, returning to grow each spring.
"Research shows that areas of native prairie planted in the right places in a farm field can provide benefits that far outweigh losses from converting a small portion of a crop field to prairie," said Lisa Schulte Moore of Iowa State University. "For example, when we work with farmers to site prairie strips on areas that were not profitable to farm, we can lower their financial costs while creating a wide variety of benefits."
Schulte Moore is a team member with STRIPS: Science-based Trials of Rowcrops Integrated with Prairie Strips. STRIPS showed that converting just 10% of a row-cropped field to prairie strips:
  • reduces soil loss by 95%,
  • reduces overland water flow by 37%, and
  • reduces the loss of two key nutrients (nitrogen and phosphorus) from the soil by nearly 70% and 77%, respectively.
It also leads to greater abundance and diversity of beneficial insects, pollinators such as bees and monarch butterflies, and birds. Going from zero to 10% prairie provided far more than a 10% increase in the measured benefits.
"Some of these benefits can impact our pocketbooks but are not accounted for by typical financial markets," said Schulte Moore. These include ecological benefits such as flood control, cleaner water, and carbon from the atmosphere stored.
Market benefits also exist: more productive soil in the fields can, in time, translate into better yields, fiber and honey production, forage for livestock, and hunting leases.
The STRIPS research began in Iowa in 2007. Because of promising scientific results, five years later the researchers began working with farmers to introduce prairie strips onto commercial farms. While the research results have been more variable in these more complicated settings, the findings are encouraging and cooperating farmers like what they see.
The plantings require a modest investment in site preparation and seed planting. Maintenance tasks include some mowing in the establishment years and spot treatment for weeds. So far, the researchers have not seen competition between the prairie plants and crops that impacts yield.
Conservation Reserve Program (CRP) contracts through the USDA's Farm Service Agency can greatly reduce the cost of establishing prairie strips. Overall, Schulte Moore said, this is one of the most economical best-practice conservation steps farmers can take.
Still, lack of stable financial rewards for establishing and maintaining prairie strips is a barrier to widespread adoption. "Finding ways to return economic value to farmers and farmland owners is crucial," Schulte Moore said. She is now focused on developing marketable products from prairie strips, such as renewable energy sources from prairie biomass. That would help make what is already a solid investment into a can't-lose proposition.

Scientists use Nobel Prize-winning chemistry for clean energy breakthrough

Scientists have used a Nobel Prize-winning chemistry technique on a mixture of metals to potentially reduce the cost of fuel cells used in electric cars and reduce harmful emissions from conventional vehicles.
The researchers have translated a biological technique, which won the 2017 Nobel Prize in Chemistry, to reveal atomic-scale chemistry in metal nanoparticles. These materials are among the most effective catalysts for energy-converting systems such as fuel cells. It is the first time this technique has been used for this kind of research.
The particles have a complex star-shaped geometry, and this new work shows that the edges and corners can have different chemistries which can now be tuned to reduce the cost of batteries and catalytic converters.
The 2017 Nobel Prize in Chemistry was awarded to Joachim Frank, Richard Henderson and Jacques Dubochet for their role in pioneering the technique of 'single particle reconstruction'. This electron microscopy technique has revealed the structures of a huge number of viruses and proteins but is not usually used for metals.
Now, a team at the University of Manchester, in collaboration with researchers at the University of Oxford and Macquarie University, have built upon the Nobel Prize winning technique to produce three dimensional elemental maps of metallic nanoparticles consisting of just a few thousand atoms.
Published in the journal Nano Letters, their research demonstrates that it is possible to map different elements at the nanometre scale in three dimensions, circumventing damage to the particles being studied.
Metal nanoparticles are the primary component in many catalysts, such as those used to convert toxic gases in car exhausts. Their effectiveness is highly dependent on their structure and chemistry, but because of their incredibly small size, electron microscopes are required to image them. However, most imaging is limited to 2D projections.
"We have been investigating the use of tomography in the electron microscope to map elemental distributions in three dimensions for some time," said Professor Sarah Haigh, from the School of Materials, University of Manchester. "We usually rotate the particle and take images from all directions, like a CT scan in a hospital, but these particles were damaging too quickly to enable a 3D image to be built up. Biologists use a different approach for 3D imaging and we decided to explore whether this could be used together with spectroscopic techniques to map the different elements inside the nanoparticles."
"Like 'single particle reconstruction' the technique works by imaging many particles and assuming that they are all identical in structure, but arranged at different orientations relative to the electron beam. The images are then fed in to a computer algorithm which outputs a three dimensional reconstruction."
In the present study the new 3D chemical imaging method has been used to investigate platinum-nickel (Pt-Ni) metal nanoparticles.
Lead author, Yi-Chi Wang, also from the School of Materials, added: "Platinum based nanoparticles are one of the most effective and widely used catalytic materials in applications such as fuel cells and batteries. Our new insights about the 3D local chemical distribution could help researchers to design better catalysts that are low-cost and high-efficiency."
"We are aiming to automate our 3D chemical reconstruction workflow in the future," added author Dr Thomas Slater."We hope it can provide a fast and reliable method of imaging nanoparticle populations which is urgently needed to speed up optimisation of nanoparticle synthesis for wide ranging applications including biomedical sensing, light emitting diodes, and solar cells."

Novel C. diff structures are required for infection, offer new therapeutic targets

  Iron storage "spheres" inside the bacterium C. diff -- the leading cause of hospital-acquired infections -- could offer new targ...