Thursday 12 October 2023

Software can detect hidden and complex emotions in parents

Researchers have conducted trials of software capable of detecting intricate details of emotions that remain hidden to the human eye.

The software, which uses an 'artificial net' to map key features of the face, can evaluate the intensities of multiple different facial expressions simultaneously.

The University of Bristol and Manchester Metropolitan University team worked with Bristol's Children of the 90s study participants to see how well computational methods could capture authentic human emotions amidst everyday family life. This included the use of videos taken at home, captured by headcams worn by babies during interactions with their parents.

The findings, published in Frontiers, show that scientists can use machine learning techniques to accurately predict human judgements of parent facial expressions based on the computers' decisions.

Lead author Romana Burgess, PhD student on the EPSRC Digital Health and Care CDT in the School of Electrical, Electronic and Mechanical Engineering at the University of Bristol, explained: "Humans experience complicated emotions -- the algorithms tell us that someone can be 5% sad or 10% happy, for example.

"Using computational methods to detect facial expressions from video data can be very accurate, when the videos are of high quality and represent optimal conditions -- for instance, when videos are recorded in rooms with good lighting, when participants are sat face-on with the camera, and when glasses or long hair are kept from blocking the face.

"We were intrigued by their performance in the chaotic, real-world settings of family homes.

"The software detected a face in around 25% of the videos taken in real-world conditions, reflecting the difficulty of evaluating faces in these kinds of dynamic interactions."

The team used data from the Children of the 90s health study -- also known as the Avon Longitudinal Study of Parents and Children (ALSPAC). Parents were invited to attend a clinic at the University of Bristol when their babies were 6 months old.

At the clinic, as a part of the ERC MHINT Headcam Study, parents were provided with two wearable headcams to take home and use during interactions with their babies. Parents and infants both wore the headcams during feeding and play interactions.

The team then used 'automated facial coding' software to computationally analyse parents' facial expressions in the videos, and had human coders analyse the facial expressions in the same videos.

The team quantified how frequently the software was able to detect the face in the video, and evaluated how often the humans and the software agreed on facial expressions.
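The two measures described above can be sketched in a few lines. This is an illustrative example, not the study's actual pipeline: the frame labels are hypothetical, and Cohen's kappa is one standard way to score chance-corrected agreement between human and software coders (the article does not name the specific statistic used).

```python
# Sketch: given per-frame expression labels from the software and from human
# coders, compute a face-detection rate and human-software agreement.

def detection_rate(software_labels):
    """Fraction of frames in which the software found a face (label is not None)."""
    return sum(lbl is not None for lbl in software_labels) / len(software_labels)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical frame-level labels: None means no face was detected.
software = ["happy", None, "sad", "happy", None, "neutral", "happy", "sad"]
human    = ["happy", "sad", "sad", "happy", "neutral", "neutral", "happy", "happy"]

print(f"Detection rate: {detection_rate(software):.2f}")  # → 0.75

# Compare only frames where the software produced a label.
paired = [(s, h) for s, h in zip(software, human) if s is not None]
s_lbls = [p[0] for p in paired]
h_lbls = [p[1] for p in paired]
print(f"Cohen's kappa: {cohens_kappa(s_lbls, h_lbls):.2f}")  # → 0.71
```

A kappa near 1 indicates the software and human coders agree far more often than chance; a value near 0 means the agreement is no better than random labelling.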

Finally, they used machine learning to predict human judgements based on the computers' decisions.
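As a rough sketch of this final step, and under assumed data shapes (the article does not name the model), one can train a classifier that maps the software's per-expression intensity scores -- the "5% sad, 10% happy" outputs mentioned earlier -- to the label a human coder assigned. A simple nearest-centroid classifier stands in here for whatever machine-learning method the team actually used; all data is hypothetical.

```python
# Sketch: predict a human coder's label from the software's intensity scores.

def train_centroids(features, labels):
    """Average the intensity vectors for each human-assigned label."""
    sums, counts = {}, {}
    for vec, lbl in zip(features, labels):
        acc = sums.setdefault(lbl, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lbl] = counts.get(lbl, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], vec)))

# Hypothetical software outputs: [happiness, sadness, neutrality] intensities (0-1),
# paired with the label a human coder gave the same frame.
X = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.05, 0.85, 0.1]]
y = ["happy", "happy", "sad", "sad"]

model = train_centroids(X, y)
print(predict(model, [0.85, 0.1, 0.05]))  # → happy
print(predict(model, [0.1, 0.8, 0.1]))    # → sad
```

How often such predictions match held-out human judgements is what lets researchers say the computational method "accurately predicts" human coding, as the paper reports.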
