A step forward in the age of artificial intelligence that turns thoughts into movies

A black-and-white video clip from the 1960s (top) of a man running to a car to open its trunk, and a video clip reconstructed by decoding the brain activity of a mouse (bottom). (Image: EPFL)

The era of turning thoughts into movies may not be far off. Research into reading brain signals and converting them into images with artificial intelligence (AI) has begun to show progress.

Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland announced on the 3rd (local time) in the scientific journal Nature that they have developed an AI tool that can interpret the brain signals of mice in real time and then reconstruct the video the mice are watching.

The researchers measured brain activity in mice using electrode probes implanted in the brain's visual cortex, then trained a model to learn which brain signals corresponded to which frame of the movie the mice were watching.

The researchers collected brain activity data by having 50 mice watch a 30-second video nine times. The collected data was then used to train an AI model called 'CEBRA' to map brain signals to specific frames of the video.

They then measured brain activity while new mice watched the same video, and interpreted this brain activity data with CEBRA.

As a result, CEBRA was able to predict the frame each mouse was watching in real time, and the researchers converted the predicted frames into images.
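The pipeline described above (train on repeated viewings, then decode frames from a new viewing) can be illustrated with a toy sketch. Note this is not CEBRA's actual algorithm, which learns latent embeddings with a contrastive objective; the sketch below substitutes a simple nearest-centroid decoder on simulated neural data, and all numbers (neuron count, noise level) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames = 30    # frames of the 30-second clip (1 fps here, for simplicity)
n_neurons = 100  # recorded units (hypothetical, not from the study)
n_repeats = 9    # the mice watched the clip nine times

# Simulate training data: each frame evokes a characteristic activity pattern.
frame_templates = rng.normal(size=(n_frames, n_neurons))
train = np.concatenate([
    frame_templates + 0.3 * rng.normal(size=frame_templates.shape)
    for _ in range(n_repeats)
])
train_labels = np.tile(np.arange(n_frames), n_repeats)

# "Train": average the responses per frame (a nearest-centroid decoder).
centroids = np.stack([train[train_labels == f].mean(axis=0)
                      for f in range(n_frames)])

def decode(activity):
    """Predict which frame evoked this single activity pattern."""
    dists = np.linalg.norm(centroids - activity, axis=1)
    return int(np.argmin(dists))

# Held-out trial: one full viewing by a "new" animal with the same tuning.
test = frame_templates + 0.3 * rng.normal(size=frame_templates.shape)
preds = np.array([decode(t) for t in test])
accuracy = float(np.mean(preds == np.arange(n_frames)))
```

Because `decode` only compares an incoming activity vector against stored centroids, it can run frame by frame as data arrives, which is what "real time" means here; the limitation the article raises also shows up directly: the decoder can only output frames it was trained on.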

Except when a mouse was moving around and not paying full attention, the reconstructed images closely matched the originals, albeit with some imperfections.

(Video=EPFL)

But that does not mean we are close to a technology that can project a person's memories or dreams onto a movie screen or computer monitor.

CEBRA could do this only because it had been trained on the video beforehand.

It is not difficult to imagine such a model eventually becoming good enough to recognize images without any specific pre-training, but for now that remains impossible.

Nevertheless, the researchers see the results of this study as a springboard for new research. They say CEBRA will provide insight into neural function and how the brain interprets stimuli.

By uncovering the link between brain activity patterns and visual input, the researchers hope to shed light on how to create visual sensations in people with visual impairments. The tool is also expected to help diagnose and treat brain disorders such as dementia.

Meanwhile, researchers at the University of Texas have developed an AI system that measures a person's brain activity with functional magnetic resonance imaging (fMRI) and reconstructs what the person thinks or imagines as sentences. It is still far from perfect, but it is the first time a machine has been able to decode a person's thoughts as complete sentences rather than single words or short phrases.

Chan Park, cpark@aitimes.com
