Transforming Imagination Into Brain Imaging

Researchers at the University of California, Berkeley, have developed an algorithm that can read signals of human brain activity and interpret some of them. The algorithm was designed to work with functional magnetic resonance imaging (fMRI). The initial goal of Jack Gallant and Shinji Nishimoto of Gallant's lab was to decipher and reconstruct the human visual experience, with the long-term prospect of one day reconstructing other types of dynamic imagery, such as dreams and memories. This is the first time anyone has been able to convert brain signals into moving images.

Reconstructed Brain Activity

Berkeley neuroscientist Jack Gallant describes the results as a "major leap toward reconstructing internal imagery," saying, "We are opening a window into the movies in our minds." Gallant believes that once they can decode and reproduce the brain's dynamic imagery, they will be able to start working on things like dreams and imagination. The brain, he says, is a very complicated system consisting of hundreds of modules, each of which must be understood and functionally reproduced if researchers intend to synchronize the brain's imaging activity with a computer.

Shinji Nishimoto, his colleague at the same university, likewise believes that they will first have to learn how the brain works, by using their new algorithm in real-life conditions, before they can connect abstract visual experiences with things like dreams, memories, and intentions. The ability to use existing fMRI technology for this study was a blessing for the team; it had previously been thought impossible to render dynamic brain activity with fMRI.

The ultimate goal of this project is to create a digital version of the human brain that perceives the world the way the human mind does, in order to broadcast imagery into human minds. However, both Gallant and Nishimoto are confident that they are decades away from being able to read human thoughts and intentions. The team members used themselves as study subjects, watching videos while being scanned by the magnetic resonance imaging machine for several hours at a time. They then used the collected data to develop a computer model that matched features of the videos - shapes, colors, and movements - with patterns of brain activity.
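The fitting step described above can be sketched in miniature. The toy below is an illustrative assumption, not the lab's actual pipeline: it uses synthetic data, stand-in "video features" for shapes, colors, and movements, a ridge-regularized linear model in place of the real encoding model, and invented sizes for clips, features, and voxels. It only shows the general idea of fitting a feature-to-brain-activity mapping and then decoding by matching predicted responses against an observed one.

```python
# Toy sketch of an encoding/decoding approach; all data and dimensions
# are synthetic and hypothetical, not the study's real method or numbers.
import numpy as np

rng = np.random.default_rng(0)
n_clips, n_features, n_voxels = 200, 50, 30

# Pretend each video clip is summarized by a feature vector
# (stand-ins for shape/color/motion descriptors).
features = rng.standard_normal((n_clips, n_features))

# Simulate voxel responses as a linear mix of features plus noise.
true_weights = rng.standard_normal((n_features, n_voxels))
responses = features @ true_weights + 0.1 * rng.standard_normal((n_clips, n_voxels))

# Fit a ridge-regularized linear encoding model: features -> voxel responses.
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ responses)

def decode(observed, candidate_features, weights):
    """Return the index of the clip whose predicted brain response
    correlates best with the observed activity pattern."""
    predicted = candidate_features @ weights
    p = predicted - predicted.mean(axis=1, keepdims=True)
    o = observed - observed.mean()
    scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o))
    return int(np.argmax(scores))

# Simulate a subject viewing clip 17 and check the decoder recovers it.
observed = features[17] @ true_weights + 0.1 * rng.standard_normal(n_voxels)
print(decode(observed, features, W))  # prints 17
```

In this simplified form, "reconstruction" reduces to picking the best-matching clip from a known library; the real work involves far richer feature spaces and reconstructing novel footage.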

Real Time Brain Imaging

Even decades ago, when the digital world was young, the idea of merging it with reality looked attractive. It looks even better now, when the share of information humans receive from digital sources has grown hundreds of times compared with even a few years ago. The merging of "sense reality" with "digital reality" appears inevitable to many, and innovations like augmented-reality glasses, quantum computing, and real-time brain imaging are just the tip of the iceberg. We may see the whole picture a decade or so from now.

If brain-produced electric signals can be successfully captured, translated, and reproduced digitally, why can't regular digital video and audio output be adapted to the brain's format and streamed back into our heads? I would love to spare the TV expense by watching my favorite show inside my own mind, plugged into my new Augmented Reality iPod.

References: Worldbulletin; Cihan News Agency; Jack Gallant and Shinji Nishimoto, Gallant's lab; Thomas Naselaris; An T. Vu; Yuval Benjamini; Professor Bin Yu, UC Berkeley; Fox News; Michio Kaku.