Scientists from Yale, Dartmouth, and Cambridge have developed MindLLM, a model that converts functional magnetic resonance imaging (fMRI) signals into text. Unlike previous methods, it does not require subject-specific tuning for each person.

Earlier attempts to convert brain activity into text suffered from low accuracy, were limited to a narrow set of tasks, and could not generalize across people. Existing models depend on the individual characteristics of each person's brain, so what they learn transfers poorly to new users. MindLLM takes a different approach: it captures general patterns of brain function, which allows it to adapt better to different people and tasks.
The model has two main components: an fMRI encoder and a language model. Brain scans divide the brain into small three-dimensional units called voxels, whose number and locations vary from person to person. The functional layout of the brain, however, is broadly similar across people, and MindLLM's encoder analyzes brain activity with this property in mind.
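One way such an encoder can stay subject-agnostic is to pool a variable-length set of voxels into a fixed number of tokens using attention keyed on voxel positions in a shared brain space. The sketch below is a minimal illustration of that idea, not the paper's implementation; the learned queries and weight matrices are hypothetical stand-ins for trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_subject(voxel_values, voxel_coords, queries, W_k, W_v):
    """Pool a variable-length set of voxels into fixed-size tokens.

    voxel_values: (n_voxels,) activations; n_voxels differs per subject.
    voxel_coords: (n_voxels, 3) positions in a shared brain space.
    queries:      (n_tokens, d) learned queries (hypothetical), shared
                  across subjects so the output shape never depends on
                  how many voxels a given person has.
    """
    keys = voxel_coords @ W_k                            # (n_voxels, d)
    vals = voxel_values[:, None] * (voxel_coords @ W_v)  # (n_voxels, d)
    attn = softmax(queries @ keys.T / np.sqrt(queries.shape[1]))
    return attn @ vals                                   # (n_tokens, d)
```

Because the keys come from spatial coordinates rather than a fixed voxel ordering, two subjects with different voxel counts still produce outputs of the same shape, which is what lets a single downstream language model serve everyone.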
A dedicated attention mechanism, informed by the brain's functional organization, helps the model interpret the signal, and a training procedure called Brain Instruction Tuning (BIT) improves its ability to decode diverse data. This allows MindLLM to perform complex tasks such as generating descriptions from brain signals, answering questions, and reasoning.
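Instruction tuning of this kind typically pairs a task instruction and the encoded brain tokens with a target answer. The snippet below sketches what one such training example might look like; the prompt format, placeholder tokens, and field names are assumptions for illustration, not the format used by MindLLM.

```python
def make_bit_example(task_instruction, brain_tokens, target_text):
    # Hypothetical format: one placeholder per brain token; at training
    # time the language model's embedding layer would substitute these
    # placeholders with the fMRI encoder's output vectors.
    prompt = f"<brain>{'<tok>' * len(brain_tokens)}</brain>\n{task_instruction}"
    return {"prompt": prompt, "target": target_text}

example = make_bit_example(
    "Describe what the subject is viewing.",  # captioning-style task
    list(range(32)),                          # 32 encoded brain tokens
    "a dog running on a beach",               # illustrative target only
)
```

Mixing many such task types (captioning, question answering, reasoning) in one tuning set is what lets a single model cover them all.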
In tests, the model generalized 16.4% better to new users and adapted 25% better to new tasks than previous solutions. MindLLM also revealed links between activity in specific brain regions and cognitive functions such as perception and reasoning.