2025 (English). In: Data in Brief, E-ISSN 2352-3409, Vol. 63, article id 112258. Article in journal (Refereed). Published.
Abstract [en]
Inner speech, or covert speech, refers to the internal generation of language without overt articulation. Decoding inner speech has significant implications for brain-computer interfaces (BCIs), particularly for assistive communication in individuals with speech and motor impairments. To facilitate research in this area, we introduce a publicly available dataset comprising simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data during inner speech production.

Data were collected from three healthy, right-handed participants performing an inner speech task. The task involved silent repetition of visually presented words belonging to either a social or a numerical category. The experiment comprised 40 trials per word across eight unique words and started with a fixation period of two seconds. Stimuli were displayed for two seconds at the beginning of each trial, followed by a 12-second rest period to allow hemodynamic responses to return to baseline. Participants were instructed to remain still and avoid movements to minimize artifacts.

EEG was recorded using a 64-channel MR-compatible cap (BrainCap MR, EasyCap GmbH) at a 5 kHz sampling rate. Electrocardiogram (ECG) signals were simultaneously acquired from an additional electrode placed on the trapezius muscle to facilitate cardioballistic artifact correction. Gradient and cardioballistic artifacts were corrected using BrainVision Analyzer software.

Functional MRI data were acquired on a 3T scanner with a 48-channel head coil, using an echo-planar imaging (EPI) sequence optimized for whole-brain coverage. The repetition time (TR) was 2 s. High-resolution T1-weighted anatomical images were also acquired for structural reference.
The dataset is publicly available in the OpenNeuro repository. The aim of this dataset is to provide a resource for studying inner speech processing, multimodal neuroimaging, EEG-fMRI fusion techniques, and BCI-driven speech prosthesis development.
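The trial structure described in the abstract implies a fixed nominal timeline per run. The sketch below reconstructs that timeline under stated assumptions — that trials run back-to-back after a single initial fixation period and that each trial is one 2-second stimulus followed by a 12-second rest — and maps each stimulus onset to the fMRI volume it falls in given the 2 s TR. The constant names are illustrative; the authoritative per-trial timing lives in the dataset's event files on OpenNeuro.

```python
# Hedged sketch: nominal event timing derived from the dataset description.
# Assumes one fixation period per run and back-to-back trials; verify against
# the events files distributed with the dataset on OpenNeuro.
N_WORDS = 8           # eight unique words (from the abstract)
TRIALS_PER_WORD = 40  # 40 trials per word
FIXATION_S = 2.0      # initial fixation period, seconds
STIM_S = 2.0          # stimulus display duration
REST_S = 12.0         # rest period for the hemodynamic response to return to baseline
TR_S = 2.0            # fMRI repetition time

n_trials = N_WORDS * TRIALS_PER_WORD   # 320 trials in total
trial_len = STIM_S + REST_S            # 14 s per trial under these assumptions

# Stimulus onsets relative to run start, and the fMRI volume index
# (0-based) acquired at each onset.
onsets = [FIXATION_S + i * trial_len for i in range(n_trials)]
volumes = [int(t // TR_S) for t in onsets]

print(n_trials)    # 320
print(onsets[:3])  # [2.0, 16.0, 30.0]
print(volumes[:3]) # [1, 8, 15]
```

Because the 14-second trial length is an exact multiple of the 2 s TR, every stimulus onset under these assumptions lands on a volume boundary, which simplifies aligning EEG epochs with fMRI volumes.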
Place, publisher, year, edition, pages
Elsevier Inc., 2025
Keywords
Multimodal neuroimaging, Inner speech, Synchronous data, fMRI, EEG
National Category
Neurosciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-115733 (URN)10.1016/j.dib.2025.112258 (DOI)2-s2.0-105022797054 (Scopus ID)
Funder
The Kempe Foundations, JCSMK23–0102; Luleå University of Technology, LTU-154–2023, 3 [LTU-4908–2022
Note
Approved;2025;Level 0;2025-12-09 (u8);
Full text license: CC BY
2025-12-09 2025-12-09 2025-12-09 Bibliographically approved