Conference program

The conference program consists of four keynotes and 30 regular presentations. The program starts at 7:00 UTC and ends at 19:45 UTC. All times in the program are in Coordinated Universal Time (UTC). The time in the UK (GMT) is currently equal to UTC, Central European Time (CET) is UTC+1, and Eastern Time in Boston, MA, is UTC−5. Download the program PDF: coming soon.

Neural oscillations and spiking assemblies drive each other in the brain

Peter Stratton (The University of Queensland)
Coauthors: Francois Windels, Allen Cheung, Shanzhi Yan, Pankaj Sah
Oscillations in activity are hallmarks of all neural systems. These oscillations are implicated in almost every sensory, motor and cognitive function, and are perturbed in characteristic ways in brain diseases. However, little is known about how these oscillations are controlled, or how they are causally connected with the underlying activities of individual neurons. In particular, rapid fluctuations in oscillatory power and phase are observed continuously across the brain, and are often presumed to reflect reconfiguration of brain networks to meet ongoing processing demands. However, the questions of if, and how, this network reconfiguration is reflected in short-lived spike-to-spike correlations between neurons remain unexplored. We recorded local field potential (LFP) oscillations and neural spiking activity using tetrodes in the rat basolateral amygdala during fear conditioning. We show that brief spike correlations between neurons continuously form short-lived neuronal assemblies. Significantly, the assembly couplings drive oscillations across frequencies which, in turn, reconfigure the assemblies. This interplay of population oscillations with individual neuronal coupling establishes reciprocal control across spatial and temporal scales. Neither oscillations nor spikes fully dictate the patterns of neural activity, but each instead continuously influences the other. These results help explain how the brain can dynamically reconfigure itself to process information much faster than plasticity mechanisms allow.

Avoidance of painful facial expression predicted future pain intensity within chronic pain patients: an ERP study

Yang Wang (Southwest University, Chongqing)
Recent studies of event-related potentials (ERPs) have identified N2pc amplitudes as a neural marker for early attention allocation in anxiety-disordered groups. In this study, we assessed the status of N2pc responses and reaction time (RT) as measures of pain-related attention biases and risk factors for later increases in pain intensity within an adult chronic musculoskeletal pain sample. Participants (n = 71) completed a dot-probe task featuring pain-neutral (P-N) and happy-neutral (H-N) facial expression image pair presentations, during which ERPs were assessed. Although there was no evidence of attention biases based on RT responses, the sample displayed larger mean N2pc amplitudes contralateral to (1) painful facial expressions in P-N trials and (2) happy facial expressions in H-N trials, a pattern reflecting increased early attention allocation to affectively-valenced facial expressions. More critically, participants who showed weaker baseline N2pc amplitudes specific to painful facial expressions, reflecting early avoidance, were more likely to report pain intensity elevations at six-month follow-up, independent of all other significant baseline influences. Together, these results underscore the potential utility of N2pc amplitudes as a neural measure of early attention allocation and of changes in pain intensity over time among chronic pain patients.

The case for preprints in biology and where we go...

Plinio Casarotto (University of Helsinki)
Peer review is a cornerstone of the modern scientific process: a checkpoint for assessing the relevance of, and possible mistakes in, a study aiming to be published. The process is based on, or at least should be, a coordinated effort between authors, editors and reviewers to draw a line separating the scientifically relevant content of a study from mere opinion. Without peer review, that line is blurry. However, peer review adds a lengthy step to the process of scientific publication and delays, if not blocks, dissemination. The consequences are many; for example, grant proposals and career advancement can be stalled. It is also an unpredictable process, in which solid and relevant data may go unpublished due to journal rejection. In this scenario, preprints appear as a complement to the usual journal-based peer-review process. A preprint is a finalized scientific manuscript, organized, formatted and uploaded by the authors to a public server (a preprint server). It contains complete data and methodologies; it is often the same manuscript submitted to a journal, and it is available to anyone with internet access. Contrary to the final version of an accepted article, updated versions of a preprint can be submitted upon new data collection or comments, but prior preprint versions are retained and cannot be removed. Preprints serve as indicators of productivity and accomplishment for funding agencies and hiring committees. They promote visibility and potential early collaborations, allow feedback on the manuscript, and establish priority of discoveries and ideas. The posting and discussion of preprinted studies are developing in the field of biology, with many journals such as Science, EMBO and eLife embracing the idea and facilitating the submission of preprints through their platforms [1,2].
Preprints are not a substitute for the journal-based peer-review process, but a complement to it; they address the time delay and flexibility that today's scientific publications demand, and they are a valuable indicator of productivity, especially for early-career researchers. The challenge we face now is how to bring preprints in biology, and especially in neuroscience, to the level they have reached in physics and mathematics, where they are major hubs for the discussion and improvement of studies. Is it a matter of time, or are other measures necessary? 1. Vale, R. D. & Hyman, A. A. Priority of discovery in the life sciences. eLife 5 (2016). 2. Berg, J. M. et al. Preprints for the life sciences. Science 352, 899–901 (2016).

Diminished and right-lateralized mismatch fields to speech-sound changes in dyslexia

Anja Thiede (University of Helsinki)
Coauthors: Parkkonen, L. (Aalto University), Virtala, P. (University of Helsinki), Mäkelä, J. (Helsinki University Central Hospital), Kujala, T. (University of Helsinki)
Dyslexia is a reading and writing impairment associated with deficient auditory processing of speech. One major theory, the phonological deficit theory, suggests that the underlying cause of dyslexia is impaired storage or retrieval of phonological representations in the brain. Several studies conducted with electroencephalography (EEG) have shown that the mismatch negativity, a neural change-discrimination response, to various auditory non-speech and speech-sound features is diminished in children and adults with dyslexia. Similar studies with magnetoencephalography (MEG) are scarce, but MEG has shown that mismatch fields (MMFs) to tone frequency changes are diminished in the left hemisphere of dyslexic adults. Here, we studied 25 dyslexic and 25 non-dyslexic Finnish adults with MEG, magnetic resonance imaging (MRI), and neuropsychological tests evaluating reading and phonological skills. During the MEG recording, the bisyllabic Finnish-sounding pseudoword /tata/ was repeatedly presented to the participants, occasionally (≈25%) interspersed with auditory deviants differing in the duration, frequency, or vowel identity of the second syllable. Participants were instructed to focus on a silent movie and ignore the sounds. We expected to find diminished and atypically lateralized MMFs to all speech-sound changes in dyslexic adults. We found diminished left-hemispheric MMFs to duration and vowel changes in dyslexic compared to non-dyslexic adults (p < .01 and p < .05, respectively), as well as an atypical right-hemispheric lateralization of MMFs to duration and frequency changes in dyslexics (both p < .05). In addition, MMFs to vowel changes were left-lateralized in the control group (p < .01). These findings, confirming our hypothesis, suggest weakened speech-sound discrimination in dyslexia, presumably reflecting impaired phonological representations, in line with the phonological deficit theory.
Atypically lateralized MMFs further suggest that dyslexics may employ atypical brain mechanisms for speech processing. These findings point to extensive patterns of abnormal speech discrimination in dyslexia, supporting and extending the previous body of research, mostly carried out with EEG.

Word abstractness is an emerging property of language, mirrored in neural activity

Annika Hulten (Aalto University)
Coauthors: Marijn van Vliet, Lotta Lammi, Sasa Kivisaari, Tiina Lindh-Knuutila, Ali Faisal & Riitta Salmelin (Aalto University)
Introduction. Understanding abstract concepts is a fundamental part of human language abilities that enables us to discuss e.g. fictive or non-tangible aspects of life. A common view is that statistical regularities in our environment guide how conceptual knowledge becomes organized in the brain, following basic Hebbian formalisms. Previous neuroimaging, behavioral and clinical studies suggest that abstract words are processed differently from concrete words. Here we show how brain-level differences between abstract versus concrete words can emerge from the language environment through basic principles of co-occurrence and optimization. Methods. We first assessed internal structure of two qualitatively different semantic feature sets (one based on corpus statistics and one on the answers to a set of 85 questions) describing 123 Finnish nouns (60 abstract and 63 concrete), using self-organizing maps (SOM). Next, we used a regression-based decoder to evaluate if both feature sets could successfully model the magnetoencephalography (MEG) responses to reading the single words in 20 healthy volunteers. We then evaluated where and when in the brain the feature information is expressed using representational similarity analysis (RSA) between each feature set and source-localized (minimum norm estimates) MEG data. Group-level effects were assessed using a cluster permutation test. Results. For each feature set, the SOM showed an almost binary division between abstract and concrete words. Each feature set also adequately modeled the feature space in the majority of participants; the top score for item-level decoding was 79.1 % correct. The emerging abstract-concrete dimension prevalent in the thematic corpus-based feature set, co-varied with activity in the precentral gyrus, the middle frontal and middle/inferior temporal cortex, as well as the temporo-parietal junction at 280 – 360 ms. 
The 85 question features contained explicit information about both abstractness and taxonomic classification, and this information co-varied with activity in the superior and inferior frontal cortex at 300–380 ms. Discussion. Our results highlight the abstractness of a word as an emergent property that arises from statistical regularities in the language environment and is mirrored in brain activity during word reading. Notably, abstractness-concreteness is only one dimension of word meaning. When taxonomic categories are included in the semantic model, the decoding performance improves and the frontal cortex activation seems to capture both of these aspects. Conversely, temporal cortex activity seems specific to semantics that arise from thematic relationships between words. The present study is an example of how multivariate methods may be utilized to reveal emergent patterns, in service of cognitive neuroscience.
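The core of the representational similarity analysis step can be illustrated with a small numpy sketch. All data here are random stand-ins, not the study's 85-question features or source-localized MEG responses: the point is only the mechanics of building representational dissimilarity matrices (RDMs) and correlating them.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy RSA: compare the geometry of a model feature space with that of
# simulated "neural" patterns (random stand-in data).

def rdm(patterns):
    """Correlation-distance representational dissimilarity matrix.
    patterns: (items, features) array."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Pearson correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

n_items = 20
model_features = rng.standard_normal((n_items, 85))  # stand-in feature set

# Simulated neural patterns that partially share the model geometry:
# a random linear projection of the model features plus noise.
neural = model_features @ rng.standard_normal((85, 40)) \
         + 2.0 * rng.standard_normal((n_items, 40))

related = rsa_score(rdm(model_features), rdm(neural))
unrelated = rsa_score(rdm(model_features),
                      rdm(rng.standard_normal((n_items, 40))))
print(round(related, 2), round(unrelated, 2))
```

In the actual analysis, the neural RDM is computed per source location and time window, and the model-neural RDM correlation is then tested at the group level with a cluster permutation test.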


Discussions can freely continue under the hashtag #brainTC.

KEYNOTE Making sense of the facial movements of others

Aina Puce (Indiana University Bloomington)
How do we identify emotions and give meaning to facial movements and gestures? Over the years our work with eye and mouth movements has relied heavily on the N170 event-related potential (ERP), partly because functional magnetic resonance imaging (fMRI) is not sensitive enough to observe these subtle and very brief changes in neural activity. Our studies of simple mouth opening and closing movements indicate that we likely use biological motion mechanisms to make sense of mouth movements – consistent N170 modulation is elicited by natural and impoverished facial images. This makes sense, as mouth movements occur with the assistance of an articulated joint – the mandible. In contrast, changes in gaze direction (i.e. social attention) elicit N170 modulation to natural facial images only – no modulation occurs with impoverished faces. So here we likely rely on local luminance and contrast changes – thanks to our distinctive human primate eyes with white sclera and dark irises. What role do internal goals or tasks play in decoding facial actions? What about arousal? To answer the first question, we ran implicit and explicit tasks with the same stimuli on the same subjects in the lab. Our results with manipulations of (a) social attention and (b) positive emotions indicate that with implicit tasks N170 modulation by stimulus condition is present, whereas in explicit tasks it goes away. Why? We think that there is a sensory gain increase from top-down modulation during an explicit task. With respect to the second question, our study on altered mouth configuration and the presence/absence of teeth did not show N170 modulation as a function of explicit or implicit task. Instead, the N170 modulator was now the presence/absence of teeth – and this was correlated with behavioral judgments of how arousing the viewed stimulus was. So how to make sense of all these findings?
With respect to the existing literature, there is tremendous variation in findings that we believe comes about due to differences in: 1. EEG reference electrodes; 2. implicit vs. explicit tasks (typically only one task is run in a study); 3. static vs. dynamic stimuli; 4. brain mechanisms for decoding information from the upper vs. lower face; 5. the arousing nature of the stimuli themselves (e.g. presence/absence of teeth). With respect to how social information is processed from the human face, we believe that there are two modes of processing: a ‘default’ and a ‘socially aware’ mode. The former is active in real life, and in implicit tasks run in the lab. The latter is activated when we are explicitly required to evaluate social information – whether in real life or in the lab. It is possible to switch between one mode and the other, due to internally changed goals/intentions (top-down), or to the arousing nature of the sensory input (bottom-up) that may draw our attention to its social content.

Evaluating Health and Distinguishing Gender only from Raw EEG Data Using Deep Convolutional Neural Networks

Thomas Jochmann (Technische Universität Ilmenau)
Coauthors: Jens Haueisen (Technische Universität Ilmenau)
Introduction The medical evaluation of EEG requires elaborate work from specialists who can recognize and interpret patterns in large and noisy datasets. Since the sensors only receive the superposition of many small and simultaneous activities, it can be assumed that many patterns remain hidden in the mixture and are not available for diagnosis. While in other medical tests, such as of hormone levels, gender provides essential information, there are no gender-specific features involved in EEG diagnosis. Anatomical and endocrinological differences between males and females are indisputable, but distinguishing marks between male and female brains are highly controversial. In this work, we present a neural network model for the automatic classification of EEGs. Depending on the training data, it can either detect abnormal datasets or distinguish gender. Methods We built a convolutional neural network architecture for binary classification. The network consists of 41 functional layers, including pooling, merging, and dropout layers, with 1.1 million tunable parameters. The binary output denotes either gender or health status. The EEGs were examined by trained neurologists and labeled as normal or abnormal. The input is an EEG data matrix with 21 channels and 1250 time steps, corresponding to 10 seconds at 125 Hz. The distinguishing spatio-temporal representations are learned from the data and cover time-delayed cross-frequency and cross-sensor connections. We used 2680 recordings from 2167 adult patients of the TUH Abnormal EEG Corpus (v1.1.2) [1]. The database was split into training (2425 recordings) and evaluation (255 recordings) data. Results We identified normal and abnormal EEGs in the 255 evaluation recordings with an accuracy of 86 % after averaging the results over 6 minutes. The accuracy after 10 seconds was already 80 %. We identified males and females in the 255 evaluation recordings with an accuracy of 70 %.
Discussion Our method outperforms the latest published baseline for pathology detection (86 % vs. 84 %) on the used dataset [2]. The ability to distinguish gender from EEGs indicates currently unrevealed differences in the formation of male and female EEGs. The different signals could result from: 1. Functional differences in the cortical networks that are emitting the signals. 2. Anatomical differences in the volume conduction structure shaping the signals when they propagate from the sources towards the EEG sensors throughout the brain, bone and skin tissue. 3. Gender-specific prevalence of EEG-altering diseases, as the dataset was sourced from a hospital and contained data from diseased patients. Our ongoing research now aims to identify and interpret the distinguishing spatio-temporal patterns. [1] Obeid and Picone, “The Temple University Hospital EEG Data Corpus,” Front. Neurosci., 2016 [2] Schirrmeister et al., arXiv:1708.08012, 2017
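As a rough illustration of the input and output described above (a 21-channel, 1250-sample segment mapped to a binary probability), a single 1-D convolutional layer with global average pooling can be sketched in plain numpy. This is a hypothetical toy with random weights, not the authors' 41-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU.
    x: (channels, time), w: (filters, channels, kernel), b: (filters,)."""
    n_filt, _, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((n_filt, t_out))
    for f in range(n_filt):
        for t in range(t_out):
            out[f, t] = np.sum(w[f] * x[:, t:t + k]) + b[f]
    return np.maximum(out, 0.0)

# One EEG segment shaped like the paper's input:
# 21 channels x 1250 samples (10 s at 125 Hz).
eeg = rng.standard_normal((21, 1250))

w = rng.standard_normal((8, 21, 25)) * 0.01  # 8 filters, 25-sample kernels
b = np.zeros(8)

feat = conv1d(eeg, w, b).mean(axis=1)        # global average pooling
logit = feat @ rng.standard_normal(8)        # toy output layer
p = 1.0 / (1.0 + np.exp(-logit))             # sigmoid -> class probability
print(float(p))
```

The real network stacks many such layers (plus pooling, merging, and dropout) so that the learned filters can capture the time-delayed cross-frequency and cross-sensor structure mentioned in the abstract.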

The next-generation magnetoencephalography: on-scalp sensor arrays to overcome the limits of current systems

Joonas Iivanainen (Aalto University)
Coauthors: Rasmus Zetter, Lauri Parkkonen (Aalto University)
Magnetoencephalography (MEG) is a non-invasive neuroimaging technique that detects the extracranial magnetic fields of electrically active neuron populations in the human brain. Due to the weakness of neuromagnetic fields, extremely sensitive magnetometers are required. Until recently, superconducting quantum interference devices (SQUIDs) have been the only sensors with adequate sensitivity for MEG. However, SQUID-based MEG systems have several drawbacks. First, thermal insulation needed by SQUIDs requires a sensor-to-head distance of at least 2 cm. Second, SQUID-based sensor arrays are not adaptable to the head shape of the individual, which further increases the sensor-to-head distance especially with children. These factors set an upper limit for the sensitivity and spatial resolution that can be achieved with current MEG systems. However, these limits could now be overcome as recent progress in optical magnetometry has enabled compact high-sensitivity sensors called optically-pumped magnetometers (OPMs). As room-temperature sensors, OPMs could be placed directly on the scalp of the subject allowing construction of on-scalp MEG sensor arrays that would increase the sensitivity and spatial resolution of MEG considerably. In this presentation, I will provide an introduction to MEG and OPMs, review the simulations we have performed to quantify the benefits of OPM arrays and present some of our OPM MEG measurements.

Dynamical Coupling for Additional dimeNsions

Koos Zevenhoven (Aalto University)
Complex dynamical systems are everywhere; the brain is one, and many parts of brain scanner technologies, such as a hybrid MEG–MRI device, can be seen as dynamical systems. Traditional control systems and control theory are based on real-time feedback applied to drive the system towards a desired state. Especially for complex systems, however, such control may be ineffective. Dynamical coupling for additional dimensions (DynaCAN) takes the nature of the system into account by using optimized waveforms to couple into it. Often, features at different time scales can be exploited as if more spatial degrees of freedom were available for control input. DynaCAN is demonstrated and discussed in a range of problems in neuroscience technology.

A 7-channel on-scalp MEG system based on high-Tc SQUIDs

Christoph Pfeiffer (Chalmers University of Technology)
Coauthors: Silvia Ruffieux, Alexei Kalabukhov, Maxim Chukharkin, Dag Winkler (Chalmers University of Technology) and Justin F. Schneiderman (MedTech West and University of Gothenburg)
To understand the human brain we need to be able to accurately measure its function. Modern neuroscience therefore relies on state-of-the-art functional neuroimaging technologies. Magnetoencephalography (MEG) allows us to non-invasively measure neural activity in the brain with good spatial and very high temporal accuracy. Commercial systems are, however, still relatively rare, especially when it comes to clinical use. These systems rely on sensors that have to be cooled with liquid helium. Liquid helium is very expensive and a finite resource. Due to its extremely low temperature (4 K, about −270 °C), sophisticated thermal insulation is necessary to isolate the room-temperature subject from the cold sensors. In modern MEG systems the sensors are therefore separated from the subject’s head by 2 cm or more. Optimally, MEG systems would measure the field as close to the head as possible, since the neuromagnetic signals get weaker with distance. Replacing the sensors with high-Tc SQUIDs, which can be cooled with liquid nitrogen, can help address this issue. Liquid nitrogen is significantly warmer (77 K, about −196 °C) than liquid helium, allowing us to place the sensors as close as 1 mm to the subject’s head. Our group is therefore developing a high-Tc SQUID-based MEG system where the sensors are situated much closer to the head, so-called “on-scalp MEG”. After successfully performing first MEG measurements with single-channel systems, we developed a MEG system with 7 high-Tc SQUID sensors in a single cryostat. The sensors are arranged in a dense pattern and aligned to the average curvature of the head to achieve high spatial sampling of a small area with minimal distance from the head for each sensor. We will present the system design and results from a measurement of alpha activity in a human subject.

Analysis of functional connectivity and oscillatory power using DICS: from raw MEG data to group-level statistics in Python

Susanna Aro (Aalto University)
Coauthors: Marijn Van Vliet (Aalto University), Mia Liljeström (Aalto University, Karolinska Institute), Riitta Salmelin (Aalto University), Jan Kujala (Aalto University)
Dynamic Imaging of Coherent Sources (DICS) is a frequency-domain beamforming technique that allows the study of the cortical sources of oscillatory activity, and of synchronization between brain regions, from EEG/MEG data. Here, we present a DICS-based data analysis pipeline that allows group-level evaluation and visualization of oscillatory power and functional connectivity. The pipeline is implemented as a Python module that is integrated with the MNE-Python package. In our presentation, we’ll give an overview of the different analysis steps on the openfMRI ds000117 “familiar vs. unfamiliar vs. scrambled faces” MEG dataset, starting from the raw data all the way to group-level statistics and visualizations. We start by computing cross-spectral density (CSD) matrices using a wavelet approach in several frequency bands (alpha, theta, beta, gamma). We then provide a way to create comparable source spaces across subjects and discuss the cortical mapping of spectral power. For connectivity analysis, we present a canonical computation of coherence that facilitates a stable estimation of all-to-all connectivity. Finally, we use group-level statistics to limit the network to cortical regions for which significant differences between experimental conditions are detected, and produce vertex- and parcel-level visualizations of the different brain networks. The aim of the pipeline is to facilitate this type of analysis and to educate both novice and experienced data analysts in the “tricks of the trade” of such analyses. A more complete description of the analysis pipeline is available at: Find the ConPy Python module here:
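To make the CSD step concrete, here is a minimal, self-contained numpy sketch (with synthetic data; this is not the ConPy/MNE-Python implementation, which uses wavelets and source-space beamforming) of estimating a sensor-level CSD matrix at one frequency by averaging outer products of Fourier coefficients across epochs, and deriving coherence from it:

```python
import numpy as np

rng = np.random.default_rng(1)

fs, n_ch, n_ep, n_t = 250, 4, 50, 500
t = np.arange(n_t) / fs

# Synthetic epochs: channels 0 and 1 share a 10 Hz (alpha) component.
epochs = rng.standard_normal((n_ep, n_ch, n_t))
alpha = np.sin(2 * np.pi * 10 * t)
epochs[:, 0] += alpha
epochs[:, 1] += alpha

freqs = np.fft.rfftfreq(n_t, 1 / fs)
F = np.fft.rfft(epochs, axis=-1)      # (epochs, channels, freqs)

f_idx = np.argmin(np.abs(freqs - 10)) # frequency bin closest to 10 Hz
X = F[:, :, f_idx]                    # (epochs, channels)

# CSD: average outer product of Fourier coefficients across epochs.
csd = (X[:, :, None] * X[:, None, :].conj()).mean(axis=0)

# Magnitude-squared coherence between all channel pairs.
power = np.real(np.diag(csd))
coh = np.abs(csd) ** 2 / np.outer(power, power)

print(round(float(coh[0, 1]), 2), round(float(coh[2, 3]), 2))
```

The diagonal of the CSD matrix gives the per-channel spectral power; DICS then uses the full matrix to estimate power and coherence at the source level rather than between sensors.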

A closed-loop BCI based on EEG alpha activity modulated by attention

Irene Vigué Guix (Pompeu Fabra University)
Alpha oscillations (8-14 Hz) are one of the most prominent electrophysiological signals measurable from the human scalp. Previous research has demonstrated that alpha activity is modulated by covert spatial attention and that these modulations can be used as a control signal for a brain-computer interface (BCI). The main goal of this study was to build a trial-by-trial classifier of EEG recordings during both offline and online covert visuospatial attention (CVSA) tasks. The alpha-power imbalance measured while the user oriented attention to the left or right hemifield was used as the control signal for a closed-loop, EEG-based BCI system. The study was divided into an offline data-exploration stage and an online implementation stage. In the first stage, I used a dataset from a covert spatial attention task with a Posner paradigm to explore a set of parameters (e.g., electrodes, frequencies, and trial selection strategies) and find which of them were best suited for BCI, basing the decision on classifier performance. In the second stage, I designed a Posner task and used the parameters selected in the first stage to train a classifier for the online task. In the online BCI setup, ongoing EEG activity was recorded, classified, and used to provide real-time feedback to subjects about their internal attention state, according to the output of the classifier. Overall, offline and online classification confirmed that it is possible to discern attention shifts to the left and right based on modulations of posterior alpha power. Nonetheless, the performance of the classifier was lower than expected, and possible modifications are proposed to improve it.
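The alpha-imbalance control signal can be sketched as follows. The channel names, simulated trial, and thresholding are illustrative assumptions, not the study's actual setup; the sketch only shows the principle that alpha is suppressed over the hemisphere contralateral to the attended hemifield:

```python
import numpy as np

rng = np.random.default_rng(2)

fs, n_t = 250, 1000            # one 4-s trial
t = np.arange(n_t) / fs

def alpha_power(x):
    """Mean periodogram power in the 8-14 Hz alpha band."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 8) & (freqs <= 14)
    return pxx[band].mean()

def classify(left_ch, right_ch):
    """Attending left suppresses right-hemisphere alpha (and vice versa),
    so the sign of the imbalance indicates the attended hemifield."""
    imbalance = alpha_power(left_ch) - alpha_power(right_ch)
    return "left" if imbalance > 0 else "right"

# Simulate a trial of attention to the LEFT hemifield: alpha suppressed
# over the right posterior channel relative to the left one.
alpha_osc = np.sin(2 * np.pi * 10 * t)
left_po = 2.0 * alpha_osc + rng.standard_normal(n_t)   # e.g. a left posterior channel
right_po = 0.5 * alpha_osc + rng.standard_normal(n_t)  # e.g. a right posterior channel

print(classify(left_po, right_po))
```

In the online system, this kind of decision would be computed on each incoming trial and fed back to the subject in real time.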

Attentional processes in control children as revealed by brain event-related potentials and their source localization

Praghajieeth Raajhen Santhana Gopalan (University of Jyväskylä, Finland)
Coauthors: Otto Loberg, Jarmo Hämäläinen, and Paavo H.T. Leppänen (University of Jyväskylä)
Attention is one of the fundamental elements of all aspects of cognition. It can be conceptualized as three functions: alerting, orienting, and inhibition. In this study, we investigated event-related brain potential (ERP) components elicited by the Attention Network Test (ANT) and their cortical source activations in school-aged children with typical reading skills. EEG/ERPs were measured with 128 electrodes, combined with simultaneous eye tracking, during the ANT paradigm in 81 typically developing 12-13-year-old sixth-grade children. The ERP components and the neuronal sources of the ERP activity were modelled using the MATLAB toolbox FieldTrip and the CLARA distributed source analysis method in Brain Electrical Source Analysis (BESA), respectively. Behaviorally, the attention network effects were studied using reaction time (RT) differences between the stimulus conditions, which revealed distinct, significant RT differences. The shortest and longest reaction times were observed for congruent and incongruent targets, respectively. The ERP results showed that the amplitude of the target N1 response, typically reflecting basic visual processing, was modulated during alerting and orienting, whereas the later P3 response, typically reflecting feature-based attention, was modulated during inhibition. The grand-averaged ERPs were collapsed across all conditions to identify the neuronal sources related to the target N1 period (140-200 ms) and the target P3 period (480-700 ms). Source-level statistics were calculated based on the source waveforms associated with the neuronal sources obtained from the target N1 and target P3 periods of the collapsed grand-average ERP.
The temporal sequence of neuronal source activation in response to attention was localized to the anterior cingulate, superior temporal gyrus, bilateral lingual gyrus, and anterior temporal lobe. These high-density EEG/ERP response effects and their source localization can be used to study typically developing children's attentional processes and how they relate to the three attentional functions.

Pre-stimulus connectivity patterns predict perception during binocular rivalry onset

Elie El Rassi (University of Salzburg)
Coauthors: Nathan Weisz (University of Salzburg)
Binocular rivalry is a powerful tool for studying the neural correlates of visual attention and perception. When two stimuli are presented dichoptically in a controlled setting, people report seeing one dominant percept at a time rather than a combination of the two stimuli. In a MEG study, I show that pre-stimulus connectivity patterns in category-sensitive brain regions can predict participants' percept of a face or a house at the onset of binocular rivalry. Additionally, the percept can be reliably decoded from post-stimulus evoked responses.



KEYNOTE Is the speech production network a voice production network?

Sophie Scott (University College London)
Theoretical models of speech production typically focus on linguistic factors that underpin this complex skill. Similarly, neurobiological models of speech production are typically constructed around the linguistic aspects of speech (e.g., Blank et al., 2002). However, there is increasing evidence that when we speak, we also convey emotional and personal aspects of our voices, and some of this is controlled by the same neural systems that underlie speech production. I will outline the evidence for different kinds of vocal changes that recruit the speech production network, and set out a revised model for a voice production network.

Classification of EEG signals for the study of emotional stress relief using music

Karthika Kamath (Veermata Jijabai Technological Institute)
Coauthors: Pradnya Patil, Nirmal Patil, Nikhita Raghunath, Savi Kankani (Veermata Jijabai Technological Institute)
Stress is commonly recognized as a state in which an individual is expected to perform under sheer pressure and can only marginally cope with the demands. According to current neuroscience, the human brain is the main target of mental stress, because the brain's perceptions determine whether a situation is threatening and stressful. To capture the cortical response to stress, non-invasive neuroimaging modalities such as electroencephalography (EEG) are the best suited to measure functional changes in the brain. EEG signals are often assessed in several distinct frequency bands, such as delta (1-4 Hz), theta (4-8 Hz), alpha (8-12.5 Hz) and beta (12.5-30 Hz), to examine their relationship with emotional states. Studies and observations reveal that music therapy can be used to mitigate stress levels; music has a unique link to our emotions and can therefore be an extremely effective stress-management tool. This project aims at studying the correlation between a music genre and the amount of stress it relieves. Genres such as classical, jazz and metal are chosen, and we hypothesize that listening to music can have a tremendously relaxing effect on our minds and bodies, especially slow, quiet classical music. The project is divided into several phases, starting with the data acquisition system, in which the subject's EEG signal is recorded with 8 EEG electrodes. Stress is induced in the subject with the Add-3 exercise from Daniel Kahneman's book "Thinking, Fast and Slow": four-digit numbers are read to the subject, who must add 3 to each of the digits and report the new number within a time limit maintained by a rhythmic beat. Negative feedback is given to the subject by indicating wrong answers. Afterwards, the subject listens to a selected piece of music.
In the next phase of the project the EEG data obtained is processed and relevant features are extracted and applied to a classification algorithm to validate the hypothesis.
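The band decomposition described above can be illustrated with a short sketch. This is not the authors' pipeline (the function name and layout are hypothetical); it is just a minimal pure-Python estimate of power in the standard EEG bands using a naive DFT. A real analysis would use an FFT and proper spectral estimation.

```python
import math

# Standard EEG bands (Hz), as listed in the abstract
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12.5), "beta": (12.5, 30)}

def band_powers(signal, fs, bands=BANDS):
    """Naive DFT-based band power estimate (illustrative only, O(n^2))."""
    n = len(signal)
    powers = {name: 0.0 for name in bands}
    for k in range(1, n // 2):                 # skip DC, positive frequencies only
        freq = k * fs / n                      # frequency of DFT bin k
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = (re * re + im * im) / n            # power at this bin
        for name, (lo, hi) in bands.items():
            if lo <= freq < hi:
                powers[name] += p
    return powers
```

For example, a pure 10 Hz sine sampled at 128 Hz should show essentially all of its power in the alpha band.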

Multi-Link Analysis: Brain Network Comparison via Sparse Connectivity Analysis

Alessandro Crimi (University Hospital Zurich)
Coauthors: Luca Giancardo (Italian Institute for Technology), Fabio Sambataro (Udine University), Alessandro Gozzi (Italian Institute for Technology), Vittorio Murino (Italian Institute for Technology), Diego Sona (Italian Institute for Technology)
Click here for abstract
The analysis of the brain from a connectivity perspective is unveiling novel insights into brain structure and function. Discovery is, however, hindered by the prior knowledge used to make hypotheses. On the other hand, exploratory data analysis is made complex by the high dimensionality of the data. Indeed, in order to assess the effect of pathological states on brain networks, neuroscientists are often required to evaluate experimental effects in case-control studies with hundreds of thousands of connections. In this work, we propose an approach to identify the multivariate relationships in brain connections that characterize two distinct groups, permitting investigators to immediately discover the sub-networks that carry information about the differences between experimental groups. In particular, we are interested in data discovery related to connectomics, where the connections that characterize differences between two groups of subjects are identified, rather than in maximizing classification accuracy, since high accuracy does not guarantee a reliable interpretation of the specific differences between groups. In practice, our method exploits recent machine learning techniques that employ sparsity to deal with weighted networks describing whole-brain macro-connectivity. We evaluated the technique on synthetic, functional and structural connectomes from human and mouse brain data. In our experiments, we automatically identified disease-relevant connections in high-dimensional datasets using both unsupervised and anatomy-driven parcellation approaches.

Connect: An Open Platform to Bridge Research and Citizens with Neurodevelopmental Disabilities

Aaron Engelberg (CRI - Centre de Recherches Interdisciplinaires)
Coauthors: Roberto Toro (Institut Pasteur)
Click here for abstract
Everyone with a Web browser should have the possibility to research their own questions, discuss them, collect data, and formulate and test hypotheses. As a concrete instantiation of this idea, we will develop an open health platform that allows persons affected by cognitive, learning and neurodevelopmental disabilities – patients, families, caregivers, educators, scientists – to better understand their own condition and to collectively find the best strategies to help themselves. We will seek to better understand the technological and design issues of collective citizen research by building a platform that helps create a citizen research community around the issues of cognitive, learning and neurodevelopmental disabilities. This platform should (1) provide a protocol to connect individual stakeholders: patients, families, educators, caregivers, software developers, researchers; (2) ensure the conservation of the data, its accessibility and security, and the transparency of the framework; (3) facilitate access to the platform: help participants understand data collection, visualisation, analysis and hypothesis testing; provide code samples; and provide guidance on the issues of cognitive, learning, and neurodevelopmental disabilities.


Discussions can freely continue under the hashtag #brainTC.

Machine Learning Approaches to Assist Screening in Preclinical Systematic Reviews

Alexandra Bannach-Brown (University of Edinburgh & Aarhus University)
Coauthors: SLIM Consortium
Click here for abstract
Objectives: The screening phase of a systematic review (SR) is time-consuming. The number of papers being published in the biomedical sciences is growing, which in turn extends the time required to screen articles for inclusion in a SR. The longer the SR process takes, the more out of date the results are when published. Machine learning (ML) approaches aim to reduce the time required for this stage by analysing papers and ranking documents by relevance, based on a sample of documents screened for inclusion by two independent human screeners. Methods: Here we apply 5 ML approaches to assist the screening stage in 2 preclinical neuroscientific systematic reviews. The neuropathic pain project is an update to an existing SR, where the training set of dual human-screened data is large. The depression project is a SR at the beginning of its timeline, where we determine the optimal amount of training data required to achieve a high-performing algorithm. Performance was assessed using sensitivity and specificity. Results: In the neuropathic pain dataset, the best performing ML approach achieved a sensitivity of 0.978 and a specificity of 0.708. In the depression dataset, the best performing ML approach achieved a sensitivity of 0.987 and a specificity of 0.860. The performance of the ML algorithms was improved by implementing error analysis in the depression dataset, where the ML algorithm identifies potential human errors in the test set. Conclusion: We show here that ML tools have a high level of performance and can reduce the time required to conduct the screening stage of a SR. ML tools can be integrated into existing computer-based systematic review tools for ease of use, to further reduce the human time required to conduct the screening stage of a SR, allowing for more widespread use of ML approaches in SRs.
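The sensitivity and specificity figures above come from comparing the screener's include/exclude decisions against the dual human screening. A minimal sketch of that computation (not the SLIM Consortium's code; the function name is hypothetical):

```python
def screening_performance(truth, predicted):
    """Sensitivity and specificity of an ML screener vs. human screening.

    truth, predicted: iterables of booleans (True = include the paper).
    """
    tp = fn = tn = fp = 0
    for t, p in zip(truth, predicted):
        if t and p:
            tp += 1          # correctly included
        elif t and not p:
            fn += 1          # relevant paper missed
        elif not t and p:
            fp += 1          # irrelevant paper kept
        else:
            tn += 1          # correctly excluded
    sensitivity = tp / (tp + fn)   # fraction of relevant papers caught
    specificity = tn / (tn + fp)   # fraction of irrelevant papers rejected
    return sensitivity, specificity
```

For screening, sensitivity is the critical number: a missed relevant paper biases the review, whereas a false inclusion only costs human screening time.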

Do you really want to use Granger Causality in Neuroscience? No problem (sort of)

Daniele Marinazzo (Ghent University)
Coauthors: Luca Faes (Palermo University), Sebastiano Stramaglia (Bari University)
Click here for abstract
Granger Causality in neuroscience has been the object of many vigorous objections and has created partisan camps. While some criticisms are well founded and need to be addressed, others are quite superficial, or can simply be avoided by using the right tools and terminology. I will try to give a (biased) overview of the issues that are still open, and of others that we could close.
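As a reminder of what is being debated: at its core, a Granger index for x → y is the log ratio of residual variances between a restricted model (y predicted from its own past) and a full model (y predicted from the past of both y and x). A minimal lag-1, pure-Python sketch (illustrative only; real analyses need lag selection, significance testing, and the caveats the talk discusses):

```python
import math

def gc_lag1(x, y):
    """Lag-1 Granger index x -> y: log(var_restricted / var_full)."""
    n = len(y)
    Y = [y[t] for t in range(1, n)]        # target: y[t]
    Yl = [y[t - 1] for t in range(1, n)]   # predictor: y[t-1]
    Xl = [x[t - 1] for t in range(1, n)]   # predictor: x[t-1]

    def center(v):
        m = sum(v) / len(v)
        return [u - m for u in v]

    Y, Yl, Xl = center(Y), center(Yl), center(Xl)  # centering absorbs the intercept
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))

    # Restricted model: y[t] = a * y[t-1]
    a = dot(Y, Yl) / dot(Yl, Yl)
    var_r = sum((Y[i] - a * Yl[i]) ** 2 for i in range(len(Y))) / len(Y)

    # Full model: y[t] = b * y[t-1] + c * x[t-1], via 2x2 normal equations
    s11, s12, s22 = dot(Yl, Yl), dot(Yl, Xl), dot(Xl, Xl)
    r1, r2 = dot(Yl, Y), dot(Xl, Y)
    det = s11 * s22 - s12 * s12
    b = (r1 * s22 - r2 * s12) / det
    c = (s11 * r2 - s12 * r1) / det
    var_f = sum((Y[i] - b * Yl[i] - c * Xl[i]) ** 2 for i in range(len(Y))) / len(Y)

    return math.log(var_r / var_f)
```

Simulating x driving y (but not the reverse) yields a clearly larger index for x → y than for y → x, which is the basic asymmetry the method relies on.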

The speed of light through the human visual system

Sarang Dalal (Aarhus University)
Coauthors: Britta U. Westner (Aarhus University), Christopher J. Bailey (Aarhus University), Martin J. Dietz (Aarhus University), Tzvetan Popov (University of Konstanz)
Click here for abstract
The speed at which visual information propagates down the visual pathway remains surprisingly unclear (Bair, 1999; Zeki, 2015). High-frequency responses measurable with MEG, EEG, and ERG (electroretinography) appear to reflect the precise moment of information transfer or neural processing more robustly than classic evoked responses. We aimed to characterize the timing of impulses down the visual pathway with a combination of ERG and MEG source reconstruction. Healthy participants (N=10) were presented with visual stimuli consisting of 200 light flashes of 1 ms duration with an inter-stimulus interval averaging 1 second. ERG responses were examined with respect to the corresponding responses of thalamus and visual cortex, as reconstructed with MEG. We furthermore implemented a novel neuroimaging strategy, combining beamforming with the Hilbert transform to yield analytic amplitude and phase across the whole brain for several high gamma bands (55-75 Hz, 75-95 Hz, 105-120 Hz and 120-145 Hz), assessed with nonparametric statistics based on variability across trials. Intertrial phase coherence was then calculated from these results per voxel and frequency band. The filtered and source-reconstructed data were also used to examine neural connectivity between ERG and MEG-derived cortical maps in further detail, yielding what we term retinocortical coherence. The first high gamma brain responses appear at approximately 25 ms, lagging the ~115 Hz retinal oscillatory potential by only 2-5 ms, and appear to originate from the thalamus. Responses in primary visual cortex and associative areas commence at a latency of 30 ms, i.e., within 10 ms after the retinal oscillatory potential and following the LGN by 5-15 ms (depending on frequency band).
Our results support the view that high-frequency modulations reflect the precise timing of information handling in both the cortex and its afferents: the timing of the ERG oscillatory potential indeed suggests that it arises from the output stages of the retina, a plausible thalamic response occurs only a few milliseconds later, and finally, after another short delay, a massive cortical response appears in several structures in primary and higher-order visual cortex. These high gamma band responses occurred much earlier than the classic visual response, with initial brain activity detected as early as 25-30 ms. Measuring ERG together with MEG may therefore provide a more informative measure of information processing at each stage of the visual pathway, and potentially improved diagnostics to detect disturbances of the visual pathway in disease.
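The intertrial phase coherence used here is, in essence, the magnitude of the mean unit phasor across trials at a given voxel, band, and time point. A minimal sketch of that final step (the beamforming and Hilbert-transform pipeline that produces the per-trial phases is of course far more involved):

```python
import cmath

def intertrial_phase_coherence(phases):
    """ITC across trials: |mean of e^{i*phase}|, ranging from 0 to 1.

    phases: per-trial instantaneous phase (radians) at one voxel,
    frequency band, and time point.
    """
    mean_phasor = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_phasor)
```

Identical phases across trials give an ITC of 1 (perfect phase locking to the flash), while uniformly scattered phases give an ITC near 0.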

Educating a new generation of young researchers

Human Brain Project Education Programme (Human Brain Project Education Programme & Medical University Innsbruck)
Coauthors: Theresa Rass, Elisabeth Wintersteller, Alois Saria (Medical University Innsbruck & Human Brain Project Education Programme)
Click here for abstract
In October 2013, the Human Brain Project (HBP) was launched as a European Flagship research project. The project strives to advance the fields of neuroscience, brain medicine and ICT by developing an overarching Research Infrastructure combining and interconnecting relevant methods and ideas from the different areas. In order to support this innovative and transdisciplinary approach, and to distribute knowledge about the Project's results and tools, an interdisciplinary Education Programme has been developed as an element of the HBP. The HBP Education Programme has defined a teaching and training strategy tailored to the needs of the Human Brain Project. The programme consists of various formats targeted at early-career researchers working in the main research areas of neuroscience, medicine and ICT. On the one hand, advanced schools are offered to tackle specific problems and questions of the various research fields. Once a year, a transdisciplinary Student Conference is organised to bring together young researchers from different disciplines, foster scientific exchange and provide fertile ground for new, innovative ideas. In order to also reach scientists outside their area of specialisation, a special HBP Curriculum has been developed. It combines online courses and face-to-face workshops offering basic lessons in the key disciplines, as well as modules dealing with complementary subjects like research ethics, intellectual property rights, and the translation and exploitation of research results. Over the last two years, the HBP Education Programme Office has organised 13 events with a total of 465 student participants and 185 lecturers and tutors who provided their know-how and contributed to the programme on a volunteer basis.
At all events, a number of lectures, tutorials and student talks were recorded and added to the HBP Education E-Library to make the contents accessible to the broader scientific community. A total of 260 videos are publicly available on the programme's YouTube channel. At a time when the field of science is constantly evolving and technologies are developing at an increasing pace, the mutual exchange of different perspectives and approaches plays an important role in the advance of new insights and ideas. Through its innovative and interdisciplinary training formats, the HBP Education Programme contributes to the education of a new generation of young researchers. It provides them with the necessary skills to see the bigger picture and interconnect different aspects of a problem, and thus to work successfully within the framework of research infrastructures like that of the Human Brain Project.

Systematic non-stationarity of alpha rhythms in the human brain: Long term frequency sliding and power changes

Christopher Benwell (University of Glasgow)
Coauthors: Christian Keitel, Raquel E London, Chiara F Tagliabue, Domenica Veniero, Joachim Gross, Gregor Thut
Click here for abstract
An implicit assumption underlying current theories and the analysis techniques employed by many electro- & magnetoencephalographic (EEG/MEG) studies investigating neural oscillations is that, in the absence of experimental manipulation, the properties of a neural ‘oscillator’ measurable at the scalp remain approximately stationary over time. Here, across several EEG and MEG experiments, we show that this assumption is false for one of the most prominent frequency bands, the alpha-band. Specifically, alpha power increases and instantaneous frequency decreases systematically over the course of a typical experimental session (~1-2 hours). Our results suggest the existence of two non-stationary endogenous processes inherent in alpha-band activity. Source-space analyses revealed that these processes may occur in partially overlapping cortical networks with a common right-lateralized focus along the ventral visual processing stream. As well as providing novel insight into the intrinsic properties of widespread neural networks, the findings are of fundamental importance for the analysis and interpretation of studies aimed at identifying functionally relevant oscillatory networks, as well as at driving these networks through external entrainment.

Variability and reliability of effective connectivity within the core default mode network: An extensive longitudinal spectral DCM study

Hannes Almgren (Ghent University)
Coauthors: Frederik Van de Steen (Ghent University), Simone Kühn (Max Planck Institute for Human Development, University Clinic Hamburg-Eppendorf), Adeel Razi (University College London, University of Engineering and Technology, Pakistan), Karl Friston (University College London), Daniele Marinazzo (Ghent University)
Click here for abstract
Effective connectivity within resting state networks has been the subject of many studies. Spectral DCM for resting state fMRI (spDCM; Friston et al., 2014) is a method to infer effective connectivity within intrinsic brain networks. Most research applying spectral DCM has focused on group-averaged connectivity within the default mode network (DMN; e.g., Sharaev et al., 2016); however, no study has yet investigated subject- and session-specific differences and reliability in effective connectivity. In the present study we investigated whether, and to what extent, effective connectivity patterns within the core default mode network are stable both within and between subjects. To this end, we applied and combined spDCM analyses of four extensive longitudinal resting state fMRI datasets. These datasets allowed us to infer robust connectivity estimates for each subject, and to draw conclusions beyond specific dataset features. Preprocessing included slice-time correction, spatial realignment, coregistration with the anatomical image, normalization to MNI space, and spatial smoothing. To infer session-specific effective connectivity, spDCM was applied to each session separately. A hierarchical empirical Bayes (PEB) model (Friston et al., 2016) was then specified to estimate subject- and group-level connectivity. Across datasets, individuals consistently showed hemispheric asymmetry of effective connectivity in the core default mode network. Principal component analysis (PCA) revealed that differences in hemispheric asymmetry were the main source of between-subject variability. Connectivity patterns were very similar between subjects once hemispheric asymmetry was taken into account (average between-subject correlation = 0.80). Hemispheric asymmetry was found to be reliable for most subjects.
Also, individual connections arising from either the right or left IPC showed high sign-stability (i.e., positive in more than 70% of sessions), which coincided with the individual's asymmetry. The overall stability of the asymmetry of effective connectivity in the DMN could speak to the use of this characteristic as a biomarker in future studies, as well as a relevant covariate. We also found that some subjects showed more variable hemispheric asymmetry. Finally, we found that processing choices (e.g., global signal regression and ROI size) had little effect on inference and reliability of connectivity for the majority of subjects. (Empirical) Bayesian model reduction increased reliability (within subjects) and stability (between subjects) of connectivity patterns. Friston, K. J., et al. (2014). A DCM for resting state fMRI. NeuroImage, 94, 396–407. Friston, K. J., et al. (2016). Bayesian model reduction and empirical Bayes for group (DCM) studies. NeuroImage, 128, 413–431. Sharaev, M. G., et al. (2016). Effective Connectivity within the Default Mode Network: Dynamic Causal Modeling of Resting-State fMRI Data. Front. Hum. Neurosci., 10(14).


Discussions can freely continue under the hashtag #brainTC.

KEYNOTE Building a social brain

Rebecca Saxe (MIT)
Humans spend a lot of time looking at other human faces. Correspondingly, there are multiple regions of the human brain that are face selective. A fundamental question is: how do adult human brains come to have this functional organization? How does the innate architecture of the human brain interact with experience of the visual world to create an adult face-expert brain? To test this question, we conduct neuroimaging experiments in human infants looking at faces. We find that cortical regions including the fusiform face area (FFA), the superior temporal sulcus (STS) and the medial prefrontal cortex (MPFC) show increased activity when babies view dynamic faces, compared to natural scenes, by age 6 months. What determines this pattern of cortical organization? One view is that extensive learning of the non-random statistics of face images is what drives cortical specialization. By contrast, I will argue that specialization is also influenced by the functional role of faces in social interaction, and thus by brain architecture connecting visual regions to the parts of the brain that process multimodal signals of social reward.

Addressing the reliability fallacy: Similar group effects may arise from unreliable individual effects

Vanessa Teckentrup (University of Tübingen)
Coauthors: Juliane Fröhner (Technische Universität Dresden), Caroline Burrasch (University of Tübingen), Michael N. Smolka (Technische Universität Dresden), Nils B. Kroemer (University of Tübingen)
Click here for abstract
To cast valid predictions of future behavior or diagnose mental disorders, the reliable measurement of a “biomarker” such as the brain activation to prospective reward is a prerequisite. Surprisingly, only a small fraction of functional magnetic resonance imaging (fMRI) studies report or cite the reliability of the brain activation maps involved in group analyses. Using simulations and exemplary multi-session data from healthy participants performing reward tasks, we demonstrate in our brainTC talk that reproducing a group activation map over sessions is not a sufficient indication of reliable measurements at the individual level. Instead, selecting regions solely based on significant main effects across persons may yield estimates that fail to reliably capture unique individual variance, for example in the subjective evaluation of a monetary offer. Critically, we show that allowing for inter-individual variation in brain response may substantially improve the reliability of an alleged fMRI-based biomarker. Collectively, our results call for more attention to the reliability of candidate biomarkers at the level of the individual, including potential means of improving that reliability. Thus, caution is warranted in employing brain activation patterns prematurely for clinical applications such as diagnosis or tailored interventions before their reliability has been conclusively established. To facilitate assessing and reporting the reliability of fMRI contrasts in future studies, we provide a toolbox that incorporates common measures of global and local reliability. More broadly, we anticipate that by systematically applying open software tools to available data, we can improve the reliability and, ultimately, the replicability of prospective biomarkers by identifying key limiting factors in experimental design, data acquisition, and analysis.

Making Artificial Connectome Based on Deep Structure Networks

Winpen Hann (Mindputer Lab, Daiseer Bio-Science Research Co. LTD)
Coauthors: Haina Hwa
Click here for abstract
Deep Structure Network (DSN) is a new brain-like modelling technology. The first prototype of a DSN was successfully completed in 2015; it could superimpose multiple single networks into a composite network. The newly-formed composite network could preserve the signal processing of the original single networks and operate them independently, and tests demonstrated that when signals competed within the composite network, one could yield or take precedence without signal mix-up. This prototype is named the deep structure brain model, or Mindputer for short. The unique advantage of Mindputer is that it can assemble an artificial connectome at the cellular and synaptic level. Mindputer Lab performed several experiments on assembling artificial connectomes with this prototype during 2016-2017. It has been demonstrated that the artificial connectome can transparently show micro activity circuits, the boundaries between units and layers, the tracing of function across units and layers, and the arborized structure of dendrites. This research is ongoing; we will present part of it at the Brain Twitter Conference.

KEYNOTE Open science and related brain topics

OHBM (Organization for Human Brain Mapping)
What is open science? Paraphrasing Kirstie Whitaker (@kirsti_j) from a recent OHBM media post, open does not only refer to free access to research articles, scientific data, code, reviews and educational resources. It also refers to making science inclusive and accessible to everyone, soliciting input from non-scientists and reaching out to under-represented groups. Finally, open science includes reporting studies with more transparency, allowing other researchers to clearly understand and possibly reproduce a study on their own. The Organization for Human Brain Mapping (OHBM) is contributing to all of these aspects of open science through various outlets. Over the course of ten tweets we will cover the flagship OHBM open science initiatives (COBIDAS, Hackathon, OnDemand, Replication Award), as well as the different ways in which OHBM promotes diversity (Diversity and Gender Committee) and science outreach (BrainMapping blog). We’ll save the last tweet for our most recent (and most ambitious) open science initiative yet. We can’t wait to share it with the Twittersphere!


Discussions can freely continue under the hashtag #brainTC.

The Brain Catalogue: An open portal for comparative neuroanatomy research

Katja Heuer (Institut Pasteur, Max Planck Institute for Human Cognitive and Brain Sciences)
Coauthors: Marc Herbin (MNHN), Mathieu Santin (ICM), Roberto Toro (Institut Pasteur)
Click here for abstract
We belong to a lineage some 60 million years old: mammals have colonised almost all ecosystems on Earth and show an incredible diversity, ranging from the small pygmy shrew to the 33-metre-long blue whale. Our aim is to create an open-access collection of high-quality vertebrate brain MRIs: the Brain Catalogue. We are scanning the Vertebrate Brain Collection of the National Museum of Natural History in Paris, which contains close to 2,000 different specimens. We are also developing a Web portal, in the spirit of citizen science, that provides easy access to the entire collection of digitised brains. The current release contains 32 specimens, including a Bottlenose Dolphin, a Black Rhinoceros, a Sloth Bear, a Leopard, and even a Thylacine – a marsupial that went extinct in the early 20th century. All MRI scans, as well as the segmentations and cortical surface reconstructions, can already be accessed through the portal.

MNE-CPP: Framework for Real-Time MEG/EEG Data Processing

Lorenz Esch (Technische Universität Ilmenau)
Coauthors: Christoph Dinh (Martinos Center), Jens Haueisen (TU Ilmenau), Matti Hämäläinen (Martinos Center)
Click here for abstract
From a neuroscientific point of view, the potential use cases for real-time neuronal data processing are manifold. Such approaches not only enable faster and more intuitive insight into instantaneous brain function but, more importantly, create the foundation for a wide range of neurofeedback scenarios. Due to their high temporal resolution, magnetoencephalography (MEG) and electroencephalography (EEG) are ideal candidates for following brain activation in real time. MNE-CPP is an open-source framework for building software applications for MEG/EEG acquisition and real-time as well as offline analysis. The key features of the MNE-CPP project are its modular structure and its low number of external dependencies; the latter only include Qt5 for Graphical User Interface (GUI) programming and the Eigen library for linear algebra. Another characteristic of MNE-CPP is its ability to meet clinical software approval requirements. MNE-CPP is organized in two separate layers. The library layer hosts all major functionalities as sub-libraries, e.g., for visualization, data processing, etc. The application layer includes applications and examples built by the MNE-CPP community on top of the library layer. Two examples are MNE Scan (data acquisition and real-time processing) and MNE Browse (offline data analysis). Recent advances in MNE-CPP include new ways of processing and visualizing MEG and EEG data offline and in real time. Dipole fitting was added as a new feature and can now be visualized with our 3D library. The 3D library itself underwent major restructuring and now includes real-time cortical smoothing for both sensor- and source-level data. Furthermore, we added a forward solver and an individual BEM warping algorithm to the library layer. The connectivity library was also improved. We introduced a new sub-library called “Deep”, providing an interface to Microsoft’s deep learning framework CNTK. MNE-CPP is still undergoing major development efforts.
Contributions by the community are therefore crucial and encouraged. Over recent years we primarily focused on EEG and MEG as supported modalities. We plan to extend MNE-CPP’s generic capability to handle arbitrary data streams from a wider range of electrophysiological devices. Our long-term goal is to continue developing MNE-CPP in the direction of a multi-purpose acquisition, processing and visualization framework. We also plan to support additional measurement modalities, which in the future will include invasive electrophysiological measurements. At the same time, we will keep a project structure and organization that meet clinical software approval requirements. This will further facilitate the deployment of software built with MNE-CPP in a broad field of operations, including clinical ones.

Optimal control theory applied to neuroscience, a new perspective using active inference

Manuel Baltieri (University of Sussex)
Coauthors: Christopher L. Buckley (University of Sussex)
Click here for abstract
Over the last few decades, optimal control theory has become the dominant framework for the analysis and modelling of motor control in neuroscience. Recent developments highlight the importance of considering probabilistic formulations of optimal control (stochastic optimal control, SOC) to better model phenomena of the real world, which often involve ambiguous, noisy and uncertain information. The vast majority of models in SOC and neuroscience, however, still rely heavily on a paradigm based on Linear Quadratic Gaussian (LQG) control (Gazzaniga, 2004), whereby estimation (or perception) and (motor) control are represented as separate and independent processes, following the ideas of "certainty equivalence" and the "separation principle". The limitations of this approach are mainly due to its formulation in terms of linear dynamical systems with additive Gaussian noise. Some recent work, however, has highlighted other drawbacks of proposals based on LQG, mainly the lack of dependence between control signals, or actions, and the level of uncertainty of observations (i.e., the volatility of sensory information). We will show how this is due to the very nature of the Kalman filters used for estimation in LQG-based frameworks. While optimal under a set of conditions well documented in the literature, Kalman filters have a series of limitations in a biological context. Recent efforts to formulate perception and action as Bayesian inference processes, as in active inference (Friston, 2010), outclass Kalman filters, and thus LQG, in tasks where sensitivity to the uncertainty of the measurements is crucial for motor control. The implications for the neurosciences are potentially enormous, with recent examples already showing applications to studies of psychiatric disorders in which this sensitivity in motor control is a key symptomatic component (Lawson et al., 2017). Karl Friston. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127–138, 2010.
Michael S. Gazzaniga. The cognitive neurosciences. MIT Press, 2004. Rebecca P. Lawson, Christoph Mathys, and Geraint Rees. Adults with autism overestimate the volatility of the sensory environment. Nature Neuroscience, 20(9):1293, 2017.

Neural basis of Need for Cognitive Closure in bargaining behaviour: an fMRI study

Adriano Acciarino (Sapienza University of Rome)
Coauthors: Marco Tullio Liuzza, Lucia Mannetti, Salvatore Maria Aglioti, Emiliano Macaluso
Click here for abstract
The principal aim of this study is to look for neural differences in bargaining behaviour in the Ultimatum Game (UG; Güth et al., 1982), with ingroup vs outgroup proposers, for responders with different levels of Need for Cognitive Closure (NCC; Kruglanski, 1990). NCC is a motivational psychological construct defined as “the desire for an answer (any answer) on a given topic instead of confusion and ambiguity”. 37 participants (17 with high levels of NCC, “Hi-NCC”, and 20 with low levels, “Lo-NCC”) played the role of receivers. Minimal group manipulation: participants wore purple or beige t-shirts, and the faces in the stimuli, taken from the neutral expressions of the Karolinska database, could have a purple or beige frame. Cover story: if the frame was the same colour as the participant’s t-shirt, proposer and receiver were similar in personality and temperament on the basis of the NCC scale (Webster & Kruglanski, 1994), which was presented to participants as a general personality questionnaire. We also told our participants that the proposers’ behaviour was modelled on an earlier experiment performed by the persons portrayed in the pictures. Starting from a virtual amount of 10€, the virtual proposers’ offers were: give 1€ and keep 9€; give 3€ and keep 7€; give 4€ and keep 6€; give 5€ and keep 5€. Hi-NCC participants rejected the 3:7 offer significantly more often, whether it was made by outgroup or ingroup proposers. We found a significant cluster (surviving both FWE and FDR corrections) in the posterior rostral Medial Frontal Cortex (prMFC) when Lo-NCC participants received an offer from an ingroup vs outgroup proposer, regardless of the offer. We also found a significant cluster in the right Fusiform Gyrus (rFG) when Hi-NCC vs Lo-NCC participants received the 4:6 offer from an outgroup proposer.
The activation of the prMFC could be due to the perception of monetary loss, while the activation of the rFG could be explained as the perception of the outgroup proposer as cooperative when making the relatively fair 4:6 offer.

Making computational neuroscience research reproducible - A Docker based approach

Felix Z. Hoffmann (Frankfurt Institute for Advanced Studies)
Click here for abstract
Results in computational neuroscience research are often the product of complex numerical simulations. The ability to successfully reproduce these results, a core requirement that all research should satisfy, does not depend only on the availability of the research code: in many cases the computational environment consists of a highly specialized toolset that may be difficult or even impossible to replicate. Here I show how Docker, an operating-system-level virtualization tool, can be employed to package the computational environment of a study and make it accessible to others. To demonstrate this concept, I developed a prototype that includes (1) example computations, (2) generated data, (3) documentation in the form of an electronic lab notebook and (4) the full computational environment. Published in this form, the computational results can easily be reproduced, independent of the platform and without the need to meet any dependencies other than Docker itself. This approach offers a promising perspective for computational neuroscience, as increased accessibility of results and of the computations that generate them will benefit the transparency and, ultimately, the quality of the research output.
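The packaging approach can be sketched as a minimal, hypothetical Dockerfile; the base image, package pins, paths, and notebook name below are illustrative assumptions, not the actual prototype:

```dockerfile
# Hypothetical sketch: pin a base image and the scientific toolchain so
# others can rebuild the exact computational environment.
FROM python:3.6-slim

# Pin dependency versions for reproducibility (illustrative packages only)
RUN pip install numpy==1.14.0 scipy==1.0.0 jupyter==1.0.0

# Bundle the research code, generated data, and the electronic lab notebook
COPY simulations/ /study/simulations/
COPY data/ /study/data/
COPY notebook/ /study/notebook/
WORKDIR /study

# Reproduce the results with a single command, independent of the host OS
CMD ["jupyter", "nbconvert", "--to", "notebook", "--execute", "notebook/results.ipynb"]
```

Anyone with Docker installed could then rebuild and rerun the study with `docker build` and `docker run`, without resolving any other dependencies by hand.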

A flexible simulation framework to probe the neural mechanisms of a cognitive task

Antonio Ulloa (Neural Bytes LLC and National Institute on Deafness and Other Communication Disorders (NIH))
Coauthors: Qin Liu, Paul Corbitt & Barry Horwitz (National Institute on Deafness and Other Communication Disorders, NIH)
Click here for abstract
Introduction. There is currently a need for computational tools that combine empirical results from research studies originating from different modalities and organisms into single computational entities. These computational entities should allow the simulation of one or more cognitive tasks and produce electrophysiological, behavioral, and neuroimaging data as obtained in empirical studies. This allows testing and validation of hypotheses concerning how neural elements interact to give rise to task execution. Methods. We developed a simulation environment that allows the computational implementation of cognitive task hypotheses. Our simulation environment encompasses four elements: (a) a neuronal population model that computes brain activity in a cortical column; (b) a structural connection model based on primate neuroanatomy and neurophysiology that indicates how local populations are linked to execute a given task; (c) a structural connection model based on an empirical connectome that provides a "rest of the brain" skeleton; and (d) a forward model that transforms the simulated synaptic activity across the brain into neuroimaging time-series. Each one of these elements is flexible in that one can use any level-appropriate model to represent a given element. Results. 
We have used our framework in four different research studies: (1) We examined how neural noise from non-task brain regions affects execution of a visual short-term memory task (Ulloa and Horwitz, Frontiers in Neuroinformatics 2016) and, in turn, (2) how task execution changes the intrinsic activity in non-task brain regions (Ulloa and Horwitz, bioRxiv 250894); (3) we investigated the neuronal mechanisms underlying multiple-item working memory tasks (Qin, Ulloa, Horwitz, Journal of Cognitive Neuroscience, 2017); and (4) we implemented a multi-layer computational model to make predictions regarding layer-specific functional MRI during a simulated high-field fMRI experiment (Corbitt, Ulloa, Horwitz, submitted). Conclusions. Our brain simulation framework combines data from non-human neuroanatomy and electrophysiology, and human tractography and behavior, into a single computational entity to test and validate hypotheses concerning how neural elements interact to give rise to specific cognitive tasks. Acknowledgements. This research was funded by the Intramural Research Program of the National Institute on Deafness and Other Communication Disorders.

Maturation Trajectories of Cortical Resting-State Networks Depend on the Mediating Frequency Band

Tal Kenet (Massachusetts General Hospital)
Coauthors: S. Khan (1,4,5), J. A. Hashmi (1,4), F. Mamashli (1,4), K. Michmizos (1,4), M. G. Kitzbichler (1,4), H. Bharadwaj (1,4), Y. Bekhti (1,4), S. Ganesan (1,4), K. A. Garel (1,4), S. Whitfield-Gabrieli (5), R. L. Gollub (2,4), J. Kong (2,4), L. M. Vaina (4,6), K. D. Rana (6), S. S. Stufflebeam (3,4), M. S. Hämäläinen (3,4), T. Kenet (1,4). S. Khan and J. A. Hashmi contributed equally. Affiliations: (1) Department of Neurology, MGH, Harvard Medical School, Boston, USA; (2) Department of Psychiatry, MGH, Harvard Medical School, Boston, USA; (3) Department of Radiology, MGH, Harvard Medical School, Boston, USA; (4) Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, USA; (5) McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, USA; (6) Department of Biomedical Engineering, Boston University, Boston, USA
Click here for abstract
The functional significance of resting state networks and their abnormal manifestations in psychiatric disorders are firmly established, as is the importance of the cortical rhythms in mediating these networks. Resting state networks are known to undergo substantial reorganization from childhood to adulthood, but whether distinct cortical rhythms, which are generated by separable neural mechanisms and are often manifested abnormally in psychiatric conditions, mediate maturation differentially, remains unknown. Using magnetoencephalography (MEG) to map frequency band specific maturation of resting state networks from age 7 to 29 in 162 participants (31 independent), we found significant changes with age in networks mediated by the beta (13-30Hz) and gamma (31-80Hz) bands. More specifically, gamma band mediated networks followed an expected asymptotic trajectory, but beta band mediated networks followed a linear trajectory. Network integration increased with age in gamma band mediated networks, while local segregation increased with age in beta band mediated networks. Spatially, the hubs that changed in importance with age in the beta band mediated networks had relatively little overlap with those that showed the greatest changes in the gamma band mediated networks. These findings are relevant for our understanding of the neural mechanisms of cortical maturation, in both typical and atypical development.

Familiarity and implicit memory

Asiya Gul (Wilfrid Laurier University)
Coauthors: Jeffery A. Jones
Click here for abstract
At present, there is considerable controversy regarding how familiarity operates to support recognition. Despite much work on familiarity, it is unclear whether it is an expression of explicit memory or implicit memory. In the present study, we tested several hypotheses regarding the cognitive processes reflected in the frontal FN400 ERP component, which is elicited in the 300-500 ms post-stimulus window, and in the fluency effect, which is linked to perceptual fluency and elicited at left parietal sites 200-400 ms post-stimulus. First, the FN400 component reflects familiarity elicited by the stimulus that emanates from recent exposure or repetition. Second, the FN400 component reflects conceptual implicit memory initiated by the stimulus, which may or may not emanate from recent exposure. Third, the fluency effect is independent of the old/new effect and reflects perceptual fluency elicited by the stimulus. Fourth, we asked whether there is an association between the FN400 effect elicited during first exposure (the encoding phase) and during the test phase (recognition). The present study extends the findings of Gul & Jones (2017) using the same methodology but different stimuli: meaningless novel stimuli (fractals) were used instead of pictures of common objects. Our ERP results suggest that the fluency effect was independent of repetition; however, the FN400 effect, driven by repetition and/or conceptual priming, correlated with behavioral indicators of recognition, whereas the recollection effect was absent. Thus, our behavioral and ERP results suggest that the neural correlates of conceptual implicit memory can influence decisions driven by explicit memory. Moreover, the fluency effect plays a strong role in setting up encoding strategies, and together the fluency and FN400 effects support recognition.


Discussions can freely continue under the hashtag #brainTC.

KEYNOTE Interactions between Neuroscience Problems and Technical Innovations in Noninvasive Studies of Human Electrophysiology

Matti Hämäläinen (Harvard Medical School, Massachusetts General Hospital)
Sometimes dismissed as a fishing expedition, exploratory data analyses, or data-driven approaches, can be a valuable source of information for gaining new insights into brain function and for generating ideas for new experiments, testable hypotheses, and analytical tools. On many occasions, exploration of MEG and EEG data without an initial bias towards a premeditated hypothesis has given rise to new findings, which have later been confirmed with additional experiments or even invasive studies in animal models. The new tools developed to answer the pertinent questions can in turn stimulate new ideas for studies, and thus the interaction of neuroscience problems and technical innovations can extend the territory accessible to noninvasive studies of human brain function.

Individual Differences in Choice, Subjective Valuation, and Dopamine Signaling as Potential Markers for Drug Abuse Risk

Christopher Smith (Vanderbilt University)
Coauthors: Linh C. Dang (Vanderbilt), Amanda Elton (UNC Chapel Hill), Gregory R. Samanez-Larkin (Duke), Charlotte A. Boettiger (UNC Chapel Hill), David H. Zald (Vanderbilt)
Click here for abstract
People vary in their risk for drug abuse. Delay discounting of rewards and subjective responses to drugs of abuse have both been related to increased substance use risk. Here, I will present work showing that delay discounting behavior in humans is modulated by age, estradiol, and putative prefrontal dopamine signaling (assessed via genetics) and could serve as a useful intermediate phenotype for alcohol use disorder risk in adults. I will also discuss how individual differences in dopamine signaling measured with positron emission tomography relate to delay discounting behavior, subjective responses to d-amphetamine, and genetic variation. The importance of considering individual differences when investigating dopamine-dependent cognitive and affective processes will also be addressed. These findings have implications for personalizing dopaminergic treatments for a variety of psychiatric diseases.

What can neuroscience reveal about #charity giving?

Jo Cutler (University of Sussex)
Coauthors: Dan Campbell-Meiklejohn (University of Sussex)
Click here for abstract
Much research has been done using neuroscience techniques, in particular fMRI, to investigate why and how people give to charities (Cutler & Campbell-Meiklejohn, in prep). However, this is often not communicated to those working in fundraising or elsewhere in the charity sector. The broad, global audience of #brainTC provides a valuable opportunity to combine key findings from the research with implications for those involved with charities. The first study which used fMRI to understand giving found overlapping activity in the striatum, a key region for reward processing, both when people received money and gave it to charities (Moll et al., 2006). Once the involvement of a certain region in giving has been established, research can study what factors increase or decrease activation. This has been used in relation to the striatum to separate different motives for giving, which would not be separable by looking at behaviour alone (Harbaugh et al., 2007; Kuss et al., 2013). Neuroscience can also be applied to study particular tendencies in charitable giving, such as the ‘identifiable victim’ effect – where people give more to single beneficiaries particularly if they know more about them. This has been linked to stronger reward-activation during donations to identifiable victims, suggesting people feel even better about giving when they know more about who is receiving the donation (Genevsky et al., 2013). Neuroscience can also reveal differences in why people decide to give or why some people might be more generous than others. For example, differences in the brain’s connections during giving may reveal whether someone is generous because they feel empathy or a sense of reciprocity. People also showed differences in these motivations depending on how generous or selfish they generally are (Hein et al., 2016). 
In conclusion, neuroscience can contribute to understanding charitable giving by revealing motivations and differences between people which may not be visible through looking at their behaviour or asking them why they give. It suggests that donating is rewarding and that different levels of reward are related to aspects of campaigns which increase donations. By communicating the results of research we can help fundraisers improve their ability to support those in need and also ensure donors feel great about the positive impact they make by giving.

PET imaging in motion - a new type of brain scanner

Julie Brefczynski-Lewis (West Virginia University)
Coauthors: Chris Bauer (WVU), Alexander Stolin (WVU) Nanda Siva (WVU), Thorsten Wuest (WVU), Xiaopeng Ning (WVU) Jinyi Qi (UC Davis), Sergei Dolinsky (GE Global Research), Todd Danko (GE Global Research) Mark Muzi (UW), Paul Kinahan (UW), Bijoy Kundu (UVA), Stan Majewski (UVA)
Click here for abstract
Data from our wearable PET scanner prototype represent a promising step towards a novel neuroimaging technology. Our scanner consists of a ring of PET detectors co-registered with the wearer’s head to maintain motion tolerance. In 9 participants, we measured F18-FDG activity specific to leg motor cortex during a locomotion task in which they stood upright and walked in place (a 17.5% increase compared to whole brain, p<0.01; no significant increase in the resting state; within the field of view, task-related elevations were also observed in the supplementary motor area and precuneus). Furthermore, in a participant with a prosthetic leg (hip to foot), leg motor cortex activation was limited to the representation of the intact leg. To our knowledge, this is the first demonstration of PET scanning during an upright motor task. Advantages of our system include functionality at very low dose: because the detectors are close to the head, the FDG dose in our study was only ~10% of the dose used for a standard diagnostic PET scan. Combined with advances in FDG delivery and dynamic imaging techniques, we anticipate being able to increase temporal resolution and thus combine multiple tasks into one scan, eliminating the need for a separate baseline scan in future studies (see Villien et al. 2014, Hahn et al. 2016). Upright motion-enabled PET scans would carry a low to negligible-risk dose and be amenable to longitudinal imaging for monitoring skill acquisition, recovery from neural insult, and other neurological development studies. Another advantage of PET imaging is the ability to scan the whole brain, including deep brain structures that are not observed with wearable technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS); our system even has the potential to be combined with these technologies.
In addition, a multitude of radioactive ligands exist that target not only metabolism but also neurotransmitter-based systems, inflammatory processes, and even gene expression. This will allow our technology to contribute to mechanistic models of natural behavior, mental health, and neurological disorders such as addiction and stroke, with imaging during actual behavioral performance. Free movement allowing gestures, eye contact and natural postures, combined with virtual reality that can bring any environment into the lab, may enable research into behaviors never before studied with neuroimaging. Challenges will include supporting the weight of the additional detectors needed for complete brain coverage, for which innovative robotic strategies are being planned. A potential trade-off between sensitivity and degree of mobility may need to be considered, resulting in multiple configurations to address specific imaging needs.



KEYNOTE Mapping the spatio-temporal dynamics of vision in the human brain

Aude Oliva (MIT)
Every perceptual and cognitive function in humans is realized by neural population responses evolving over time and space in multiple brain regions. I will describe a brain mapping approach that combines magnetoencephalography (MEG) and functional MRI (fMRI) to yield a spatially and temporally integrated characterization of neural representations during visual perception and memory. Determining the duration and sequencing of processes at the scale of the whole human brain provides insights to develop better tools for diagnosing disorders, or pinpointing impairments as a precursor to therapeutic interventions.

Adaptive User Interfaces: Towards a new generation of Affective Interfaces

Jaime Andrés Riascos Salas (Federal University Rio Grande do Sul)
Coauthors: Dante Barone & Luciana Nedel (Institute of Informatics - UFRGS)
Click here for abstract
In recent decades, the rapid and considerable growth of Artificial Intelligence (AI) and its approaches has influenced several research fields, such as economics, engineering and medicine, helping researchers to explore new possibilities in their work. Advances in Human-Computer Interaction (HCI) likewise depend on AI and Machine Learning (ML), which open countless possibilities for enhancing the interaction and communication between machines and humans. However, we need to develop interfaces that can adapt themselves to humans' limitations and capabilities, because humans cannot process information as machines do. The typical means of interaction and communication between humans and machines is the Graphical User Interface (GUI). Taking advantage of AI, a step forward is to include smart behavior that adapts static GUIs to the user's preferences and state. We therefore propose an Adaptive User Interface (AUI) to enhance the human-computer interface, making it more affective by taking the user's cognitive state into account. This interface uses information about how much data should be shown without overloading the user, as well as a relevance weight, set by the user a priori, that represents how relevant each piece of information is in a given situation (task relevance). In this way, we can obtain an interface that shows the most important data with the least cognitive load. We indicate how cognitive load can be estimated through different physiological and psychological measures such as EEG, eye tracking, electrodermal activity and the NASA TLX. Finally, we describe a possible use case where this proposal can be applied.

Relationships between age and the brain’s functional connectome in young children

Xiangyu Long (University of Calgary)
Coauthors: Catherine Lebel (University of Calgary)
Click here for abstract
Introduction: It is well known that brain function and structure develop significantly during infancy and early childhood. However, it is challenging to investigate the trajectory of brain development during this period using non-invasive neuroimaging tools. In the present study, we aimed to investigate relationships between the brain’s functional connectome and age in young children. Materials & Methods: This study used 183 datasets from 72 participants (33 females; age range: 2.0 to 7.0 years). Resting-state functional MRI (rs-fMRI) data were obtained. An individual functional connectome matrix (90 x 90) was generated for each participant by calculating the functional connectivity (FC) between each pair of subregions within the Automated Anatomical Labeling template, excluding the cerebellum. Negative values were removed from the connectome matrix, and the matrix was thresholded at r>0.17 (p<0.01). Graph theory metrics were calculated for each participant’s connectome matrix. A robust linear correlation analysis was performed between age and all nodal and global metrics, as well as the connectivity matrix (i.e., the edges of the graph), controlling for sex and handedness. P values for the correlation analyses were obtained from a permutation test with 1000 permutations and thresholded at p < 0.01, uncorrected. As a follow-up analysis, we tested a quadratic trajectory for regions with significant age-related changes. Results: Three regions in the right frontal cortex presented consistent changes with age: negative correlations between age and both clustering coefficients and global efficiency, and positive correlations between age and shortest path length. No significant correlations were found between age and other graph metrics.
Interhemispheric connections (edges) and anterior intrahemispheric connections showed positive correlations between FC and age, while posterior intrahemispheric connections showed mostly negative correlations between FC and age. Conclusions: In the present study, we identified brain regions whose functional connectome metrics correlated with age during early childhood. Decreased local clustering and increased path length in the right frontal cortex imply a shift from a more local towards a more distributed pattern of connectivity, as observed in older children. Decreased nodal efficiency with age has previously been seen in infants and may reflect local changes in frontal areas. Decreasing nodal efficiency and increasing path length in frontal areas may reflect growing connections overall, since they are coupled with increasing FC strength within frontal areas. This frontal development likely also underlies cognitive and emotional development during this time. Future studies linking these age-related changes to behavioral and cognitive development will better elucidate the implications of these changes.
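The preprocessing and graph-metric step described in the Methods can be illustrated with a minimal sketch (not the authors' code; the toy matrix, binarization after thresholding, and the particular metric definitions are assumptions):

```python
import numpy as np

def connectome_graph_metrics(fc, threshold=0.17):
    """Binarize an FC matrix as in the abstract (negatives removed,
    r > threshold kept) and compute simple graph metrics."""
    A = np.array(fc, dtype=float)
    np.fill_diagonal(A, 0.0)
    A[A < 0] = 0.0                      # remove negative correlations
    A = (A > threshold).astype(int)     # threshold and binarize
    n = A.shape[0]

    # Clustering coefficient per node: closed triangles / possible triangles
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0
    possible = deg * (deg - 1) / 2.0
    clustering = np.where(possible > 0, triangles / np.maximum(possible, 1), 0.0)

    # Shortest path lengths via breadth-first search (unweighted graph)
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.nonzero(A[u])[0]:
                    if dist[s, v] == np.inf:
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt

    finite = dist[np.isfinite(dist) & (dist > 0)]
    char_path = finite.mean() if finite.size else np.inf

    # Global efficiency: mean inverse shortest path length over node pairs
    inv = np.zeros_like(dist)
    mask = np.isfinite(dist) & (dist > 0)
    inv[mask] = 1.0 / dist[mask]
    efficiency = inv.sum() / (n * (n - 1))
    return clustering, char_path, efficiency

# Toy 4-region example: a triangle (0,1,2) plus region 3 attached to region 0
fc = np.array([[0.0, 0.5, 0.5, 0.5],
               [0.5, 0.0, 0.5, 0.0],
               [0.5, 0.5, 0.0, 0.0],
               [0.5, 0.0, 0.0, 0.0]])
clustering, char_path, efficiency = connectome_graph_metrics(fc)
```

In the study these per-participant metrics would then be correlated with age; the sketch only shows how a single connectome matrix is reduced to nodal clustering plus global path length and efficiency.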

Calcium-dependent molecular fMRI

Benjamin Bartelle (MIT)
Coauthors: Satoshi Okada, Nan Li, Vincent Breton-Provencher, Mriganka Sur & Alan Jasanoff
Click here for abstract
Calcium ions are ubiquitous signaling molecules in all multicellular organisms, where they mediate diverse aspects of intracellular and extracellular communication over widely varying temporal and spatial scales. Although techniques for mapping calcium-related activity at high resolution by optical means are well established, there is currently no reliable method to measure calcium dynamics over large volumes in intact tissue. Here we address this need by introducing a biosensor comprised of magnetic calcium-responsive nanoparticles (MaCaReNas) that can be detected by magnetic resonance imaging (MRI). MaCaReNas respond within seconds to [Ca2+] changes in the 0.1-1.0 mM range, suitable for monitoring extracellular calcium signaling processes in the brain. We show that the probes permit repeated detection of brain activation in response to diverse stimuli in vivo. MaCaReNas thus provide a tool for calcium activity mapping in deep tissue and offer a precedent for development of further nanoparticle-based sensors for dynamic molecular imaging with MRI.

The low dimensional integrative core of cognition in the human brain

Mac Shine (The University of Sydney)
Coauthors: Michael Breakspear (QIMR Berghofer), Olaf Sporns (Indiana University), Russ Poldrack (Stanford University), Rick Shine (Sydney University), Kaylena Ehgoetz Martens (Sydney University), Oluwasanmi Koyejo (Illinois University) and Peter Bell (Queensland University)
Click here for abstract
Introduction The human brain seamlessly integrates innumerable cognitive functions into a coherent whole, shifting fluidly between changing task demands. To test the hypothesis that the brain contains a core network that integrates specialized regions across a range of unique task demands, we investigated whether brain activity across an array of cognitive tasks could be embedded within a relatively low-dimensional, dynamic manifold. Methods We used high temporal resolution 3T fMRI data (TR = 0.72s) from the Human Connectome Project consortium to examine BOLD activity from 200 unrelated individuals across seven unique cognitive tasks. We performed a spatiotemporal principal component analysis on regional BOLD data, concatenated across all seven tasks. We then estimated a time series for each PC (tPC) by calculating the mean activity of the group-level BOLD time series weighted by the regional coefficients associated with each spatial component. To describe the temporal evolution of the global brain state, we reconstructed the state-space trajectory of the dominant low-dimensional signal. We then used a clustering approach on topics from the NeuroSynth repository (Poldrack et al. 2012) to identify four cohesive ‘topic families’, representing ‘Motor’, ‘Cognitive’, ‘Language’ and ‘Memory’ capacities, and used a weighted average between the topic maps and the concatenated BOLD time series to calculate a similarity index between each topic family and the flow of principal components over time. Finally, we calculated time-varying functional connectivity from the concatenated BOLD time series and applied graph theoretical analyses to the resultant temporal connectivity matrices. After controlling for task-block effects in each time series, we used a general linear model to examine the relationship between the tPC1-5 time series and time-resolved network architecture.
Results We found a dominant, low-dimensional neural signal (Cunningham & Yu 2014): the first five PCs accounted for 67.9% of the variance. The first tPC, which explained 38.1% of signal variance across all tasks, reflected a task-dominant signal that was strongly correlated with the overall task block structure across all seven tasks (r = 0.64, p < 0.01). The phase portrait of the tPCs describes the temporal evolution of the low-dimensional signal shared across all behavioral tasks. We observed a clear relationship between the tPC time series and latent cognitive processes – for instance, ‘Motor’ and ‘Cognitive’ functions were jointly separated from ‘Memory’ and ‘Language’ function by tPC1, but were separated from each other by tPC5. Expression of tPC1 was associated with a distributed and integrated network topology with strong connections across specialist modules. Conclusions Our results provide a unique window into functional brain organization that emphasizes the confluence between low-dimensional neural activity, network topology and cognitive function.
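The spatiotemporal PCA and tPC construction described in the Methods can be sketched on synthetic data (a minimal illustration with made-up signals and dimensions, not the HCP analysis itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for concatenated regional BOLD: T time points x R regions.
# A shared low-dimensional "task" signal is mixed into all regions, analogous
# to the task-dominant component the abstract reports in real data.
T, R = 500, 50
task = np.sin(np.linspace(0, 10 * np.pi, T))[:, None]   # block-like signal
weights = rng.normal(1.0, 0.2, (1, R))                  # regional loadings
bold = task @ weights + 0.5 * rng.normal(size=(T, R))   # add measurement noise

# Spatial PCA via SVD of the mean-centered time-by-region matrix
X = bold - bold.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)        # variance explained per component

# Time series of each PC (tPC): project the data onto the spatial components
tPC = X @ Vt.T

# In this toy setup, the first tPC should track the shared task signal
r = np.corrcoef(tPC[:, 0], task[:, 0])[0, 1]
print(f"PC1 explains {explained[0]:.0%} of variance; |r(tPC1, task)| = {abs(r):.2f}")
```

A state-space trajectory like the one in the abstract would then be the path traced by the leading tPCs over time (e.g. plotting `tPC[:, 0]` against `tPC[:, 1]`).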

The Brain Dynamics Toolbox for Matlab

Stewart Heitmann (QIMR Berghofer Medical Research Institute)
Click here for abstract
The Brain Dynamics Toolbox is an open-source software toolbox for simulating dynamical systems in neuroscience. It is intended for researchers, engineers and students who wish to explore mathematical models of brain function using Matlab. It includes a graphical tool for simulating dynamical systems in real time as well as command-line tools for scripting large-scale simulations. The toolbox supports the three major classes of differential equations that typically arise in computational neuroscience: Ordinary Differential Equations (ODEs), Delay Differential Equations (DDEs) and Stochastic Differential Equations (SDEs). The user provides the right-hand side of their equation as a Matlab function according to the established conventions of Matlab ODE and DDE solvers. The graphical interface automates the process of calling the solver and plotting the computed solution, which allows the user to interactively explore the dynamics of a custom model with no additional coding effort. The graphical controls support parameters that range in size from simple scalars to large connectivity matrices, so the interface does not preclude models with very large parameter spaces. The display panels themselves are modular by design: the user can call up different types of plots to get the best view of the dynamics, including time plots, phase portraits, space-time plots, mathematical equations and more. Both the existing panels and the existing solver routines can be augmented with user-defined alternatives. In all, the design philosophy of the toolbox is to exploit the combinatorial power of simple modules in unlimited combinations.
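The division of labor described here, in which the user supplies only the right-hand side of the model while the solver handles integration and the GUI handles display, mirrors standard ODE solver interfaces in other environments. As a hedged illustration in Python/SciPy rather than Matlab (and using the FitzHugh-Nagumo neuron model as an arbitrary example, not one shipped with the toolbox):

```python
from scipy.integrate import solve_ivp

# The user writes only the right-hand side of the ODE, following the
# solver's convention fun(t, y); the solver handles time stepping.
def fhn(t, y, I=0.5, a=0.7, b=0.8, tau=12.5):
    """FitzHugh-Nagumo neuron model (illustrative parameter values)."""
    v, w = y
    dv = v - v**3 / 3 - w + I       # fast membrane potential variable
    dw = (v + a - b * w) / tau      # slow recovery variable
    return [dv, dw]

# Integrate from t=0 to t=100 starting at (v, w) = (-1, 1)
sol = solve_ivp(fhn, t_span=(0, 100), y0=[-1.0, 1.0], max_step=0.1)
# sol.y has shape (2, n_steps): the trajectories of v and w over time
```

A graphical front end like the toolbox's then only needs to call such a solver and plot `sol.t` against `sol.y`, which is why swapping in a new model requires no solver code from the user.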

Arcuate fasciculus lateralization and its relation to pre-reading skills in preschool children

Jess Reynolds (University of Calgary)
Coauthors: Melody N. Grohs, Deborah Dewey & Catherine Lebel (University of Calgary)
Click here for abstract
Introduction During the preschool period, phonological processing skills improve rapidly and are strong predictors of future reading success. The arcuate fasciculus (AF) is a key white matter tract involved in language. Its developmental trajectories have been linked with reading skills in school-aged children, and increased left lateralization of the AF is associated with better reading scores in children in the first grade of school, but structural lateralization has not been investigated in younger children. The current study uses diffusion tensor imaging (DTI) to extend our understanding of AF lateralization and development in a pre-reading population. Methods Sixty-eight children (30 male) aged 2.9-5.2 years (4.2 ± 0.6 years) were recruited from an ongoing prospective study. Forty-two children had two scan time points (1.4 ± 0.7 years apart). Imaging was conducted using a GE 3T MR750w scanner and 32-channel head coil. Whole-brain diffusion weighted images were acquired using a single-shot spin echo echo-planar imaging sequence, with TE=79ms, TR=6750ms, 30 gradient directions at b=750 s/mm2, and 5 volumes at b=0 s/mm2. DTI data were preprocessed in ExploreDTI (V4.8.6). Semi-automated deterministic streamline tractography was used to delineate tracts, with small manual edits to remove spurious fibres. Number of streamlines, FA and MD were extracted for each tract, and a laterality index ((Left-Right)/(Left+Right)) was calculated for each measure. Children’s language abilities were assessed within six months of their first scan using the NEPSY-II Phonological Processing (PP) and Speeded Naming (SN) subtests; both are predictors of later reading abilities. Results Age was positively correlated with FA in the left AF (F=15.388, p<0.001), and negatively correlated with left MD (F=26.513, p<0.001) and right MD (F=16.936, p<0.001). 68% of children demonstrated leftward lateralization based on number of streamlines.
No correlation between age and AF lateralization of any kind was observed. A positive correlation was observed between left FA and SN (r=0.25, p=0.046), and a negative correlation between left MD and PP (r=-0.28, p=0.023). Controlling for age, there was a correlation between PP and FA lateralization (r=0.269, p=0.038). A positive correlation between the rate of change of right FA between scans and SN (r=0.346, p=0.049) was also observed. Discussion Children with more mature patterns of left AF development, and children with faster increases in right FA, performed better on pre-reading assessments. This extends work in older age groups, suggesting that lateralization of the AF is present even in toddlers. These results also suggest that structural lateralization is present prior to functional lateralization, which is still developing during this period. It appears that leftward lateralization confers an advantage for language performance both before reading begins (as seen here) and during reading skill acquisition.
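For reference, the laterality index used in the Methods, (Left-Right)/(Left+Right), is straightforward to compute per tract measure; the streamline counts below are hypothetical, not from the study:

```python
def laterality_index(left, right):
    """Laterality index as defined in the abstract: (Left - Right) / (Left + Right).
    Positive values indicate leftward lateralization of the measure."""
    total = left + right
    if total == 0:
        raise ValueError("left + right must be nonzero")
    return (left - right) / total

# Hypothetical streamline counts for the arcuate fasciculus in one child
li = laterality_index(left=1200, right=800)   # 0.2 -> leftward lateralization
```

The same formula applies to any left/right paired measure (streamline count, FA, MD), with the sign convention that values above zero mean leftward lateralization.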