Here we provide two versions of our localizer task. Both are implemented in Python and require VisionEgg to run. In addition, both require the Statlib library, which can be downloaded here (once you download it, put it inside each relevant experiment's folder). [NB May 2012: I now have a more efficient and flexible version of the localizer that I will soon put up here; for now, please email me if you'd like a copy.]
These localizers can be easily modified to vary the block/run timing or other parameters. We also provide two complete sets of materials (with all four conditions: sentences, words, jabberwocky and nonwords) used in Experiments 1-3 in Fedorenko et al. (2010), in case you just want to use our materials.
Please cite Fedorenko, E., Hsieh, P.-J., Nieto-Castañon, A., Whitfield-Gabrieli, S. & Kanwisher, N. (2010) when using the localizers or the materials.
Conceptual justification for the Sentences>Nonwords contrast
One of the most common questions/concerns we hear when we present our work is about the usefulness of our main localizer contrast (i.e., the contrast between sentences and lists of pronounceable nonwords). Here we elaborate on why we think this is a good contrast for identifying language-sensitive brain regions, and how the general approach can be extended in future work to alternative or complementary localizers if those turn out to pick out other relevant regions.
There are at least three possible problems with any localizer contrast, including ours:
i. The localizer contrast may be under-inclusive and may thus miss some important brain regions.
ii. The localizer contrast may be over-inclusive and thus include some brain regions that have nothing to do with the targeted mental function(s).
iii. The way the brain is divided up into "regions" by a localizer contrast may be wrong (e.g., maybe some of the nearby regions are actually one region, or maybe some regions contain functionally distinct sub-regions).
Under-inclusivity is not a problem as long as researchers are clear about what mental processes they are targeting. For example, by design the sentences>nonwords contrast excludes sensory brain regions that are critical for language (i.e., the auditory speech perception regions and the visual cortices engaged in perceiving visual linguistic input). This is intentional. For any complex mental process it is difficult, if not impossible, to devise a single contrast that captures all of the relevant responses. In our case, we are targeting "high-level" linguistic processes, i.e., lexical and combinatorial syntactic/semantic processes. With our contrast we get an extended set of left-lateralized regions in the frontal, temporal and parietal cortices, and these regions have been implicated in high-level linguistic processing in many previous studies. So, are we capturing ALL of high-level language with our contrast? Probably not. But we are capturing a large proportion of it, and if - through rigorous investigations across many studies and labs - we can understand the representations that these regions store, the computations they perform, the time-course of these computations, and the anatomical and functional relationships among these regions (and between these regions and the rest of the brain), then we'll have come a long way, even if this doesn't lead us to a complete picture of how language is implemented in the brain.
[An additional point about under-inclusivity: Depending on the particular research question, in addition to the language localizer a researcher may want to include localizers for other brain systems. For example, previous work has implicated regions in the domain-general fronto-parietal network (e.g., Duncan, 2010, TiCS) in some aspects of linguistic processing. As a result, in most of our studies we include not only our language localizer, but also a localizer task that robustly activates the fronto-parietal network (which we showed is almost completely spatially non-overlapping with the language network; Fedorenko et al., 2011, PNAS). This allows us to examine the responses of both sets of regions to various linguistic manipulations and thus to characterize the roles of both sets of regions in language. In studies where we focus on the communicative aspects of language, we include a localizer for theory-of-mind regions (Saxe & Kanwisher, 2003, Neuroimage).]
Over-inclusivity is also not a problem. Our initial work (described in Fedorenko et al., 2010, JNeurophys) was not meant to provide any answers yet about the functional architecture of language. It merely provided us with tools for finding brain regions that are in some way relevant to high-level linguistic processing. As mentioned above, the brain regions that this contrast identifies have all been implicated in linguistic processing in prior studies. Furthermore, these regions pass some "reality check" tests: they respond in a similar way to linguistic stimuli presented visually vs. auditorily, and they are relatively insensitive to whether participants perceive the stimuli passively vs. perform a memory-probe task after each stimulus. So, this is a good start. Now, across numerous studies, we are testing a wide range of hypotheses about these brain regions. These investigations may tell us that a strong response to sentences in some of these regions has nothing to do with language, and that's ok (we are not on a quest to show that all of these regions are language regions; instead, we want to find out what these regions do). On the other hand, we hope that everyone would agree that at least some of the regions that are important for language should show a greater response to meaningful and structured linguistic stimuli than to a control condition like lists of nonwords or backwards speech (even if some of the regions that show this response may have nothing to do with language). The implication is that by investigating regions that respond more to sentences than to nonwords we may learn something useful about the architecture of language.
It is also worth noting that functionally narrower contrasts (e.g., words>nonwords, sentences>words, or jabberwocky>nonwords) all activate the same network of regions as the broader sentences>nonwords contrast (for discussion see Fedorenko et al., 2010, JNeurophys; Fedorenko, Nieto-Castañon & Kanwisher, 2012, Neuropsychologia).
Finally, the question of what constitutes a brain "region", or whether a notion of regions is even useful, is a deep and important one. At least in some cases the notion of regions seems to capture something important about the functional landscape of our brains. The cortical sheet consists of patches containing cells with different structural properties, and boundaries among these patches can be identified both by eye (e.g., Brodmann, 1909) and with quantitative observer-independent methods (e.g., Amunts et al., 1999, J Comp Neurol; the Juelich project). Furthermore, these cytoarchitectonic areas have distinct connectivity patterns, and studies in animals that combine fMRI and neurophysiology have shown that the functional properties of neurons change quite sharply across the boundaries of fMRI-defined regions (e.g., Tsao et al., 2006, Science; Bell et al., 2011, JNeurosci). However, in other cases functional properties may change in a more continuous fashion across the cortex (Pelphrey et al., 2004, Psych Sci; Vinckier et al., 2007, Neuron; Op de Beeck et al., 2008, Nat Rev Neurosci). Here is an excerpt from our 2010 paper that succinctly expresses our views on the issue of "regions":
"... defining fROIs is an effort to carve nature at its joints, that is, to identify the fundamental components of a system so that each can be characterized independently. It would be unlikely if the fROIs described here were the best possible characterization of the components of the language system. More likely, future research will tell us that some of these fROIs should be abandoned, some should be split into multiple subregions, others should be combined, and yet other new ones (derived from new functional contrasts) should be added. We intend the use of language fROIs to be an organic, iterative process rather than a rigid and fixed one. On the other hand, to be most useful, some balance will have to be achieved between flexibility of fROI definition and consistency across studies and labs, as the latter is necessary if fROIs are to enable the accumulation of knowledge across studies."
In order to investigate whether new fROIs may be useful, we always complement our fROI analyses with individual-subject whole-brain analyses, which can help us see structure within our fROIs as well as detect activations outside the borders of our fROIs for the critical manipulations. Furthermore, we have developed an analysis method that enables detecting and examining functionally heterogeneous subsets of voxels within the fROIs (drop me an email if you want to know more).
SNLoc (2 condition localizer)
This is the simplest 2-condition version (sentences and nonwords).
- Download SNLoc
This localizer includes 4 runs (each lasting 7 min 44 sec), but in most healthy adult subjects 2 runs are sufficient for defining subject-specific fROIs.
Some additional details:
Each trial is 4.8 sec long: 8 words presented for 350 ms each (350*8 = 2800 ms), followed by a fixation (300 ms), a memory probe (1350 ms), and a final fixation (350 ms).
There are 5 trials per block; consequently, each block is 24 sec long.
There are 16 experimental blocks per run.
Each run also contains 5 fixation blocks (16 sec each), so the total run duration is 464 sec.
There are two condition-counterbalancing versions, selected with the -c parameter; these correspond to the "Timing_VersionN.txt" files (N=1 or 2).
The -a parameter determines which subset of the stimuli is presented in each run and has values between 0 and 3.
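As a sanity check on the numbers above, the timing arithmetic can be reproduced in a few lines of plain Python (this is independent of the experiment code, just a recomputation of the stated durations):

```python
# Recompute the SNLoc timings from the component durations (all in ms).
WORD_MS, N_WORDS = 350, 8
TRIAL_MS = WORD_MS * N_WORDS + 300 + 1350 + 350  # words + fixation + probe + fixation
BLOCK_MS = 5 * TRIAL_MS                          # 5 trials per block
RUN_MS = 16 * BLOCK_MS + 5 * 16_000              # 16 experimental + 5 fixation blocks

print(TRIAL_MS / 1000)  # 4.8 sec per trial
print(BLOCK_MS / 1000)  # 24.0 sec per block
print(RUN_MS / 1000)    # 464.0 sec per run (7 min 44 sec)
```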
Here is a version of this localizer implemented in DirectRT by Elinor Amit and Warren Winter (Harvard University).
SWNLoc (3 condition localizer)
This is the 3-condition version that includes the words condition, in addition to the sentences and nonwords conditions.
- Download SWNLoc
This localizer includes 6 runs (each lasting 5 min and 36 sec), but in most healthy adult subjects 3-4 runs are sufficient for defining subject-specific fROIs.
Why might you want to use the 3-condition version of the localizer instead of the basic 2-condition version? In the 3-condition version you can estimate the response to the words condition, which in some cases helps you interpret the response elicited by the conditions in your task of interest (i.e., you can see where the response to your condition(s) of interest falls relative to the words condition). Also, you can use the narrower functional contrasts (sentences>words and words>nonwords) as the localizer contrasts (see also the Parcels section).
Some additional details:
Each trial is 4.8 sec long: 8 words presented for 350 ms each (350*8 = 2800 ms), followed by a fixation (300 ms), a memory probe (1350 ms), and a final fixation (350 ms).
There are 5 trials per block; consequently, each block is 24 sec long.
There are 12 experimental blocks per run.
Each run also contains 3 fixation blocks (16 sec each), so the total run duration is 336 sec.
There are three condition-counterbalancing versions, selected with the -c parameter; these correspond to the "Timing_VersionN.txt" files (N=1, 2, or 3).
The -a parameter determines which subset of the stimuli is presented in each run and has values between 0 and 5.
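The same block arithmetic confirms the 3-condition run length, and the sketch below also lays out one possible assignment of the -c and -a parameters across the six runs. The pairing of counterbalancing versions to runs is an assumption for illustration only (the experiment code does not prescribe it); the only constraint taken from the description above is that -c ranges over 1-3 and -a over 0-5:

```python
# SWNLoc run duration: 12 experimental blocks (24 sec each) + 3 fixation blocks (16 sec each).
RUN_SEC = 12 * 24 + 3 * 16
print(RUN_SEC)  # 336 sec = 5 min 36 sec

# Hypothetical run schedule: give each of the 6 runs a distinct stimulus subset (-a)
# and cycle through the three counterbalancing versions (-c).
schedule = [{"run": r + 1, "c": r % 3 + 1, "a": r} for r in range(6)]
for entry in schedule:
    print(entry)
```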
Materials (for all four conditions)
See Fedorenko et al. (2010; Appendix A) for details of how the materials were created.
- Download the set used in Experiment 1 in Fedorenko et al. (2010)
- This set contains sequences that are 12 words/nonwords long
- Download the set used in Experiments 2 and 3Vis in Fedorenko et al. (2010)
- This set contains sequences that are 8 words/nonwords long
- All words/nonwords in this set are mono-/bi-syllabic
Other language localizers
We are working on developing other localizers for language-sensitive cortex, including some that target narrower aspects of linguistic processing. We will make these available once we think they are good enough.
If you have a localizer for some aspect of language that you would like to share with the research community, drop me a line and I can put it up on this website or provide a link to your website.