Hum Brain Mapp. 2013 Oct; 34(10): 2624–2634.
Published online 2012 Apr 16. doi: 10.1002/hbm.22087
PMCID: PMC4034344
NIHMSID: NIHMS585998
PMID: 22505340

Decoding the representation of numerical values from brain activation patterns

Abstract

Human neuroimaging studies have increasingly converged on the possibility that the neural representation of specific numbers may be decodable from brain activity, particularly in parietal cortex. Multivariate machine learning techniques have recently demonstrated that the neural representation of individual concrete nouns can be decoded from fMRI patterns, and that some patterns are general over people. Here we use these techniques to investigate whether the neural codes for quantities of objects can be accurately decoded. The pictorial (nonsymbolic) mode depicted a set of objects pictorially (e.g., a picture of three tomatoes), whereas the digit-object mode depicted quantities as a combination of a digit (e.g., 3) with a picture of a single object. The study demonstrated that quantities of objects were decodable from neural activation patterns in parietal regions. The brain activation patterns corresponding to a given quantity were common across objects and across participants in the pictorial mode. Other important findings included better identification of individual numbers in the pictorial mode, partial commonality of neural patterns across the two modes, and hemispheric asymmetry, with pictorially depicted numbers represented bilaterally and numbers in the digit-object mode represented primarily in left parietal regions. The findings demonstrate the ability to identify individual quantities of objects based on neural patterns, indicating the presence of stable neural representations of numbers. Additionally, they indicate a predominance of the neural representation of pictorially depicted numbers over the digit-object mode. Hum Brain Mapp 34:2624–2634, 2013. © 2012 Wiley Periodicals, Inc.

Keywords: number representation, parietal cortex, fMRI multivoxel pattern analysis

INTRODUCTION

How numbers are mentally represented has been an enduring question in psychology because of its educational and cross-cultural significance, and many different approaches have attempted to address this issue. After decades of informative behavioral studies of number representation [Barth et al., 2003; Dehaene and Akhavein, 1995; Moyer and Landauer, 1967; Naccache and Dehaene, 2001], neuroscience methods are increasingly being used to provide information about the neural representation of numbers. Neuroimaging studies [Cohen Kadosh and Walsh, 2009; Dehaene, 1996; Eger et al., 2003; Libertus et al., 2007; Naccache and Dehaene, 2001; Pinel et al., 2001] have increasingly converged on the possibility that the neural representation of numbers may be decoded from brain activity, particularly in parietal cortex. In other semantic domains, recent applications of multivariate analysis methods to fMRI data have succeeded in decoding the stimulus features represented in primary or midlevel sensory cortices [Formisano et al., 2008; Haynes and Rees, 2005; Kamitani and Tong, 2006], the cognitive states associated with object categories such as houses [Cox and Savoy, 2003; Hanson et al., 2004; Haxby et al., 2001; Haynes and Rees, 2006; O'Toole et al., 2007], tools and dwellings [Shinkareva et al., 2008], goal intentions in a navigation task [Rodriguez, 2010], children's mental states while solving algebra equations [Anderson et al., in press], and individual memories [Rissman et al., 2010]. More recently, it has been demonstrated that the neural representation of individual concrete nouns can be decoded from fMRI patterns [Just et al., 2010; Mitchell et al., 2008], and that these patterns are common across people. It has also been shown that individual numbers presented as either dots or digits can be decoded from brain activity [Eger et al., 2009]. However, these two lines of research have remained largely unconnected. This study examines whether the neural codes for quantities of objects can be decoded from fMRI neural patterns, and if so, whether the patterns underlying quantities are common across objects and participants.

Numbers can be expressed in different notational forms (e.g., the digit 3, the number word three in spoken or written form, or a nonsymbolic set of three dots). Whether there is a common neural substrate underlying an abstraction of a given number across these notations is a topic of considerable debate [Cohen Kadosh, 2009; Dehaene et al., 1998]. Some neuroimaging studies have indicated that bilateral parietal regions are associated with an abstract, notation-independent representation of numbers [Dehaene et al., 1998; Fias et al., 2003; Libertus et al., 2007; Naccache and Dehaene, 2001], whereas others have shown parietal hemispheric asymmetry in the representation of numbers depicted in different notations [Ansari, 2007; Cohen Kadosh et al., 2007; Notebaert et al., 2011; Piazza et al., 2007]. So far, there is no conclusive evidence concerning a common neural representation of numbers across different input forms.

The primary goal of this study was to investigate whether it is possible to decode from neural patterns the cardinality of a set of semantic objects (i.e., how multiple quantities of objects, in contrast to single objects [Just et al., 2010; Mitchell et al., 2008], are represented in the brain) when the quantities are displayed in novel ways: nonsymbolically or pictorially, as a picture of three tomatoes, or as a combination of a symbolic digit (say, 3) and a picture of an object (e.g., a tomato). Other goals of the study were to investigate whether these number-evoked neural patterns were common across objects and across participants, and finally, whether the neural representation underlying a given quantity of an object is common across different input modes.

MATERIALS AND METHODS

Participants

Ten right‐handed adults from the Carnegie Mellon community (three males), mean age 25.5 years (SD, 2.27; range, 21–30 years), participated and gave written informed consent approved by the University of Pittsburgh and Carnegie Mellon University's Institutional Review Boards. All participants were financially compensated for the practice and fMRI data collection sessions.

Experimental Paradigm

Quantities of objects were presented in two visual modes. The pictorial (nonsymbolic) mode depicted a given quantity of objects pictorially (e.g., a picture of three tomatoes), whereas the digit-object mode depicted a combination of a digit (e.g., 3) with a picture of an object. Three quantities (1, 3, and 5) and four objects (dots, hammer, tomato, and car) were used in the stimuli for each presentation mode; there were therefore a total of 24 quantity-of-object items across the two presentation modes. The objects (except dots) were chosen based on their relevance to three semantic factors (manipulation, shelter, and eating) from the Just et al. [2010] study. To unconfound number from associated lower-level stimulus parameters, such as spatial position differences between quantities in the pictorial mode, each of the three quantities (1, 3, and 5) was presented in four different spatial configurations (including different spatial locations for the quantity 1); that is, each quantity had four different spatial arrangements, one for each of the four objects. Figure 1 provides an example of two of the four different ways of presenting the quantity 5 (i.e., 5 cars vs. 5 hammers). The two presentation modes (pictorial and digit-object) were presented in separate runs or blocks, with each block containing 12 quantity-of-object items (1, 3, and 5 of dots, hammers, cars, and tomatoes). Each block of these 12 items (for both the pictorial and digit-object presentation modes) was presented six times (using six different random permutation orders of the 12 items within each block).


Figure 1. A schematic diagram of the experimental paradigm. A: Pictorial mode of presentation. B: Digit-picture mode of presentation.

Participants were given the same instructions for the two modes. To ensure that each participant had a consistent set of properties to think about, he or she was asked to generate a set of properties for quantities of objects prior to the scanning session [Shinkareva et al., 2008]. However, nothing was done to elicit consistency across participants. Specifically, they were instructed as follows:

“In this task you will see small quantities (1, 3, or 5) of objects (tomatoes, hammers, cars, or dots) on the screen. The study investigates how consistently you can think about a given quantity of various types of objects. You will be shown each display several times, and we would like you to think about the same properties of a given quantity of objects each time you see it. The image will be on the screen for 3 seconds, and you can think of properties such as: What does this quantity of objects look like? How do you interact with the object in this quantity? For what purpose is this quantity of this object used?

For example, if you see a picture of five tomatoes, then you may think about properties like these:

  • Appearance: Five tomatoes in a plastic box in the grocery store

  • Interaction: Carrying them

  • Purpose: Making a salad

In some of the displays, rather than seeing a picture of 5 tomatoes, you will simply see the digit 5 and a picture of a tomato, and again you should think of a set of properties for that quantity of the object”.

Each stimulus was presented for 3 s, followed by a 4 s rest period, during which the participants were instructed to fixate on an X displayed in the center of the screen. There were six additional fixation/rest periods, 31 s each, distributed across the session, to provide a baseline measure of activation. A schematic diagram of the paradigm is shown in Figure 1. Although the timing of the stimulus presentations makes the study similar to a fast event-related design, the treatment of the hemodynamic response differed from that of conventional event-related designs. Specifically, only 4 s of data were used from each item: the four images (at a TR of 1 s) beginning 4 s after stimulus onset, capturing the peak of the activation response in this paradigm. This experimental paradigm and the 4-s interval were chosen based on previous neurosemantic fMRI studies [e.g., Just et al., 2010; Mitchell et al., 2008; Shinkareva et al., in press] that attempted to optimize classification accuracy. Since the stimuli were presented in a different random order in each block, there should be no systematic sequential dependencies that the classifier could learn from HRF overlap between consecutive stimuli. Thus, the results should be unbiased and, at worst, may underestimate the accuracy with which the neural representations can be classified. It is always possible that longer intervals between successive stimuli would have reduced the overlap between the activation responses to successive stimuli, thereby increasing the classification accuracies.

fMRI Procedure

Functional images were acquired on a Siemens Allegra 3.0T scanner (Siemens, Erlangen, Germany) at the Brain Imaging Research Center of Carnegie Mellon University and the University of Pittsburgh, using a gradient echo EPI pulse sequence with TR = 1,000 ms, TE = 30 ms, and a 60° flip angle. Seventeen 5‐mm thick oblique‐axial slices were imaged with a gap of 1 mm between slices. The acquisition matrix was 64 × 64 with 3.125 × 3.125 × 6 mm3 voxels.

fMRI Data Processing for Machine Learning

Initial data processing was performed with Statistical Parametric Mapping software (SPM2, Wellcome Department of Imaging Neuroscience, London, UK). The data were corrected for slice timing, motion, and linear trend, and were temporally smoothed with a high-pass filter using a 190 s cutoff. The data were normalized to the Montreal Neurological Institute (MNI) template brain image using a 12-parameter affine transformation without changing voxel size. The brain volume was then parcellated into regions defined by the Anatomical Automatic Labeling (AAL) system [Tzourio-Mazoyer et al., 2002].

The initial machine learning classifications were attempted using voxels from only one cortical region at a time, as well as from all of the regions combined. These initial analyses showed that although the highest classification accuracies were obtained using voxels from anywhere in the brain, the accuracies were only a few percentage points lower when voxels from only parietal regions were used. Because parietal regions have previously been centrally implicated in numerous neuroimaging studies of number representation [Cohen Kadosh et al., 2007; Cohen Kadosh and Walsh, 2009; Dehaene et al., 2003; Eger et al., 2003, 2009; Libertus et al., 2007; Naccache and Dehaene, 2001; Pinel et al., 2001], the main analyses reported below focus on the parietal lobe only.

The percent signal change (PSC) relative to the fixation condition was computed at each voxel for each item presentation. To obtain the mean PSC for an individual item (trial), the four images acquired (one per second) within the 4 s window from 4 s to 7 s after stimulus onset were averaged, divided by the mean fixation image, and converted to PSC. For each trial, the mean PSC of these four images (relative to the mean fixation image) provided the main input measure for classifier training and testing. Since each item presentation was only 3 s long, part of this window fell during the rest period (the empty interstimulus interval) that followed the stimulus. These methods are similar to those of other machine learning fMRI studies [Just et al., 2010; Mitchell et al., 2008; Shinkareva et al., in press]. The mean PSC data for each item presentation were further normalized to have mean zero and variance one, to equalize the between-participants variation in exemplars.
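As an illustration of this input measure, the following sketch (in Python, with hypothetical array names; it is not the authors' code) computes a trial's mean PSC from the four volumes acquired 4–7 s after stimulus onset and then normalizes each trial's voxel vector to mean zero and variance one.

```python
import numpy as np

def mean_psc_for_trial(run_data, onset_vol, fixation_mean, tr=1.0):
    """Mean percent signal change for one trial (a sketch; assumed layout).

    run_data      : (n_volumes, n_voxels) preprocessed time series
    onset_vol     : volume index at which the stimulus appeared
    fixation_mean : (n_voxels,) mean image over the fixation periods
    """
    start = onset_vol + int(round(4.0 / tr))      # 4 s after stimulus onset
    window = run_data[start:start + 4]            # four 1-s acquisitions
    mean_img = window.mean(axis=0)
    return 100.0 * (mean_img - fixation_mean) / fixation_mean

def normalize_trial_vectors(psc_matrix):
    """Normalize each trial's PSC vector (row) to mean 0 and variance 1."""
    mu = psc_matrix.mean(axis=1, keepdims=True)
    sd = psc_matrix.std(axis=1, keepdims=True)
    return (psc_matrix - mu) / sd
```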

Machine Learning Methods

Classifiers were trained to identify the cognitive states associated with thinking about the quantities of objects presented in the two modes, using the evoked pattern of functional activity (mean PSC). Classifiers were functions f of the form f: mean_PSC → Y_j, j ∈ {1, …, m}, where Y_j was one of three quantities (1, 3, or 5) or one of two modes (pictorial or digit-object), m was 3 or 2 accordingly, and mean_PSC was a vector of mean PSC voxel activations, as described above. To evaluate classification performance, trials were divided into training and test sets. To reduce the dimensionality of the data, relevant features (voxels) were extracted from the training set prior to classification (see Feature Selection, below). A classifier was built from the training set using the selected features and was evaluated on the left-out test set, to ensure unbiased estimation of the classification error.

Classification

A Gaussian Naïve Bayes (GNB) pooled variance classifier was used [Mitchell, 1997]. It is a generative classifier that models the joint distribution of a class Y (e.g., quantities or modes) and attributes (voxels), and assumes the attributes X_1, …, X_n are conditionally independent given Y. The classification rule is:

$$\hat{Y} = \arg\max_{y_j} \, P(Y = y_j) \prod_{i=1}^{n} P(X_i \mid Y = y_j), \qquad j = 1, 2, \ldots, m$$

In this experiment, all classes were equally frequent. Rank accuracy was used to evaluate 3‐class or 2‐class classification. Classification results were evaluated using k‐fold cross‐validation described below. To evaluate the significance of obtained rank accuracies, we performed random permutation tests (10,000 permutations) for each type of classification and reported accuracies with P < 0.05.
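A minimal sketch of a pooled-variance GNB classifier and of normalized rank accuracy is given below (Python/NumPy). The class and function names are illustrative, and the rank-accuracy normalization shown (1 when the correct class is ranked first, 0 when it is ranked last) is one common definition rather than a detail confirmed by the text.

```python
import numpy as np

class PooledVarianceGNB:
    """Gaussian Naive Bayes with one (pooled) variance per voxel shared across
    classes; a sketch, not the authors' implementation."""

    def fit(self, X, y):
        y = np.asarray(y)
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Pool the within-class residuals to obtain a single variance per voxel
        resid = np.concatenate([X[y == c] - X[y == c].mean(axis=0)
                                for c in self.classes_])
        self.var_ = resid.var(axis=0) + 1e-8
        self.log_prior_ = np.log(np.array([np.mean(y == c) for c in self.classes_]))
        return self

    def log_posterior(self, X):
        # log P(y) + sum_i log N(x_i | mu_{y,i}, var_i), up to an additive constant
        diff = X[:, None, :] - self.means_[None, :, :]
        ll = -0.5 * ((diff ** 2) / self.var_ + np.log(2 * np.pi * self.var_)).sum(axis=2)
        return ll + self.log_prior_

def rank_accuracy(log_post, y_true, classes):
    """Normalized rank accuracy averaged over test items (one common definition)."""
    scores = []
    for row, true_label in zip(log_post, y_true):
        order = np.argsort(row)[::-1]                # classes ranked best-first
        rank = int(np.where(classes[order] == true_label)[0][0])
        scores.append(1.0 - rank / (len(classes) - 1))
    return float(np.mean(scores))
```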

Feature Selection

Features (voxels) were selected from the training set (training presentations) by choosing the 120 parietal voxels whose vectors of response intensities to the set of stimulus items were the most stable over the set of four training presentations. For the within-participant analyses, a voxel's stability was computed as the average pairwise correlation between the four vectors of response intensities in the training set. Our previous studies indicated that 120 voxels are typically sufficient to obtain accurate classification of semantic representations [Just et al., 2010].
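This stability measure can be sketched as follows (Python; the array layout and variable names are assumptions): for each voxel, its item-response profiles from every pair of training presentations are correlated, and the correlations are averaged.

```python
import numpy as np
from itertools import combinations

def voxel_stability(presentations):
    """Average pairwise correlation of each voxel's item-response profile
    across repeated presentations (a sketch).

    presentations : (n_presentations, n_items, n_voxels) array, e.g. the four
                    training presentations of the 12 quantity-of-object items.
    """
    n_pres, n_items, n_vox = presentations.shape
    stability = np.empty(n_vox)
    for v in range(n_vox):
        profiles = presentations[:, :, v]            # (n_pres, n_items)
        stability[v] = np.mean([np.corrcoef(profiles[i], profiles[j])[0, 1]
                                for i, j in combinations(range(n_pres), 2)])
    return stability

# Keep the 120 most stable parietal voxels for this training fold:
# stable_idx = np.argsort(voxel_stability(train_presentations))[::-1][:120]
```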

To visualize the degree of commonality of the locations of stable voxels across participants, averaged stability maps were computed by first creating, for each participant, masks of the stable voxels that were common to at least two cross-validation folds, with a cluster size of five contiguous voxels. These masks were then averaged across participants, showing only those voxels that were common to at least two participants with a minimum cluster size of 12 contiguous voxels. Such maps were created separately for the two modes of presentation.

Cross Validation

Cross-validation procedures were used to obtain mean rank accuracies of the classification within each participant. In each fold of the cross-validation, the classifier was trained on the labeled data from four of the six presentations (blocks) and then tested on the mean data of the two left-out blocks. This N-2 cross-validation procedure resulted in 15 possible cross-validation folds, and the reported results are the mean rank accuracies averaged across the 15 folds. Training and test sets were thus independent [Mitchell et al., 2004].
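A sketch of how these 15 leave-two-blocks-out folds can be enumerated (Python; illustrative only):

```python
from itertools import combinations

def leave_two_blocks_out_folds(n_blocks=6):
    """Enumerate the N-2 folds: train on 4 of the 6 presentation blocks and
    test on the mean of the 2 left-out blocks (a sketch)."""
    folds = []
    for test_pair in combinations(range(n_blocks), 2):
        train_blocks = [b for b in range(n_blocks) if b not in test_pair]
        folds.append((train_blocks, list(test_pair)))
    return folds

# len(leave_two_blocks_out_folds()) == 15
```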

Multiple Participant Analysis

Data from all but one participant were used to train a classifier to identify the data of the left-out participant. The mean of the six presentations of each item was computed for each participant separately. Feature selection identified the voxels within the parietal mask whose responses were the most stable over the set of nine participants in the training set. The 120 most stable voxels were selected, where a voxel's stability was computed as the average pairwise correlation between its 12-item (1, 3, and 5 of dots, hammers, tomatoes, and cars) activation profiles across the nine training participants. The same 120 voxels obtained from the training set were used to test the left-out participant. This process was repeated so that each participant was left out in turn.
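The between-participants feature selection can be sketched in the same way as the within-participant stability measure above, with participants taking the place of presentations (Python; the array layout and names are assumptions):

```python
import numpy as np
from itertools import combinations

def cross_participant_stable_voxels(train_stack, n_voxels=120):
    """Select the voxels whose 12-item activation profiles are most consistent
    across the nine training participants (average pairwise correlation).

    train_stack : (n_participants, n_items, n_voxels) array of per-item mean
                  PSC, averaged over the six presentations and registered to
                  the common MNI space (a sketch)."""
    n_part, _, n_vox = train_stack.shape
    stability = np.empty(n_vox)
    for v in range(n_vox):
        profiles = train_stack[:, :, v]              # (n_part, n_items)
        stability[v] = np.mean([np.corrcoef(profiles[i], profiles[j])[0, 1]
                                for i, j in combinations(range(n_part), 2)])
    # Train on the nine participants' data restricted to these voxels, then
    # test on the left-out participant's data at the same voxel locations.
    return np.argsort(stability)[::-1][:n_voxels]
```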

RESULTS

Overview

The findings demonstrated the ability to accurately decode individual numbers from neural activation patterns in both the pictorial and the digit‐object modes, with much higher classification accuracy obtained in the pictorial mode. Additionally, a classifier trained on activation patterns evoked by quantities in one mode could to some degree classify quantity representations in the other mode. Third, in the pictorial mode, a classifier trained on the neural patterns evoked by a quantity of an object (say, three tomatoes) could accurately classify quantity representations evoked by other objects (such as three hammers). Fourth, in the pictorial mode, it was possible to accurately classify quantities from a given participant's activation data even if the classifier was trained exclusively on the data from other people. Finally, in the pictorial mode, classification errors that confused similar quantities (e.g., confusing 3 with 5) were more frequent than errors that confused dissimilar quantities (e.g., confusing 1 with 5).

Quantity Classification Within Each Presentation Mode

When the classifier was trained and tested on exemplars of the three quantities (1, 3, and 5, irrespective of objects) within only the pictorial mode, the mean rank accuracy for quantity classification was 0.81, with all participants' classification accuracies lying far above the chance level of 0.55 (the P < 0.05 level obtained from 10,000 random permutations; see Footnote 1). Further exploration of individual quantity classification in the pictorial mode showed rank accuracies of 0.88, 0.78, and 0.77 for quantities 1, 3, and 5 respectively, with no reliable differences in classification accuracy among quantities except that the accuracy for 1 was higher than for both 3 and 5. In the digit-picture mode, the mean rank accuracy was 0.66, reliably lower than in the pictorial mode (t(9) = 3.79, P < 0.001), but still above chance level for all but one participant, as shown in Figure 2. Thus the main classification of quantities in the pictorial mode was very accurate, whereas it was much lower, but still generally above chance level, in the digit-picture mode. Note that for any given quantity there were four different spatial configurations, which suggests that the high classification accuracy in the pictorial mode was not based on spatial positioning differences among the different quantities.


Figure 2. Within-participants quantity classification for the two modes of presentation.

Cross‐Mode Quantity Classification

When the classifier was trained on exemplars of quantities from the pictorial mode, the mean rank accuracy for classifying quantities in the digit‐picture mode was 0.56. When the classifier was trained on exemplars of quantities from the digit‐picture mode, the mean rank accuracy for classifying quantities in the pictorial mode was 0.59. The poor classification across presentation modes suggests that the neural representation of quantity in parietal areas is primarily mode‐specific, at least for the two presentation modes used here.

Presentation Mode Classification

When the classifier was trained on exemplars of the two modes (irrespective of quantities and objects), it was possible to accurately classify which of the two presentation modes the participant was viewing, with a mean accuracy of 0.83 (SE = 0.02). (The accuracies for all ten participants exceeded the P < 0.05 chance threshold.) This would not be at all surprising if occipital lobe voxels were among the features. However, it is interesting that parietal representations retain some footprint of the presentation modality, consistent with the limited cross-mode classification reported above.

Cross‐Object Quantity Classification

In the pictorial mode, when the classifier was trained on exemplars of quantities of just one object, it was possible to classify quantities of the other three objects with a mean rank accuracy of 0.73 (SE = 0.02; the P < 0.05 chance level is 0.59), indicating some meta-object neural representation of numbers in the pictorial mode. Additionally, the successful generalization of neural patterns across different objects with different spatial configurations for any given quantity suggests that the discrimination of individual quantities was based on quantity differences rather than spatial positioning differences. (When the classification was within the same object as the classifier was trained on, the mean accuracy was 0.80, SE = 0.03.)

In the digit-picture mode, there was little evidence of any ability to classify quantities across objects. When the classification was done within the same object as trained on, the mean accuracy was 0.65 (SE = 0.03).

Numerical Distance Effect

In the pictorial mode, there was evidence of monotonicity in the neural representation of numerical values, indicated by the classifier's confusion errors in attempting to identify a number from its neural patterns. When the correct label was 1 or 5 and the classifier's most probable predicted label was incorrect, this incorrect guess was much more often the proximal number than the distal number (e.g., if the label for 5 was incorrect, the incorrect label was much more likely to be 3 than 1). A paired t‐test between the overall means of types of errors revealed a significant distance effect (Numerical Distance 1: M = 0.79, SE = 0.02; Numerical Distance 2: M = 0.94, SE = 0.02; t (9) = 4.89, P < 0.001).
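The kind of confusion tally underlying this analysis can be sketched as follows (Python; a simplified illustration of counting proximal versus distal confusions for the endpoint quantities, not the authors' exact scoring procedure):

```python
def endpoint_confusions(predicted, actual):
    """For trials whose true quantity is 1 or 5 and whose top prediction is
    wrong, count whether the error was the proximal quantity (3) or the
    distal one (the opposite endpoint). Inputs are integer labels (a sketch)."""
    proximal = distal = 0
    for p, t in zip(predicted, actual):
        if t in (1, 5) and p != t:
            if p == 3:
                proximal += 1
            else:
                distal += 1
    return proximal, distal

# Example: endpoint_confusions([3, 1, 3], [5, 5, 1]) returns (2, 1)
```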

Between‐Participants Quantity Classification

In the pictorial mode, when the classifier was trained on exemplars of quantities (irrespective of objects) from 9 of the 10 participants, it was able to classify quantities in the 10th, left-out, participant. This procedure was repeated for all participants. Reliable accuracies were reached for all 10 participants, with a mean accuracy of 0.85 (0.68 is the P < 0.05 chance level here), as shown by the filled bars in Figure 3. However, in the digit-picture mode, this type of cross-participant classification resulted in reliable accuracies in only 2 of the 10 participants (see the unfilled bars in Fig. 3). Thus there was a large amount of commonality across participants in the neural representation of quantities in the pictorial mode but not in the digit-picture mode.


Figure 3. Between-participants quantity classification for the two modes of presentation.


Figure 4. Within-participants stability maps (averaged across participants), indicating voxels that were stable in at least two participants. A: Pictorial mode of presentation. B: Digit-picture mode of presentation.

Within-Participants Stability Maps for the Two Modes of Presentation

The two presentation modes produced stable voxels in both the same and different locations, as shown in Figure 4 and Table 1. In these stability maps, some of the voxels may be coding for quantities while others may be coding for the semantic properties of the objects.

Table 1

Distribution of stable voxels in the Parietal region; A: pictorial mode of presentation; B: digit‐object mode of presentation

Region                                               No. of voxels   Radius   MNI coordinates (x, y, z)
A. Labels and coordinates of cluster centroids in the pictorial presentation mode
Left precuneus                                                  12       7    (−11, −53, 16)
Left IPS                                                       108      17    (−25, −77, 37)
Right IPS                                                       87      12    (27, −77, 37)
Right inferior parietal                                         36       7    (40, −42, 53)
Left inferior parietal                                          12       5    (−29, −45, 53)
Right superior parietal                                         17       5    (22, −61, 57)
B. Labels and coordinates of cluster centroids in the digit-object presentation mode
Left inferior parietal, superior parietal, and IPS             162      21    (−37, −52, 46)
Right inferior parietal                                         17       6    (43, −40, 51)

Minimum cluster size is 12 voxels, with inclusion of at least two participants.

The primary difference between the two modes was hemispheric asymmetry. The pictorial mode produced stable voxels bilaterally (i.e., in both left and right parietal regions), whereas the digit-picture mode produced many more stable voxels in left parietal areas. The voxels that were common across the two modes were mostly located in left parietal regions, the superior parietal lobule (SPL) and the left intraparietal sulcus (IPS), with only a few common voxels in the right inferior parietal lobule (IPL). Most of the right parietal stable clusters obtained in the pictorial mode were in the IPS and SPL.

The locations of the stable clusters in the right parietal cortex, and some in the left (IPS and SPL), corresponded with coordinates previously implicated in numerical processing [Cohen Kadosh et al., 2007; Dehaene et al., 2003]. None of the clusters in bilateral IPS or SPL corresponded to the locations implicated in the representation of concrete nouns [Just et al., 2010]. However, some of the clusters in the left IPL in the digit-picture mode corresponded with brain locations associated with the manipulability of concrete objects [Just et al., 2010]. Thus, the right parietal area might contain representations of numbers mainly when they are depicted as concrete instances (i.e., nonsymbolically presented), whereas left parietal regions (IPS and SPL) may represent numbers independently of the mode of depiction (in addition to representing manipulability in left IPL regions).

DISCUSSION

This study demonstrated that quantities of objects were accurately decodable from neural activation patterns in parietal regions. Moreover, it showed, for the first time, that the brain activation patterns evoked by quantities were common across different objects and across participants, at least in the pictorial mode.

Some of the results of the current study can be related to Eger et al.'s [2009] findings, but several fundamental methodological differences should be noted. First, the numerical stimuli used in the two studies were different: the current study depicted small quantities (1, 3, and 5) of semantically meaningful objects (cars, tomatoes, hammers, and dots), as well as presenting a combination of a digit and a picture of an object, whereas Eger et al.'s Experiment 1 presented larger quantities (4, 8, 16, and 32) of dots only, and their Experiment 2 presented quantities (2, 4, 6, and 8) as dots and digits. Second, this study asked participants to think about properties of quantities of objects [Just et al., 2010; Mitchell et al., 2008], whereas in Eger et al.'s study participants were instructed simply to keep the quantity (digit or dots) in mind and then judge whether a subsequently presented second quantity was numerically smaller or larger than the first.

The commonalities between our results and those of Eger et al. [2009] include findings of (1) more accurate identification of nonsymbolically depicted quantities (dots or other objects) than of symbolic stimuli (digits); (2) some degree of overlap between the nonsymbolic and symbolic quantity representations; and (3) greater similarity of the neural representations of two quantities that are numerically close to each other, in the case of pictorially depicted quantities. What the current study found for the first time was that the brain activation patterns evoked by quantities in the pictorial mode were largely common across different objects (indicating the existence of some object-independent quantity representation). That is, a classifier trained on the neural patterns evoked by a quantity of one set of objects (say, tomatoes) could accurately classify quantity representations evoked by other objects (such as hammers, cars, or dots). A second important novel contribution of our study was the finding, in the pictorial mode, of the commonality across participants of the neural representation of quantities, indicating some universality of the neural coding of small numbers.

Neural Representations of Numbers

The novel findings of common neural patterns associated with the representation of individual numbers across different objects and different participants signify the presence of object-independent and participant-independent neural codes for numbers in the pictorial mode. The object-independent representation of a number suggests that there are number-specific representations in our brains generalizable across different objects. The finding of participant-independent quantity representation is consistent with previous findings that much of the neural representation of objects is common across people [Just et al., 2010; Shinkareva et al., 2008]. These findings contribute to the growing realization of how similarly human brains implement neural representations for shared concepts. How this commonality arises from some combination of biological and experiential factors is an important question that can to some degree be addressed with current neurodevelopmental brain imaging techniques.

Current theories suggest that quantities depicted as a number of distinct instances (pictorially, in this study) are developmentally fundamental, compared to symbolically conveyed quantities [Ansari, 2008; Dehaene, 1997; Piazza et al., 2007; Verguts and Fias, 2004]. Perhaps it is due to this fundamental nature that the neuronal substrate coding for “three‐ness” or “one‐ness” is common across participants. Unlike quantities expressed as separate instances, digits are cultural artifacts and learned later in life. It is possible that variability in environmental and biological factors during learning gives rise to larger individual differences in digit representations. Consequently, the neural codes for individual numbers were found to be more variable across participants in the digit‐picture mode in contrast to the shared representation of a pictorially depicted quantity.

Interestingly, some studies have also suggested that nonsymbolic quantities (pictorially depicted) are represented more coarsely in comparison to symbolic (digit‐picture mode in this study) quantities [Ansari et al., 2008; Piazza et al., 2007]. On the basis of this idea of representational differences, the neural substrate underlying individual numbers might be largely overlapping across objects in the pictorial‐mode, but more object‐specific in the digit‐picture mode.

Second, we also found clear evidence of substantially better quantity classification in the pictorial mode. This is consistent with the findings of an earlier study that investigated the neural representation of symbolic digits and nonsymbolic dots and reported higher accuracies for dots than for digits [Eger et al., 2009]. In this study, classification of pictorially presented quantities was substantially more accurate than classification of quantities presented in the digit-picture mode. These findings agree with the idea that pictorially presented quantities are developmentally more fundamental (or primitive) and thus perhaps still more richly represented in the brain (by more numerous neuronal populations for the pictorial mode) [Ansari, 2008; Eger et al., 2009]. Another factor that could contribute to more accurate classification in the pictorial mode than in the digit-object mode is the presence of spatial information in the pictorial stimuli (which might be represented in parietal areas).

Previous behavioral studies have argued that quantification of small quantities in a visual array (1–4) is faster and more accurate than quantification of larger quantities (5 and more), and that the two have different underlying cognitive mechanisms [Kaufman et al., 1949; Mandler and Shebo, 1982; Trick and Pylyshyn, 1993]. However, neuroimaging studies have demonstrated that the systems for subitizing and for counting small numbers of objects both activate intraparietal regions and that the mechanisms are not separable using imaging methods [Piazza et al., 2002]. The findings of this study, using multivoxel pattern analysis, also showed no reliable difference in classification accuracies among the pictorially depicted quantities and thus are consistent with the findings of the previous neuroimaging study. This suggests that parietal regions have similarly identifiable representations for all three quantities investigated (two within and one just outside the subitizing range). The classification accuracy for 1 being greater than for 3 and 5 may be related to the finding that in macaques the number of neurons responding to the quantity 1 is greater than the number responding to 2, 3, 4, or 5 [Nieder et al., 2002]. It is possible that in humans, as in nonhuman primates, there are larger neuronal populations responding to a single entity than to multiple ones.

Third, the study showed only partial commonality of the neural patterns between the two presentation modes, with poor quantity classification across them. Other studies have demonstrated mostly distinct neuronal populations coding for quantities in different formats [Cohen Kadosh et al., 2007; Eger et al., 2009]. Here we showed that the representations of quantities in the two visual modes rely on mostly different neuronal substrates. The locations of the stable parietal voxels used for quantity classification in each of the two modes (bilateral parietal regions in the pictorial mode, mostly left parietal regions in the digit-picture mode) are consistent with the idea of differential neural representations underlying quantities in these modes. A closer examination of the neural substrate common to the pictorial and digit-object modes suggests that, even though the right parietal region is more responsive to quantities in the pictorial mode, the voxels common to the two modes were mostly located in the right hemisphere (with only a few in the left). It has recently been shown that, over developmental time, the right IPS becomes specialized for the representation of both symbolic and nonsymbolic quantities [Holloway and Ansari, 2011].

Previous neuroimaging studies that investigated number representations using different notations (words, symbols, and dots) and adaptation techniques (recovery paradigms) have also demonstrated hemispheric asymmetry [Cohen Kadosh et al., 2007; Holloway et al., 2010; Notebaert et al., 2011; Piazza et al., 2007]. Specifically, it has been shown that left parietal regions are activated in response to both number words [Cohen Kadosh et al., 2007] and symbolic digits [Cohen Kadosh et al., 2007; Notebaert et al., 2011], but not to nonsymbolic quantities [Piazza et al., 2007]. A recent neuroimaging study demonstrated that while left parietal regions were activated more in response to symbolic quantities than to nonsymbolic ones, right parietal regions were activated more for nonsymbolic than for symbolic quantities [Holloway et al., 2010]. Additionally, left parietal regions have been associated with developmental changes in mental arithmetic [Rivera et al., 2005] and with practice [Grabner et al., 2007; Ischebeck et al., 2007], further indicating that region's importance for processing symbolic quantities. The present study (using multivariate analysis) showed that the stable voxels underlying quantity representations in the digit-object mode were located mostly in left parietal regions (with some in the right parietal lobule). These findings are consistent with the idea of left parietal regions being specialized for encultured, symbolically presented numbers [Ansari, 2007].

While there is robust evidence that left parietal regions are specialized primarily for symbolic quantities, the lateralization of the representation of nonsymbolic quantities is less well understood. The findings of the current study indicate that quantity presented in the pictorial mode is represented bilaterally, with perhaps more right than left parietal involvement. This is consistent with a recent study showing greater activation in right parietal regions for nonsymbolic than for symbolic quantities [Holloway et al., 2010]. Additionally, a developmental study by Cantlon et al. [2006] reported a common neural substrate for nonsymbolic quantities in preschool children and adults in the right IPS, suggesting that right parietal regions hold more primitive representations of nonsymbolic quantities shared by both children and adults. While these studies showed differential roles of parietal regions for numbers depicted in different notations, we demonstrated (using multivariate techniques) that the voxels underlying quantity representations in the two modes, pictorial and digit-object, were differently distributed across the two hemispheres. Overall, the findings indicate that left parietal regions might be tuned mostly for symbolic quantities, whereas the right parietal cortex has greater neuronal representation of nonsymbolic or pictorially depicted quantities (with some overlapping representations across the input modes).

Finally, in the pictorial mode, the neural activation patterns within parietal regions were more confusable for the classifier for similar quantities. This suggests a neural representational overlap between numbers that are numerically close to each other and parallels the behavioral observations of numerical distance effects [Moyer and Landauer, 1967].

CONCLUSIONS

The findings demonstrated the ability to identify individual numbers based on neural patterns, indicating the presence of stable neural representations of numbers. The most novel findings were that the neural representations of pictorially depicted, or nonsymbolic, quantities were common across people and across the objects investigated. Additionally, this study provided further support for Eger et al.'s [2009] findings by showing better identification of individual numbers for nonsymbolic or pictorially depicted quantities, only partial neural representational overlap for individual quantities across the two modes, and a neural number line in which closer quantities have greater neural overlap than more distant ones in the pictorial mode. On the basis of the classification findings and the locations of the voxels that were predictive of quantity classification, we conclude that the two hemispheres are differentially involved in representing quantities in the two modes, with left parietal regions mostly tuned for symbolic quantities [Ansari, 2007; Holloway et al., 2010] and right parietal regions having greater representations of nonsymbolic quantities [Cantlon et al., 2006; Holloway et al., 2010]. However, despite these neural representational differences between quantities depicted in the two modes, there may be some overlapping regions as well; these overlapping regions were, perhaps, mostly located in right parietal areas (with some in the left).

While the study provides several novel insights into the semantic representation of numbers, it will be important for future studies to investigate how these insights apply to the representation of a larger set of consecutive numbers expressed in different notations and different input forms and modalities.

ACKNOWLEDGMENTS

The authors would like to thank Justin A. Abernathy for assistance with the data collection, Vladimir L. Cherkassky and Sandesh Aryal for their help with scripting and Dr. Timothy A. Keller for helpful comments on the manuscript.

Footnotes

1. Chance level was determined by a random permutation test that calculates a cutoff rank accuracy. The variables entering this computation are the number of classes, the number of folds, the number of test items, and the number of iterations.
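Under the interpretation that the null distribution is built from random rankings of the test items, such a cutoff computation might look like the following sketch (Python; the study's exact procedure is not spelled out, so the details here are assumptions):

```python
import numpy as np

def chance_rank_accuracy_cutoff(n_classes, n_folds, n_test_items,
                                n_iter=10000, alpha=0.05, seed=0):
    """Simulate mean rank accuracy under the null hypothesis by assigning each
    test item a random rank, then take the (1 - alpha) quantile (a sketch)."""
    rng = np.random.default_rng(seed)
    # Normalized rank accuracy of one item: 1 (ranked first) ... 0 (ranked last)
    possible = np.linspace(1.0, 0.0, n_classes)
    draws = rng.integers(0, n_classes, size=(n_iter, n_folds, n_test_items))
    null = possible[draws].mean(axis=(1, 2))   # mean rank accuracy per iteration
    return float(np.quantile(null, 1.0 - alpha))

# e.g. chance_rank_accuracy_cutoff(n_classes=3, n_folds=15, n_test_items=12)
```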

REFERENCES

  • Anderson JR, Betts SA, Ferris JL, Fincham JM: Tracking children's mental states while solving algebra equations. Hum Brain Mapp (in press).
  • Ansari D (2007): Does the parietal cortex distinguish between "10", "Ten" and "Ten Dots"? Neuron 53:165–167.
  • Ansari D (2008): Effects of development and enculturation on number representation in the brain. Nat Rev Neurosci 9:278–291.
  • Barth H, Kanwisher N, Spelke E (2003): The construction of large number representations in adults. Cognition 86:201–221.
  • Cantlon J, Brannon EM, Carter EJ, Pelphrey K (2006): Functional imaging of numerical processing in adults and 4-y-old children. PLoS Biol 4:e125.
  • Cohen Kadosh R, Cohen Kadosh K, Kaas A, Henik A, Goebel R (2007): Notation-dependent and -independent representations of numbers in the parietal lobes. Neuron 53:307–314.
  • Cohen Kadosh R, Walsh V (2009): Numerical representation in the parietal lobes: Abstract or not abstract? Behav Brain Sci 32:313–373.
  • Cox DD, Savoy RL (2003): Functional magnetic resonance imaging (fMRI) "brain reading": Detecting and classifying distributed patterns of fMRI activity in human visual cortex. Neuroimage 19:261–270.
  • Dehaene S, Dehaene-Lambertz G, Cohen L (1998): Abstract representations of numbers in the animal and human brain. Trends Neurosci 21:355–361.
  • Dehaene S (1996): The organization of brain activations in number comparison: Event-related potentials and the additive-factors method. J Cogn Neurosci 8:47–68.
  • Dehaene S (1997): The Number Sense: How the Mind Creates Mathematics. New York: Oxford University Press.
  • Dehaene S, Akhavein R (1995): Attention, automaticity, and levels of representation in number processing. J Exp Psychol Learn Mem Cogn 21:314–326.
  • Dehaene S, Piazza M, Pinel P, Cohen L (2003): Three parietal circuits for number processing. Cogn Psychol 20:487–506.
  • Eger E, Sterzer P, Russ MO, Giraud A-L, Kleinschmidt A (2003): A supramodal number representation in human intraparietal cortex. Neuron 37:719–725.
  • Eger E, Michel V, Thirion B, Amadon A, Dehaene S, Kleinschmidt A (2009): Deciphering cortical number coding from human brain activity patterns. Curr Biol 19:1608–1615.
  • Fias W, Lammertyn J, Reynvoet B, Dupont P, Orban GA (2003): Parietal representation of symbolic and non-symbolic magnitude. J Cogn Neurosci 15:47–56.
  • Formisano E, De Martino F, Bonte M, Goebel R (2008): "Who" is saying "what"? Brain-based decoding of human voice and speech. Science 322:970–973.
  • Grabner RH, Ansari D, Reishofer G, Stern E, Ebner F, Neuper C (2007): Individual differences in mathematical competence predict parietal brain activation during mental calculation. NeuroImage 38:346–356.
  • Hanson SJ, Matsuka T, Haxby JV (2004): Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: Is there a "face" area? NeuroImage 23:156–166.
  • Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001): Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425–2430.
  • Haynes JD, Rees G (2005): Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nat Neurosci 8:686–691.
  • Haynes JD, Rees G (2006): Decoding mental states from brain activity in humans. Nat Rev Neurosci 7:523–534.
  • Holloway ID, Ansari D (2011): Developmental specialization in the right intraparietal sulcus for the abstract representation of numerical magnitude. J Cogn Neurosci 22:2627–2637.
  • Holloway ID, Price GR, Ansari D (2010): Common and segregated neural pathways for the processing of symbolic and nonsymbolic numerical magnitude: An fMRI study. NeuroImage 49:1006–1017.
  • Ischebeck A, Zamarian L, Egger K, Schocke M, Delazer M (2007): Imaging early practice effects in arithmetic. NeuroImage 36:993–1003.
  • Just MA, Cherkassky VL, Aryal S, Mitchell TM (2010): A neurosemantic theory of concrete noun representation based on the underlying brain codes. PLoS ONE 5:e8622.
  • Kamitani Y, Tong F (2006): Decoding seen and attended motion directions from activity in the human visual cortex. Curr Biol 16:1096–1102.
  • Kaufman EL, Lord MW, Reese T, Volkmann J (1949): The discrimination of visual number. Am J Psychol 62:498–525.
  • Libertus ME, Woldorff MG, Brannon EM (2007): Electrophysiological evidence for notation independence in numerical processing. Behav Brain Funct 3:1.
  • Mandler G, Shebo BJ (1982): Subitizing: An analysis of its component processes. J Exp Psychol Gen 11:1–22.
  • Mitchell TM (1997): Machine Learning. Boston: McGraw-Hill.
  • Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just MA, Newman S (2004): Learning to decode cognitive states from brain images. Mach Learn 57:145–175.
  • Mitchell TM, Shinkareva SV, Carlson A, Chang KM, Malave VL, Mason RA, Just MA (2008): Predicting human brain activity associated with the meanings of nouns. Science 320:1191–1195.
  • Moyer RS, Landauer TK (1967): Time required for judgments of numerical inequality. Nature 215:1519–1520.
  • Naccache L, Dehaene S (2001): The priming method: Imaging unconscious repetition priming reveals an abstract representation of number in the parietal lobes. Cereb Cortex 11:966–974.
  • Naccache L, Dehaene S (2001): Unconscious semantic priming extends to novel unseen stimuli. Cognition 80:223–237.
  • Nieder A, Freedman DJ, Miller EK (2002): Representation of the quantity of visual items in the primate prefrontal cortex. Science 297:1708–1711.
  • Notebaert K, Nelis S, Reynvoet B (2011): The magnitude representation of small and large symbolic numbers in the left and right hemisphere: An event-related fMRI study. J Cogn Neurosci 23:622–630.
  • O'Toole A, Jiang F, Abdi H, Penard N, Dunlop JP, Parent M (2007): Theoretical, statistical, and practical perspectives on pattern-based classification approaches to the analysis of functional neuroimaging data. J Cogn Neurosci 18:1–19.
  • Piazza M, Pinel P, Bihan DL, Dehaene S (2007): A magnitude code common to numerosities and number symbols in human intraparietal cortex. Neuron 53:293–305.
  • Pinel P, Dehaene S, Rivière D, LeBihan D (2001): Modulation of parietal activation by semantic distance in a number comparison task. NeuroImage 14:1013–1026.
  • Rissman J, Greely HT, Wagner AD (2010): Detecting individual memories through the neural decoding of memory states and past experience. Proc Natl Acad Sci USA 107:9849–9854.
  • Rivera SM, Reiss AL, Eckert MA, Menon V (2005): Developmental changes in mental arithmetic: Evidence for increased functional specialization in the left inferior parietal cortex. Cereb Cortex 15:1779–1790.
  • Rodriguez PF (2010): Neural decoding of goal locations in spatial navigation in humans with fMRI. Hum Brain Mapp 31:391–397.
  • Shinkareva SV, Malave VL, Just MA, Mitchell TM: Commonalities across participants in the neural representation of objects. Hum Brain Mapp (in press).
  • Shinkareva SV, Mason RA, Malave VL, Wang W, Mitchell TM, Just MA (2008): Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings. PLoS ONE 3:e1394.
  • Trick LM, Pylyshyn ZW (1993): What enumeration studies can show us about spatial attention: Evidence for limited capacity preattentive processes. J Exp Psychol Hum Percept Perform 19:331–351.
  • Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002): Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15:273–289.
  • Verguts T, Fias W (2004): Representation of number in animals and humans: A neural model. J Cogn Neurosci 16:1493–1504.
