Principal component analysis (PCA) and fMRI: help wanted

Abstract from Partially Distributed Representations of Objects and Faces in Ventral Temporal Cortex, by Alice J. O’Toole, Fang Jiang, Hervé Abdi, and James V. Haxby:

Object and face representations in ventral temporal (VT) cortex were investigated by combining object confusability data from a computational model of object classification with neural response confusability data from a functional neuroimaging experiment. A pattern-based classification algorithm learned to categorize individual brain maps according to the object category being viewed by the subject. An identical algorithm learned to classify an image-based, view-dependent representation of the stimuli. High correlations were found between the confusability of object categories and the confusability of brain activity maps. This occurred even with the inclusion of multiple views of objects, and when the object classification model was tested with high spatial frequency "line drawings" of the stimuli. Consistent with a distributed representation of objects in VT cortex, the data indicate that object categories with shared image-based attributes have shared neural structure.

Later in the article comes a description of how they applied PCA to the fMRI data:

The goal of the analysis was to determine the pairwise "neural discriminability" of the object categories using the brain scans collected while a subject viewed different categories of objects. We applied the procedure to the fMRI data from each subject separately (cf. Haxby et al., 2001) and report the discriminability results averaged over the subjects. Odd and even runs of trials served alternately as the training and testing sets to yield 2 measures of performance for each subject on each pair of object categories. For simplicity, we describe the analysis for the face–house discrimination. The other object category pairs were treated analogously.
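The cross-validation scheme itself seems clear enough to me. A minimal Python sketch of how I read it (the number of runs is invented; the paper gives no code):

```python
import numpy as np

# Invented run numbering for one subject; not the paper's actual design.
run_ids = np.arange(1, 13)                   # pretend there are 12 runs
odd, even = run_ids % 2 == 1, run_ids % 2 == 0

# Odd runs train while even runs test, then the roles swap, which yields
# the two performance measures per subject for each category pair.
splits = [(run_ids[odd], run_ids[even]), (run_ids[even], run_ids[odd])]
for train_runs, test_runs in splits:
    print("train on runs", train_runs, "-> test on runs", test_runs)
```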

We proceeded as follows. First, half of the brain maps from the face condition and half of the maps from the house condition (i.e., the training set maps) were submitted to a PCA [Principal Components Analysis]. This provided a multidimensional space of the scans defined by orthogonal axes or PCs [Principal Components]. These axes are ordered by the amount of variance each explains in the data. This variance includes, but is not limited to, voxel activation changes that are due to changes in the experimental condition. Because PCA was applied to brain scans, individual PCs are themselves interpretable as brain scans that can be projected back onto the anatomy of the subject and viewed [this is the part I don't understand...]. Figure 1 shows a PC from the neuroimaging data projected back onto the anatomy of a subject.
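To make that sentence a bit more concrete for myself: if each brain map is stored as one row of voxel values, then each PC that comes out has exactly one weight per voxel, so it can be reshaped back into the 3-D volume and shown on the anatomy. A minimal sketch assuming Python and scikit-learn, with made-up sizes and random stand-in data (the authors do not describe their implementation in these terms):

```python
import numpy as np
from sklearn.decomposition import PCA

vol_shape = (30, 40, 40)                      # made-up voxel grid
n_voxels = int(np.prod(vol_shape))
n_scans = 40                                  # made-up number of training maps (faces + houses)

# One row per brain map, one column per voxel; random stand-in for real fMRI data.
X_train = np.random.default_rng(0).normal(size=(n_scans, n_voxels))

pca = PCA()
pca.fit(X_train)

# Each PC has one weight per voxel, so it reshapes back into a "brain map".
pc1_volume = pca.components_[0].reshape(vol_shape)

# Note: the number of PCs is at most the number of scans, not the number of voxels.
print(pca.components_.shape, pc1_volume.shape)   # (40, 48000) (30, 40, 40)
```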

The next step was to determine the "positions" of individual brain maps in the PCA space by computing their coordinates on each of the PCs. Coordinates represent the similarity of individual brain scans to the PCs. These coordinates can contain information about object category contrasts. Information about a category contrast might, for example, be seen in the opposition of positive versus negative coordinate values for scans from the two categories. Figure 1 shows an example of this kind of PC-based contrast for the face and house categories [Figure 1. Example of a PC that separates faces and houses (d′ = 3.3). Face area in orange and house area in blue. Intensity indicates the weighting of each voxel on this component.]. Scans taken while this subject viewed houses tend to have negative coordinates on this PC, whereas scans taken while the subject viewed faces tend to have positive coordinates. To illustrate the activation profile represented by this PC, Figure 1 shows the areas that are relatively more activated for faces (orange) versus the areas that are less activated for faces (blue). The reverse pattern occurs for houses, with more active areas in blue and less active areas in orange.
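This is how I picture the "coordinates": project each training scan onto a PC and see whether the face scans and the house scans end up on opposite sides of zero. Again a toy sketch with fabricated data; the two-group d′ formula is only my guess at what the figure caption reports:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces  = rng.normal(0, 1, (10, 5000)) + 0.3   # fabricated: faces slightly more active everywhere
houses = rng.normal(0, 1, (10, 5000)) - 0.3
X_train = np.vstack([faces, houses])

pca = PCA()
coords = pca.fit_transform(X_train)           # "positions" of each scan on each PC

face_c, house_c = coords[:10, 0], coords[10:, 0]   # coordinates on the first PC

# If this PC carries the face/house contrast, the two groups separate around zero.
# (The sign of a PC is arbitrary, so only the size of the separation matters.)
pooled_sd = np.sqrt((face_c.var(ddof=1) + house_c.var(ddof=1)) / 2)
d_prime = (face_c.mean() - house_c.mean()) / pooled_sd
print(face_c.mean(), house_c.mean(), d_prime)
```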

I just don't quite understand what they are doing here. Which variables go into the PCA? Which component variables come out?

I can imagine that they treat each voxel (a "brain unit", like a pixel but in three dimensions) as a variable whose value is that voxel's activation. They then show the "brain" either a house or a face and record the values the voxels take each time. Next, they take half of such scans (runs) and run PCA on them by throwing all the voxel variables into the analysis. Out of this come one or more components that describe, for example, how "face-like" or "house-like" a given run was. For instance, one component variable might take negative values when the brain was shown houses and positive values when it was shown faces.

OK. First of all, how does this work: "Because PCA was applied to brain scans, individual PCs are themselves interpretable as brain scans that can be projected back onto the anatomy of the subject and viewed"? Do they end up with as many components as there are voxels? And how can the components be interpreted as coordinates in three-dimensional space, i.e. as locations in the brain? Or is that perhaps not what they mean? Am I misunderstanding something?

Secondly, how can the component variables be used to examine runs that were NOT included in the PCA (remember, they only ran the PCA on half of the data) and to predict, from the values of the component variables, what the brain was looking at (a face or a house)? Does PCA work in such a way that a computational rule for constructing the component variables is derived, which can then be used to compute component-variable values for data that were NOT included in the PCA?
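Here is my guess at the answer, which I would be grateful to have confirmed or corrected: PCA fitted on the training half yields a fixed rule (a mean scan plus a set of component weight vectors), and that same rule can project any new scan onto the components. The held-out scans can then be classified in component space. A sketch with fabricated data; the nearest-class-mean classifier at the end is my own stand-in, not necessarily what the authors used:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_vox = 2000
pattern = rng.normal(0, 1, n_vox)             # invented "face vs. house" activation pattern

def make_scans(n, offset):
    # Toy brain maps: noise plus a category-specific pattern.
    return rng.normal(0, 1, (n, n_vox)) + offset

train_faces, train_houses = make_scans(8, pattern), make_scans(8, -pattern)
test_faces,  test_houses  = make_scans(8, pattern), make_scans(8, -pattern)

X_train = np.vstack([train_faces, train_houses])
y_train = np.array(["face"] * 8 + ["house"] * 8)

pca = PCA(n_components=5)
train_coords = pca.fit_transform(X_train)     # the rule is learned from the training half only

# The same rule projects scans that were NOT part of the PCA:
test_coords = pca.transform(np.vstack([test_faces, test_houses]))

# Classify each held-out scan by whichever class mean it is closer to in PC space.
face_mean  = train_coords[y_train == "face"].mean(axis=0)
house_mean = train_coords[y_train == "house"].mean(axis=0)
pred = np.where(np.linalg.norm(test_coords - face_mean,  axis=1)
              < np.linalg.norm(test_coords - house_mean, axis=1), "face", "house")
print(pred)                                    # should recover 8 faces followed by 8 houses
```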

Help wanted.

