Tuesday, December 29, 2009

So…to ensure that I thread the apparent motif of “holy smokies, it has been faaaar too long since I last wrote” appropriately, I will begin this post with yet another apology. This time it has truly been “faaaar too long” since I last wrote, neglecting to cover my last, and probably most meaningful, days at Sussex, a trip to Paris with the one, the only, the Sam LeGrys, a night in Dublin, and an epic plane trip home via Chicago and its nearby hotel rooms. It has been a trip. Goodbyes, on so many levels, and assimilation back into the wild ways of the Western U.S. have both been frequent themes.

It is now impossible for me to recapitulate the minutiae that gave way to the wonderful French wanderings that took place, or the pangs of heartache as farewells were made and people scattered across the world into corners they called home.

BUT I PRESENT THEE WITH:

Some highlights (since lists are easier than full sentences):

1. Tears, full grown Latino men, a Swiss Sussex pen, and friends at 4 in the morning waving goodbye.

2. Sam losing his wallet within 15 minutes of the trip…I know right?

3. Early morning bus trips, total sleep deprivation, watching London wake up to a day unfolding.

4. Flying away from Brighton with clear skies, seeing the white cliffs, the jutting piers, and the ocean/hill dichotomy I grew so fond of.

5. Hating life in the Paris metros, Sam struggling full force against the stupid little plastic ticket doors with our abundance of suitcases.

6. Not finding a taxi ANYWHERE IN PARIS!!!

7. Finally getting to the hotel, being greeted by a beautiful (and cheap!) room, pink walls, sunny windows, and a foot sink (not for peeing, as I discovered).

8. Notre Dame, the Louvre, Sacré-Cœur, Christmas Markets, the Eiffel Tower at night, cheap wine and good oranges, ordering crepes and not knowing what the heck we were getting, seeing long lost Goucher friends, and general, joyous “getting lost” moments in small streets.

9. Dublin. In general. This is like the chilliest city ever.

10. Pub-bbery and hearty soup.

11. Good Irish music, walking aimlessly, German bratwursts (and then ordering more), and me lying that I had left a scarf in a church so we could see the inside, illegally.

12. Saying goodbye to a good friend.

Yup. That is probably the most terse, disgustingly detail-starved analysis I could possibly provide, but it will have to suffice. I simply could not deal with moving on (AHHH GHANA!) without a little bit of recapping since the last hello. I am almost positive, however, that I will start fervently blogging again once I have far more cultural fodder to chew on. I have a feeling Ghana will be richly chewable.

Until then, I am just enjoying being home, relishing the cold and mountains, and strengthening myself for yet another adventure (with hopefully a dash of GRE studying, graduate school researching, paper writing, internship applications, and seeing my dear old Dad). As per usual, there is a lot on my plate, but to be honest, I’ve never felt so able to deal with it. Maybe it is just hyperactive positivity manifesting like a virus as a product of all of my neural epiphanies, or maybe it is a more long-term development. I am not sure. But I feel so trusting in the fact that “what needs to happen, will happen” and am finding myself floating along merrily as the tides shift. And when that happens, there’s not much to complain about. :)

t-12 days.

I’ll be in touch.

Kelly

p.s. more to come. SOON…that is a non-retractable promise.

Friday, December 4, 2009

This is what you should blame for my total lack of blogging. I 1. apologize for my severe incapacity to write more frequently and 2. applaud you if you read the entirety of this essay. It is amazing how riveting I found writing 20 pages about neuroimaging techniques to be; worrisome, almost. This week is finals, caked with unbelievable amounts of studying, packing, goodbyes and hellos. And then Paris/Dublin/Denver soon after that. I promise I will write some serious updates once life is slightly less turbulent (although wonderfully so).

I love you all.







The Fusion Point of Mind and Machine:

Can Neuroimaging Tell us Anything about Cognition?

Kelly A. Graves

University of Sussex

Cognitive Neuroscience

Introduction

It is arguable that the invention of neuroimaging, devices crafted with the intent of observing neural activity, has changed the course of human understanding in a way that no other machine has yet superseded. What was once the stuff of myth, peeling open someone’s skull and peering into their innermost cognitions, has been made possible, at least theoretically, by these recent technological advancements. The purpose of this essay is to evaluate whether neuroimaging truly has ruptured the boundary between myth and reality, giving way to potentially the most potent tool of human investigation, or whether the claim, although seductive, needs further consideration.

What is Neuroimaging?

In order to delve into this debate with any kind of accuracy, establishing a general definition of neuroimaging, albeit one rooted in blatant oversimplification for reasons addressed later, is paramount. According to the Oxford English Dictionary, the word “neuroimaging” denotes “Imaging of the structure or activity of the brain or other part of the nervous system by any of a variety of techniques.” This definition is purposefully broad. Imaging someone’s brain activity can take place through many different methodologies, with the common denominator simply being an attempt to understand the elements that comprise neural behavior.

In general, neuroimaging can be divided into two classifications, functional and structural, where the major difference between the two is the treatment of a time/space relationship. Functional neuroimaging, the class of techniques that will be the focus of this essay, provides volumetric, spatially localized measures of neural activity across the brain and across time; in essence, a three-dimensional movie of the active brain. This category includes such techniques as PET, fMRI, EEG, MEG, NIRS, and SPECT. Structural neuroimaging, by contrast, primarily examines the physical components of the brain irrespective of time; in essence, a three-dimensional picture of the neural meat that gives rise to an active brain. This category includes techniques such as MRI, CT and, in some cases, TMS. Although a cursory distinction between these two types of neuroimaging is vital for definitional purposes, individual techniques, as well as their potential for measuring cognition, will be discussed further below.

The Evolution of Neuroimaging Techniques

Although interest in mapping the brain-behavior relationship has dazzled philosophers, psychologists and biologists for decades, true ‘neuroimaging’ did not begin until the early 1900s. In 1918, the American neurosurgeon Walter Dandy introduced the technique of ventriculography, the process of removing cerebrospinal fluid from the ventricles and injecting filtered air in its place in order to increase the visibility of x-ray imaging (Beaumont, 1983). This process, although clearly not ideal for the patients being examined, sparked interest in the possibilities of investigating the brain in vivo, which in turn opened the door to understanding the brain outside the sterile confines of the autopsy room.

Although Dandy’s physiological achievement provided considerable insight into the living brain, its ability to communicate information about complex cognitive elements was rather limited. Contemporary neuroimaging techniques have since pushed this boundary, starting with the invention of cerebral angiography in 1927, the advancements of radioactive imaging in the 1960s and the basis for MRI in the early 1970s (Lister, 1991). A clear trajectory can be seen in this evolution of mechanical mastery, catalyzing questions about whether such methods have crossed the threshold of communicating the depth of human thought.

The Debate

There is no doubt that contemporary neuroimaging has a lot to say about neurological function. To assert that neuroimaging is a fruitless endeavor, incapable of capturing anything associated with mental processes, would be an argument void of critical evidence, and it is certainly not the thrust of this paper. However, it is also clear that there are certain things neuroimaging can communicate better than others, possibly limiting the claim that it is able to encompass the richness associated with cognition.

Providing insight into location (although typically as gross estimates) and time (the rate at which a certain neuron or group of neurons fires) are two of neuroimaging’s crowning achievements. But the question then follows: Is there more to a cognition than plotting its coordinates on some kind of time/space grid? Can pinpointing where and when a cognition occurs, by default, encompass what a cognition is? And if so, why is participant feedback almost always vital to analysis? The following section examines these questions, stemming primarily from epistemological roots, in hopes of outlining the possible theoretical limitations of neuroimaging.

Structure/Function Mapping

The first point is based on the age-old debate over the mind/brain breach; in other words, can knowledge of a structure give way to knowledge about its function? Neuroimaging’s goal, by nature, is to analyze local brain structures in hopes of communicating something about their innate or learned function. As Martin Sarter asserts in his essay Brain Imaging and Cognitive Neuroscience: Toward Strong Inference in Attributing Function to Structure: “The potential use of brain imaging for the study of cognitive functions derives from the explicit or implicit assumption that cognitive operations are localizable to focal brain regions or systems” (Sarter, 1996). This notion has been dubbed by its adversaries, with a slight sarcastic undertone, a “new form of phrenology,” in reference to Gall’s mapping of personality traits onto bumps on the skull (Simpson, 2005).

As in medical biology, tagging known bodily processes to their appropriate organs can tell us a lot about a process, as well as what happens when the organ performs subpar. But the nucleus of the controversy, when it comes to matters of the brain, is that its processes appear far less corporeal than those of a liver or a kidney, motivating much of the “black box” stance assumed by behaviorists (Skinner, 1938). Being able to manipulate complex mathematics, write a creative story, or appreciate the aesthetics of a Monet are processes that occur based on the specific structures of the brain, but are far less tangible than the workings of other organs, rendering the brain a unique biological quagmire. In The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain, William R. Uttal asserts that this attempt to map cognitions amounts to “a neuroreductionist wild goose chase” (Uttal, 2000). Sarter goes on to suggest that, because of this complexity, mapping the brain’s processes onto the structures unveiled through neuroimaging techniques might not be as easy as one would hope: “In attempting to map complex functions onto complex structures, there is a considerable likelihood that concepts and models at the functional (cognitive) level, although they may overlap, may not be isomorphic with concepts and models of neural systems or processes” (Sarter, 1996). A lack of function-structure isomorphism can hamper attempts to understand brain-behavior relationships.

Epiphenomenalism

This assertion is at the heart of an epiphenomenalist, and sometimes holistic, critique of neuroimaging. Epiphenomenalism, although debated within its own philosophical discipline, is essentially the view that the identification of A's mental properties does not provide a causal explanation of A's behavior (Caston, 1997). In terms of cognition, this implies that the product (conscious or unconscious behavior) is categorically different from the parts that support such behavior (the neurological underpinnings). Sometimes this notion gets blended with holism when contrasted with reductionism (the idea that the whole can be entirely explained by its parts). Thomas Henry Huxley, who held this view, compared mental events to “steam coming off of an engine” (Huxley, 1874), and William James to “a shadow following a person” (James, 1879); both are entities that reflect, but are not bound to, the objects that produced them. The notion of cognition as some kind of ethereal steam that cannot be characterized by an analysis of the nuts and bolts of a machine presents a problem for the function/structure mapping often supported by neuroimaging findings.

Douglas Hofstadter creatively deals with this issue and its implications for brain studies in his Pulitzer Prize-winning book Gödel, Escher, Bach, in a discussion among three fictional characters comparing cognition to an ant colony:

Crab: I reject reductionism. I challenge you to tell me, for instance, how to understand a brain reductionistically. Any reductionistic explanation of a brain will inevitably fall short of explaining where the consciousness experienced by the brain arises from.

Anteater: I reject holism. I challenge you to tell me, for instance, how a holistic description of an ant colony sheds any more light on it than is shed by a description of the ants inside it, and their roles, and their interrelationships. Any holistic explanation of any ant colony will inevitably fall short of explaining where the consciousness experienced by an ant colony arises from.

Achilles: I think you are still having difficulty realizing the difference in levels here. Just as you would never confuse an individual tree with a forest, so here you must not take the ant for the colony. (Hofstadter, 1999)

This concept of levels that Achilles brings up is one of the underlying roots of the epiphenomenalist approach, and can be further demonstrated by a simple analogy: a balloon. On the microscopic level, one set of physical laws describes the velocity, location, and physical attributes of the particles whirling around in gaseous form, whereas on a grosser level, an entirely different set of physical laws describes the temperature, volume and pressure of the balloon itself. These two realities, although coexisting, are entirely separate. The notions of volume and temperature, although quite real qualities, mean nothing in the microscopic realm of particle velocity, and vice versa.
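The physics of the analogy can be made exact; as a standard textbook illustration (added here only to sharpen the point), the very same balloon obeys

$$PV = nRT \qquad \text{and} \qquad \left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2} k_{B} T,$$

where the left-hand law speaks only of ensemble properties (pressure, volume, temperature), while the right-hand relation shows that “temperature” dissolves, at the particle level, into an average kinetic energy of the molecules, a quantity no individual particle possesses.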

In an epiphenomenal analysis of cognition, a similar problem presents itself. What if there are layers in the brain, as there are in the laws of physics? Perhaps one set of rules applies at the neurological level, whereas another applies at a grosser level (the complex cognitive soup that arises from the neurological hardware)? It is unclear where the resolution to this issue lies. A strong proponent of neuroimaging’s abilities might assert that no such breach exists, citing a slew of recent studies directly correlating brain areas with functions, whereas an epiphenomenalist adversary might dismiss neuroimaging, and the data it produces, as an irrelevant endeavor, much like Hofstadter’s Crab and Anteater. Nevertheless, an understanding of the issues posed by an epiphenomenal approach is, at a bare minimum, important for developing a more critical eye when evaluating neuroimaging findings.

A Verb or a Noun?

Ludwig Wittgenstein famously argued that the words we use to express ideas are sometimes insufficient means of communicating concepts, largely due to the relativistic interpretation of meaning (Wittgenstein, 1974). For that reason, it is vital to dissect the linguistic notion of cognition to further explore the neuroimaging debate. The word “cognition” can embody two senses, one verb-like and one noun-like: it can indicate either “the process of knowing” or “a result of a cognitive process.” This divide (cognition as a process; cognition as an arising entity) is paramount in coming to a conclusion about whether neuroimaging can measure it.

As previously discussed, neuroimaging is particularly good at spatial and temporal measurements of neural activity. Take, for example, a classic PET study by Corbetta et al., who measured changes in the regional cerebral blood flow of normal subjects while they discriminated different attributes (shape, color, and velocity) of a set of visual stimuli (Corbetta, 1990). They found that attention to speed, color or shape changed activity in the extrastriate cortex. What this study measured, however, was the process of cognition, not the quality of the cognition itself, much like many other experiments in the scientific community today. The data analyzed were not what the participants were thinking about, but the process their brains underwent in response to information relayed from their retinas to the occipital areas in a given context. In this regard, neuroimaging is an effective means of discerning where certain processes occur in relation to different stimuli, but not the cognitive result of those processes (i.e., what the participant’s thoughts, experiences, and mental manipulations were during those processes).

Conversely, if Corbetta and his team wanted to examine not just where and when the occipital lobes reacted to different pictures, but the exact nature of the participants’ thoughts while viewing the shapes and their properties, neuroimaging’s abilities might be more limited. For example, if you imaged someone’s brain and found that the extrastriate cortex and the amygdala were particularly active, would that tell you anything about what the person was thinking, without participant feedback? Or would it simply tell you where and when a thought occurred in the brain? You might conclude from previous research that the person must have been thinking about something visually and emotionally salient, but you would never know that the person was actually thinking about an image of their grandmother’s herb garden in the south of France. The missing element in neuroimaging studies, it seems, is not the measurement of the process of thought, but of the quality of the thought itself.

Perhaps this linguistic breach, cognition being both a process and an entity, is where much of the neuroimaging debate is rooted. Given the scientifically substantiated ability of neuroimaging to accurately discern the location and timing of certain processes in the brain (visual attention, in the study discussed), it appears that neuroimaging is a sufficient means of measuring the verb-like component of the definition: the process of knowing. The noun-like component, however, the arising product of those processes, is perhaps where neuroimaging falls short, relating back to the epiphenomenal critique discussed earlier. This divide, although seemingly an arbitrary linguistic distinction, might provide some clarity in the controversy surrounding neuroimaging and its ability (or inability) to measure ‘cognition.’

From Thoughts to Numbers: Data and its Implications

Unlike the previous section’s debate about the measurability of cognition, the issue of data interpretation assumes that at least parts of human cognition can be measured, and is tethered to a practical, rather than philosophical, critique. Here it is vital to examine how data are used, and whether cognitions, once ethereal thought patterns, can be squeezed into numbers, packaged by equations and eventually mined for meaning. This question focuses less on whether cognition should be measured by neuroimaging, as previously investigated, and more on how cognition is currently “measured,” offering a cursory analysis of possible issues with the methods employed.

Reverse Inferences: Issues of Deductive Fallacies

Deduction, an ancient philosophical vantage point that jostled the scientific community during the Enlightenment and the age of empiricism, is defined as drawing a conclusion by reasoning from a set of logically sound statements (Negri, 2001). A generic example is the following syllogism:

1. All men are mortal

2. Socrates is a man

3. (Therefore,) Socrates is mortal

The third statement (Socrates is mortal) is the inference, a specific conclusion based on the preceding information. The inference is only considered true if all of the preceding statements hold true as well. In neuroimaging, however, two kinds of inference can be made: 1. a direct inference and 2. a reverse inference. A direct inference takes the form “if cognitive process X is engaged, then brain area Z is active,” whereas a reverse inference is slightly more complicated, and is a pattern of reasoning that Russell Poldrack describes as endemic throughout the neuroscience literature (Poldrack, 2006).

Reverse inference is essentially the process of reasoning backwards. Instead of directly correlating cognitive process X, say language, with brain area Z, Broca’s area, it hypothesizes that language is occurring solely on the basis of observed activation in the inferior frontal gyrus. Reverse inference “reasons backwards from the presence of brain activation to the engagement of a particular cognitive function” (Poldrack, 2006). The logic of studies based on reverse inference is displayed schematically in Figure 1:

[Hypothesis 1] When task T is presented, brain area A is active.
[Hypothesis 2] When cognitive process X is engaged, brain area A is active.
____________________________________
[Inference] Brain activity in area A demonstrates the engagement of cognitive process X by task T.

Figure 1 (Gomez, 2002).

An example comes from an fMRI study with rats, in which the investigators compared neural activation during pup suckling (A) with that during cocaine administration (B). They found more activity in the ventral striatum for task A than for task B. From this, the authors concluded that pup suckling (A) is more rewarding (X) than cocaine administration (B) (Ferris, 2005). Element X, the notion of one task being more “rewarding,” is the matter of concern here, since it rests on previous research citing ventral striatum activation as categorically related to reward, which is then assumed, post hoc, to hold in this case too. In response to this tactic Poldrack concludes: “It is crucial to note that this kind of ‘reverse inference’ is not deductively valid, but rather reflects the logical fallacy of affirming the consequent” (Poldrack, 2006).

This setup is what many critics assert is erroneous in neuroimaging studies, and it is at the heart of why neuroimaging conclusions are often contested. Originally this form of inference was used to explain findings for which no preexisting data were available; a direct inference was, and to some extent still is, always preferred. But now, as Poldrack asserts, “In many cases the use of reverse inference is informal; the presence of unexpected activation in a particular region is explained by reference to other studies that found activation in the same region” (Poldrack, 2006). One can see how misinformation can trickle down, especially if the previous claims were themselves postulations built on reverse inference.
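Poldrack casts the strength of a reverse inference in Bayesian terms: how much an observed activation tells you about a cognitive process depends on how selectively the region responds to that process. The short Python sketch below illustrates that framing; the probabilities are invented for illustration and are drawn from no actual study.

```python
# A minimal sketch, in the spirit of Poldrack's (2006) Bayesian analysis of
# reverse inference. All probabilities below are illustrative assumptions.

def posterior(p_act_given_x, p_act_given_not_x, prior_x):
    """P(process X engaged | activation in area A), by Bayes' rule."""
    p_act = p_act_given_x * prior_x + p_act_given_not_x * (1 - prior_x)
    return p_act_given_x * prior_x / p_act

# Suppose the area activates in 80% of tasks engaging process X, but also in
# 30% of tasks that do not engage X (a non-selective region), with X engaged
# in half of all tasks studied:
print(posterior(0.8, 0.30, 0.5))  # ~0.73: activation is only weak evidence
print(posterior(0.8, 0.05, 0.5))  # ~0.94: a selective region licenses a
                                  # much stronger reverse inference
```

The same observed activation is weak evidence when the region also fires for unrelated processes, and strong evidence only when the region is selective, which is precisely why the informal reverse inferences Poldrack describes are so treacherous.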

Logistical Issues: Signal Detection, Spatial Localization, Normalization

Another major concern with neuroimaging and the data it produces is not the principles of interpretation, as with reverse inference, but the difficulty of acquiring accurate data in the first place. If what is actually going on chemically or electrically in the brain is not measured accurately, the repercussions can be massive: specific neurons, or networks of neurons, code for very specific information, rendering even a discrepancy of a few millimeters salient. With this said, it would be unfair to ignore the technological strides neuroimaging has made in the last decade, often minimizing some of the problems discussed through techniques such as smoothing, parametric mapping and more carefully constructed scientific frameworks. Nevertheless, a debate about whether neuroimaging can measure cognition would be incomplete without examining some of the pivotal issues in its practice.

Signal detection is probably the most basic problem posed when measuring cognitive processes in the brain. Since there are several protective layers between the outside world and the brain (hair, skin, skull, dura mater, arachnoid membrane, cerebrospinal fluid and pia mater), combined with the minute electrical impulses of neurons, it is difficult to always make an accurate analysis of what is taking place. This is why EEG, although good at measuring temporal information, is often limited in its ability to localize, since it has to tune into distorted electrical changes from an external vantage point. Because of this, researchers must average data across many trials, rooting through the noise and applying inverse-modeling equations to estimate where neural firing originated. The problem can be eliminated by single-cell recording, often seen in animal studies, but this is rare in humans due to its obvious complications. Recording is further limited by the fact that only the lateral surfaces of the brain can be measured reliably, with temporal lobe activation often obstructed by air flow through the sinuses, and important subcortical structures like the amygdala remaining out of reach.
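To make the averaging step concrete, here is a small hypothetical sketch in Python (the signal shape, noise level and trial count are all invented): because the evoked response repeats across trials while the noise does not, averaging N trials shrinks the noise amplitude by roughly the square root of N.

```python
import numpy as np

# Hypothetical illustration of trial averaging in EEG: the evoked response
# repeats on every trial, the noise does not, so averaging N trials cuts
# noise amplitude by ~sqrt(N).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.6, 300)                 # one 600 ms epoch
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)  # toy evoked component at 300 ms
n_trials = 200
trials = erp + rng.normal(0.0, 5.0, size=(n_trials, t.size))  # noise dwarfs signal

average = trials.mean(axis=0)  # residual noise ~ 5 / sqrt(200) ≈ 0.35
print(f"peak of single trial: {t[np.argmax(trials[0])] * 1000:.0f} ms")  # random
print(f"peak of average:      {t[np.argmax(average)] * 1000:.0f} ms")    # ~300 ms
```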

Although PET and fMRI are not as limited as EEG, since they avoid the external-measurement problem by tracking metabolic processes internally through blood flow and capillary expansion, there is still an issue of signal detection due to “neural noise.” During experimentation, many neural events take place that are not connected with the experiment. For example, in Corbetta’s study mentioned earlier, a participant might be actively engaged in the discrimination task, but also thinking about being hungry, cold, or tired, about the night before, their first grade teacher, their favorite color; the list is limitless. Researchers attempt to avoid this through tactful experimental design, as in Petersen et al.’s subtraction study of word processing in the brain. As with EEG, many trials can be averaged and contrasted against trials that vary only in the feature of interest, yielding a highly educated guess about what makes a task unique. But still, the problem remains: circumventing irrelevant neural activity is hard, and cognitive tasks can rarely be crisply separated.

The most common analysis of haemodynamic data is a “mass univariate” approach, as exemplified by the popular Statistical Parametric Mapping (SPM) software package (Friston et al., 1995), which aggregates activity into “voxels.” These units of measurement are essentially regions of statistically significant activation obtained by subtraction from preexisting neural activity. So, contrary to common misconception, when presented with a brain image filled with an array of colors, what you are actually viewing is not brain activity but a map of statistically significant t-tests averaged across many trials (a toy sketch of this approach follows the quotation below). Although voxels are a more accurate way of looking at activity in the brain, and of possibly correlating it to cognitions, several problems remain:

“Even for functions that are in fact localized to specific neural circuits, these circuits may (a) be diffusely organized or widely distributed; (b) anatomically overlap, or even share common neuronal elements with circuits mediating different functions; or (c) perform different functions depending on the patterns of input activation associated with different cognitive states or contexts. These possibilities would clearly complicate efforts to elucidate the cerebral localization of functions.” (Sarter, 1996)
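As a deliberately toy illustration of the mass-univariate idea (this is not SPM itself: the data are simulated, and haemodynamic modeling, registration and multiple-comparison correction are all omitted), one t-test is run per voxel on a task-versus-baseline contrast:

```python
import numpy as np
from scipy import stats

# Toy sketch of the mass-univariate approach: one t-test per voxel,
# contrasting simulated task acquisitions against baseline ones.
rng = np.random.default_rng(1)
shape = (8, 8, 8)                                # a tiny 512-voxel volume
task = rng.normal(0.0, 1.0, size=(30,) + shape)  # 30 task scans
rest = rng.normal(0.0, 1.0, size=(30,) + shape)  # 30 baseline scans
task[:, 4, 4, 4] += 1.5                          # one truly "active" voxel

t_map, p_map = stats.ttest_ind(task, rest, axis=0)  # one test per voxel
print(np.argwhere(p_map < 0.001))                   # ideally just [[4 4 4]]
```

Even in this 512-voxel toy volume, an uncorrected threshold of p < 0.001 is expected to admit a spurious voxel roughly every other run; in a real scan with hundreds of thousands of voxels, the corrections omitted above become unavoidable.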

This issue is further compounded by what is known as normalization (Raichle, 1998). In order to compare an image that has been averaged over a series of trials and pumped through statistical analysis with another image, it must be standardized in some fashion to equate for overall differences in measured activation across images. Historically, this has been done by stretching the measured brain image over a standard brain image shared by all researchers. Although standardization can dramatically improve the signal-to-noise properties of the data, it can introduce apparent differences between images that are artifactual. As mentioned, “smoothing” the image attempts to mitigate this, minimizing isolated active locations and amplifying regions where neighboring activation agrees. Although this may compensate for differing neural representation from person to person, what it effectively does is make the findings more general, in a field where specificity is paramount.
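A minimal sketch of that trade-off, again with invented data: two subjects whose “same” activation lands a couple of voxels apart only produce a common group peak once each map has been blurred, which is exactly the loss of specificity described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of smoothing with invented 2-D "brains": two subjects show
# the same activation two voxels apart; smoothing lets the group average
# overlap, at the cost of spatial specificity.
vol_a = np.zeros((32, 32)); vol_a[15, 15] = 1.0  # subject A's peak
vol_b = np.zeros((32, 32)); vol_b[17, 17] = 1.0  # subject B's peak, offset
group_raw = (vol_a + vol_b) / 2                  # two separate weak blips
group_smooth = (gaussian_filter(vol_a, sigma=2) +
                gaussian_filter(vol_b, sigma=2)) / 2
print(np.unravel_index(group_raw.argmax(), group_raw.shape))     # a lone peak
print(np.unravel_index(group_smooth.argmax(), group_smooth.shape))  # ~ (16, 16)
```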

The issues posed by spatial localization, signal detection and normalization are highly relevant to the cognition debate. Although they do not bear directly on whether cognition can, in principle, be communicated through neuroimaging, they show that even if neuroimaging could encapsulate the complexity of human cognition, there is still a long way to go before exact, undeniable interpretations of data can be made. Perhaps with the evolution of technology these issues will become obsolete, allowing researchers to tap more directly and accurately into the brain’s cognitive processes and their locations. But for the time being, even if one were to dismiss the philosophical issues posed in the previous section and assume neuroimaging is a viable means of communicating cognitions, there are still reasons to believe it cannot always capture the full picture, simply due to logistical issues.

Conclusion

The seemingly innate, possibly species-centric, desire to understand human nature is at the heart of almost every discipline. It can be traced throughout the impressive cocktail of human achievement, from anthropomorphic Greek mythology, to Marxist economic theory, to Freudian psychoanalysis, to Darwinian evolution: the desire to understand the self is a common denominator. It is as if, by cracking the code of ‘what it means to be human,’ our place in the cosmos could be better understood. Consequently, utilizing the complex cognitive systems produced by eons of evolution, humans have finally been freed from the limitations of mere inner musings about the self, and have developed external machines to assist in the process. These technological extensions of self-investigation propose a radical new way of examining humanity, one whose potential rivals its limitations, sparking a debate of utmost importance to human meaning.

Although there is tremendous evidence, coupled with clever scientific footwork, suggesting that neuroimaging is an effective means of communicating information about cognition, there are, as shown, reasons to tread cautiously when estimating its power. Problems posed by function/structure discrepancies, potential epiphenomenal byproducts, the differentiation between the process and the quality of thought, reverse inference, and errors in general data interpretation are only a few of the major reasons why. There is more to the debate, with much still to be discovered, so a clear-cut conclusion would be not only presumptuous but inaccurate. One thing, however, is evident: neuroimaging has profound potential for measuring cognition, provided it is approached tactfully.

Works Cited

Beaumont, J. Graham (1983). Introduction to Neuropsychology. New York: The Guilford Press.

Caston, V. (1997). "Epiphenomenalisms, Ancient and Modern." The Philosophical Review 106: 309-363.

Corbetta, M. et al. (1990). Attentional Modulation of Neural Processing of Shape, Color, and Velocity in Humans. Science 248: 1556-1559.

Ferris, C. F. et al. (2005). Pup Suckling is More Rewarding than Cocaine: Evidence from Functional Magnetic Resonance Imaging and Three-Dimensional Computational Analysis. J. Neurosci. 25: 149-15.

Friston, K., Ashburner, J., Frith, C., Poline, J.-B., Heather, J., and Frackowiak, R. (1995). Spatial Registration and Normalization of Images. Human Brain Mapping 2: 165-189.

Hofstadter, Douglas R. (1999). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.

Huxley, T. H. (1874). "On the Hypothesis that Animals are Automata, and its History." The Fortnightly Review, n.s. 16: 555-580.

James, W. (1879). "Are We Automata?" Mind 4: 1-22.

Lister, Richard G. and Herbert J. Weingartner (1991). Perspectives on Cognitive Neuroscience. New York: Oxford University Press.

Negri, S. and von Plato, J. (2001). Structural Proof Theory. Cambridge: Cambridge University Press.

"neuroimaging, n.1" The Oxford English Dictionary. 2nd ed. 1989. OED Online. Oxford University Press. Retrieved 4 Nov. 2009.

Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data? Trends Cogn. Sci. 10.

Raichle, M. E. (1998). Behind the Scenes of Functional Brain Imaging: A Historical and Physiological Perspective. Proceedings of the National Academy of Sciences 95: 765-772.

Sarter, M., Berntson, G. G., & Cacioppo, J. T. (1996). Brain Imaging and Cognitive Neuroscience: Toward Strong Inference in Attributing Function to Structure. American Psychologist 51: 13-21.

Simpson, D. (2005). Phrenology and the Neurosciences: Contributions of F. J. Gall and J. G. Spurzheim. ANZ Journal of Surgery 75(6): 475.

Skinner, B. F. (1938). The Behavior of Organisms. New York: Appleton-Century-Crofts.

Uttal, W. (2000). The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain. Cambridge, MA: MIT Press.

Wittgenstein, L. (1974). Philosophical Grammar. R. Rhees (ed.), A. Kenny (trans.). Oxford: Blackwell.