02 March 2009

Cerebral Organization

I suppose it makes me wonder how the human brain is wired, and what the different connections are inside the four lobes spread across the two hemispheres. I have a mid-term for my Psycholinguistics class this coming Tuesday, and I have been preoccupied with models and theories of cognition lately.

For every psycholinguistic phenomenon, there are at least two theories competing for attention. And most of these theories span the divide between modular and connectionist viewpoints.

Basically, the difference lies in whether you see different parts of the brain as able to do multiple different tasks, or whether you believe certain parts of the brain are dedicated to doing one thing and one thing only. If you believe the former, then you are probably a connectionist, and if you believe the latter, then chances are you subscribe to modular theories of the mind.

I believe I am in a good position to compare these two, given that I took a class on Neurolinguistics last semester, and the professor (who is from the Linguistics Department) was very strongly inclined towards modularity. This semester, on the other hand, I am taking a Psycholinguistics class, and the professor (who is from the Psychology Department) is very strongly inclined towards connectionism.

The good thing about modularity is that every component of whatever model it describes seems to have a purpose. Take the Dual-Route Model of written word recognition, for example, pioneered by Max Coltheart and his colleagues. They posit a two-route model to account for how humans read and recognize words. There is the lexical route, which passes through a mental dictionary called the lexicon, and there is the grapheme-to-phoneme conversion (GPC) route, which is a collection of rules for translating letters into sounds. They posit this architecture because of certain disabilities, different varieties of dyslexia, in which some people can read words but not non-words (implying that the lexical route is intact but the GPC route is damaged), while others can read words and non-words but not irregular words (implying that the lexical route is damaged but the GPC route is intact).
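The logic of the two routes can be sketched in a few lines of (grossly simplified) Python. The mini-lexicon and the GPC rules below are made up for illustration, and the real model is far more elaborate, but the division of labor is the same: whole-word lookup on one route, letter-by-letter assembly on the other.

```python
# Toy dual-route sketch. Entries and rules are invented for illustration.
LEXICON = {"yacht": "jɒt", "cat": "kæt", "pint": "paɪnt"}  # stored whole-word pronunciations

GPC_RULES = {"c": "k", "a": "æ", "t": "t", "n": "n", "p": "p", "i": "ɪ"}  # letter-to-sound rules

def read_aloud(word):
    # Lexical route: whole-word lookup in the mental dictionary.
    if word in LEXICON:
        return LEXICON[word]
    # GPC route: assemble a pronunciation letter by letter.
    return "".join(GPC_RULES.get(ch, "?") for ch in word)

print(read_aloud("yacht"))  # irregular word: only the lexical route gets it right
print(read_aloud("nat"))    # non-word: only the GPC route can handle it
```

The dissociations fall out naturally: knock out `LEXICON` and irregular words like "yacht" get mangled by the rules (a surface-dyslexia pattern), while knocking out `GPC_RULES` leaves non-words like "nat" unreadable (a phonological-dyslexia pattern).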

Then there is the connectionist way of modeling things, such as the Parallel Distributed Processing (PDP) Model championed by James McClelland. One thing that initially intimidated me about the connectionist architecture is that it seemed so mystical: there is no concrete division of labor, since activation is not modular but a pattern spread across the entire network. In word recognition, for example, there are three main sections of the network: the orthographic, the phonological, and the semantic. A word is recognized by activation spreading over the entire network, not by some all-or-nothing access through a lexicon.
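The "pattern, not lookup" idea is easier to see in a toy sketch. Here is a minimal spreading-activation loop over three made-up unit pools (orthographic, phonological, semantic); the weights are random stand-ins, not anything from an actual PDP implementation. The point is only that the output is a graded pattern the whole network settles into.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9                               # 3 units per pool, purely illustrative
W = rng.normal(0, 0.3, (n, n))      # hypothetical connection weights
np.fill_diagonal(W, 0)              # no self-connections

a = np.zeros(n)
ortho_input = np.array([1.0, 0.2, 0.0])
a[0:3] = ortho_input                # clamp an "orthographic" input pattern

for _ in range(20):                 # let activation spread and settle
    net = W @ a                     # each unit sums its weighted inputs
    a = 0.8 * a + 0.2 * np.tanh(net)  # leaky update with a squashing nonlinearity
    a[0:3] = ortho_input            # keep the input clamped throughout

print(np.round(a[6:9], 3))          # the "semantic" pattern that emerges
```

There is no single place where the word "lives": damage any subset of weights and the pattern degrades gracefully rather than disappearing outright, which is the connectionist answer to the one-neuron-per-word worry.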

If one were studying brain imaging, say, looking at brain-damaged patients and studying where their lesion sites are, chances are one would prefer the modular view of cerebral organization. One could simply look for the common denominator: overlay the scans of the different patients and see which site is commonly damaged across them. That site should be the locus of the deficit the patients share. Aphasia studies typically work this way.
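The overlay step is really just set intersection. A minimal sketch, with each patient's scan reduced to an invented binary mask (1 = damaged voxel):

```python
import numpy as np

# 3 patients x 5 "voxels"; toy data, not real scans
patient_masks = np.array([
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
])

overlap = patient_masks.sum(axis=0)   # how many patients have damage at each voxel
common_site = np.where(overlap == len(patient_masks))[0]
print(common_site)                    # voxel(s) damaged in every patient
```

Whatever survives the intersection is the candidate locus, which is exactly the inference the modular view licenses and the connectionist retort below calls into question.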

However, not everyone agrees with this view. Connectionists would retort: do you really believe there is just one neuron responsible for the representation of a single lexical entry?

Over time, I have grown more sympathetic to the connectionist theories than to the modular ones. In terms of parsimony, for example, the architecture is unified: connectionists posit the same architecture for every phenomenon to be explained, whether it is speech perception, spoken word recognition, written word recognition, or possibly even discourse processing. I do think it is a system that could handle the different phenomena that human language and cognition exhibit.

Another strength of connectionist theories is that their models have actually been implemented. These models have seen countless Monte Carlo simulations and other computational implementations, which makes believing in them easier. If the numbers bear out the predictions, all the better.

Now this brings me to my current task: I need to pick a theory to account for discourse processing in the context of my dissertation. That is why I have been harvesting these papers and seeing what is out there, so that I can model what I think is the way the human mind works.

(Embassy of Kyrgyzstan, from my Embassy Row Series)


  1. Thank you for the informative post!

    I believe the 'truth' lies somewhere in between these two divides. Like you, I am more sympathetic to the connectionist model (on a 0-to-10 scale from C to M, I would put myself at 3). The reason is that some experiments I have read about have indeed supported some aspects of the modular viewpoint. I can't remember the details offhand, but some of those were related to a missing corpus callosum, for example. Another interesting one that comes to mind is the MRI scanning of Susan Polgar (the first female grandmaster) that showed how her brain uses areas typically used to store long-term memories to store chess-move databases and history instead.

    This is indeed a fascinating area of study and will continue to engage us for many centuries to come! :-)

  2. Mahendra,

    Thanks for dropping by, and thank you for the information you mentioned. Those were definitely interesting, and I will try to look them up later.