Sunday, May 8, 2016

Motherboard Brain: The Argument For Discussing Thought as a Computer-Like Ladder of Complexity

Introduction

This short essay strives to explain a very simple concept - the idea of a ladder of thought complexity that is metaphorically similar to the layers of programming languages. Although the metaphor is used to make the idea accessible, I'm hoping readers can abstract from it sufficiently to see the value such an approach could have.

One

When people use metaphors to explain philosophical arguments, it's usually awful.

You can deploy metaphors for various purposes and in various ways, but the basic application is this: you're trying to elucidate a point someone isn't getting by using a simpler concept (or relationship) they're already familiar with. So you dilute whatever concept you're trying to explain by "trading" it for some in-common, generic version. At the same time, you load up the concept you're teaching with all of the pre-processed baggage the learner has already associated with your metaphorical bridge. It also limits your thinking, because you impose arbitrary bounds, and so on. You get the idea.

All that said, the temptation is too strong.

I was listening to "The Partially Examined Life" podcast - an episode on Heidegger (not required for reading this post, though it might help if you're not that familiar with Heidegger). In talking about Heidegger, they referred back to a top-line explanation of some core Kantian concepts. Namely, and oversimplifying, the understanding that our experience is fundamentally limited by our "language" of experience (the senses), such that we cannot be said to have access, via experience, to any "thing in itself." A really accessible example would be to say something like "of the entire light spectrum, we can only see a very narrow band of wavelengths with our crummy human eyes. Because we do not natively see other wavelengths, our un-augmented experience does not include access to all the light around us - only this small, limited band." You can take that idea and abstract it out sufficiently to make a pretty good representation of Kant: that our limited capacity for experience and pre-ordained natural affinities in conceptualization put us fundamentally out of touch with some concept of an object world "out there."

So Kant goes on and has all sorts of fantastic philosophical consequences - but that wasn't what rattled my tree. I started thinking about the common "brains are like computers" metaphor that shows up in pretty much all casual, contemporary discussions about neuroscience by non-experts. I was considering the whole "hardware," "firmware," "software" trichotomy.

If you chose to make a weak metaphor for Heidegger's mission, you could say that he's very interested in a type of "re-programming." He roughly thinks that an alternative conception of, and relation to, being can be accessed through a "de-programming" of the "programming" that language imposes on the mind. He almost goes full Tao with it: "The Tao that can be spoken is not the eternal Tao. The name that can be named is not the eternal name." Heidegger is so convinced that you cannot effectively explain what must be done to achieve this re-programming that he sets out to paint an artistic example that fulfills his goal. Like a type of hypnotic (or de-hypnotic) poetry, his approach is meant to give us access to a fundamental layer at which our minds are baseline operational - a layer that has since been woven over by the mental impositions of language.

If you're thinking ahead, you may be able to start to see the loose metaphorical structure I identified.

In the computer you've got the hardware (the brain), and then a series of layers of programming languages. These range from basic, Assembly-type functionality (underpinning the subconscious constructs that make our reality stable enough to have coherent thinking processes within), all the way up to high-level languages like JavaScript (which represent the patterns we weave in conscious thought, or consciously present processes).
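To make the ladder a little more concrete, here's a minimal sketch in TypeScript (a stand-in for the JavaScript mentioned above). Nothing here models a brain, and the function names are purely illustrative; the point is only that the same result can be expressed at a low, step-by-step rung or at a high, declarative rung of the ladder.

  // Purely illustrative: the same "thought" expressed at two rungs of the ladder.
  // Nothing here models a real brain; the names are hypothetical.

  // Lower rung: explicit, step-by-step primitives, closer to Assembly in spirit.
  function sumLowLevel(values: number[]): number {
    let accumulator = 0;                       // a single "register" updated by hand
    for (let i = 0; i < values.length; i++) {
      accumulator = accumulator + values[i];   // one primitive operation per step
    }
    return accumulator;
  }

  // Higher rung: the same result, stated declaratively, closer to everyday JavaScript.
  function sumHighLevel(values: number[]): number {
    return values.reduce((total, v) => total + v, 0);
  }

  // Both rungs produce the same outcome; only the level of description changes.
  console.log(sumLowLevel([1, 2, 3]), sumHighLevel([1, 2, 3])); // prints: 6 6

The higher rung is only expressible because the lower one already works - which is the sense of "foundation" I lean on below.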

Two

Now I really hope you're still reading, because thinking about myself reading this, I would be super-pissed at this point. This rough, hardly-a-metaphor is probably making both neuroscientists and computer scientists equally angry. I've managed to bastardize both Kant and Heidegger, out-of-context and irrelevantly introduce Taoism, and say pretty much nothing of any new value.

This is where I try to make up for that:

So what really interested me was not this basic metaphor of programming languages as levels of thought complexity. The metaphor doesn't have any special explanatory power, nor does it particularly allow for new, higher levels of thought. But I realized, while toying around with the metaphor and deciding where to "land" different definitions (Assembly = subconscious, etc.), that there's a missing area in my philosophical thought connected to the idea of "levels" of consciousness.

Heidegger's conception of language as erasing our connection to some type of a priori thought structure presupposes that thinking without language is not only possible, but something to address critically and strive for in our practice of thought if we hope to encounter and/or connect with "being." For the purpose of this discussion, I'm going to agree with that point and take it further.

Now, call to mind a type of Kantian mental presupposition: the idea that not only are we limited on a sensory plane, but that we are pre-programmed to structure fundamental relationships between senses and thoughts. For example, the notion of "causality" is in some way part of our brain "firmware." We may be pre-wired to think in terms of causality.

These two points together can help us start to see the idea of staged thought. There may be "levels" of thinking - each one creating a foundation for the next level to be possible.

This brings us to another interesting thought: think of a monk who meditates and achieves a level of focus so powerful that they can light themselves on fire and burn to death without reacting.

If we use the simple idea of "up" and "down" with regards to levels of brain complexity, has this monk achieved this level of discipline at the conscious level, or at the unconscious level? Have they added a level of complexity that is written "on top" of the existing programming to keep things in check? Or have they gone deep and "re-written" things at a "lower" level?
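How might the two answers look in the programming idiom? Here's a hedged sketch, again in TypeScript and with entirely hypothetical names: "working up" wraps the existing reflex in a new layer that keeps it in check, while "working down" rewrites the reflex itself.

  // A hypothetical sketch of the two directions; not a claim about how minds work.

  // An existing "lower-level" routine: a fixed, reflexive response.
  let baseResponse = (stimulus: string): string => `react to ${stimulus}`;

  // Working "up": add a layer on top that intercepts the reflex and overrides it,
  // leaving the lower layer untouched and still running underneath.
  function disciplinedResponse(stimulus: string): string {
    const reflex = baseResponse(stimulus);  // the old programming still fires
    return `suppress (${reflex})`;          // a higher layer keeps it in check
  }

  // Working "down": rewrite the lower-level routine itself, so there is
  // nothing left to keep in check.
  function rewriteBase(): void {
    baseResponse = (stimulus: string) => `remain still despite ${stimulus}`;
  }

  console.log(disciplinedResponse("fire")); // suppression layered on top of the reflex
  rewriteBase();
  console.log(baseResponse("fire"));        // the reflex itself no longer exists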

I do not think this is a trivial question, and despite the quick accessibility and simplification the metaphor requires, I think there is an important approach at work here. I hope I have begun to illuminate why I'm interested in the idea.

If we want to change our behavior - or our access to "truth," or our relationship to "being," or any other philosophical goal whatsoever - do we need to generate methods of thinking/doing/being that work "up," or that work "down"? Do we need to add levels of complexity, or do we need to strip back and re-write at levels we already experience?

Applying this language to our contributing philosophers, you could say that Heidegger believed we need to "think down." We need to peel back the human-language layer, go back down to the next level of programming, and head in a different direction. Write a new program in JavaScript. Or maybe use C++ (for some reason).

Now, if we choose to accept this oversimplified notion of thinking "up" and "down" that I've presented, another very interesting question arises: how far down are there layers of brain programming that we can still modify?
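To put that question in the same hedged code terms: some layers may be open to rewriting while others behave like frozen firmware. The layer names below are invented for illustration and map only loosely onto the Kantian limits discussed above.

  // Hypothetical layers: which ones can be reassigned, and which are frozen?

  // Layers we can plausibly rewrite: habits of conscious and subconscious thought.
  const mutableLayers: { [name: string]: string } = {
    consciousHabits: "original patterns",
    subconsciousAssociations: "original associations",
  };
  mutableLayers.consciousHabits = "retrained patterns";               // straightforward
  mutableLayers.subconsciousAssociations = "retrained associations";  // harder, but conceivable

  // A layer treated as immutable "firmware": Kantian pre-structuring like causality.
  const firmware = Object.freeze({ causality: "events are experienced as caused" });

  // firmware.causality = "something else"; // TypeScript rejects this: the layer is read-only

  console.log(mutableLayers, firmware);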

The value of considering these points is, in my opinion, clear:
  1. Does this notion of thinking up and down help us set criteria for creating actionable philosophical methods? Does it help us pare away methods that take us away from "thinking new thoughts"?
  2. Can pushing forward into this area of thought help us better identify the foundations of consciousness - what is truly immutable, and what our actual Kantian limits for conceptualization are?
  3. Following from 2, can this method help us better understand "things in themselves" by way of the negative - by better understanding the true limitations of thought?
  4. Can this method help us broaden our scope of what we perceive to be logical thought and what we believe to constitute relationships/sense perception? Do we have access to alternative "top-line" categories of relationships (akin to causality)?