Friday, August 28, 2009

Stove; Kuhn; Cultural Narcissism

"It is not clear how accurately this represents Kuhn himself. Partly, this is because he just said, `Let’s do history, as it is so much more exciting than boring old logic.’ He does, it is true, state conclusions that seem to require such an argument, such as `There is, I think, no theory-independent way to reconstruct phrases like "really there"; the notion of a match between the ontology of a theory and its "real" counterpart in nature now strikes me as illusive in principle."

The response is contained in this statement: I cannot choose the consequences of my actions.

If my actions have been formed roughly in accordance with what's really there, the consequences will be roughly as expected. Furthermore, were I to form my actions in closer accordance, the consequences would come closer to what I expect. What I wish to understand is how an intelligent person can seriously entertain the following idea: that, were I to form my actions exactly in accordance with what's really there, the consequences matching my expectations exactly would somehow be 'illusive in principle.'

There is a self; I can choose my actions. There is other; I cannot choose my actions' consequences. Instead, I must learn what those consequences are and act accordingly; this thing which is learned is knowledge, regardless of any dilapidated definitions that have claimed the name 'knowledge.'

That the interaction between self and other is not simple the way F=ma is simple does not mean that the other is somehow illusory. Rather, it is far past time our culture grew up, realized that 'Relativity' is a terrible name for E=mc², and learned from its rigid reality instead of trying to legitimize something that boils down to philosophical narcissism.

Not that anything like that will happen: I'm just sayin'.

Sunday, August 2, 2009

Comments on an Evo Psych Primer

I can't figure out a good introduction, and I'm sure you can do without one.
"The brain is a physical system whose operation is governed solely by the laws of chemistry and physics. What does this mean? It means that all of your thoughts and hopes and dreams and feelings are produced by chemical reactions going on in your head (a sobering thought)."
This is untrue, but of course there's no way for them to know this. You may want to contemplate this and the idea of scientific arrogance next to each other.
"To say that the function of your brain is to generate behavior that is "appropriate" to your environmental circumstances is not saying much, unless you have some definition of what "appropriate" means. What counts as appropriate behavior? "Appropriate" has different meanings for different organisms."
Time to hawk my definition of life: life has goals. (Also time to use a word that isn't 'appropriate.') For a brain to produce intelligent reactions to the environment, it has to figure out first what an intelligent reaction is, explicitly or implicitly.*

*(Logically redundant. The human brain - specifically yours - is not strictly logical, so grammatically necessary.)

The main action of the brain is simply to aid the life-form in continuing to be able to pursue and defend goals. However, this is the action of every organ; the difference is that an intelligent action is typically one that supports one of the life-form's sub-goals. So, instead of "'intelligent' has different meanings for different organisms," say "different organisms pursue different goals."

A good philosophical definition makes everything obvious. Also note that the concept 'behaviour' easily generalizes to non-intelligent reactions like floral immune reactions...though the whole point of the word is to distinguish brain-reactions from the non-brain kind within biology.

"Realizing that the function of the brain is information-processing has allowed cognitive scientists to resolve (at least one version of) the mind/body problem. For cognitive scientists, brain and mind are terms that refer to the same system, which can be described in two complementary ways -- either in terms of its physical properties (the brain), or in terms of its information-processing operation (the mind)."

Scientific arrogance is negligible compared to philosophical arrogance.
Though the fault here is completely misunderstanding the mind/body problem. As far as philosophy is concerned, the information-processing is a physical property, which is probably why cogsci has found that their 'mind' and 'brain' are identical. On the other hand, note that Cosmides and Tooby acknowledge that this is only 'one version' of the problem.
"Principle 3. [...] In other words, our intuitions can deceive us."
I guess my intuition is just really good. When I examine my consciousness to ask how I see, it tells me that it doesn't know. Trying again to make sure, I find it's practically impossible even to direct my awareness at the problem. I can think about what I'm seeing, or I can think about the thoughts these sights give me, but my mind's eye is blind to anything upstream or in between.

Your incompetence at epistemology can deceive you. Your consciousness rarely does.
This is basically religious dogma on the part of scientists - that your intuition is just about useless.
Generally this is because scientists refuse to relinquish their prejudices about what the intuition can do, and therefore cannot acknowledge its limitations and use it for what it is actually good for.

"A basic engineering principle is that the same machine is rarely capable of solving two different problems equally well. We have both screw drivers and saws because each solves a particular problem better than the other. Just imagine trying to cut planks of wood with a screw driver or to turn screws with a saw."

That's what is so amazing about general-purpose computers, actually. Essentially they're math machines, doing simple operations on binary numbers. And yet, they can solve basically any information problem. (Purpose-built circuits are more efficient in their domain, though.)
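To make that concrete, here is a toy sketch of my own (nothing from the primer): a tiny register machine whose only primitives are increment, decrement, and conditional jump. Which problem it solves depends entirely on the program it is handed, not on the machine itself.

```python
# A toy sketch (my own illustration): one general-purpose machine built from a
# handful of primitive operations; the program, not the machine, picks the problem.

def run(program, registers):
    """Execute a tiny instruction set: INC, DEC, JZ (jump if zero), JMP, HALT."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "INC":
            registers[args[0]] += 1
            pc += 1
        elif op == "DEC":
            registers[args[0]] -= 1
            pc += 1
        elif op == "JZ":
            pc = args[1] if registers[args[0]] == 0 else pc + 1
        elif op == "JMP":
            pc = args[0]
        elif op == "HALT":
            return registers

# One program: add r0 into r1 by repeated increment/decrement.
add = [
    ("JZ", "r0", 4),   # if r0 == 0, jump to HALT
    ("DEC", "r0"),
    ("INC", "r1"),
    ("JMP", 0),
    ("HALT",),
]

print(run(add, {"r0": 3, "r1": 4}))   # {'r0': 0, 'r1': 7}
```

Swap in a different program and the same machine multiplies, searches, or sorts; that is the whole trick of general-purpose computation.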

"To solve the adaptive problem of finding the right mate, our choices must be guided by qualitatively different standards than when choosing the right food, or the right habitat. Consequently, the brain must be composed of a large collection of circuits, with different circuits specialized for solving different problems."

And here's where the above fact comes in. No, the brain doesn't have to be composed that way, but it is more efficient. I suspect that when your general-purpose circuits can properly solve a problem according to some qualitative standard, they send out the feeling we label 'understanding.' When you understand a goal, you can reason effectively around it. To understand, then, is to apply the proper meaning to various stimuli.

"You can think of each of these specialized circuits as a mini-computer that is dedicated to solving one problem. Such dedicated mini-computers are sometimes called modules. There is, then, a sense in which you can view the brain as a collection of dedicated mini-computers -- a collection of modules."

I have a math module. It sleeps most of the time and takes many seconds to boot up. From a standing start I can barely count. Once it's up, calculus is my bitch.

"(E.g., human color constancy mechanisms are calibrated to natural changes in terrestrial illumination; as a result, grass looks green at both high noon and sunset, even though the spectral properties of the light it reflects have changed dramatically.)"

Mine seem to be dramatically overpowered; it wasn't until nearly adulthood that I noticed that well-lit coloured objects throw colour stains onto nearby objects. It wasn't long after that I found out - by reading about it - that you can't see colour in the dark. I immediately went into a dark room and had trouble confirming it, because my brain automatically assigned everything a colour, though I suspect it would have been easier if I had been in a room that wasn't full of familiar objects. I rarely notice the colour of lighting unless I specifically attend to it. For example, I had red curtains as a kid, and they made my room red during the day when they were closed. I could only tell everything was red when I specifically asked myself about it.

So perhaps my intuition is deceiving me? Perhaps I only think I can see colour and am fooling myself...well, it's actually highly testable: when a light is turned on in a dark or discoloured room, am I surprised by the revealed colour? That has happened three or four times. The system is powerful, but it does like to guess at things it can't actually know.

"The more crib sheets a system has, the more problems it can solve. A brain equipped with a multiplicity of specialized inference engines will be able to generate sophisticated behavior that is sensitively tuned to its environment. In this view, the flexibility and power often attributed to content-independent algorithms is illusory. All else equal, a content-rich system will be able to infer more than a content-poor one."

Philosophically, I've found the best way of looking at this is that the brain learns things both through the senses, in single organisms, and through evolution, across ancestry. Even with the crib sheets, logical reasoning is necessary to produce true inferences, which means that the privileged hypotheses are essentially just innate lessons.

This transforms the last statement into "Systems with more knowledge can infer more." Good philosophy tends to make everything obvious.

What I'm trying to say here is that if a philosophy is being obtuse, it's probably because it's bad philosophy and you can ignore it as a source of truth. At worst you skip inefficient learning. In general, really, the job of being understood falls mostly to the speaker or writer.

"Having no crib sheets, there is little they can deduce about a domain; having no privileged hypotheses, there is little they can induce before their operation is hijacked by combinatorial explosion."

I guess this answers a question I've had: how did I learn philosophy? I certainly wasn't taught, and I didn't read anything specifically calling itself philosophy. I did, however, read a lot, and I paid attention.
The above statement is identical to one I made last post. ("...if-then") Good philosophy ignores evidence until the final stages, because otherwise you get combinatorial explosion. Instead, work from assumptions and simply check later if these assumptions make any sense.
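As a rough back-of-the-envelope illustration of that explosion (my own numbers, not Cosmides and Tooby's), count the candidate rules available to a learner with and without a 'crib sheet' restricting the hypothesis space:

```python
# A back-of-the-envelope sketch (my own illustration): with no privileged hypotheses,
# the space of candidate rules over n yes/no features explodes combinatorially;
# a "crib sheet" restricting attention to a small family keeps induction tractable.

def unconstrained_hypotheses(n_features):
    # every boolean function of n binary features is a candidate rule: 2 ** (2 ** n)
    return 2 ** (2 ** n_features)

def privileged_hypotheses(n_features):
    # e.g. only "feature i predicts the outcome" or "feature i predicts its absence"
    return 2 * n_features

for n in (2, 4, 8):
    print(n, unconstrained_hypotheses(n), privileged_hypotheses(n))
# n=2: 16 vs 4;  n=4: 65536 vs 8;  n=8: 2**256 (about 10**77) vs 16
```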

"Machines limited to executing Bayes's rule, modus ponens, and other "rational" procedures derived from mathematics or logic are computationally weak compared to the system outlined above (Tooby and Cosmides, 1992)."

Fascinating that they think this. (BTW, check: compare the rate at which toddlers learn words to the rate of incoming information. I found the ratio is gargantuan.) Generalize the concept 'machine': now science is a machine. So this idea reflects back: science needs intuition. 'Scientific' findings can indeed be very interesting and very powerful, but for the most part the reach of science is limited. This is the basic reason I keep pointing out this particular flawed dogma in science culture. Until scientists recognize this, there will remain two kinds of scientists: the ones who keep using the data to support things it doesn't actually say (nutritional science), and the ones who refuse to believe that we can find truth unless some data tells us so first. (New atheists. Also anti-historian sentiment: "Many of these accusations revolve around the idea that we cannot prove anything about the past, so evolutionary claims cannot be verified.")
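For reference, here is a minimal sketch of the kind of 'rational procedure' the quote names - a single application of Bayes's rule, with numbers I made up for illustration:

```python
# A minimal sketch of Bayes's rule, P(H|E) = P(E|H) * P(H) / P(E),
# applied to one hypothesis H and its negation. The numbers are invented.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# prior belief of 1%, evidence ten times likelier under H than under not-H
print(bayes_update(0.01, 0.8, 0.08))   # ~0.092: updated, but nowhere near certainty
```

Nothing wrong with the procedure itself; the point of the quote (and mine) is what such a machine can and cannot reach on its own.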

"experts can solve problems faster and more efficiently than novices because they already know a lot about the problem domain."

Very good. It's more that experts can solve problems at all, though.

"In other words, our modern skulls house a stone age mind."

Evolution can happen much faster than this phrase implies. Civilization has certainly impacted the stone-age template. If nothing else, look at lactose tolerance. Small adaptations are no less likely in the brain.

"For this reason, evolutionary psychology is relentlessly past-oriented."

All knowledge is past-oriented. The whole point is to use the past, which we can see, to understand the future, which we can't see without using the past.

"The premises that underlie these debates are flawed, yet they are so deeply entrenched that many people have difficulty seeing that there are other ways to think about these issues."

Cosmides and Tooby are really doing a good job, overall.

After reading this, I'd have to say that I have an EP hypothesis. Specifically, that all humans are endowed with not one but at least two general-purpose learning and reasoning architectures. I call them the 'rational logic system' and the 'emotional logic system' simply because of the way they appear to present results. Basically, one is "I think that" and the other is "I feel like." The first can solve math problems. The second seems primarily interested in causation, using correlation to try to detect it.
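To be concrete about what I mean by 'using correlation to try to detect causation,' here is a toy sketch; the two-system framing is my own speculation, and the observations are invented:

```python
# A toy sketch (my own illustration): tally how often two events co-occur and
# compute a correlation. A high value invites a causal guess; it proves nothing.

from statistics import correlation  # available in Python 3.10+

ate_berries = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = event observed, 0 = not
felt_sick   = [1, 1, 0, 1, 0, 1, 1, 0]

r = correlation(ate_berries, felt_sick)
print(round(r, 2))   # ~0.77: strong co-occurrence, so the guess "berries cause sickness"
```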

I can't think of a good conclusion either, and I think you can do without one. In fact, make up your own, because it will be tailored to you.