Saturday, June 22, 2019

How To: Epistemic Competence

Benefits:
Sensation of extreme power. I physically feel like I'm humming at times. Overwhelm the world rather than the reverse; trivialize most life issues. Know what's going on, what will happen next, and how to affect it should you so desire.

The training can be done literally anywhere and anywhen. It is easier with good equipment, but 'awake' counts as sufficient equipment. Fits into whatever random fragment of time you end up with.

Drawbacks:
Very high time investment. Training involves repeatedly stabbing yourself right in the ego. Uncomfortably intense contempt for the lesser beings who cannot think. Need to hide your power level unless you enjoy getting witch-hunted. Disconnection from untrained thinkers, who either force you to disregard the benefits of your training, or force you to cynically manipulate them.

---

You are 'taught' how to think critically in exactly the same way you are 'taught' to have a 200+ lb bench press. You can teach the training process, but the actual training is done to the self.

There are five sets. To unlock the main three, first master the zeroth set. To achieve competence, repeat those three sets for about 10,000 hours. To achieve greatness, there is a fourth and final set. While simultaneously continuing the first three, completing the fourth set took me about 14 years.

---

Zeroth set.

Step 1: Precisely define your terms.
Step 2: Re-generate what you attempted to describe based on your definitions.
Step 3: Check for discrepancies between the logically re-generated description and your observations. Change your definitions so that they describe what you intended to describe.

Repeat these steps until you find no discrepancies.

First set.

Step 1: Observe a situation.
Step 2: Convert it to a syllogism, and predict the situation's outcome based on the syllogism.
Step 3: If your prediction was incorrect, come up with a syllogism that would have predicted the correct outcome, then adjust the method that generated the erroneous syllogism, so that you spontaneously generate the right one next time.

Repeat these steps until you stop generating false predictions.

Second set.

Step 1: Observe a situation.
Step 2: Catalogue all of your feelings about the situation, and use them as inputs for your syllogism and prediction. The idea is to discriminate as many shades and contours of subjective impression as possible, and use them all.
Step 3: If your prediction was incorrect, assume you mistakenly identified the intuitive markers as something other than what they in fact mark. Change your associations so that your subjective impressions would have predicted what actually occurred.

Repeat these steps until you stop generating false predictions.

Third set.

Step 1: Observe a syllogism that's too big for you to hold in your mind.
Step 2: Try to stretch and hold it anyway.

Repeat until you don't see any syllogisms that are too big.

Zeroth set comments.

Thinking is syllogism. All thinking, even if it appears otherwise. Physical causation is isomorphic to logical implication. Hence, to achieve epistemic competence, train syllogism.

The subtle arts of definition, briefly: put all of your concepts into explicit words so your mind can't accidentally slip without noticing. 

To think at all one must have thoughts. The thoughts have some nature, which defines and is defined by the thoughts. Untrained thoughts equivocate and conflate. First gain awareness of the definition of your own thoughts. Know them exactly.

Definition can be prescriptive or descriptive. However, in either case, you must first apply an arbitrary label.

If you wish to think upon an external phenomenon, the definition is not up to you. It is a description of what's already there. Take an example of the event and label it with your favourite term, then identify all other events that are in the same category and list their properties. You now have a label-category association, which is what a definition is. Check a few extra examples to ensure the boundaries of the category are where they're supposed to be. They won't be, so fix the list of properties and check again. 

Unfortunately, once you have a coherent term which doesn't include random detritus or fail to include obvious specimens, it will no longer match the folk definition. Perhaps swap out your favourite term for something more apt: it has become jargon, and using the original term as jargon is predictably confusing. In any case, the label isn't important. What's important is understanding the category. Call it skoobarg if you want, as long as you understand what it means.

If you wish to think upon an internal idea, the definition is wholly arbitrary; however, the logical consequences of the definition are not. You get to attach any non-contradictory set of properties together, and label it with whatever you want. However, you then must list at least a few examples of real-world events that fit in the category.

Look particularly for events that aren't supposed to fit, such as Diogenes' chicken. The logic is implacable. You must either bite the bullet or change the definition. What is included is not up to you; to change the category you must change the logic, which means changing the definition. Getting a coherent definition to cover exactly the events you want it to cover tends to be impossible; settle for good enough. A coherent definition is far more important than your personal aesthetic hangups. If you have stuff left over that you wanted to think about, get a second definition which covers it and use it as a pair with the first.

A category can be quickly tested for coherence by using the "all X are Y" form a few times. E.g. all fire is hot. We can choose the boundaries of the category "fire" in many ways, but if we start including cold fire or gooey fire, we have an incoherent category.
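For the programmers: the whole zeroth set can be caricatured in a dozen lines. A minimal sketch in Python; the properties and examples are invented for illustration, and real categories are obviously not boolean checklists.

```python
# Zeroth-set caricature: a definition is a label-category association,
# and a category is a list of required properties.

def in_category(properties, thing):
    """A thing is in the category iff it has every required property."""
    return all(thing.get(p, False) for p in properties)

fire = ["hot", "emits_light", "oxidizes_fuel"]   # illustrative properties only

examples = [
    {"name": "campfire", "hot": True,  "emits_light": True,  "oxidizes_fuel": True},
    {"name": "ice cube", "hot": False, "emits_light": False, "oxidizes_fuel": False},
    {"name": "LED",      "hot": False, "emits_light": True,  "oxidizes_fuel": False},
]

# Steps 2-3: re-generate the category from the definition, check boundaries.
for thing in examples:
    print(thing["name"], "is fire" if in_category(fire, thing) else "is not fire")

# Quick coherence test, "all X are Y": no member may lack an essential property.
assert all(t["hot"] for t in examples if in_category(fire, t)), \
    "cold fire found: change the definition or bite the bullet"
```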

First set comments.

I find videogames useful, because they are novel, have fast turnaround time between prediction and the actual event, are deliberately designed to be easily intelligible, and it's hard to pretend that you made a correct prediction when you didn't. Imagine this: "I will win this fight." You lose. "I totally predicted I would lose!" It does happen, but not often enough to be a problem.

As a simple example, imagine I'm walking home.
"I am walking home, through an artificial setting."
"Observing modern artificial settings for long periods of time is aggravating."
"I will be walking for a long time."
Prediction: "I will be irritable when I get home."

If I am not in fact irritable on arrival, I have to try to come up with what I did wrong, fix it, then try again as soon as possible.

These verbal proofs have the very useful property of making their domain of applicability explicit. If I include an "I'm walking home" premise, then the conclusion, "I will be irritable at home," although stated in a general form, only applies when I walked home. (Maths has an isomorphic concept: the 'domain' of a function.) Keeping the proof structure in mind along with the conclusion cuts way down on equivocation and similar errors. (Ref: third set.)
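To make "keep the proof structure with the conclusion" concrete, here's a minimal sketch. The Conclusion class is invented for illustration; the point is only that a conclusion bundled with its premises can refuse to apply outside its domain.

```python
# A conclusion that carries its premises refuses to apply outside their
# domain, exactly like a function refusing input outside its domain.

class Conclusion:
    def __init__(self, premises, statement):
        self.premises = premises        # predicates over a situation
        self.statement = statement

    def applies_to(self, situation):
        return all(p(situation) for p in self.premises)

irritable = Conclusion(
    premises=[lambda s: s["walking_home"],
              lambda s: s["setting"] == "artificial"],
    statement="I will be irritable when I get home.",
)

print(irritable.applies_to({"walking_home": True, "setting": "artificial"}))  # True
print(irritable.applies_to({"walking_home": False, "setting": "natural"}))    # False; out of domain
```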

If you're tired and need rest, the actual null hypothesis is "I don't know." It is always valid to predict you don't know what will happen, and rest.

Second set comments.

As you might expect, the intuitionistic/instinct/hunch style syllogisms are vastly more powerful than the purely verbal kind, and can be applied more widely. However, they are harder to train and rely more heavily on your default abilities, rather than tools. Verbal syllogisms, by contrast, can be expanded indefinitely by writing them down. I still haven't fully determined which markers mean, "Dunno, don't ask me," because to determine a marker's meaning you need to be able to guess what it means, and your intuition is massively smarter than you are.

As an example, I read reports of medical studies in New Scientist and predicted which ones would replicate, based purely on how I felt about the experience of reading the articles. I put in enough effort, and I can now safely predict which studies (of any kind) will replicate, based on how I feel about a 300-word summary.

Intuitive syllogism is particularly useful for figuring out what your body is trying to tell you, and thus, in particular, what you should eat. It is very surprising to me how many markers the brain generates which have no labels and have to be trained. Further, some of the labelled markers are labelled wrong.
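For flavour, here's the second set caricatured in Python: markers are features, and training is error-driven re-labelling. Marker and outcome names are invented; the real thing is obviously not a lookup table.

```python
# Second-set caricature: predict from marker associations, and when the
# prediction fails, assume the marker marked something else. Relabel it.

marker_meaning = {"twirly": "will_replicate", "gooey": "wont_replicate"}

def predict(markers):
    votes = [marker_meaning[m] for m in markers if m in marker_meaning]
    return max(set(votes), key=votes.count) if votes else "dont_know"

def train(markers, actual):
    # Step 3: the markers marked something other than what you assumed.
    for m in markers:
        if marker_meaning.get(m) != actual:
            marker_meaning[m] = actual   # change the association

obs_markers, actual = ["twirly"], "wont_replicate"
print(predict(obs_markers))    # "will_replicate": the current association
train(obs_markers, actual)
print(predict(obs_markers))    # "wont_replicate": association corrected
```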

Like videogames, physics and computer programming are extremely useful. You will be 100% certain that you got the right answer, and they will agonizingly humble you, over and over again, until you learn proper humility. "It's a compiler error!" It's not a compiler error. "The answer in the back of the book is wrong!" To their sorrow, future you will understand how it's not.

If you develop the intuitionistic syllogism method, then you learn the intuitive markers for hubris, and will thus be able to calm down and humbly question rather than try to fight and show dominance.

Don't shy away from predicting things that should be scientifically impossible to predict. Partly, scientists haven't trained their critical thinking, and say dumb things or things designed to make you listen to them rather than yourself. Partly, I have often successfully predicted stuff I 100% definitely have no way of knowing, and there's no reason to think you can't do the same. For example, I sometimes predict I shouldn't go to a place, but I go there anyway so I can falsify the prediction. When it isn't falsified, I failed the method, rather than the method failing me.

Third set comments.

Bigger syllogisms hold more information, so they can deal with bigger, more complicated situations, or deal with simple situations at a higher precision. If a proof is complex (the premises themselves have proof structures that ought to be held in mind) it can get huge.

I had no issues stretching to accommodate syllogisms that were essentially isomorphic to reality. On the contrary, the stretching sensation is pleasant, especially when a model can be fully grasped. I can't guarantee you'll have the same experience, though.

I believe this is based on chunking. Chess masters hold board segments or whole board states in their mind at once, rather than trying to memorize each piece individually. Logical masters do the same thing, but unlike chess boards, logical structures are similar across all fields of expertise and inquiry.

You will find that the chunks are addressed by subjective or intuitive markers, which are being trained in the second set. Most of them don't have names so I find it very difficult to describe an example, but let's go anyway.

The chess master doesn't really remember even a board chunk; he remembers that the board chunk feels a certain way, and if he needs a property of the board chunk he remembers what that feeling implies about the chunk. Maybe the one that feels like a twirly skoobarg has a pawn in the upper left. (I told you most of them don't have names.) When there's no queen it feels sububly, and if only you have a queen it feels lerdiborg. If you've ever played chess I'm sure you agree. There's also a feeling of someone threatening to promote a pawn: distinct ones for when you both have queens, when you both don't, when they do, and when you do, and it changes both in quality and in intensity as they get closer to promotion. Remembering all these markers is hard, which is most of why the set is hard to train. I suggest time-travelling back to childhood and starting then. Certainly I benefited a lot from starting very early.
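Since the chess example is already half pseudocode, here it is as actual code. A loose sketch; the marker names and the properties they imply are the invented ones from above.

```python
# Chunk recall, as felt: the position is a couple of markers, and each
# marker implies properties of its chunk. All properties are invented.

chunks = {
    "twirly_skoobarg": {"pawn_upper_left": True, "king_safe": True},
    "lerdiborg":       {"queen_advantage": True, "king_safe": False},
}

position = ["twirly_skoobarg", "lerdiborg"]   # the whole board, as remembered

def recall(prop):
    # Need a property? Remember what the feeling implies about the chunk.
    return any(chunks[marker].get(prop, False) for marker in position)

print(recall("pawn_upper_left"))   # True, via the twirly skoobarg
```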

Because there are so many feelings, it would be impossible for them all to have individual names. It would have to be some sort of function, which generates a name on the fly. You would start with the bouba/kiki principle and generalize to a full system.

Somewhere along the path to logical grandmastery, one gains what I call philosopher superabilities, such as the ability to detect contradictions in the time between looking at a text and actually reading it. "I don't even know what it says yet and I already know it's wrong."

Fourth set.

Step 1.1: Explicate something you believe.
Step 1.2: Justify believing in it.

Step 2.1: Compare two justifications.
Step 2.2: Reconcile them.

Repeat until you have justified and reconciled every belief you have.

Training your general predictions obviously combs out errors in your beliefs. However, one can go above and beyond, and spend time not predicting per se, but looking for contradictions in your beliefs. As it turns out, it is feasible to have a set of beliefs which is fully consistent end to end, although it is true that the time investment necessary to verbalize every one of your beliefs is immense.
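The shape of the fourth set, as code. A caricature: contradicts() stands in for the actual philosophical work, and the beliefs are toy strings.

```python
# Fourth-set caricature: every belief gets an explicit justification, then
# every pair gets checked for contradiction.
from itertools import combinations

beliefs = {
    "fire is hot": "every fire I have observed was hot",
    "this fire is cold": "I touched it and felt nothing",
}

def contradicts(a, b):
    return ("is hot" in a and "is cold" in b) or ("is cold" in a and "is hot" in b)

for (a, ja), (b, jb) in combinations(beliefs.items(), 2):
    if contradicts(a, b):
        # Reconcile: compare the justifications; usually one observation
        # was simply mistaken. Fix it and repeat until no pair fires.
        print(f"dissonance: {a!r} ({ja}) vs {b!r} ({jb})")
```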

One benefit is that cognitive dissonance is tense, and removing the dissonance removes that tension. Completing a rep of the fourth set relaxes you permanently. 

Of necessity, your observations are a kind of belief. You will find that as contradictions are removed, your analyses or interpretations of your observations lose more and more degrees of freedom. Similarly, you'll find some of your observations are inconsistent, and reconciling them typically means finding one was mistaken. As a result, your beliefs must become more true. If your observations are consistent with each other and your analyses are consistent with each other and your analyses are consistent with your observations, the odds of them being incorrect are minuscule, because they would all have to be wrong at the same time. (If three independent checks each mislead you 10% of the time, they jointly mislead you 0.1% of the time.)

From the inside, the two halves of the fourth set (justifying and reconciling) feel exactly the same. This is because they are the same, but the untrained can't see the similarity. That is why they're unified into a single set.

For this, I used magazine articles. Scientific American wasn't always full of drooling idiots. They would state their belief, I would state my equivalent belief, and then justify my belief. In practice this is re-creating philosophy from the ground up. Which is why I unashamedly call myself a grandmaster philosopher, despite having spent maybe 20 minutes reading Aristotle in total, mutatis mutandis for every other famous philosopher except Nick Land and Mencius Moldbug.

If the magazine article's statement contradicted mine, I would try to find their contradiction, either internally or with difficult-to-mistake observations. Journalists are dumb, so there's always one to find. Reading cyberspace articles is more challenging; sometimes I was wrong instead.

With good form, the fourth set trains insight. If you falsify both the article and your own original belief, it's natural to go looking for what else you might try believing. With practice, it becomes habitual to, upon reading a statement, think of everything equivalent anyone might believe. With further practice, the noise is pruned, the breadth increases, and eventually random political BS can inspire you to think of wholly novel profound truths.

Stretching out your mind is necessary to complete the fourth set. Often, reconciling two beliefs requires unifying them into a larger, overarching system. You can no longer load them up individually, but have to load them simultaneously. You will see that the laity use ad hoc approximations that are much smaller. Sometimes, even adding together all the ad hoc approximations, across numerous particular domains, is smaller than the overarching system that can generate all the approximations.

If you get tired and need to rest, it is acceptable to believe something without justification, as long as you admit you have no proof. "Yeah, that's my wild ass guess. But I'm sticking to it."

General comments. 

Remember orthogonality. It's useful to spot when factors can vary without affecting each other, like the colour and shape of a toy ball. They are orthogonal considerations.

Remember superposition. Many allegedly complicated ideas are in fact a multiple superposition of a very simple idea with itself.

You may find it necessary to write down your predictions, so you don't 'accidentally' misremember predicting the correct outcome when you actually didn't. My memory is like a divine gift, so I didn't have to.
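If you do need to write them down, minimal tooling suffices. A sketch in Python; the log structure is just one obvious way to do it.

```python
# Minimal prediction log: record before the event, score after, watch the
# failure rate decline. No room left for "I totally predicted I would lose!"

log = []

def predict(claim):
    log.append({"claim": claim, "correct": None})
    return len(log) - 1                  # ticket, for scoring later

def score(ticket, correct):
    log[ticket]["correct"] = correct

def failure_rate():
    scored = [e for e in log if e["correct"] is not None]
    return sum(not e["correct"] for e in scored) / len(scored) if scored else None

t = predict("I will be irritable when I get home.")
score(t, False)          # wasn't irritable: go fix the syllogism
print(failure_rate())    # 1.0, for now; it should trend downward
```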

By inspection, if you make type 1 (false positive) or type 2 (false negative) errors, the training will be less efficient. However, regardless of your base level of self-deception, you can tell the training is working because the failure rate will decline. Once it has started to noticeably decline you can start working on your level of self-deception using set 1: "I will self-deceive, and think X when in fact Y." Then see what in fact happens, adjust, and try again until you stop being self-deceptive.

In theory, someone undertaking this training could write down all the principles they come up with to generate correct syllogisms. It would help later travellers, but not, I think, very much. Every happy family is alike, but every delusional thinker is crooked in their own way.

Let's do this using the method.

"Reading a list of ways to think correctly would help."
"The list of cognitive biases and fallacies on wikipedia is close enough; a list of how not to think."
Prediction: "Reading wikipedia's lists would make you think better."

Given that the conclusion is false (you can try it if you like, as per the method; it has a nice fast turnaround), one of the premises must be false; modus tollens. The second is true enough. If the lists are even a little bit accurate, then you ought to learn to think at least a little bit better, and it's absurd to think they're totally useless. Hence, the first must be false.
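The modus tollens step is mechanical enough to write down. A sketch, with the premises reduced to flags.

```python
# Modus tollens: (P1 and P2) entails C; we observed not C; therefore at
# least one premise is false. Enumerate the surviving assignments.
from itertools import product

# P1: "Reading a list of ways to think correctly would help."
# P2: "Wikipedia's lists are close enough to such a list."
for P1, P2 in product([True, False], repeat=2):
    C = P1 and P2                 # the conclusion, as entailed by the premises
    if not C:                     # the observed outcome: reading doesn't help
        print(f"consistent assignment: P1={P1}, P2={P2}")

# Every surviving assignment has a false premise. P2 is true enough,
# so P1 must be the false one.
```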

To properly test this, having adopted the negation of the first premise, I would make a new, independent prediction based on that premise. Ideally with all other premises chosen for being very solid. Try this: "Reading a list of ways to think would not help" & "Many lists of thinking ways already exist" => "I cannot tell who has and hasn't read such a list without explicitly asking." I know of no contradictory evidence, and yes I've intentionally looked.

If this further prediction comes back negative, then I must have made a mistake somewhere in my methodology. I have either failed to include a relevant premise, or accidentally inferred that A => C when in fact A => B, or something of that nature.

Avoiding the latter error I call the ability to think in straight lines, because that's what it feels like. Sometimes your train of thought jumps a track. For example, I might think, "Reading a list of ways to think correctly would help, and therefore I will write a list of ways to think." It jumped a track in the middle there, like I got distracted and grafted the end of a different premise onto the beginning of the first. It's almost a plausible thing to do, so I may not immediately notice my error. In my case, the result will, for some period of time, be cached rather than recomputed, so I often re-do proofs at increasing intervals. In other words I will autocomplete "Read list" with "therefore, write list," just like you autocomplete "Sun rises" with "East." Note that the latter isn't logically necessary; the autocomplete replaces actual logic processing with simply remembering an association.

Maybe my statement, "It is absurd to think they're totally useless," is in fact false, and I can reverse it and generate new predictions to test it. (I've in fact already done so; more on this in a bit.) Maybe I have equivocated, or misapprehended the situation. Most of this can be discovered by back-generating the premises that would have led to the correct prediction, which is why that's step 3 in set 1.

I have not specifically tested the assertion "It is absurd to think these articles are devoid of true information." However, I have tested intuitively identical statements, so I'm already familiar with the relevant markers, and they indeed predict that those articles have sufficient accuracy to be useful, if reading such lists were a useful expenditure of time in the first place.

If pursuing physics or programming hasn't stripped you of your superfluous ego, then running the first or second set on yourself and your own behaviour is a good way to jab yourself in the gonads until you fall over in defeat at your own feet. Being able to predict your own behaviour is worth the pain, and I suspect it's a necessary prerequisite for predicting others.

It's also good to note how much of your thinking isn't really you. Almost all of it. It's exactly like programming: you toss your brain some code, and it compiles and runs the code for you. It's hard to inquire about what machine code the compiler actually used.

The request "I should read a list of correct thinking, therefore...?" runs different code depending on whether it has been recently run, but sending that request feels almost identical, and I still struggle with trying to find an equivalent request that will actually re-compute the logic rather than fetching the cached result. Hence, I don't try anymore, but wait for the cache to expire. Indeed the code is so quiet it actually feels like 'you' are doing the thinking. You're not. This is an example of an intuitive marker with a false label.
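The cache behaviour maps neatly onto memoization with a time-to-live. A sketch in Python; functools.lru_cache is the real stdlib tool, the time-bucket trick is a common TTL idiom, and the TTL constant is invented.

```python
# The answer cache, as code: same request, cached result, until expiry.
import time
from functools import lru_cache

TTL = 60 * 60   # made-up expiry: recompute after an hour

@lru_cache(maxsize=None)
def _run_logic(request, time_bucket):
    print("actually recomputing...")    # visible only on a cache miss
    return f"conclusion for {request!r}"

def think(request):
    # Identical requests within one TTL window hit the cache: it feels
    # the same from the inside, but no logic ran.
    return _run_logic(request, int(time.time() // TTL))

think("I should read a list of correct thinking, therefore...?")   # recomputes
think("I should read a list of correct thinking, therefore...?")   # cached, silent
```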

While a list of correct thinking is not particularly useful, I do believe "The Brain: an Owner's Manual" would be immensely helpful, filled with tips like, "You don't actually do the thinking," and "Caches last for [some function of IQ] seconds." However, for myriad reasons it's not a project I can complete solo. One is that I already don't need it. Another is that, if there aren't at least two people interested in helping, I predict (both set 1 and 2) that nobody would read it, so why would I waste my time?