Sunday, July 7, 2019

Partial List of Specific Thinking Techniques

Currently 29 entries.



List:

1. The true null hypothesis is ignorance.

2. Try to use explicit definitions rather than implicit aliefs.

3. If not possible, remember to work backwards, forming explicit verbal premises that properly imply the intuitive conclusion.

4. More exact, detailed, sophisticated premises support taller inference trees before leaving their domain of validity or collapsing to accumulated errors.

5. Rectify names.

6. State the obvious.

7. Deliberately seek diversity and variety.

8. Charitable interpretation and steelmanning.

 9.  If two things quack like a duck, it's important to check that they both eat pondweed and lay eggs.

10.  Make sure the syllogism itself isn't the only reason to hold a conclusion.

11.  In short, keep going.

12.  When using [Sherlock Holmes style][Solomonoff induction], premises form layers. If a new premise is added, including adding the conclusion from the previous layers, anything previously ruled out must be ruled out again.

13.  Rub everything on everything when you can.

14.  Genuinely listening to another person is a skill that must be trained. If you're listening correctly, their past statements will predict their future statements.

15.  To examine your assumptions, look for places with isometric logic but orthogonal emotions.

16.  In cases of empirical uncertainty, look at all the worst-case outcomes and pick the one that hurts the least.

17.  In cases of value uncertainty, clear your mind, then exploit availability bias.

18.  Repetition is error management.

19.  Look for robust conclusions.

20.  Exploit fragile conclusions for sensitive measurement.

21.  Look for relevant premise sets that fully span all possibilities.

22. Logic is a form of experiment.

23. Repair proofs rather than refuting them.

24. Deliberate chunking.

25. Deliberate re-anchoring.

26. Method acting.

27. Recursive pareidolia.

28. Redundant memory creation.

29. Yin yang the initial inference.



Full comments:

1. The true null hypothesis is ignorance. Force the evidence to disprove the idea that you don't know what will happen or what a thing is. For example, the null hypothesis is that you don't know if swans are black or white. Nevertheless, you have a prediction: the next swan you see will be white. After some number of correct predictions you'll be forced to admit that you do know, and swans are white.
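Here's a minimal numerical sketch of forcing the evidence to dislodge ignorance. The rival hypotheses, the 99% figure, and the 100:1 threshold are all illustrative assumptions of mine, not anything specified above.

# Null hypothesis: I don't know what colour swans are (every sighting is 50/50).
# Rival: swans are white (each sighting is white with probability 0.99).
# The 0.99 and the 100:1 acceptance threshold are arbitrary illustrative choices.

def odds_vs_ignorance(white_seen: int, black_seen: int) -> float:
    """Likelihood ratio of 'swans are white' against 'I have no idea'."""
    p_ignorance = 0.5 ** (white_seen + black_seen)
    p_white = (0.99 ** white_seen) * (0.01 ** black_seen)
    return p_white / p_ignorance

sightings = ["white"] * 10  # ten white swans in a row
ratio = odds_vs_ignorance(sightings.count("white"), sightings.count("black"))

if ratio > 100:
    print(f"Odds {ratio:.0f}:1 -- forced to admit I do know: swans are white.")
else:
    print(f"Odds {ratio:.1f}:1 -- the null of ignorance still stands.")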



2. Try to use explicit, verbal definitions rather than implicit aliefs. Recall that in math, we start the page with x=3 and y=bacon. When performing logic correctly, we must do the same thing. Failure to do this causes roughly 90% of disagreements online. "Free will is an illusion." "No, free will is bacon." Both can easily be true statements, because the first defined "free will" as being the creator god, while the second believes "free will" is a kind of pork. More on this under 5, rectify names.



3. If it's not possible to do 2), remember to work backwards, forming explicit verbal premises that properly imply the intuitive conclusion. Once you've concluded that free will is bacon, work backwards from bacon to a set of premises that, together, imply bacon. If you haven't made a mistake, in almost all cases the set of premises will verbally describe what you supposed 'free will' is.



4. More exact, detailed, sophisticated premises support taller inference trees before leaving their domain of validity or collapsing to accumulated errors. Hence, it's always profitable to make your premises more exact.
The most valuable thing, as exemplified by theoretical physics, is to plug your syllogistic conclusions back into the syllogism machine as premises, and keep going. However, this is of limited use if your original premises are fuzzy or have a small scope. Many thinkers decry the use of syllogism in real-world settings because untrained thinkers can go up maybe one or two layers before their fully valid conclusions become unsound. However, it's thoroughly possible to have exact enough verbal premises to go up a dozen layers. Don't doesn't imply can't; but then we can't expect too much from untrained thinkers.



5. Although we're trying to use explicit verbal definitions of things, sometimes we screw up the identification, and ultimately all definitions must rest on some intuitive identification. For example, if you try to verbally define 'free will' as 'bacon', it is probable that during your logical calculation you'll accidentally drop back into using your intuitive definition of free will, which will be both inaccurate and an equivocation. For a second example, I define 'property' as 'reasonable expectation of control,' but this may require leaving 'expectation' without a verbal definition. Even if it can be verbally defined, its own definition will rely on an intuitive identification; ultimately all definitions are founded like this on intuition.

As such, it's important to rectify names. Describe things accurately and dispassionately. This is why Moldbug frequently makes up words; if all the existing words are emotionally charged, it is strictly unhelpful to use them if an alternative is at all possible. For example, we can avoid saying 'free will' at all and instead refer to 'the ability to be controlled by internal factors to the exclusion of external ones'. While this is not rhetorically or politically effective, as your eyes will glaze over, it is still more productive, because we've avoided almost all possibility of having our assumptions wrongly apprehended. Moreover, it is irrelevant whether 'free will' is in fact this ability or not. If you think it's something else, then say what you think it is and we can discuss that too.

Similarly it's important, where possible, to pick definitions that rest on more solid intuitions. "Wheelbarrow" may ultimately be defined as 'that thing *pointing*' but it has a wide white area where you have to be deliberately obtuse to dispute the wheelbarrowness of the alleged wheelbarrow. As such it's a nice solid intuitive definition. By contrast, things like 'free will' and 'awareness' and 'self' are fuzzy at the best of times. (...I would submit that in my philosophy they're much clearer, but I have no evidence that anyone agrees.)




6. State the obvious.
It helps you find assumptions, and frequently it's not as obvious as you think. The first debater thinks it's obvious that 'free will' is having the capacity of the creator god, and the second thinks 'free will' is obviously bacon. If nobody states the obvious, they can argue indefinitely without ever noticing it's a complete waste of time.
You're stating the correct amount of the obvious when it feels like you're going too far. As in, literally mentioning that the Sun is bright. If people aren't confused about why you're stating something that so plainly doesn't need to be stated, then go down a level and state that too, until they are.
Also you'll frequently notice you don't know why a supposedly obvious thing is true. The guy who thinks free will = god cannot justify his position; he's just not aware of that fact.



7. Deliberately seek diversity and variety. Echo chambers can be replaced by any one member of the echo chamber without any loss. The rest are, intellectually speaking, simply wasting time and space. Also monotony is boring. For example if I hire six university graduates, in most cases I can immediately fire five of them, because getting six copies of the same submission is a waste of both my time and their time. Put another way, just because you can't think of any objection to your conclusion doesn't mean there isn't one; so go looking for one.



8. Charitable interpretation and steelmanning. Most writers are sloppy at best. As a result it's important not to use 'the dark art of being right' but instead try to work out how what's written could possibly be smart and useful.
Almost all conclusions have limited domains, because we are finite thinkers. Yes, you can focus on all the domains they don't cover, but this is useful to exactly nobody. If they are right in any place or time, then focus on that, unless the conclusion is explicitly marketed as useful in a domain it doesn't apply to.
It's better to not merely see what's written, but try to work out the Platonic ideal of what's written. To imagine an argument for the same conclusion that's stronger than what the original author came up with. To imagine a version of the conclusion that's more useful than the actual conclusion. People are dumb and need your help, and being inspired is a better way to be than being Snopes.



 9.  If two things quack like a duck, it's important to check that they both eat pondweed and lay eggs. If they don't, we can trace the logic back to our mistake and find out how they differ. This is useful when you see two things that are supposed to be different but aren't, when two things are treated differently but seem the same, and for detecting fine distinctions.
Excuse the tendentious example, but taxation is theft. (Robbery.) They both quack like a duck; each is deviant seizure of wealth without consent through the use of physical violence. We can see they indeed both eat pondweed and lay eggs; an artifact that can't be secured against robbery will not be produced or bought. The perpetrators of the deviancy are in fact deviant personalities and will be reliably corrupt. Reductions in either result in greater peace and prosperity, far in excess of the stolen wealth. Et cetera. (I could think of a more tranquil example, but that would take effort.)



10.  Once you have a syllogistic conclusion, always look for a reason to believe that conclusion apart from and independent of that argument. Including, as far as possible, the premises you used. We can argue that free will is bacon, but then we have to go and see that human autopsies keep finding pig butchery products in the skull. If there is no way for such a thing to happen, it's probable that our conclusion is secretly meaningless.
Climate models show that CO2 is Really a Big Deal, but aside from the models themselves there's no reason to think that's the case. Even if we can't find an error in the models, it's probably because it's hiding, not because it doesn't exist.


11.  In short, keep going. If there are more conclusions to draw or relevant premises to add, then do so until you completely run out. It's very tempting to stop when we find a conclusion we like. Don't do that. We must use the conclusion as a premise for a new argument and see where it takes us. We may not (probably don't) like the conclusion as much as we think we do. For example if I argue that free will is bacon, then I conclude that my brain is tasty and maybe it's understandable that you would like to crack it open and eat some. Perhaps not something I'd like to propagate.



12.  When using [Sherlock Holmes style][Solomonoff induction], premises form layers. If a new premise is added, including adding the conclusion from the previous layers, anything previously ruled out must be ruled out again.
With one set of premises, we may rule out every possibility. This demonstrates we committed a fallacy or failed to include all relevant premises. Adding in new premises will form a new layer, and we can no longer carry over any ruling-out from the previous layer. Once we're down to one possibility, we must use that conclusion as a premise for another layer, and try to rule out everything again.
As a specific example, in the definite, finite iterated prisoner's dilemma, the conclusion "we should both defect in every game" has to be fed back into the logic machine as a premise. Because cooperating in every game would be strictly better than defecting in every game, the original premise [it's rational to defect on the last game] must be false; modus tollens.
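As a minimal sketch of the arithmetic doing the work here (the payoff numbers and round count are conventional assumptions of mine, not given in the text):

# Finitely iterated prisoner's dilemma: feed "both defect every game" back in
# as a premise and compare it against mutual cooperation on every round.
# Payoffs assume the conventional values: mutual cooperation 3, mutual defection 1.
REWARD, PUNISHMENT = 3, 1
ROUNDS = 10

all_defect_payoff = PUNISHMENT * ROUNDS     # what the backward-induction conclusion delivers
all_cooperate_payoff = REWARD * ROUNDS      # a reachable alternative for both players

# Both players do strictly better cooperating every game than defecting every
# game, so some premise upstream (defect on the last game) has to give; modus tollens.
assert all_cooperate_payoff > all_defect_payoff
print(f"All-defect pays {all_defect_payoff} each; all-cooperate pays {all_cooperate_payoff} each.")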



13.  Rub everything on everything when you can. Don't rule things out logically when you can rule them out empirically. Similarly, do try including lots of apparently irrelevant premises and see if they in fact change things. If you're good at a video game it's easy to find a let's player failing at this technique, such as consistently not using the correct weapon on a boss, simply because it doesn't occur to them to try it. They've incorrectly ruled it out, and you will do the same thing. You're using this one correctly if you often find yourself saying, "This is probably dumb, but I'm going to try it anyway." It's fine to avoid particularly expensive possibilities most of the time.
When you first think you're done, always say, "What else?" This itself is a conclusion and a prediction. If we can't think of anything else, then we conclude there's nothing else to think of. When this is later proven wrong, we can trace it back and try to come up with a way we could have thought of it at the time, and try again until we stop being wrong.



14.  Genuinely listening to another person is a skill that must be trained. If you're listening correctly, their past statements will predict their future statements. Basic bitch [active listening] techniques work fine. The dumb, direct, "Do you agree with statement X?" is also reliable.
If you're bad at listening, they will likely run out of patience with you before they will agree to your summary. That's fine. There's more than one person, so continue to practice on the next person until you've practiced sufficiently. Remember to practice actively; don't hope your intuition will osmose the correct action, but explicitly strategize and change your strategy as seems suitable based on how you failed.



15.  To examine your assumptions, look for places with isometric logic but orthogonal emotions. If you believe a politician is a poopyhead because a newspaper said he's a poopyhead, go see if you can find a paper that says someone you like is a poopyhead. If you can't agree with the latter, then you've probably found an incorrect assumption, and since you now have a specific difference to focus on, you have a lead on where to look for it.



16.  In cases of empirical uncertainty, look at all the worst-case outcomes and pick the one that hurts the least. I call this worst-case analysis. As an example, you might want to buy a lottery ticket. But uncertainty is high, so rather than assuming we're right, we assume we'll be wrong. If we don't buy a lottery ticket and we would have won, then we don't lose anything. If we do buy a lottery ticket and we would have lost, then we're out $5. Because uncertainty is high we can't really do a proper accounting of risk*reward, but we can see that in the worst case, it's being out $0 vs. being out $5, and the correct decision is to not buy.
For a second example, maybe we have a bland dinner or we go to the convenience store and pick up a snack. If we assume it was a good idea to stay home and we're wrong, then we miss out on a momentary pleasure. If we assume it was a good idea to go to the store and we're wrong, we get hit by a car and die. (Possibly.) I'm going with bland food on this one.
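A minimal sketch of the bookkeeping, with made-up dollar figures standing in for the outcomes:

# Worst-case analysis: for each option, look only at its worst outcome,
# then pick the option whose worst outcome hurts least.
# The dollar figures are illustrative assumptions.
options = {
    "buy ticket":  {"it would have won": 999_995, "it would have lost": -5},
    "skip ticket": {"it would have won": 0,       "it would have lost": 0},
}

def worst_case(outcomes):
    return min(outcomes.values())

for name, outcomes in options.items():
    print(f"{name}: worst case {worst_case(outcomes)}")

best = max(options, key=lambda name: worst_case(options[name]))
print(f"Pick: {best}")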



17.  In cases of value uncertainty, clear your mind, then exploit availability bias. For example, if you can't choose between the one green car and the other model of blue car, then put them next to each other, then turn around. Think about doge. Doge is fuzzy. Now turn back. Whatever catches your attention first is now your discriminator. You like the blue one's wing mirrors better; then you're done, you're getting the blue one. "But my pros and cons..." You already did the pros and cons and it didn't help. Just go with the nice wing mirrors.



18.  Repetition is error management. It's generally a good idea to say things twice in different ways. It's far less likely that your reader will misunderstand both times than only one time. You also might make a typo or similar error. Basically: radio operators repeat themselves for a good reason, and if you're doing anything even remotely difficult, you should too.
Further, it helps refine the message. The two specimens won't quite match. The reader can safely trim out the bits that don't. For example, maybe I say I really like bacon, and I'm very fond of gooey fat. With the first alone you're perhaps thinking of dry, crispy bacon, but with the second you learn that's not the case. (In reality I like both kinds.)
I call this latter bit logical diffraction, because it reminds me of optical interferometers. Any two statements can reach a hilariously higher degree of precision than one statement alone.
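A minimal sketch of the radio-operator version, with a made-up error rate; where the two copies disagree, the reader knows exactly which bits to distrust:

# Repetition as error management: send the message twice; mismatches between
# the copies flag the characters that got garbled in transmission.
import random

def transmit(text, error_rate=0.05):
    """Simulate a noisy channel that garbles each character with some probability."""
    return "".join("#" if random.random() < error_rate else ch for ch in text)

message = "I really like bacon, and I'm very fond of gooey fat."
copy_one = transmit(message)
copy_two = transmit(message)

# Trust the characters where the copies agree; mark the rest for a reread.
recovered = "".join(a if a == b else "?" for a, b in zip(copy_one, copy_two))
print(recovered)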



19.  Look for robust conclusions. If your conclusion doesn't change with wildly different premises, it's robust. Say you conclude you should save money. This is a very robust conclusion. Maybe the reason is that you want to help out your kids when they're buying a house. But then it turns out you don't have kids. Fine: maybe you want to retire early instead. Maybe you want to donate a whole bunch to a good charity (rare) when you find one. You can refute a long list of premises and the conclusion stays the same.
My conclusion regarding the non-physicality of consciousness is highly robust. In the end I have to write down a specific proof, but in fact there's a wide variety of proofs I could use.
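A minimal sketch of checking robustness by swapping out the premises; the premise sets are my own toy stand-ins for the examples above:

# A robust conclusion survives wild changes to its premises. Swap the premise
# set and see whether "save money" still follows.
premise_sets = {
    "kids buying a house":   {"help_kids": True,  "retire_early": False, "found_good_charity": False},
    "no kids, retire early": {"help_kids": False, "retire_early": True,  "found_good_charity": False},
    "rare good charity":     {"help_kids": False, "retire_early": False, "found_good_charity": True},
}

def should_save_money(premises):
    # Any one of these motives is enough to imply saving.
    return any(premises.values())

for name, premises in premise_sets.items():
    print(f"{name}: save money -> {should_save_money(premises)}")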



20.  Exploit fragile conclusions for sensitive measurement. If you need to make a fine distinction, then look specifically for a proof, applicable to your situation, whose conclusion changes wildly if the premises change slightly. If you want to say free will is bacon, but free will might be ham instead, and this makes a big difference to your next action, then you need a nice fragile proof regarding these things so that you can get a nice clear empirical judgment on the two. Maybe women hate bacon, and thus women will hate free will if it's bacon, but like it just fine if it's ham. With this nice wild swing, you can expose women to free will and look at what happens.



21.  Look for relevant premise sets that fully span all possibilities. Any time you can use an if-then structure that employs if X then Y and if not-X then Z, you're in good shape. The problem is that often the easy-to-use X will have important internal distinctions. Perhaps you want to say, "If it contains quarks, it's a hadron, and if it doesn't contain quarks, it's not a hadron," but if the purpose for which you're making the distinction also cares about mass, then you're going to have to make distinctions between the basic quarks and the fancy quarks.
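A minimal sketch of what a fully spanned premise set looks like as an if-then structure; the categories are illustrative stand-ins, not physics:

# Spanning the possibilities: X and not-X, plus the internal distinction the
# purpose actually cares about. Every input must land in exactly one branch.
def classify(has_quarks, heavy):
    if has_quarks and heavy:
        return "fancy quark stuff"
    if has_quarks and not heavy:
        return "basic quark stuff"
    if not has_quarks:
        return "no quarks at all"
    raise ValueError("a possibility the premise set failed to span")

for case in [(True, True), (True, False), (False, True), (False, False)]:
    print(case, "->", classify(*case))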



22. Logic is a form of experiment. If you're well practiced and can reason reliably, then premises can be tested via logic. Every conclusion is also the statement: "the premises that imply this conclusion are consistent," which is a testable hypothesis. Certainly, just because we can't find a consistent way to imply the conclusion doesn't mean there isn't one, but, as in 1), we can hold the null hypothesis that we don't know whether a consistent set exists, and force reality to dislodge it; if it turns out that nobody can produce one, we eventually conclude there isn't one.
Mainly I'm saying this because [13. rub everything on everything] applies to forming a premise system, for the purposes of 21) spanning the possibilities. The thing to do is include any vaguely relevant premise, and then reduce them by combining the for-our-purposes identical premises. It bears repeating: make sure your if-then system includes all possible ifs the reality can incarnate. When done properly, while each candidate system will be valid, only one will be sound, and you can eliminate the contradictory premises, and thus learn something new without necessarily having to run any budget-consuming experiments.
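A minimal sketch of treating a premise set as a testable hypothesis about its own consistency; the toy premises are mine:

# Logic as experiment: brute-force every truth assignment and see whether any
# of them satisfies all the premises at once. If none does, the set is inconsistent.
from itertools import product

premises = [
    lambda rain, wet: (not rain) or wet,   # if it rains, the ground is wet
    lambda rain, wet: rain,                # it is raining
    lambda rain, wet: not wet,             # the ground is not wet
]

consistent = any(
    all(p(rain, wet) for p in premises)
    for rain, wet in product([True, False], repeat=2)
)
print("Consistent." if consistent else "No assignment satisfies all premises; one has to go.")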



23. A complex proof that's overall false can often be rescued. (Useful for steelmanning.) Sub-proofs may be sound and the proof can be dismantled, re-using the sound bits in a new proof. For example, there's a fatal error in the proof that the Nash equilibrium of the definite finite iterated prisoner's dilemma is always-defect. However, there's a perfectly valid sub-proof, which shows that individual defections cascade all the way up and down the tree. It's worth noting that the final conclusion in this case was not robust enough to survive contact with a competent analysis.



24. Immediate memory is a matrix with only 3-4 addresses. If you visualize an apple, a pear, a banana and a pineapple, then also imagine a pomegranate, then you will likely forget either the apple or the pear, if imagining the pineapple didn't already overflow the buffer. However, it's possible to intentionally group these into a single address, by imagining, for example, a picture with an apple, a pear, a banana, a pineapple, and a pomegranate all sitting against each other, and have 2-3 addresses left over.
This compounds with automatic, instinctive chunking. Thinking about this entry takes up one of my slots, meaning I have room to imagine three fruits before I struggle to keep them all in mind. Or I can keep in mind this paragraph, the five-fruit still life, a Dalrymple article, and say a castle on a cliff. If I were even more clever I would be able to represent all those as a single image and free up even more slots.
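A minimal sketch of the slot arithmetic, assuming the usual four-slot estimate:

# Deliberate chunking: working memory as a handful of slots; grouping items
# into one composite image frees slots for other things.
SLOTS = 4

def fits(items):
    return len(items) <= SLOTS

separate_fruits = ["apple", "pear", "banana", "pineapple", "pomegranate"]
chunked = [("still life", tuple(separate_fruits))]  # one composite image, one slot

print("separate fits:", fits(separate_fruits))  # False: the fifth fruit overflows
print("chunked fits:", fits(chunked), "with", SLOTS - len(chunked), "slots left over")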



25. When evaluating a theory, you get anchored to the theories you know. Water vs. fish. Most fish aren't aware that non-wet is even a possibility. Your theory may have ten variables but you're only aware of four of them. This will make your life difficult if the error is in the other six. As such it's always worth throwing out another anchor. In other words, read a crazy, out-there theory in the same space. Plausibility isn't the point. The point is to search for unknown unknowns. E.g. the [neanderthals are pretty much us] theory vs. the hyperviolent hairy black neanderthal theory.



26. The confirmation bias may be countered or even harnessed by pretending sufficiently hard that you believe the opposite to what you actually believe. In other words, method acting. For a quick and dirty option, simply imagine in detail what the opposition would see as confirming, then read the evidence in question.
Advanced usage involves deliberately method acting your own beliefs, as deliberate insulation.



27. Pareidolia is the name for seeing faces in things that don't have faces, or more generally for seeing patterns where none exist. However, you may notice that the faceless things really do look (a bit) like they have faces. It's not a problem of seeing patterns that don't exist, it's the problem of being certain about a pattern before sufficient evidence for that pattern exists.
There's a pattern to spurious patterns, though. Aim the pareidolia instinct at that pattern; characterize spurious patterns. The problem is its own solution. When I tried it, it worked immediately and permanently, though I think that was something of a coincidence.



28. Use recursive memory formation, memory-of-remembering, to crystallize information.
My memory isn't any better than anyone else's, but it's nevertheless reliable. I do this by countering drift via remembering things more than once. The simplest is to remember something, and then later remember remembering it, distinguishing the memories by location or time of day or whatever your availability bias throws up. The two memories will drift independently and the odds of them drifting the same way are low. Once the two have drifted a bit and you've used them to correct each other, this will form a third memory of the same information, thus making it almost impossible to forget, to have the memory drift undetectably, or to mislay the address. No more [tip-of-the-tongue]; if you can't locate the first, then try the second.
With practice it seems this starts to be done automatically. The cost appears to be that I read slower than others of similar mental dexterity; the benefit being that I very rarely need to read anything more than once. I infer other processes are also slower than usual.



29. There's a classic cognitive bias experiment where the subjects are told of a rule some cards must follow, relating the front face to the hidden back face, and they're unable to come up with a suitable test to determine whether the cards in fact follow the rule. (They improve if the rule is about underage drinking instead of, for example, numbers and shapes.) In particular, they try to confirm the rule rather than looking for falsifying evidence.
From this, we learn the first step to take upon encountering any new theory or idea. Form one positive prediction and one negative prediction. One thing that must be true, one thing that must be false. We can then perform a quick and dirty test of the idea in a vaguely objective and reliable fashion.
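A minimal sketch with a toy card rule and toy data of my own; one prediction that must come out true, one that must never come out true:

# Claimed rule: every card with "D" on the front has an even number on the back.
cards = [("D", 4), ("K", 7), ("A", 8), ("D", 3)]  # (front letter, back number)

# Positive prediction: we should find at least one D card with an even back.
must_be_observed = any(front == "D" and back % 2 == 0 for front, back in cards)

# Negative prediction: we should never find a D card with an odd back.
must_never_be_observed = any(front == "D" and back % 2 == 1 for front, back in cards)

print("positive prediction held:", must_be_observed)      # True: ("D", 4)
print("rule falsified:", must_never_be_observed)          # True: ("D", 3) breaks it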

3 comments:

TheDividualist said...

You are using "define", "identify" and "describe" interchangeably in 5.

Voegelin had an idea there. He said you must draw a clear line between reality on one side and theory or science on the other side.
Wheelbarrows and dogs do not talk, so you can define your concepts about wheelbarrows and dogs however you want to. But it is a bit different
with concepts about people, about what people do and say, as they themselves talk and use concepts. So we have two very different kinds of concepts:
"concepts of theory or science", which we define, and "concepts of reality", which we can only describe in the sense that there are people out there who say this
and that. That is data, input, part of the reality, part of the verbal landscape, not theory, not science.

So when his students asked him to define communism or fascism he flat out refused to. It is data, input, part of the verbal landscape, that people call themselves
or each other fascists or communists, and some of them mean X, Y, and Z under this, others mean X, Y, and P, and some even X, P, Q. Which means you could say X is apparently a very frequent part of what people mean when they talk about fascism or communism, but it is not a definition. It is just describing how the data clusters. You do not get to define reality. You only define terms of science, theory.

This was smart of him because the next question would have been "If Stalinism is not real communism, then real communism was not tried?" and you do not want to walk into that trap. You just ask the student to describe the features by which "realcommunism" differs from Stalinism, and look up if e.g. Catalonia used to have those features or not. It's all data.

So the point is, as a scientist, theorist, you cannot afford to blindly accept the concepts people are using. The problem with concepts is not just precisely describing the category, but also showing whether it is a valid way to categorize things, whether it cuts reality at its joints.[1] It is not at all certain that communism or fascism are even useful categories for political science. Voegelin used different categories like "political gnosticism".

To resolve the confusion stemming from concepts of science and concepts of reality, you can either pull a Moldbug and come up with new terms, or pull a Voegelin: he recommended what he called "critical clarification", which is to borrow a concept from reality, i.e. whatever people are talking about, that seems to cut reality more or less at the joints, and make it more precise.

[1] I am thankful to Yudkowsky for this metaphor. Jim and Yud are very far from each other in thinking but both use this metaphor. It is excellent.

Unfortunately Voegelin did not give us a method for telling reality from science or theory. My ideas: when you point to a wheelbarrow, you are not doing definition, you are doing identification. You can also say "wheelbarrow Article No. 000012" which is glued on it, "person with SSN 3214211", or "Exhibit No. 11". This is identification. Equivalent to pointing. Identification, not definition. It says nothing about that thing, it only identifies it. And then you can go on describing it. But you do not get to define reality. When you are using a grouped concept, and not a singular thing, like "wheelbarrows", "people", "houses", then you have to define it, because it is a concept.

TheDividualist said...

Yet, I know it is not a complete way to tell the difference between reality and science or theory. I mean, the idea here is that things in reality are singular, while concepts of science or theory are groups of things, based on an abstracted away property of them. So you can point to one dog, or say Dog No. 313113, but when you point to ten dogs and two wolves, you have done identification, but you also have to demonstrate that creating this group, category, name, concept is a valid way to cut reality at the joints, and that is where defining concepts is more than just identifying things.

And yet, I am not at all sure reality comes in singles and theory in plurals. Are mountains, hills, waves each a different thing? A thing is a bunch of atoms, it is in itself an abstract concept...

Alrenous said...

It's easy to disagree where the joints are. Especially because Reality per se has no joints, it's all one piece. Two, at most. The system of handles you use to grasp reality has joints, though, between one handle and the next. However, to a well-trained philosopher it's trivial to unbolt the handles and reconfigure them to whatever happens to be convenient at the moment.

Defining communism and fascism is easy. "You want to label X as fascist to prove they do Y. What is Y?" This logically pins down what 'fascism' can be. E.g. they burn books. Thus X is the set of political beliefs that lead to book-burning. When folk refuse to define 'fascism' they do so because they intuitively recognize that they'll end up calling themselves fascist. The philosopher then recognizes that they're dodging the question because it raises issues they can't answer, and notes they just refuted themselves.

Also communism is easy to define because communism == irresponsibility. All and only a fancy way of spending other people's money or blood without being convicted of the crime you have in fact committed.