Confirmation bias has two aspects. First, seeking confirming evidence instead of falsifying evidence. Second, dismissing contrary evidence too readily and accepting favourable evidence too credulously.
The first one is easy to counter: intentionally look for falsifying evidence before confirming evidence.
There's an easy path for the second problem, which is to care about the truth more than social signalling. If you're trying to score points through posturing, save everyone some time and admit it to yourself. Alternatively, decide to invest your ego in having true opinions. If you genuinely care about truth, or gain pride primarily from believing true things, then the inequitable judgments will iron themselves out with experience.
The next, harder path has difficult prerequisites. I managed it by knowing my brain's wiring well enough to mix and match the plugs; it may work without that knowledge, but I can't guarantee anything. Once I came to a particular conclusion, I would immediately begin method-acting that I believed the opposite conclusion, and I plugged the confirmation bias machine into the method-acting. This had the expected result of reversing the polarity: I discounted evidence in favour of my real position and gave a head start to evidence against it. Eventually the polarity reversal became habit, and then unnecessary, as both networks reached equal weight.
Are you familiar enough with your brain wiring to fiddle with it? Do you know how to send reprogramming commands? These techniques, while very useful, are, I suspect, rare. Additionally, there are hardly any words for describing them.
If you can't take the easy paths, the hardest path is to make explicit predictions.
Write them down, and use unambiguous language. "My ideas about X will be considered wrong if Y or Z occurs." Similarly, "If I discover that Y or Z is true." While predictions based on pre-existing knowledge are useless for public prediction, they're fine for private prediction.
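To make the bookkeeping concrete, here is a minimal sketch of such a prediction log in Python. The Prediction class and its fields are my own illustrative names, not any standard tool; the point is only that the falsifiers are committed before any evidence arrives.

    from dataclasses import dataclass, field

    @dataclass
    class Prediction:
        claim: str                  # "My ideas about X..."
        falsifiers: list[str]       # "...will be considered wrong if Y or Z occurs."
        observations: list[str] = field(default_factory=list)

        def observe(self, event: str) -> bool:
            """Record an event; report whether it matches a pre-committed falsifier."""
            self.observations.append(event)
            if event in self.falsifiers:
                print(f"Falsified: {self.claim!r}; pre-committed condition {event!r} occurred.")
                return True
            return False

    # Commit the claim and its falsification conditions up front, in unambiguous terms.
    p = Prediction(claim="X holds", falsifiers=["Y occurs", "Z occurs"])
    p.observe("Y occurs")  # prints the falsification notice; no wiggle room afterwards

The exact tooling doesn't matter; a notebook works just as well, so long as the conditions are written down before the evidence shows up.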
Occasionally Y or Z will occur and the disproof will be unsatisfying; don't give up.
First, consider that you may have derived the predictions incorrectly. If so, identify the error, then apply the corrected method to at least a dozen other theories and re-check them. It's no good to re-affirm the new theory at the expense of three old theories. Allowing inconsistency is right out.
Briefly consider that you may have misread a Q as a Y or Z.
Next, consider a modification of the theory. The theory serves some purpose in your world-model, explaining or predicting some events. If those events can be explained or predicted by an alternative that's consistent with the new evidence, it's time to switch to that alternative.
Finally, it's okay to occasionally use a theory despite solid contrary evidence. If you really can't think of a reason to keep it, and none of the above alternatives are satisfying, the conflict will frequently be due to misleading evidence or your own epistemic incompetence. You must acknowledge that you're doing this in violation of the formal rules, though.
These hedges are risky, in that they allow confirmation bias to seep back in. However, they are less risky than giving up. To use the 'eat your vegetables' analogy: vegetables that you don't eat because they taste bad are less nutritious than vegetables you do eat, no matter how good the former are in theory. (Speaking literally, don't eat vegetables if you don't have to.)
This final method works by repeated encounters with your bias. By explicitly noting down the predictions and falsification conditions in advance, you make it hard to dismiss contrary evidence without the logical inconsistency showing. When contrary evidence arrives, you will dismiss it. Then you will remember, or if necessary read, that it was a falsification condition. This will feel jarring. Repeated encounters will activate your aversion response, and I know of no reason the aversion won't attach itself to the confirmation bias. You train yourself to be averse to confirmation bias. Necessarily, this is a slow, labour-intensive process requiring many repetitions.