Monday, February 28, 2011

Epistemic Expertise II

It would be handy if I could think as thoroughly about a subject I haven't written about as I can about one I have.


In the last eighteen days, I've come to realize that when I say, "New arguments are many orders of magnitude rarer than arguments trying to appear new," I mean it utterly emphatically.

The point of evaluating every argument I come across is not, primarily, to have evaluations of those arguments in hand. I have at least two goals above that: the lesser, to practice logic so as to become more fluent; the greater, to learn to recognize an argument worth evaluating.

There's a widely held myth that peer review is necessary to overcome one's biases. While I cannot verify the causal path, I have every other reason to believe the myth stems from the fact that peer review is highly effective at countering those biases.

I hardly feel the need to debunk the myth directly; in this case, you can test it yourself by solo review: pre-evaluate an argument and make a judgment as to whether it is worth reviewing. Write this down, or otherwise ensure you won't misremember it later. (Optional: detail a few reasons for the judgment. I find it helpful to intentionally ask myself which features the argument should not contain.) Then review it anyway and compare the review to your judgment.
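This judge-then-review protocol is mechanical enough to record. Below is a minimal sketch in Python of one way to keep such a record, assuming nothing beyond what the paragraph above describes; all the names (PreJudgment, pre_judge, calibration, and so on) are hypothetical choices of mine, not a prescribed tool.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PreJudgment:
        argument: str                        # short description of the argument
        worth_reviewing: bool                # the snap verdict, recorded up front
        expected_flaws: List[str] = field(default_factory=list)  # features it should NOT contain
        actual_worth: Optional[bool] = None  # filled in after the full review

    log: List[PreJudgment] = []

    def pre_judge(argument, worth_reviewing, expected_flaws=None):
        """Record the verdict before reviewing, so it can't be misremembered."""
        entry = PreJudgment(argument, worth_reviewing, expected_flaws or [])
        log.append(entry)
        return entry

    def record_review(entry, actual_worth):
        """After the full review, record how the argument actually held up."""
        entry.actual_worth = actual_worth

    def calibration():
        """Fraction of pre-judgments that the full reviews agreed with."""
        scored = [e for e in log if e.actual_worth is not None]
        return sum(e.worth_reviewing == e.actual_worth for e in scored) / len(scored) if scored else 0.0

    # Usage: judge first, write it down, review anyway, compare.
    e = pre_judge("novel-looking argument for X", worth_reviewing=False,
                  expected_flaws=["restates an old argument in new vocabulary"])
    record_review(e, actual_worth=False)  # the full review agreed this time
    print(f"agreement so far: {calibration():.0%}")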

This is not a difficult process. It is not arcane. It is not even difficult to invent if you haven't seen it before, provided you're familiar with the idea of objective tests.

Coming back to the original post's thrust, whenever I see a feature of the belief landscape like this myth, I again realize that it is hardly worth paying attention to what the common beliefs are on any particular subject. I find once again that ad populum is, in fact, a fallacy.

Common beliefs are something to be explained, not something to explain with. As a bonus, attempting to explain true common beliefs generally turns up the explanation, "It is true," with details on how it is true.


In case you think the objective review process may not be worth your time: it also teaches recognition of subcategories of 'worthwhile.' I was just trying to learn to see arguments that would increase my personal supply of true beliefs, but I ended up being able to quickly evaluate arguments against any of my goals, at will. Similarly, my habit of predicting particular failures before review has taught me to quickly recognize why, exactly, an argument fails the test.

Having completed that project, I naturally applied it, and was I ever surprised at how thoroughly saturated the pool is with pointless arguments! I think I need more effective countermeasures against my optimism bias. I don't have a problem with individuals being self-serving, but I would have thought that more would at least try to offer something, to have the argument serve both themselves and others, instead of relying on pure trickery. I wonder if this is partly due to ingrained habits of jealousy?

Though fair warning: this overall evaluation of the argument pool may be premature. I used a shortcut method, one I hope to detail in a future post, which involves looking at a concrete instance of something, asking my intuition for a list of similar instances, and then scanning the list to come up with my percentage estimate. I will be applying the (judge)-(review anyway) technique to this shortcut, recursively, when the opportunity arises, even though the method has been very reliable in the past - after all, without a theoretical framework, I won't know what the failure modes are until I trip over them.
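Since the shortcut isn't detailed yet, the following is only a guess at its shape, translated into Python under my own assumptions; the instance list and the predicate are hypothetical stand-ins, since intuition has no API.

    def estimate_fraction(similar_instances, predicate):
        """Fraction of recalled instances satisfying the predicate."""
        if not similar_instances:
            raise ValueError("need at least one recalled instance")
        return sum(1 for inst in similar_instances if predicate(inst)) / len(similar_instances)

    # Hypothetical usage: estimating how much of the argument pool is pointless.
    recalled = ["argument A", "argument B", "argument C", "argument D"]
    pointless = {"argument A", "argument B", "argument C"}
    print(f"{estimate_fraction(recalled, lambda a: a in pointless):.0%} estimated pointless")

Written out this way, one failure mode is already visible: the recalled sample may not be representative of the pool, so the estimate inherits whatever bias shaped the recall.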

Ironically, despite the fact that I'll be summarily rejecting more pieces of writing, methods like this one have allowed me to see more useful arguments: I end up not rejecting some I previously would have. Given a large corpus or a complex argument, the thought of detailed analysis can be lethally off-putting. Judgment occurs, after all, whether I've refined the capacity or not. Now it is much easier to scan large volumes of sources I'm unimpressed with, looking specifically for details counter to my expectations. In the past, I have repeatedly been surprised to find that bad quality is not nearly as monolithic as it seems - nor as it is commonly portrayed - despite consistently confirming that most of it is, indeed, bad.
