Mainly by example.
(Updates give a hint as to what longer version looks like.)
Trying a thesis statement, to see if it works better than the [b] tag. Works well with goal: balance. Fixing flow later, if pilot project works out.
Point: matching properties.
Defined philosophically, psychological egoism means that 'selfless' acts have all the properties of 'selfish' acts. They are essentially the same thing. Using different words for them is akin to using different words for your left hand as opposed to your right.
Counterpoint: lay usage and communication.
Point: clear thinking.
It's unfair and ineffective to expect the lay philosopher to use words according to their underlying concepts, and as a result, if you want to call your left hand a 'harble' instead, I'm happy to play along. It doesn't change the fact that a harble is essentially a hand. However, for the philosopher, calling it a harble only makes it harder to think clearly about it.
Counterpoint: selflessness may still be distinguishable from selfishness.
Point: implications of matching properties.
Selfless acts may have additional properties - I don't know, I'd have to check. Right hands additionally are 'usually dominant.' This in no way makes them not a hand. Almost anything I can demonstrate about hands automatically generalizes to harbles. (I use 'demonstrate' very deliberately. The statement is true regardless of whether you think I can demonstrate anything.)
Point: conceptual torture itself twists language.
Calling similar things by different names is what leads to me torturing language. Calling a left hand a harble quite reasonably leads to the incorrect inference about whether harbles can be dominant. Which in turn means that if I want to discuss southpaws, I first have to deal with the impression that the definition, "Southpaws are harble-dominant," is somehow a contradiction.
I would love to simply say "politicians" but I have to use things like "Republibrats and Democan'ts" because I can't otherwise count on the automatic inference that they're basically the same, nor the reverse connection that anything I demonstrate about politicians automatically applies, without reservation, to Democan'ts and Republibrats.
Update: Relevant. (Via.)
Update 2:
Point Zero, put into words while thinking about the third point: Aretae is committing linguistic torture by attacking linguistic torture. The idea is for philosophers not to hot-swap a new concept into a layhuman's word. However, what he objects to in practice is taking the layhuman's concept and working out what it in fact implies. (On request, I can point to specific examples of me doing this, if the coercion example below isn't clear enough.)
A word cannot both imply and not-imply a thing. To follow Aretae's model, the layhuman's meaning has to be changed to mean something else.
'Linguistic torture' is self-contradictory.
Counterpoint: there is a sophist technique very close to what Aretae points to. There's an instantly fatal counter-technique, such as the word-tabooing Yudkowsky describes.
First point: from my perspective, everyone engages in linguistic torture. It is impossible for you not to. I'm capable of working out what you mean, I do it all the time; the only question is whether you're capable of working out what I mean.
In math, let "X" be whatever. In philosophy, the same variable-reading skill is necessary, because the same variable-setting is necessary.
Second point, wrapping back to first:
I could use a different word for Jim than coercion.
For the sake of the example, I'll use, "moral violence is physical violence (even mild) except that necessary to prevent
First problem: explaining the definition of the thing, which assumes he would even listen. Second problem: he'd likely go, "Oh, you mean coercion," and then import the extensions I was specifically trying to exclude. Third problem, assuming I managed to fix 1 and 2: understanding what the definition implies, which I have literally never managed to successfully transmit. Fourth problem: the argument is likely to devolve into arguing about what the definition should be; in the past this has been because the various options import various extensions, and it becomes a proxy argument about the extensions; utterly exasperating and dumbfounding, as the parties agree on which extensions are which and thus must understand what the definitions imply.
Or, you can solve all this by noting that good philosophy does not import extensions. The harder the problem, the more important it is to put all the necessary extensions into the formal intension.
Third point:
E.g. Coercion in fact means what I want it to mean.
"In law, coercion is codified as the duress crime" By contrast, redressing that crime cannot be coercion; following the law can't be a crime. "Such actions are used as leverage, to force the victim", no victim, no coercion. Don't make me /wiki 'victim' to show that locked up felons are not defined as victims. Finally, "Coercion is the practice of forcing another party to behave in an involuntary manner [...] or some other form of pressure or force.", and if you follow the logic through, the intension of 'coercion' covers literally all immoral acts, and does not cover any non-immoral act. Immorality and coercion are exact synonyms; Aretae would of course disagree, but that means he thinks Wikipedia's description is simply wrong. Which in turn means he's trying to linguistically torture coercion (for example) into something other than what most Wikipedia editors think it is.
Jim's using, "Coercion is any intentional act which causes an otherwise involuntary response in another." As above, I have no problems with him using harble, and I play along. My issue is that he gets to use harble but apparently I'm held to a higher standard. Hey kids, what does it imply about me to hold me to a higher standard? Which of these implications would be opposed by the people trying to hold me to these standards?
Tuesday, April 24, 2012
Notes on Libertarian Omnibus
Yar.
"What did I miss? Is this categorized well?"
Path 1 isn't very convincing, because democrats already believe it. The fundamental human rights are about listing the things that aren't up for a vote. Using specifically your example, we see cries to stop executions regardless of the crime, because democrats think that you can't vote someone's right to life away. As far as I can tell, voting was originally for solving problems that don't have a clear solution. E.g. there's a right answer to how high the tax rate should be. It is zero, but there's no way for most to learn that, so they vote and hope to get close to the right answer, or at least closer.
Path 2:
Prospiracy. Pull the strings specifically in their own favour and damn the consequences to everyone else, often including their own descendants.
Path 4:
Governments are absolutely fantastic at updating when it will increase their revenue. Also great at solving the problem of making the ruling elite comfortable, as above.
Path 5:
Schools do not fail at doing what they're actually being paid to do, which is support the rulers. Indeed, I have reason to believe they're getting better, as per path 4.
Path 7:
And we've kind of solved the germs thing.
IMO Acton got the causation backwards. The corrupt get power, and absolutely corrupt get absolute power.
The great is short for the great murderer.
Path 8:
As per my comments on #2, it doesn't look meaningfully different from #2.
If nothing else, actors always need staying power and at most sometimes need any other goal, even if they genuinely value it more.
Path 10:
No matter what you want, more freedom gives you the option to get more of it, by definition. If it isn't giving you more options, it isn't freedom. With the one exception of controlling other people. My presumption is that anyone who votes against freedom hungers to coerce for its own sake.
Path 11:
Seems the same as path 9.
Path 12:
Almost the same as #8 and #2. I should mention that I agree all these paths are independent lines, but they all lead to the same way-station, so...
Path 14:
Having found the good effects from government policies, I'm now actively looking for a program, any program, that is better than the one it replaced. (Except for the rulers.) It's really hard.
Path 15:
Should probably be put next to paths 2, 8, 12, as it is near the demarcation between same and different.
Path 16:
A different perspective on paths 3 and 11.
"She thought Welfare was a deliberate plot to destroy the black family. I personally don't figure government is that smart."A: they reliably pursue their interests that way and B: if your prospiracy is indistinguishable from a conspiracy, you deserve whatever a conspiracy would deserve. In this case, jail, or at least blistering excoriation. A prospiracy is basically being criminally negligent of avoiding a conspiracy.
"I don't think that the goal of either was to massively increase crime?"I don't have Foseti-level insight into bureaucracies, but here's a couple possibilities.
1: The drug war has in fact worked out great for the cops and prison unions. Massive crime is a side-effect or perhaps a necessary precursor to their real goals.
2: That the drug war leads to more crime is not in fact the fault of cops. The real powers in this arena happen to benefit from having lots of people in jail, therefore... I'd very much like to know the names and addresses of those people, incidentally. I'm not entirely sure even the bureaucrats themselves know which ones are the important ones, though.
Edit: 3: Variation on 2. The real powers like having lots of crime - after all, it can't reach into their gated communities, and it creates a ton of 'there oughta be a law' sentiment. That lots are in jail is just the cops doing the best they can despite every handicap. That jails happen to be profitable is just a windfall.
Path 17:
Looks to me you made the same mistake. Society doesn't have a treasury. Government has a treasury because it has legitimized violence it can use to extort money.
Path 18:
You can tell trade is good because you're always allowed to trade freely with the government's favoured industries. E.g. I haven't checked, but I'm betting what steel tariffs USG imposes depends on whether the automotive industry or the steel industry is currently rich enough to out-lobby the other. So when Ford was ascendant, the tariffs were low, and now they're rising. Even if I happen to be wrong on steel per se, I won't be in general.
Path 21:
Someone does know the right answer. They even know the right answer for the right reason.
There's no way to tell which one that is without trying it.
The study of history is (among other things) the hypothesis that someone's already tried it.
Friday, April 20, 2012
Psychological Egoism is True and it Matters
If you're truly selfish, you will never act selflessly - it will only be a pretense. If you're truly selfless, then selfishly accruing resources will actually help other people, as you're likely to use them on your not-self.
(This is an expanded version of something Jehu helped me find the words for.)
I take as proven that in any conflict, the less selfish should win, as it causes less collateral damage. If satisfying values is good for one person, then it must be better for more people. Only, the truly selfish will never bow to this principle - by definition, they do not care about anything I might care to use in my argument. Ergo, if in trying to be selfless you let someone else go first, most likely you just harmed more people than you helped.
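To make the aggregation step explicit - a minimal sketch of the assumption being leaned on here, in symbols the post itself doesn't use: write $g_i \ge 0$ for the good done by satisfying person $i$'s values. Then for any group of $n > 1$ people,
\[ \sum_{i=1}^{n} g_i \;\ge\; g_1 , \]
so satisfying more people's values is at least as good as satisfying one person's, provided the goods simply add up. That additivity is the premise doing the work.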
This can all be derived from psychological egoism, though I do admit the use of 'selfish' in this context can be misleading.
Psychological egoism is true simply because actions follow from goals. You can try to act according to Chappell's definition of selflessness, but that means setting the goal of contradicting your own goals. If you succeed, all it means is that you valued the selflessness goal over other more 'selfish' goals, which means you're still maximizing your own selfish values. (And if you fail it means you succeeded at your other goals.)
What does this imply? It implies that there are people who are selfless by Chappell's definition. They're the ones who value the values of others as well as their own purely personal values.
When such a person acts selfishly, maximizing e.g. their own warm fuzzies, they help others.
Which in turn means selfishness, in its true philosophical definition, is nothing to be ashamed of. And, as I hope I've shown, anyone who is truly selfish in Chappell's narrow sense will never admit it, as long as selfishness is considered bad.
It is not only pointless to consider selfishness bad, but counter-productive. It is simply a barrier to combating narrow selfishness.
This is not a coincidence. If you were narrowly selfish and wanted to get your natural opponents out of the way, is this not exactly how you'd try to twist them into knots? Certainly, if I ever decide to embrace solipsism, it is exactly what I'm going to do.
Now I'm going to do the same thing from the dark angle.
Assume it is good to be selfish. This proves it isn't good to be selfish - it's a wrong-question, not-even-wrong situation.
If it is good for my opponent to be selfish, it must be good for me to be selfish, by symmetry. It is selfishness that is good, not the particular person being selfish. And I happen to think it would be better if I won. Therefore, their belief in selfishness means that my beliefs imply I should win.
Verification: what if it isn't selfishness per se that's good, but the person? Then I've again proven that selflessness is better. If one person is good, then I just find two people that add up to being more good.
No matter what argument you start with, it proves that every individual should fight unreservedly for the selfish values. If there were an argument that was only an impediment to the narrowly selfish, I would advocate that. But, as above, there cannot be.
Further on Honest Disagreement
I stated that actually sharing your premises and assumptions is remarkably difficult, and therefore Hanson, Cowen, and Aumann were wrong in some ways. I have been reminded that they are also right in some ways.[1]
I have to agree that disagreements aren't honest, but only because I know how they're not honest.
Most discussions aren't really about what the participants believe.
I mean that tons of very, very common beliefs have been labelled as socially unacceptable. Selfish beliefs. Beliefs about one's superiority, particular or general.
There are other kinds too, such as particularly revealing beliefs: beliefs that make you vulnerable, according to the accepted status-game rules, to insults or credibility trashing. Beliefs that make you emotionally vulnerable.
A third, slightly different kind: beliefs that are too important to allow challenges to. If your lifestyle leans heavily on certain beliefs, and you cannot afford doubt on these beliefs, then it is rational not to make them vulnerable to criticism or questioning.
Taken together, the pattern that ties these beliefs together is that they're the kind you particularly care about, and that saying them out loud is embarrassing. I know, because I do all this too. These kinds of beliefs are usually avoided in conversation at all costs.
And, I should emphasize, this is usually the correct strategy, as I tried to imply above. While some of them are embarrassing due to social constructs, others are embarrassing due to actually being declarations against interest.
Arguing about things you don't really care about is utterly pointless, a complete waste of time. As far as I can tell, anyway, perhaps I've missed something. But I can guarantee nobody learns anything in these 'arguments.' If an argument isn't making me uncomfortable, if the prospect I might be wrong doesn't worry me, I know I'm not arguing because of the belief, and I require myself to have some other good reason, such as practising communication.
Part of the reason I hate abstraction and love concrete examples is that it is so common to cloak these serious, care-about beliefs in layers of abstraction. It sterilizes the argument, makes it safe. Even if you lose convincingly, you can tell yourself it was a loss at the abstraction layer, not on the concrete bits from which it was derived.
However, taken together, I can safely predict that what Hanson or Cowen call meta-rationals will never be noticeable until there's a tribe that gently encourages sharing of embarrassing beliefs. A tribe that forgives the embarrassing trait, that by contrast celebrates the courage necessary to deliberately embarrass yourself. A tribe that doesn't reinforce the aversion response and reiterate that you should be embarrassed about it.
Come to think, this makes me pretty angry. Furious, in fact. You see, many of these embarrassing beliefs are held by ~99.9etc% of the population. These social norms aren't showering contempt on epistemic mistakes, they're showering contempt on being human. Admitting them doesn't mean you're a bad person, it means you're homo sapiens sapiens, and I for one refuse to be embarrassed to be a human being.
[1] I find that it is part of my learning process to criticize what's said before I work out what they were trying to say. Noting and pinning down the errors in their actual words allows me to be comfortable with granting them anything they're right about.
P.S. Yeah I just used Hanson's razor on something written in part by Hanson. Being meta-rational is low status.
Thursday, April 19, 2012
Individualism Rubble
The process of finding and fixing that contradiction seems to have lethally wounded my motivation to write up an article proper. I would like to be better about doing what I say I'd do, though, so you get this.
Another perspective on the same events; I invited reality to kick me in the teeth, and reality obliged.
"Notably, La Wik has only the vaguest idea what it is"
Nope, sorry. I still think it's misleading. I still think it needs to tone the abstraction level down about two thirds. I still think it is anemic and waffling. I think it has other flaws. However, none of this is ignorance, just stylistic flaws, and frankly par for the course on Wikipedia.
"I can't even find a Stanford Encyclopedia entry."
Oh hey, look at that. I particularly remember Aretae using the term, though it also came up before and after.
I also used the concept, if not the label. I'm not sure if that overall reflects well or badly on me.
In fact, reality is so obliging about teeth-kicking I find that being incautious about my pronouncements is a fast and easy way to learn, though requiring a metaphorical chin guard.
Emphasis mine,
"These muddled notions played an important historical role in the development of later evolutionary theory, error calling forth correction; like the folly of English kings provoking into existence the Magna Carta and constitutional democracy."I don't think the process is supposed to work without intentional human intervention, but it seems to work for me.
I often write to find out how I really think. Sometimes it is pretty surprising. Sometimes, reality is pretty surprised, and has a habit of letting me know.
Wednesday, April 18, 2012
Very Short Notes on Honest Disagreement.
By Hanson and Cowen. I failed to record who linked it to me.
It seems to me that it is far from being verified that coming to agreement is as easy as Aumann '76 and the follow-up studies indicated. Did they really include everything that is relevant to coming to agreement? How do you know?
By contrast, reversing this single assumption, assuming agreement is a lot harder than we realize, handily fits most of the data.
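For reference, a paraphrase of the standard statement of the 1976 result (notation mine, not from the paper under discussion): two agents share a common prior $P$ and each conditions on private information $\mathcal{I}_1$, $\mathcal{I}_2$. If their posteriors for an event $E$,
\[ q_1 = P(E \mid \mathcal{I}_1), \qquad q_2 = P(E \mid \mathcal{I}_2), \]
are common knowledge between them, then $q_1 = q_2$. The strong assumptions are the common prior and the common knowledge.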
God and Authority vs. Epistemology. Coincidence?
Does God exist?
Has no right answer. Question is Not Even Wrong.
Does [x] have authority?
Has no right answer. Question is Not Even Wrong.
I suspect, no, not a coincidence.
I should check to make sure what I think I think about authority is what I actually think about it.
An authority is someone I should obey. A physical authority commands my actions. An intellectual authority commands my thoughts.
In reality, even if I should obey them, the reason is not authority. A physical 'authority' commands my actions either through physical coercion or because I promised them something in a contractual exchange. An intellectual 'authority' is only an authority if they have the right arguments, and thus I follow the argument, not the authority.
I should also mention that the reason I suspect they're not a coincidence is because the logical structures feel the same. They feel not-even-wrong in the same way. But I have no explicit words to explain how or why.
Monday, April 16, 2012
On Seeing the Non-Distinction Between Science and Art
I figured out a better way to put this.
One day I decided to properly settle a question. I'd found one of my critical assumptions, and I was going to bloody damn well question it proper.
And yeah, I got the usual mess of contradictory bullshit. It was something like 30% what I thought, 30% the opposite of that, 20% some third thing, and 20% unadulterated mess.
Thing is, I don't have to try to take a representative sample of my assumptions. I can test them exhaustively. So I did. I still habitually look for new assumptions to add to the pile.
Yes, a lot of them are a mess. Some of them aren't. I also have a class of things which are so obviously right that nobody thinks to question them, even once they think of questioning things.
I have a computer in front of me. To the left is a cup. They're on a table. I have hands. The hands have skin. You're looking at words. Reading them, in fact. You're looking at a display. (Did that make you look away?)
In fact, this subjectively different category is some 99.1% or something of things I know. (After writing this I found an example of the differences between men and women in this category, available on request.) They're not assumptions or beliefs or conclusions, just observations. Objectively, I couldn't at first see any difference, as they're still bits stored in neurons, but it certainly feels different. And people act differently about it. And the information seems to relate differently to evidence and doubt.
So I checked. And they are different.
I am questioning the idea I have a computer in front of me. Why stop there? Why not question whether I'm questioning?
Doesn't that seem bizarre and stupid? Because it is. Hopefully the contradiction is clear - if I can't question, then I wouldn't think to question whether I'm questioning.
There's a layer of 100% certainty at the bottom. Near it, in fact most things in absolute terms, have negligible uncertainty. Only actual insane people get them wrong. If you doubt whether you ate food in the last year...yup. Bonkers. Such people tend to remove themselves from the discussion. By dying of starvation, for example.
The interesting thing about this is that certainty can be transmitted upwards. Some of the class of assumptions that aren't a mess can be absorbed by the observation class. Moreover, having checked every assumption I can get my hands on, these assumptions follow patterns. Some of these patterns are found in the messier assumptions.
For example, the opposition to the not-messy assumptions tends to contradict itself, often within sentences or paragraphs. But let me do the opposites first.
One of the spurious examples is the 'Aristotle believed it' pattern. It isn't too useful generatively and is of negative use in the cases where Aristotle was wrong.
Having a lot of people agree is not one of these patterns. Yes, it is common among observation-class beliefs. However, that's getting the causation backwards. The primary cause of a belief never being questioned at all is that literally no one disagrees, and so it never occurs to anyone a difference of opinion is possible. One cause of having no one disagree, in turn, is that everyone is right. (I have hands. They have skin.) This is not the only cause.
However, what if everyone believes morality exists, because it does? What would that look like? Ditto free will? Ditto the soul? Well, if so, then anyone explaining how they don't exist should contradict themselves, especially if they go into details. What else?
Goal: Debate by Stating All Evidence
To quote myself, "If you can think of a way to be more convincing by presenting evidence against the desired conclusion, please let me know. I'm the only one I can safely say is impressed by such acts." Though, naturally, writing that un-convinced me of it. I can think of several examples of the "It's true that, but..." construction. There's still some problems with how it's carried out but I'll leave that as an exercise for the reader.
It made me realize something else, too. If being balanced is good, why not take the limit? Why not supercharge it?
New Goal: debate by stating all the relevant evidence I know, for, against, and confusing, about the dispute in question. Completely ignore time/length constraints and convenience. See what happens.
(What I'm posting later today also helped, as I made conscious and explicit what I think happens when I examine an assumption and really try to settle it. Specifically how the evidence is distributed across conclusions.)
Friday, April 13, 2012
Pondering Confirmation Bias
Defeating confirmation bias is straightforward. At first, I needed a little acting skill, but I found it generally useful to be able to choose beliefs rather than have them forced on me. I can choose the rational choice if I want to be rational, but I don't have to.
There's a reflex response to having a belief, and it is rationalizing it. A key point is that the rationalization process is generally honest, it doesn't just invent fantasy reasons. But, since it is for justifying decisions rhetorically, not rationally, it doesn't bother with the other side, which is useless for its purposes. (If you can think of a way to be more convincing by presenting evidence against the desired conclusion, please let me know. I'm the only one I can safely say is impressed by such acts.)
But nothing prevents using the reflex on two opposing beliefs. Or three, for that matter. By pretending to honestly adopt the opposing view, I get the best opposing rationalizations I can muster, and then I can compare them.
The fact I don't have to adopt either belief makes this even easier. My subconscious fears convincing me of things it thinks are wrong, and will balk at providing rationalizations it thinks are dangerous. Changing my belief-adoption process made almost none of them dangerous. (They still come tagged as suspicious, though.)
The method doesn't have to stop at simply defeating confirmation bias, however. It can be used to fully explore another perspective - to go beyond the objections presented, and possibly learn something.
I suppose for my next trick I need to learn to hold two perspectives in mind at once. As of now, to fully explore multiple perspectives on an article, I'd have to read it multiple times.
Tuesday, April 10, 2012
Climate Epistemology
Verdict: unambiguous fail. (Via.)
I've been assuming that the CO2 greenhouse effect is about preventing energy from escaping into space. But, as I find myself repeating, no radiation in CO2's absorption band is escaping to space as things already stand. It is saturated. (Backup link.) Please note this is from Wikipedia. If there's any bias, it is in the other direction. (The bias is pretty funny, check out the picture's caption on the greenhouse gas page.)
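A minimal sketch of what 'saturated' means here, using the standard Beer-Lambert attenuation law and illustrative numbers rather than measured ones: radiation passing through an absorber of optical depth $\tau$ has transmitted fraction
\[ T = e^{-\tau} , \]
so at a band centre where, say, $\tau \approx 10$, $T = e^{-10} \approx 4.5 \times 10^{-5}$ - essentially nothing gets straight through, and adding more absorber barely changes $T$ there. This only pins down the term; it isn't a verdict on the wider argument.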
I thought, surely they've performed this simple verification of the physics. I must be missing something. Nope.
"The serious skeptical scientists have always agreed with the government climate scientists about the direct effect of CO2."Right, but we're not talking escaping to space, though, yeah? It's just taking energy that was absorbed higher up and absorbing it lower down?
"The climate models predict that when the surface of the earth warms, less heat is radiated from the earth into space (on a weekly or monthly time scale)."Haha, nope.
Now. To make sure I don't perform the same mistake: I've never seen a climate article talking about how climate change is cooling regions of the upper atmosphere. Have you? Please pass it along, if so.
"A major study has linked the changes in temperature on the earth's surface with the changes in the outgoing radiation."Outgoing radiation cannot change due to increasing CO2. Especially not in the infrared band. It literally took me less than ten minutes of research to discover this. On La Wik, I emphasize. Apparently, nobody else has thought to check.
You cannot expect me to believe these guys are taking climate research seriously when they fail these basic, basic checks. (What other simple tests have they failed to carry out, that I haven't thought of because I don't have a thorough survey?)
Conclusion: I don't believe in climate scientists, let alone specific climate science papers.
Monday, April 9, 2012
On Ironing Out the Difference Between Art and Science
Two short stories. I made them up, but they're almost certainly not fictional.
Auto engineer goes to his boss, explains that the reason the new car prototype didn't reach new peaks of efficiency is not his fault. His materials weren't good enough, the technology isn't there yet, whatever. The boss buys it.
A doctor goes to his boss, explains that the reason his patient died was that the patient was special. It was a harder case than the one the other doctor had, that, while superficially similar, lived. The boss buys it.
The problem, of course, is that the engineer next door isn't working on a special car. If they get their efficiency up, it becomes hard or impossible for engineer #1 to continue to sell their excuses.
We often say engineering is objective and a science, while medicine is much softer and more of an art.
I think I now understand the actual causal difference - it is in unmistakable facts. Human brains, even untrained brains, have a certain baseline of epistemic reliability. Certain facts and relations they cannot be fooled about. In medicine, there is no direct connection between these facts and the situations on the ground.
You can verify the unmistakable facts with the engineering example. The boss buying it is implausible. Similarly, when you go to make coffee, your coffee grounds are where you think you left them. If someone tries to argue that you don't know where your coffee is while the stuff's in your hand, it just makes you think - and viscerally feel - that they're crazy. The car engineer isn't quite as clear-cut, but it is close enough.
As for the direct connection, that is the idea I'm suggesting should be verified. By understanding the cause of a field being arty, it should be possible to fix the problem and make it sciency.
A list of unmistakable facts - anti-biases or anti-fallacies, if you will - would be helpful, but probably not necessary. The extant categories of 'objective' and 'subjective' fields seem reliable enough to me.
Thinking about it in specific, concrete terms should also bestow a better sense of how and when it is easy to be fooled.
Thursday, April 5, 2012
Poverty Was on My Mind
I found I wanted to claim that the weak are weak due to oppression. Unfortunately, I had previously claimed that the weak are weak simply because they are. To resolve this, I looked into the details. Lucky for me, in both cases I was thinking about poverty.
Do I think that poverty is caused by the individual, or by their environment? I wonder if I have any other contradictions?
I think even in the freest society there would still be poverty. I also think that there's a lot more poverty now than the minimum. (Optimistically, there would be about half. Pessimistically, one-tenth or less. There's even a tiny possibility that poverty would be an entirely different thing.)
Specifically, I'm sympathetic to the conservative bootstrap theory of poverty. The bum probably could get a job. At the same time, I can't help but notice that the government does many things which hinder the working classes, and they're hardly a lone gun in this arena.
Looking more closely resolved my belief into an objection not to the oppression theory of poverty, but rather to the implicit causation model. The weak are not weak because they're poor. They're poor because they're weak.
The lotto forms a wonderful natural experiment - what happens if you give the weak a lot of money? Do they become strong, or do they stay weak?
The universal failure of welfare programs is easier seen this way too. If they're implemented in good faith, they're attempting to reallocate status, not money per se. I discover that giving money away doesn't lower anyone's status.
You can verify the weakness theory of poverty by looking at the kind of stock returns elected officials achieve.
There's probably some level of irredeemable poor. Which is sad, but simply not solvable. Our society tries to lie to them, but I haven't noticed many getting taken in.
For the rest, strength cannot be redistributed. However, I could run interference against interfering busybodies. Among the primary busybodies are the other poor. All normal conditions become norms - in a slum, poverty is a norm and anyone violating it risks tribal retribution. Of course this retribution can only be achieved coercively. There are also the 100% effective marginal tax rates.
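To spell out the arithmetic behind a 100% effective marginal tax rate (figures invented purely for illustration, not taken from any actual benefit schedule): suppose someone receives $800 a month in benefits, and the benefits are clawed back dollar-for-dollar against earnings. Taking a job that pays $800 a month changes net income by
\[ 800 - 800 = 0 , \]
i.e. the marginal rate on the decision to work at all is 100%.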
I have little doubt there are other, subtler forms of keeping the weak in their place.
If the status/envy line of inquiry isn't corrupt, then it follows that the bulk of the populace overall likes poverty. It puts someone lower than them on the ladder. They may want to relieve the worst symptoms, but poverty itself? Hardly. This is also a second reason the poor hate anyone getting un-poor.
This sentiment will manifest in institutional features that entrench poverty, rather than combat it as they would appear to. For example, I understand the Church would take orphans off the street to be trained as clergy. Only, I also understand that clergy took vows of poverty...
Strength cannot be redistributed, but I can see few hard limits on who can decide to develop strength, once everyone else is prevented from ganging up on them. The poor are not oppressively prevented from using their powers, they're prevented from developing them in the first place.