Tuesday, February 9, 2016

Sophist Hunting: Adams on What-He-Calls-Thinking

I shall idly propose pack-hunting Sophists. I think it would be a fun group activity to hunt down and maul these puny specimens. I'm concerned about self-criticism tolerance: necessarily, the group would need to criticize each other to improve, and moderns seem waaaaaay too thin-skinned for that.

"I don’t believe our brains evolved to give us truth. Our brains evolved to create little movies in which we get to be the stars."
This is something people like me say to easily control the kind of people who regularly read Scott Adams posts and cartoons. It works like all good Sophistry, by simplifying a truth in such a way as to make the target more vulnerable to Sophistry.

This is the reason that training yourself to refute Adams's "Identity-Analogy-Reason" hierarchy is the best lifehack by far. Believing things because they are true is not only immensely powerful but immensely comfortable. Want to quit your cognitive dissonance habit? You can. It's not easy or cheap, but it's not exactly risky. Continuous effort will succeed, period.

Back down a meta level, this one works by comforting the mark who wants to give up the hard work of identifying truth. If you invest less in it, you'll know less of it. "I'm not being lazy, the job is futile." Problem: the job is not futile. Ultimately, every Sophistry is a lie. If it weren't a lie, it would be called epistemology instead.

Adams soothes his insecurities about his own life by controlling yours (Trump is probably similar), so he likes to spread ideas that make controlling you easier.

"For example, Ted Cruz and Richard Dawkins believe totally different things about reality and yet both can use an ATM, shop at a store, and procreate."
Conflation.
Believing in true things doesn't necessarily mean rejecting everything with a fallacy in it. It merely means rejecting the fallacious argument as a reason to believe. On personally-relevant but complex questions, a gut instinct can be more reliable than formal reasoning, especially early in your training. E.g. many studies come out condemning meat, but you just like meat a lot. In the end, it turns out the studies were wrong, and your gut (literally, in this case) was right, even if you couldn't figure out how.

This one's just wrong though. Cruz and Dawkins can both use an ATM because they don't believe different things about using an ATM, so there's your lie.

"So your filter on reality need not be related to any actual underlying reality in order to keep you alive. It just has to NOT kill you."
Truth: very poor models of the world are often good enough.
Lie: not killing you is sufficient. In fact, the model also has to reproduce.
Behind the curtain: perhaps you want more out of life than successfully reproducing your mental model. You are in competition with other models for these goals, because resources are scarce. Merely not killing you is not enough to succeed.
Truth: if your conscious model is whack enough, the subconscious will assume control by manipulating you, making the model all but irrelevant. E.g. Muslims in fact eat pork and cheat at Ramadan, as long as nobody is looking. The belief-alief distinction is absolutely critical.

Notice how much more complicated the truth is. As per Moldbug, once an idea is in there, if it wanted to come out it would have done so on its own. Once such a sophistry is planted, the truth is often too complicated to swap in, so the sophistry fossilizes there. If nothing else, the hobbit will lose patience with the explanation. And why wouldn't they? It's all talk, it has nothing to do with their job. Thinking is hard, and there's no ROI here.

"At the bottom of the chart we have what I call a Social Filter. That involves two or more people lying to each other in ways that society expects them to lie."
A very sophisticated piece of sophistry. It's really good, you have to admire the craftsmanship.

Virtue signalling. It makes you feel good when someone says they'll pretend your lies aren't lies.
The theory is predictive, which is grit in the gears of Adams's mind-control.
Finally, he misrepresents the theory in a plausible way. The whole point is we don't expect them to lie that way. We don't naturally assume Iowa's election was rigged until proven otherwise.

Notice the almost-Xanatos-gambit nature. If you simply believe him, it's easy. If you buy the form of the theory, it will stop being predictive, meaning you'll slump back into moist robot mode, ceding control to Adams and people like him. (Me, for instance.) Even if you realize the theory's been lied about, you might not talk about it because you, too, realize there's virtue to be signalled here. Not to mention competitive advantage if you can fool one of your competitors.

The problem is that it's impossible to do a full Xanatos gambit when the truth is your enemy. The truth is eternal; it literally cannot be defeated. Anyone who genuinely wants it can borrow its endless power.

"This is the dumbest and least predictive filter, but the one you see most often because of social necessity."
Truth: it is dumb, and unpredictive, and socially necessary.
Lie: this is in fact the theory at hand.

"One level up from the Social Filter is where the pundits and candidates try to live. This group takes the “high ground” and understands that all candidates are lying during a campaign."
My brain rejected this whole section as noise. How about yours?
"The next level up is the Aspirational/Imaginary Reason filter."
More noise for a bit.
"Do you believe in climate change science? How about the existence of a gender pay gap? The people on both sides are certain the science is with them."
Truth: they believe science is with them.
Lie: this means science is in fact with them.

Small digression...
"I need to add one level to the BOTTOM of the persuasion stack. That level involves arguing about the definition of a word."
Using definitions correctly is one of the most powerful tools for clarity of thought. Naturally, Adams doesn't want you using them.
Truth: When NRO decided to argue about definitions, they capitulated.
Lie: It's not because they were using definitions. It's because they were capitulating, and their style of definition use was a tell. Using definitions requires discipline. NRO is just, like, not discipline. NRO&discipline = false.

Back now, let's use definitions correctly.
"start of social lie —
Science is the best way to understand our reality. The scientific method is the best tool we have for predicting the future."
This is phrased as a forward definition: "Science, whose meaning we all agree on, happens to predict and understand well." It's actually a backwards definition: "Whatever predicts and understands well is science." (Also, lie: we agree on what science is. Adams must know this is a lie, because it's contradicted by the fact that he knows both climate apologists and vaccine deniers think they have science on their side.)

Obviously, both sides can't be correct that the science is with them. It's hardly shocking to propose that this means the science isn't with them. It's perhaps slightly shocking to propose neither side is using science. This does not mean science isn't predictive: it's predictive by definition. You can argue science is not possible if you want, but if you do I'll show you an airplane.

Basically Adams is not your friend. He sure likes to pretend, though. He'd get along well with Al-Ghazali. Spread their ideas too much and you stop being able to build airplanes.

"At the top level, we have the Moist Robot filter. This is the subject of my book, and the basis for my Trump predictions that have been accurate except for one rigged election in Iowa. (And I should have seen that coming.)"
Truth: the persuasion filter is a predictive filter.
Lie: Moist Robot is the basis for his predictions. You do not have to be mind-controlled by Adams if you don't want to be. It is less work, I'll grant.
"Under the Moist Robot filter, persuasion is everything, and free will is an illusion."
Truth: persuasion is important.
Lie: free will doesn't exist.
Free will is probably irrelevant. What matters is agency: can you jump out of that window, right now? Or can you not even make that decision? If you can make that decision, you can make any advantageous decision. If you can't make that decision, it's likely there are many, less pointless decisions you can't make either. Whether it was determined or chosen that you could or could not make these decisions is quite irrelevant to whether they would profit you or not.

The point is to comfort those who are worried that being mind-controlled is the lazy way out. "No, no, don't worry, all that work is futile."
"Reason is an illusion too."
Straight-up contradiction. "I'm highly predictive but predictions are an illusion." Obvious self-serving lie. Graceless and craftless.

"Under this way of thinking, anything that CAN be corrupted already is."
Truth: if it can be corrupted, it is already.
Lie: he predicted this due to the 'reason is illusion' model.
Lie: this model naturally leads to this inference. Not to mention, if reason is an illusion you can't make inferences.

Sunday, January 31, 2016

Exit, Shortest Version

What can become a parasite class will become a parasite class.
Any government without exit can become a parasite class.
Parasite classes raise their costs above their benefits.
Rebellion will come to have a higher expected value than submission, even accounting for loss aversion.
The parasite class will collapse.

Unless parasitism per se is attacked, in other words unless Exit is enshrined and sacralized, the cycle will repeat.
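Step four is just an inequality. A toy sketch in Python, every figure invented, with loss aversion as a crude Kahneman-Tversky-style weight: as extraction rises, the sure loss of submission eventually underperforms even a heavily loss-weighted gamble on rebellion.

```python
# Toy sketch, all numbers invented. Submission is a sure loss;
# rebellion is a gamble, with its downside weighted by loss aversion.
LOSS_AVERSION = 2.25  # losses loom roughly twice as large as gains

def ev_submission(extraction):
    # Submitting means eating the parasite class's costs for certain.
    return -LOSS_AVERSION * extraction

def ev_rebellion(p_win, prize, ruin):
    # Win the prize with probability p_win; eat a loss-weighted ruin otherwise.
    return p_win * prize - (1 - p_win) * LOSS_AVERSION * ruin

for extraction in range(0, 101, 20):
    sub = ev_submission(extraction)
    reb = ev_rebellion(p_win=0.2, prize=100.0, ruin=50.0)
    note = "  <- rebellion now pays" if reb > sub else ""
    print(f"extraction {extraction:3}: submit {sub:7.1f}, rebel {reb:7.1f}{note}")
```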

Thursday, January 21, 2016

Applied Prudence-Morality 3: Silent Trade

(Prime, Applied 1, Applied 2.)

ESR claims 'silent trade' is universal objective ethics. It is in fact merely prudence.

The cannon-toting Europeans could take all the goods off the beach and run. The cost is not getting future trades. Even cannibal half-savages don't cooperate with defectors. If the trader thinks before they act, they realize they end up richer if they continue to trade.

ESR also overlooks this:
The silent trade works because the sailors have supremacy at sea, the villagers supremacy on land.
Though that's not exactly true either. The EV of cannon-invasion is the expected gains from invasion minus the expected costs of invasion. Meaning war is profitable compared to trade only if war is cheap, even if victory is guaranteed. At that time, the cannon-boats would have had to pay for the war themselves, rather than having, e.g., you pay for it. Further, they'd then have to mine or find the gold themselves, rather than paying natives to do it for them. Ergo, trade.
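A minimal sketch of that arithmetic, with all figures invented: even a guaranteed victory is a one-shot payoff minus war and extraction costs, while trade is a repeated game.

```python
def ev_war(p_win, plunder, extraction_cost, war_cost):
    # One-shot: seize the goods, but pay for the war and the gold-digging
    # yourself, and forfeit future trades (nobody cooperates with defectors).
    return p_win * (plunder - extraction_cost) - war_cost

def ev_trade(profit_per_voyage, n_voyages):
    # Repeated game: each peaceful voyage keeps the beach open for the next.
    return profit_per_voyage * n_voyages

print("guaranteed victory, free war:", ev_war(1.0, plunder=100, extraction_cost=40, war_cost=0))
print("ten peaceful voyages at 20:  ", ev_trade(20, 10))
```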



Tuesday, January 19, 2016

Fisking Affective Heuristic

Misleading.
I'm doing this to measure how misleading. You can too, but it's done by comparing volume of edits to the original, which means reading the original.

Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.
Start looking for evidence Yudkowsky distinguishes between tuned and untuned heuristics.

This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn't protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock.
...which is how insurance protects things. Further, if the clock is lost, no insurance can rediscover it. For a visual: I can't unsmash Grandfather's clock, I can only buy a new one that looks similar.
And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.
Consider: you'd be quite upset if the first clock were lost, and not particularly upset about the second. Would a cool hundred make you feel better? Remember, when stressed, most will take that stress out on others, leading to a cycle of violence. (Partly because cities have high baseline stress - there's a threshold.)

Not that I'm necessarily defending the wisdom of this particular decision; I'm attacking this dismissal of it.
Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.
Yes, maybe you could get away with claiming to have foreseen my objection.

Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.  Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.  
About that.
Only 52% could do item AB30901, which is to look at a table on page 118 of the 1980 World Almanac and answer:
According to the chart, did U.S. exports of oil (petroleum) increase or decrease between 1976 and 1978?
Are we sure subjects understand percentages? Given literacy isn't binary, did you check whether subjects who do understand percentages can be bothered to work it out?
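The conversion the subjects were trusted to perform, using the quoted study's own figures:

```python
count_framed = 1_286 / 10_000   # "1,286 out of every 10,000" -> 12.86%
percent_framed = 24.14 / 100    # "24.14% likely to be fatal"

print(f"'thousand dead bodies' disease: {count_framed:.2%} fatal")
print(f"abstract-percentage disease:    {percent_framed:.2%} fatal")
print(f"the 'scarier' one is {percent_framed / count_framed:.2f}x less lethal")
```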

The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale.  Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.
Yes, I would predict the same.
I would predict that subjects convert subjective appreciation onto the objective scale. More intense appreciation will result in higher scores. It gets them out of the test the quickest.

Subjects don't think so fuzzily about stuff that will concretely impact them. Try asking for a raise while saying something like, "A raise of [your target] increased productivity by 95% of the difference between no raise and the maximum raise," and see how that works out for you.
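For the record, the step of arithmetic the framing buries, with the study's own numbers:

```python
lives_plain = 150            # "save 150 lives", mean support 10.4
lives_percent = 0.98 * 150   # "save 98% of 150 lives", mean support 13.6

print(f"plain framing:   {lives_plain} lives saved")
print(f"percent framing: {lives_percent:.0f} lives saved (fewer!), yet rated higher")
```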

Or consider the report of Denes-Raj and Epstein (1994):  Subjects offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl, often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.  E.g., 7 in 100 was preferred to 1 in 10.
About that.
Notice larger number -> stop thinking, because it's worth losing a few bucks now and then to stop thinking earlier.
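Pricing the shortcut, assuming the study's $1-per-red-bean payoff:

```python
prize = 1.00
p_preferred = 7 / 100  # more red beans, worse odds
p_correct = 1 / 10     # fewer red beans, better odds

print(f"EV of preferred bowl: ${prize * p_preferred:.2f} per draw")
print(f"EV of correct bowl:   ${prize * p_correct:.2f} per draw")
print(f"price of not thinking: ${prize * (p_correct - p_preferred):.2f} per draw")
```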

According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans.
There are a couple of things that could be going on here. Naturally, Denes-Raj and Epstein did not ask the necessary discriminating questions.

First, zombies. They know they're supposed to say "the probabilities are against me" but don't know what that means.

Second, they do know what it means but it's too much trouble to remember for a few bucks.

Third, a few bucks is less rewarding than playing a more exciting game. Words != aliefs. Working out what they were thinking, based on what they said they were thinking, is highly nontrivial.

You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.
Yudkowsky defines 'rationality' as 'winning.' Those who don't think about probability are having more surviving children than those who do.

Unfortunately a real study would be costly. Have to do a survey of notable life setbacks. Find ones that could plausibly be affected by probability. Then, a longitudinal study, using IQ-matched controls, where one side is taught probability. (Not using common core.) See if the chosen setbacks occur less often. It's not rational to do work when you can get grants to not do any work.

Nonetheless, Finucane et. al. found that for nuclear reactors, natural gas, and food preservatives, presenting information about high benefits made people perceive lower risks; presenting information about higher risks made people perceive lower benefits; and so on across the quadrants.
Subjects are rounding off 'high risk' to 'don't do it' and 'low risk' to 'do it.' Let's do a little search/replace.

"presenting information about do it made people perceive do it; presenting information about don't do it made people perceive don't do it."

A thrilling result.

Ganzach (2001) found the same effect in the realm of finance.  According to ordinary economic theory, return and risk should correlate positively—or to put it another way, people pay a premium price for safe investments, which lowers the return; stocks deliver higher returns than bonds, but have correspondingly greater risk.  When judging familiar stocks, analysts' judgments of risks and returns were positively correlated, as conventionally predicted.  But when judging unfamiliar stocks, analysts tended to judge the stocks as if they were generally good or generally bad—low risk and high returns, or high risk and low returns.
So we've found, at least in this instance, that subjects will use this silly low-cost algorithm for zero-stakes, zero-cost verbal answers, but not at, for example, their job.

I wonder why caps-r Rationality isn't more popular.



Did you find any evidence Yudkowsky discriminates between tuned and untuned heuristics?

Some related subjects: thinking rationally vs. haha proles. Emotion vs. logic. Algorithm vs. consciousness.

Since today's economists (except of course the Austrian School) have abandoned the apparently unfashionable concept of causality in favor of the reassuringly autistic positivism of pure statistical correlation, it has escaped their attention that when you stop shooting heroin, you feel awful. 
Can't use autistic correlation to substitute for empathy, either, it turns out.

Monday, January 11, 2016

Update to Progressivism Diagnostically

(Previous.) Intentions determine outcome. If you intend good, good will result. If you intend bad, bad will result. (Everyone is a wizard, and spells don't fizzle.)

So, why [communism has never been tried]? Communism intends good. The USSR's and Pol Pot's and Venezuela's outcomes were bad, ergo, they must have had malign intent. Can't you see the contradiction?

This does not apply to Musrabians, they don't have agency. Only white males have agency.

Since only white males have agency, they rule the world. Since white males control everything, the world is as white males intend. There are bad outcomes, and therefore white males must be full of malice.

Sunday, January 3, 2016

Herds and A Minor Libertarianism Skirmish

I hate the idea of “losing myself” in a crowd.
Do you trust yourself more, or do you trust the crowd more?
It gave us looting and destruction during what started as a protest about the death of a young man in Tottenham
Notably, 'lower status' doesn't mean 'less trustworthy.' Destruction is low status, but you may trust yourself even less than crowds.

Problem: elitism. Elitism is naturally unpopular, and thus ineffective in non-exclusive publications. What if, instead of a generic crowd, I could lose myself in a crowd of clones of me? Necessarily, such a crowd would be smarter than me by myself. Crowds are dumb because they're made of dumber people. Dumber than those who write articles fluently, at least, and thus dumber than anyone who's in a position to end up telling us that crowds are untrustworthy.
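A sketch of the clone-crowd claim, under the assumed model that each clone's judgment is the truth plus independent noise; averaging cancels the noise without adding anyone dumber to the pool:

```python
import random

TRUTH, NOISE = 100.0, 15.0  # assumed: judgment = truth + gaussian error

def mean_error(n_clones, trials, rng):
    total = 0.0
    for _ in range(trials):
        estimate = sum(TRUTH + rng.gauss(0, NOISE) for _ in range(n_clones)) / n_clones
        total += abs(estimate - TRUTH)
    return total / trials

rng = random.Random(0)
for n in (1, 10, 100):
    print(f"{n:3} clones: mean error {mean_error(n, trials=2000, rng=rng):5.2f}")
```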
Herd mentality – in all its forms, both ancient and modern – is probably the thing that frightens me most in the world.
It may still be true that crowds can express thoughts dumber than their average participant. I will hold as obvious that a crowd of my clones wouldn't act like any crowd you've seen. How about your clones?
On a mundane level, however, it gives us neither of these; it simply endorses mediocrity and prevents us from thinking
--
Until recently, I believed that the fight for equality would herald a new age of empathy, diversity and understanding.
ProTip: if you want empathy, fight for empathy. Might as well fight for nachos because you want empathy.
 For example, according to most of the Americans that I have met online, to be accepted as a “libertarian” then I have to be in favour of guns.
Err, is this about acceptance or about the law of identity? Yes, if you want to be a libertarian, you can't be hoplophobic. That's part of the definition. (Judging by the rest of the article, it seems the thrust is virtue-signalling at the expense of libertarians, not herds per se. Pity.)
 (held by most sane individuals on this side of the Atlantic, including the majority of our own police force)
Sorry dear, they're not sane. Which means neither are you. Luckily insanity is highly compartmentalized. This prevents it from spreading. While it also prevents sanity from spreading, spreading sanity is painful, whereas spreading falsehood generally isn't, so this is normally good.
Another commonly displayed “libertarian” approach that I struggle to respect is the puerile desire to offend
Two probable problems: taking status as binary and 'respecting' vs. 'not persecuting.'
The correct libertarian response to disliking offence given for the purpose of giving offence is to Exit. Cost/benefit that shizzle. Exit with requisite granularity. Maybe suggest increasing granularity, so less baby gets thrown out with the bathwater. If necessary, be libertarian-adjacent.

Second, you are in fact allowed to like people who have unsavoury habits. You know, despite those habits.

She could also mean being forced to put up with it, in which case it's libertarians betraying their own principles.

By the way: cunt, nigger. We can say 'fuck' and 'hell' now. How liberating.
 Maybe I am still that little girl on the edges of the playground
I'm totes gunna signal acceptance of my fallibility. I don't accept it at all, but you know, signaling is so cheap, I just have to!
preening contrarians whose sole function is to cause shock and awe, their tweets a heady mix of clickbait, worthless insults and self-aggrandizement.
Self-fulfilling belief. Think this and it becomes impossible to learn better. It's fine if it's true, but...

So, third probable problem: 'giving offence to etiquette' vs. 'giving offence to those who wish to exert control by being offended.' See my final response to the third last quote. Being deliberately offensive to the politically-offended serves the purpose of driving the faggots off and signalling unwillingness to bow to such cunty tactics.

I hope Williams isn't a feminist. Supposed to be fighting the stereotype of women as prudes.
 

Applied Prudence-Morality: Justice Factory

Injustice: As a system state, wickedness is rewarded and kindness is punished.

Abstract: Prudence morality implies justice and justice implies prudence morality.

Justice, especially in the short term, is not a naturally occurring element. It must be consciously produced and maintained. A justice factory consumes resources to ensure the kind are rewarded. This is necessarily a just act, meaning the justice factory too must be rewarded. As a result, a just society is one where justice is prudent.

The justice factory cannot reward itself because it is not a perpetual motion machine. Appealing to a sovereign for justice is thus imprudent, as it is attempting to create perpetual motion. Sovereignty and justice are independent; the sovereign can reward themselves because they are sovereign, not because they are just. Similarly this is the fundamental reason why Mandate of Heaven cannot obtain at the object level; it is a fantasy, hoping that justice and sovereignty are magically linked, instead of having to be linked by intentional and ongoing effort. If you expect God to do all the work, at least have the decency to pray for it.
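A minimal sketch of the perpetual-motion point, in invented units. A payment from the factory to itself nets to zero, so the self-rewarding case is just an external reward of zero:

```python
def run_factory(external_reward, production_cost=10, budget=50, years=6):
    # Producing justice always costs something; only an outside reward
    # (society's) can replace it. A self-payment cancels out.
    for year in range(years):
        budget += external_reward - production_cost
        print(f"  year {year}: budget {budget}")

print("self-rewarding factory (no perpetual motion):")
run_factory(external_reward=0)
print("society-rewarded factory:")
run_factory(external_reward=12)
```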

Instead the 'just society' is the exactly correct wording. The Justice must be voluntarily rewarded by the society which the Justice maintains justice for, because they are just. The hobbits must worship and sacrifice at a worldly temple. (Typo: wordly temple. Get it?)

The Justice does not necessarily have to be rewarded monetarily. (Though this choice restricts Justices to the independently wealthy.) The Justice can also be rewarded socio-morally. Perhaps Justices have the right to lead rebellions. Not the legal right, you understand. Again, the Justice giving themselves legal rights is a perpetual motion machine: it cannot be done. It is the moral right, or the customary right, or the social right. Would personally name it the philosophical right and call it a day.