Sunday, January 31, 2016

Exit, Shortest Version

What can become a parasite class will become a parasite class.
Any government without exit can become a parasite class.
Parasite classes raise their costs above their benefits.
Rebellion will come to have a higher expected value than submission, even accounting for loss aversion.
The parasite class will collapse.

Unless parasitism per se is attacked, in other words unless Exit is enshrined and sacralized, the cycle will repeat.

Thursday, January 21, 2016

Applied Prudence-Morality 3: Silent Trade

(Prime, Applied 1, Applied 2.)

ESR claims 'silent trade' is universal objective ethics. It is in fact merely prudence.

The cannon-toting Europeans could take all the goods off the beach and run. The cost is not getting future trades. Even cannibal half-savages don't cooperate with defectors. If the trader thinks before they act, they realize they end up richer if they continue to trade.

ESR also overlooks this:
The silent trade works because the sailors have supremacy at sea, the villagers supremacy on land.
Though that's not exactly true either. The EV of cannon-invasion is the gains from invasion minus the risk-weighted costs of invasion. Meaning war is profitable compared to trade only if war is cheap, even if victory is guaranteed. At that time, the cannon-boats would have had to pay for the war themselves, rather than having, e.g., you pay for it. Further, they'd then have to mine or find the gold themselves, rather than paying natives to do it for them. Ergo, trade.
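
To put rough numbers on it (mine, purely illustrative), a minimal sketch in Python: even with victory guaranteed, invasion loses to trade once the war is paid for out of your own hold and all future trades are forfeit.

    # Hypothetical numbers, purely to illustrate the expected-value comparison above.
    def ev_trade(profit_per_voyage, n_future_voyages):
        # Trade: modest profit per voyage, repeatable.
        return profit_per_voyage * n_future_voyages

    def ev_invasion(loot, war_cost, p_resistance, extra_cost_if_resisted):
        # Invasion: one-time haul, minus war costs paid out of your own pocket,
        # minus the risk-weighted cost of things going badly. All future trades lost.
        return loot - (war_cost + p_resistance * extra_cost_if_resisted)

    print(ev_trade(profit_per_voyage=10, n_future_voyages=20))   # 200
    print(ev_invasion(loot=100, war_cost=40,
                      p_resistance=0.5, extra_cost_if_resisted=80))   # 20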



Tuesday, January 19, 2016

Fisking Affective Heuristic

Misleading.
I'm doing this to measure how misleading. You can too, but it's done by comparing volume of edits to the original, which means reading the original.

Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.
Start looking for evidence Yudkowsky distinguishes between tuned and untuned heuristics.

This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn't protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock.
...which is how insurance protects things. Further, if the clock is lost, no insurance can rediscover it. For a visual: I can't unsmash Grandfather's clock, I can only buy a new one that looks similar.
And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.
Consider you'd be quite upset if the first clock is lost, and not particularly upset about the second. Would a cool hundred make you feel better? Remember, when stressed, most will take that stress out on others, leading to a cycle of violence. (Partly because cities have high baseline stress - there's a threshold.)

Not that I'm necessarily defending the wisdom of this particular decision; I'm attacking this dismissal of it.
Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.
Yes, maybe you could get away with claiming to have foreseen my objection.

Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.  Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.  
About that.
Only 52% could do item AB30901, which is to look at a table on page 118 of the 1980 World Almanac and answer:
According to the chart, did U.S. exports of oil (petroleum) increase or decrease between 1976 and 1978?
Are we sure subjects understand percentages? Given literacy isn't binary, did you check whether subjects who do understand percentages can be bothered to work it out?
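
For reference, the conversion the subjects would have to bother doing (figures from the quote):

    # Put both diseases on the same scale.
    fatality_a = 1286 / 10000    # 0.1286, i.e. 12.86%
    fatality_b = 0.2414          # 24.14%
    print(fatality_a < fatality_b)   # True: the 'scarier' disease is roughly half as deadly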

The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale.  Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.
Yes, I would predict the same.
I would predict that subjects convert subjective appreciation onto the objective scale. More intense appreciation will result in higher scores. It gets them out of the test the quickest.

Subjects don't think so fuzzily about stuff that will concretely impact them. Try asking for a raise while saying something like, "A raise of [your target] would increase productivity by 95% of the difference between no raise and the maximum raise," and see how that works out for you.

Or consider the report of Denes-Raj and Epstein (1994):  Subjects offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl, often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.  E.g., 7 in 100 was preferred to 1 in 10.
About that.
Notice larger number -> stop thinking, because it's worth losing a few bucks now and then to stop thinking earlier.
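
The price of stopping early, per draw, using the quoted bowls:

    # Expected value per draw for each bowl; payout is $1 per red bean.
    ev_small_bowl = (1 / 10) * 1.00    # $0.10
    ev_big_bowl = (7 / 100) * 1.00     # $0.07
    print(ev_small_bowl - ev_big_bowl)   # $0.03 per draw forfeited by not thinking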

According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans.
There are a couple of things that could be going on here. Naturally, Denes-Raj and Epstein did not ask the necessary discriminating questions.

First, zombies. They know they're supposed to say "the probabilities are against me" but don't know what that means.

Second, they do know what it means but it's too much trouble to remember for a few bucks.

Third, a few bucks is less rewarding than playing a more exciting game. Words != aliefs. Working out what they were thinking, based on what they said they were thinking, is highly nontrivial.

You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.
Yudkowsky defines 'rationality' as 'winning.' Those who don't think about probability are having more surviving children than those who do.

Unfortunately a real study would be costly. Have to do a survey of notable life setbacks, find ones that could plausibly be affected by probability, then run a longitudinal study, using IQ-matched controls, where one side is taught probability. (Not using Common Core.) See if the chosen setbacks occur less often. It's not rational to do work when you can get grants to not do any work.

Nonetheless, Finucane et. al. found that for nuclear reactors, natural gas, and food preservatives, presenting information about high benefits made people perceive lower risks; presenting information about higher risks made people perceive lower benefits; and so on across the quadrants.
Subjects are rounding off 'high risk' and 'low benefit' to 'don't do it,' and 'high benefit' and 'low risk' to 'do it.' Let's do a little search/replace.

"presenting information about do it made people perceive do it; presenting information about don't do it made people perceive don't do it."

A thrilling result.

Ganzach (2001) found the same effect in the realm of finance.  According to ordinary economic theory, return and risk should correlate positively—or to put it another way, people pay a premium price for safe investments, which lowers the return; stocks deliver higher returns than bonds, but have correspondingly greater risk.  When judging familiar stocks, analysts' judgments of risks and returns were positively correlated, as conventionally predicted.  But when judging unfamiliar stocks, analysts tended to judge the stocks as if they were generally good or generally bad—low risk and high returns, or high risk and low returns.
So we've found, at least in this instance, that subjects will use this silly low-cost algorithm for zero-stakes, zero-cost verbal answers, but not at, for example, their job.

I wonder why caps-r Rationality isn't more popular.



Did you find any evidence Yudkowsky discriminates between tuned and untuned heuristics?

Some related subjects: thinking rationally vs. haha proles. Emotion vs. logic. Algorithm vs. consciousness.

Since today's economists (except of course the Austrian School) have abandoned the apparently unfashionable concept of causality in favor of the reassuringly autistic positivism of pure statistical correlation, it has escaped their attention that when you stop shooting heroin, you feel awful. 
Can't use autistic correlation to substitute for empathy, either, it turns out.

Monday, January 11, 2016

Update to Progressivism Diagnostically

(Previous.) Intentions determine outcome. If you intend good, good will result. If you intend bad, bad will result. (Everyone is a wizard, and spells don't fizzle.)

So, why [communism has never been tried]? Communism intends good. The USSR's and Pol Pot's and Venezuela's outcomes were bad, ergo, they must have had malign intent. Can't you see the contradiction?

This does not apply to Musrabians; they don't have agency. Only white males have agency.

Since only white males have agency, they rule the world. Since white males control everything, the world is as white males intend. There are bad outcomes, and therefore white males must be full of malice.

Sunday, January 3, 2016

Herds and A Minor Libertarianism Skirmish

I hate the idea of “losing myself” in a crowd.
Do you trust yourself more, or do you trust the crowd more?
It gave us looting and destruction during what started as a protest about the death of a young man in Tottenham
Notably, 'lower status' doesn't mean 'less trustworthy.' Destruction is low status, but you may trust yourself even less than crowds.

Problem: elitism. Elitism is naturally unpopular, and thus ineffective in non-exclusive publications. What if instead of a generic crowd, I could lose myself in a crowd of clones of me? Necessarily, such a crowd would be smarter than me by myself. Crowds are dumb because they're made of dumber people. Dumber than those who write articles fluently, at least, and thus dumber than anyone who's in position to end up telling us that crowds are untrustworthy.
Herd mentality – in all its forms, both ancient and modern – is probably the thing that frightens me most in the world.
It may still be true that crowds can express thoughts dumber than their average participant. I will hold as obvious that a crowd of my clones wouldn't act like any crowd you've seen. How about your clones?
On a mundane level, however, it gives us neither of these; it simply endorses mediocrity and prevents us from thinking
--
Until recently, I believed that the fight for equality would herald a new age of empathy, diversity and understanding.
ProTip: if you want empathy, fight for empathy. Might as well fight for nachos because you want empathy.
 For example, according to most of the Americans that I have met online, to be accepted as a “libertarian” then I have to be in favour of guns.
Err, is this about acceptance or about the law of identity? Yes, if you want to be a libertarian, you can't be hoplophobic. That's part of the definition. (Judging by the rest of the article, it seems the thrust is virtue-signalling at the expense of libertarians, not herds per se. Pity.)
 (held by most sane individuals on this side of the Atlantic, including the majority of our own police force)
Sorry dear, they're not sane. Which means neither are you. Luckily sanity is highly compartmentalized. This prevents the insanity from spreading. While it also prevents sanity from spreading, spreading sanity is painful, whereas spreading falsehood generally isn't, so this is normally good.
Another commonly displayed “libertarian” approach that I struggle to respect is the puerile desire to offend
Two probable problems: taking status as binary and 'respecting' vs. 'not persecuting.'
The correct libertarian response to disliking offence given for the purpose of giving offence is to Exit. Cost/benefit that shizzle. Exit with requisite granularity. Maybe suggest increasing granularity to have to throw out less baby with bathwater. If necessary, be libertarian-adjacent.

Second, you are in fact allowed to like people who have unsavoury habits. You know, despite those habits.

She could also mean being forced to put up with it, in which case it's libertarians betraying their own principles.

By the way: cunt, nigger. We can say 'fuck' and 'hell' now. How liberating.
 Maybe I am still that little girl on the edges of the playground
I'm totes gunna signal acceptance of my fallibility. I don't accept it at all, but you know, signaling is so cheap, I just have to!
preening contrarians whose sole function is to cause shock and awe, their tweets a heady mix of clickbait, worthless insults and self-aggrandizement.
Self-fulfilling belief. Think this and it becomes impossible to learn better. It's fine if it's true, but...

So, third probable problem: 'giving offence to etiquette' vs. 'giving offence to those who wish to exert control by being offended.' See my final response to the third last quote. Being deliberately offensive to the politically-offended serves the purpose of driving the faggots off and signalling unwillingness to bow to such cunty tactics.

I hope Williams isn't a feminist. Supposed to be fighting the stereotype of women as prudes.
 

Applied Prudence-Morality: Justice Factory

Injustice: As a system state, wickedness is rewarded and kindness is punished.

Abstract: Prudence morality implies justice and justice implies prudence morality.

Justice, especially in the short term, is not a naturally occurring element. It must be consciously produced and maintained. A justice factory consumes resources to ensure the kind are rewarded. This is necessarily a just act, meaning the justice factory too must be rewarded. As a result, a just society is one where justice is prudent.

The justice factory cannot reward itself because it is not a perpetual motion machine. Appealing to a sovereign for justice is thus imprudent, as it is attempting to create perpetual motion. Sovereignty and justice are independent; the sovereign can reward themselves because they are sovereign, not because they are just. Similarly, this is the fundamental reason why the Mandate of Heaven cannot obtain at the object level; it is a fantasy, hoping that justice and sovereignty are magically linked, instead of having to be linked by intentional and ongoing effort. If you expect God to do all the work, at least have the decency to pray for it.

Instead, 'just society' is exactly the correct wording. The Justice must be voluntarily rewarded by the society for which the Justice maintains justice, because they are just. The hobbits must worship and sacrifice at a worldly temple. (Typo: wordly temple. Get it?)

The Justice does not necessarily have to be rewarded monetarily. (Though this choice restricts Justices to the independently wealthy.) The Justice can also be rewarded socio-morally. Perhaps Justices have the right to lead rebellions. Not the legal right, you understand. Again, the Justice giving themselves legal rights is a perpetual motion machine: it cannot be done. It is the moral right, or the customary right, or the social right. Would personally name it the philosophical right and call it a day.

Tuesday, December 29, 2015

Physics Audit, Brightness Theorem: Solar Furnaces Hotter Than Sol

Today we audit mainstream physics and find it wanting. It is delicious. Some moron establishes the establishment view:
No, you idiot.

Spoiler: forget all the fancy stuff. Get a mirror to reflect sunlight onto the sun. Less energy is escaping, so it has to heat up. What temperature you optically 'see' is a red herring at best. Yes, you can absolutely use mirrors and lenses to make a solar furnace hotter than the sun.

Joules/area/second is (part of) the definition of heat. The above moron is simply contradicting themselves. This is probably why academics love academese so much - if you say something stupid in clear language, then the stupidity is clear. We all say stupid things sometimes, but some of us are more willing to admit it than others.

To think clearly enough to devalue experiment, it's necessary to consider all the factors. For collecting all the considerations, highly mistaken experts are perfectly adequate. I'm only disappointed that "brightness theorem" doesn't seem to have a Wikipedia page, so I can't determine if it's pre-war or post-war science. (I predict it's post-war; I don't think pre-war scientists made such glaring mistakes.) For those who prefer optimism to cynicism, it's time to hope the theorem is confined to the lower-class physicists who must resort to writing textbooks, which is why it has no official entry.

1. Are optics really passive?
2. Is area a factor or a non-factor?
3. What's with joule-per-second rates?
4. Can you in fact build a perpetual motion machine with it?
5. Does the sun's finite size prevent it from being focused onto a small image?
6. Is there an effective temperature limit, which the optics 'see'?

As it turns out, the xkcd forum-goers lgw and Xanthir are correct. Area is a factor, and you can trade radiative intensity for radiative area. Minerva, Quaanol, elasto, and Taas are rationalizing beliefs handed down by sacred, unalterable authority. There are some other people, but they seem too confused to even reliably categorize as on-topic.

If we surround Sol with an ideal ellipsoidal Dyson mirror with one focus at Sol and the other at Mercury, removing all the annoying debris, then all of Sol's light will be focused on Mercury, and vice-versa, and they'll reach thermal equilibrium. However, Mercury, necessarily, will be emitting more photons/area/second. Or, equivalently, higher-energy photons. The only way to do this is if Mercury is at a higher temperature. Seems we're done here, but let's try to destructively test it. Some mistaken lines of thought: what if I get it to emit more photons by having more high-energy molecules, without raising the average? T is average kinetic energy, so that's a contradiction - I have posited it's hotter while staying cold. Maybe molecules stop absorbing photons, becoming reflective? First, they don't; second, reflection still involves a transfer of energy: Newton 3. One non-mistaken line of thought: if we have two Sol-equivalent lens targets and layer them over each other, the power must be higher, meaning the target has to dump more energy at equilibrium than a target hit by only one.
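
For the record, here's the energy balance under that idealization, in Python, using the Stefan-Boltzmann law and rounded figures for Sol and Mercury. The assumption that the mirror delivers every last photon is the gedanken setup above; the specific numbers are mine.

    import math

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
    T_SUN = 5800.0        # K, rounded photospheric temperature
    R_SUN = 6.96e8        # m
    R_MERCURY = 2.44e6    # m

    # Sol's total output: P = sigma * A * T^4 over its whole surface.
    p_sun = SIGMA * 4 * math.pi * R_SUN**2 * T_SUN**4

    # At equilibrium Mercury must re-radiate all of it from its smaller surface:
    # sigma * 4*pi*R_merc^2 * T_merc^4 = p_sun, i.e. T_merc = T_sun * sqrt(R_sun / R_merc).
    t_mercury = (p_sun / (SIGMA * 4 * math.pi * R_MERCURY**2)) ** 0.25
    print(f"{t_mercury:.0f} K")   # roughly 98,000 K under this idealization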

It is true that both Sol and Mercury have finite size. It is possible that the inefficiency of the Dyson mirror at, say, Oort cloud-radius would somehow misplace enough photons from Sol to prevent Mercury from getting too hot. After all, it's focused at Sol's core; it can't be focused at all the various points of Sol's surface. But it seems unlikely. While there's a finite size that Sol's image can be resolved down to, I've resolved such an image and it was smaller than Mercury. Nevertheless, to be certain, I'd have to do math, and I'm lazy. Let's put the mirror at infinity instead, so Sol appears to be a point source. Problem: solved. Sure, I've now removed the entire universe as 'annoying debris' and cancelled the latest season of space expansion, but you can do that for cheap in gedankenland.

Even in the ideal case, two things prevent this from being a perpetual motion machine. First, to optically focus Mercury's higher-temperature light back onto Sol requires not simply an ideal lens, but a magic lens. No matter where it is, the lens will disrupt the mirror's focus. It would have to be a daemonic lens that dodges into hyperspace when it sees solar photons but comes back when it sees mercurial photons. Second, we can only heat objects smaller than Mercury to something that's hotter than Mercury. Maybe we could heat a small circle of Sol's surface with our daemon lenses, but then we'd have to heat an even smaller circle of Mercury with the resulting increased radiation. We cannot use Mercury to in turn heat Sol hotter than Sol.

That said, because we can so manipulate temperature, we don't need a full Dyson mirror. The temperature of the smaller object will be a direct function of the angular coverage of the mirror and an inverse function of the area of the object, meaning if we have a smaller mirror or fewer lenses, to get superhot we only need to choose a smaller target. Half-efficiency path, half-area target. Though notably a half-Dyson-ellipsoid or half-silvered Dyson would be roughly quarter efficiency, since we lose half of the energy on the way in and then half again on the way back. Nevertheless, we're talking engineering here. It's most probably entirely feasible to make a 6000K+ solar furnace right now with real budgets and real materials.
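
The same balance, generalized to a partial mirror; the coverage fraction, path efficiency, and target area are knobs I'm inventing for the sketch, not anything measured.

    import math

    SIGMA = 5.670e-8      # W / (m^2 K^4)
    T_SUN = 5800.0        # K
    R_SUN = 6.96e8        # m

    def target_temperature(coverage_fraction, target_area_m2, path_efficiency=1.0):
        # The target receives coverage_fraction * path_efficiency of Sol's total
        # output and re-radiates it from target_area_m2 at equilibrium.
        p_delivered = (coverage_fraction * path_efficiency
                       * SIGMA * 4 * math.pi * R_SUN**2 * T_SUN**4)
        return (p_delivered / (SIGMA * target_area_m2)) ** 0.25

    # Half the angular coverage still gets superhot if the target is small enough.
    print(target_temperature(0.5, target_area_m2=1e12))
    # Net quarter of the energy arriving, per the half-silvered case above.
    print(target_temperature(0.25, target_area_m2=1e12))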

There are two further non-idealisms. When we focus Sol onto Mercury, entropy must increase - we end up with more photons flying around in the space between them, each with an energy budget. To oversimplify, we're heating the vacuum. For bonus perspective, imagine a non-energy-generating heat blob, alone except surrounded by a perfect mirror. The system won't lose energy because mirror, but its equilibrium temperature will be lower than its starting temperature T0, since the blob has to fill the space between it and the mirror with photons. Even ideal optics aren't passive; they're an implicit heat pump. Non-ideal optics are even worse, since the mirror will heat up and radiate out the back, wasting energy that could be making Mercury hot. Lenses are no better - the ideal lens has a thickness of zero atoms. Good luck building that. Even if I had worked out that we could focus light from Sol onto a target and then back at Sol to break things with ideal lenses, it would just mean that real lenses would scatter and absorb more light than we were supposed to be getting out.
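
The blob-in-a-mirror-box aside, made concrete. The assumptions are all mine: a constant heat capacity for the blob, a cavity that starts empty of radiation, and an equilibrium photon gas of energy density a*T^4.

    # Energy conservation: C*T0 = C*T + a * T^4 * V, so the equilibrium T is below T0.
    A_RAD = 7.566e-16     # radiation constant, J / (m^3 K^4)
    C_BLOB = 1e5          # J/K, made-up heat capacity of the blob
    V_CAVITY = 1e9        # m^3, made-up cavity volume
    T0 = 5000.0           # K, starting temperature

    def surplus(t):
        # Energy the blob has given up by cooling to t, minus what the photon gas holds at t.
        return C_BLOB * (T0 - t) - A_RAD * t**4 * V_CAVITY

    lo, hi = 0.0, T0      # surplus is positive at 0 K and negative at T0, so bisect
    for _ in range(100):
        mid = (lo + hi) / 2
        if surplus(mid) > 0:
            lo = mid
        else:
            hi = mid
    print(f"{(lo + hi) / 2:.0f} K")   # about 3,650 K with these numbers: well below T0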

Second, to get a perpetual motion machine, not only do we have to get useful work by using Mercury as a heat source and Sol as a sink, we have to get enough useful work out to fission an alpha particle back into four protons, two electrons, and negative two neutrinos. This would require Sol to emit energy while being an infinite-sized 0K heat sink. Slightly impossible. Unlike our passive heat blob, Sol would increase in temperature if surrounded by mirrors. Seeing the sun's surface is not like seeing the hot surface of a passive object, which may be confusing our hidebound expert physicists. If we have to model Sol as an optical 'temperature' to be 'seen,' then the correct quantity depends somehow on the fusion reaction, which is not only hotter than the surface, but hotter than the core. Sadly can't find the wag who noted that you can divide total fusion-to-iron energy in Sol by total system mass. Spoiler: you get a very large number, not 6000K. Thinking about trying to hook a Carnot engine between Sol and Mercury hurts my brain, so I used a match as a model instead. As long as I got more work out than the radiative energy the match puts in, I could ignore the convection and declare perpetual motion. (That said, the mirror may have to be slightly magic, to avoid soot.) However, I didn't get much work; it was simply a slightly less abstract version of the above. Further, any real engine will have friction losses on the piston.
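
One way to cash out the wag's arithmetic: take the energy released per nucleon in fusing hydrogen all the way to iron (roughly 8.8 MeV, my figure, not the original's) and express it as a temperature via Boltzmann's constant.

    EV_TO_J = 1.602e-19             # J per eV
    K_B = 1.381e-23                 # Boltzmann constant, J/K
    energy_per_nucleon_ev = 8.8e6   # ~8.8 MeV per nucleon, H fused to Fe (assumed figure)

    t_effective = energy_per_nucleon_ev * EV_TO_J / K_B
    print(f"{t_effective:.1e} K")   # ~1e11 K: dwarfs the ~5.8e3 K surface and ~1.5e7 K core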

Finally, there are some points to make about power.
In normal heat conduction, more conductive materials (copper?) will deliver more power. This doesn't mean the target gets hotter; it means it gets hot faster. This is what the brightness theorem imagines would happen with a solar furnace. Simply put, light is not heat. In those terms, a real shocker. Heat doesn't travel in waves. There's no mirror or lens for heat, since there's no wave to guide. There's no heat double-slit experiment. Heat cannot destructively interfere, whereas laser cooling is a thing. Even if you could lens heat, since it's essentially velocity with no net direction, it would fail to be net focused. It would chill in the lens, having a cold one.