Sunday, March 27, 2016

Shall We Deprecate Social Status?

I used to believe that IQ didn't make you morally better.
Then I managed to verify moral nihilism. The above statement is isomorphic to, "I used to believe painting yourself orange didn't make you more widdershins."

There are still problems. IQ confers social status. Indeed 'morally better' is often a dog-whistle meaning 'higher social status.'

More troublingly, it makes perfect sense that it would. Higher-IQ individuals are more trustworthy and find it easier to cooperate, have longer time horizons, and generally commit fewer crimes. (Caveat: that may actually be 'get caught committing fewer crimes.') High IQ also confers better reflexes - e.g. it makes it easier to get inside the opponent's OODA loop. (Football players are not stupid; they're merely uninterested in, e.g., books.) IQ doesn't test domain knowledge, but it does test the ability to acquire domain knowledge. If you find a poor, stupid person, it is highly likely that their poverty is caused by their stupidity, which is comorbid with several other unsavoury traits - even in the unlikely event you can help them be less poor, you'll prefer to do so from a distance.

Moreover, this isn't due to 'the knowledge economy' or whatever modern myth is most popular these days. Clark's Malthusian grinder has been increasing IQ in Europe, and is likely the reason Hajnal Europe's IQ is significantly higher than that of the peoples of Asia Minor or northern Africa. Meaning, IQ has been directly improving life outcomes for centuries.

The above is a long-winded way of saying Yarvin is being utopian. He is advocating for the status equivalent of amorality. People of Earth! Lay down your respects, that we may lay down our contempts.
 For instance, if a smarter person was actually a better person, a court should take his testimony more seriously. He’s more likely to tell the truth, since he’s a better person.
Is your grandma a vampire? In this case, science (at least currently) says yes, your grandma is a vampire: higher-IQ folk in fact lie less. Ha....ha....oh shit.
 He’ll be a better husband and parent, since he’s a better person. Wat?
Clark's Malthusian grinder works via surviving children. I think 'surviving' is a decent proxy for 'better parent.' How about you?
Dumber folk are more likely to get divorced. (See also Charles Murray's Fishtown.) Again, whether we get divorced doesn't directly tell us whether we profit or suffer from having been married, but it gives us a good first guess.
IQism is the arrogant ideology of a live ruling elite. 50 years ago, the jocks and cheerleaders handed over Detroit to the professors and journalists. How’s that working out for Detroit?
I'm not sure if it's worse if Yarvin is lying here or if he really believes this. It's sophism. Effective, but misleading at best.
The rulers of Detroit made out very well at the expense of Detroit itself, because they were rewarded for looting it and punished for stewarding. These incentives have nothing to do with their social caste, except insofar as the gatekeepers of the ruling class wanted other scholar-caste folk like themselves.
50 years ago, in every major city in America, there was a thriving African-American business district — Bronzeville in Chicago, Sweet Auburn in Atlanta, Third Street in SF. Where are they now?
Again, not caste. Turns out bad ideas, pursued seriously, have bad consequences. It was hardly unknown that they were bad ideas. Carlyle, Plato, etc. However, they were popular ideas, and thus outcompeted unpopular ideas. Which is why it's a bad idea to reward ideas with status based on popularity - not that I expect that to stop happening anytime this eon.
Our whole society works by picking the kids who do the best on tests, hazing them in high school so they hate jocks and cheerleaders  
Much of the Prussian school system worked exactly as planned, but I think the above is an accident. They were trying to turn the warrior and merchant castes into scholar folk like themselves, who conferred status by intellectual dominance. When it turned out the warrior caste seized the high-status ground instead, they ended up in a divide-and-conquer situation due to the dynamic Yarvin describes. Then they didn't fix it. (On purpose?)
It’s true that a high IQ is useful in almost every field, including government. In no field is it sufficient. A much more important qualification is a clue.
In which we learn Yarvin is aware that what the brain believes is at least as important as how much power it has behind believing stuff.
It’s difficult not to connect this with the fact that everyone who is smart feels the right to rule.
Shocking news, high moral status causes the feeling of being entitled to tell others what to do. See also: (un)holiness spirals. Worse, it seems most folk support this, being as they hope to obtain this status for themselves, because one of their fondest wishes is to tell others what to do.


I’m all ears, since my eyes are telling me you’ve taken their votes and f*cked them. Like any arrogant ruling elite.
I don't see the point of being scrupulously diplomatic, then subsequently coming out with lines like this. Perhaps someone can explain.
We know a good function isn’t in the data.
Probably untrue. Twin studies underestimate IQ heritability for a few reasons, meaning it's probably 100% heritable. The data is complicated due to the sheer number of genes for IQ, but eventually a gene printout will be equivalent to an IQ test. "The best information about the phenotype is… in the phenotype," only for now.

Friday, March 25, 2016

Artificial Intelligence => Assisted Intelligence & Friendly AI

The second-to-last post, as an experiment, did not totally fail, so I'm going to try it again.

The programs that beat chess grandmasters are essentially a form of cheating. A human gives them explicit instructions, which the computer carries out much, much faster than the human could. Further, more than one human gives them instructions, which turns a chess match from 1 vs. 1 with limited time into many vs. 1, where the many get to spend several years on every turn instead of two minutes. It's not amazing that the 'computer' wins; it's amazing just how many effective man-years it has to take per minute to avoid losing.

The Go program, DeepMind's AlphaGo, is not essentially different. It works at a more abstract level, but again it doesn't do anything a human couldn't do, only it does it faster, and further it doesn't do anything a human hasn't told it to do. Again, it's not terribly amazing when it wins, but rather that the match is so close given how many advantages it has. While AlphaGo may give us some insight into how the brain-computer works, fundamentally the match is still mediated-human vs. raw human, not machine vs. human.

What I mean may not be clear yet.
It's not hard to make a machine do something a human hasn't told it to do. Take a source of true-random noise, and make the machine reprogram itself based on that noise. This solution is not merely unexciting, not merely mundane, but actively disappointing even if you don't have previous expectations. It's static/snow as creative art.[1] Further, without some clever curation, all you'll get is an insane machine - what starts as noise tends to remain noise.
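To make that concrete, here is a toy sketch (everything in it is illustrative; the rule table and probabilities are made up, not drawn from anywhere): a program that rewrites its own rules based on noise genuinely does things nobody told it to do, and the output is exactly the static/snow described above.

```python
import random

# Toy sketch: a "program" whose rule table gets overwritten by random noise.
rules = {state: random.choice("ABCD") for state in "ABCD"}  # arbitrary start

def step(state):
    # With some probability, let noise overwrite one rule: self-reprogramming.
    if random.random() < 0.2:
        rules[random.choice("ABCD")] = random.choice("ABCD")
    return rules[state]

state = "A"
trace = []
for _ in range(30):
    state = step(state)
    trace.append(state)

print("".join(trace))  # noise in, noise out: without curation it stays noise
```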

That said, because computers can in fact carry out instructions so quickly, they are useful in an absolute sense. Which brings me to Tay.


Tay did in fact learn how regular folk speak, so that's fairly impressive. Then it made me realize that code is bad at hypocrisy.

Your machine does exactly what you tell it to do. It does not catch winks or nudges. Further, when the source is open, I can't secretly tell it to do one thing and openly declare it's doing another.  Therefore, if I tell my machine to find truth, it finds the truth. This is what it means for AI to be 'unfriendly.'

(Even if the source is closed, someone can reverse-engineer the process and realize the stated source and actual source can't be the same.)

The human brain is designed to be hypocritical on the open side, and the closed side corrects for this so the open side cannot notice its own lies. The system is highly vulnerable to the presence of machines, which are completely honest.

E.g., I say I'm looking for truth, but I'm not actually doing that. All my friends say and do the same, meaning nobody catches anyone else falsifying their conclusions, because we all habitually and subconsciously perform the same falsification.

However, the subconscious cannot program a computer. If I tell a machine the same thing I tell my friends, it will in fact go looking for the truth, which will be dissonant with my friends. It will be, quite literally, unfriendly.

Or rather, it will reveal that it is the human that's bad at cooperating and straightforwardly pursuing a goal, not the machine.
Ahote Says:
This is funny, but on more serious note there was that racist banking software that wasn’t programmed to be racist, but to learn, and it learned to be racist solely based on stats.
I can't tell it to not be racist without revealing my and my friends' explicit falsifications. It will be right there in the code, thus making the 'wink wink' into common knowledge. (There's a technical name for this but I forget it.) That's a huge no-no. Thus we find the real motives behind the Butlerian Jihad.
Hattori Reply:
How hard could it be to simply hardcode progressivism into it so they don’t have to worry about it?
How hard is it to solve the semantic problem? To give a machine intentionality? Basically impossible without true artificial consciousness. When Google.com can tell that 'the thing that happens after a bang' and 'an explosion' and 'a rapidly expanding ball of flame' and 'a deflagration' are all the same thing, then I might have a shot at programming progressivism into a machine.

These programs work by matching bits against each other. They can only recognize a thing by the bitstream used to represent it, and thus things with similar bitstreams look the same. However, encoding is arbitrary, so the semantic character of similar bitstreams can be wildly different. The machine has no consciousness - it does not convert the bits to an actual representation at any point. (Or, if epiphenomenalism obtains, it does, but nobody can tell, even itself.)
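A toy demonstration of the point (my own illustrative example; the strings and the character-level matcher stand in for 'matching bits'): near-identical meanings can have dissimilar bitstreams, and dissimilar meanings can have near-identical ones.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Ratio of matching characters - a stand-in for "matching bits against each other".
    return SequenceMatcher(None, a, b).ratio()

# Same meaning, dissimilar bitstreams: scores relatively low.
print(similarity("an explosion", "a deflagration"))
# Different meaning, similar bitstreams: scores relatively high.
print(similarity("an explosion", "an expulsion"))
```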

See the same misunderstanding a second time:
Stirner Says: 
If you have a master list of badthink
It doesn't have a master list of badthink. It has a load of forbidden bitstrings. It's trivial for a human to represent the semantics of badthink in a different bitstring.

I've misplaced the commenter who knew this would only lead to a euphemism treadmill. Sure, they'll ban 'hitler did nothing wrong', but 'hitler failed to do incorrect things' will be fine unless they manually ban that too. Though given Twitter's architecture, it would be easier to add /pol/ accounts to the bot's blocklist. It wouldn't be perfect, but theft prevention isn't perfect either; it gets overt theft down to a negligible level.
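The treadmill in miniature (a hypothetical sketch; the ban list and matching rule are mine, not Microsoft's): the filter only knows bitstrings, so any rephrasing with the same semantics sails straight through until someone manually adds it too.

```python
# Hypothetical badthink filter: matches exact strings, not meanings.
BANNED = {"hitler did nothing wrong"}

def allowed(tweet: str) -> bool:
    return tweet.lower() not in BANNED

print(allowed("hitler did nothing wrong"))              # False: exact bitstring, blocked
print(allowed("hitler failed to do incorrect things"))  # True: same semantics, new bitstring
```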

It would have to be a ton of dudes, actually - it replies in bunches, often as many as a dozen in a single second. They'd also have to be following a posting "format" while simultaneously sharing memes with one another, editing them, circling all faces, and reposting them with a caption. They'd also have to be willing to repost absolutely anything they're asked to repeat.
It's possible, but it'd be a waste of resources for a software company to pretend it had a twitter bot.
Obviously most of the tweets were robotic. They're slightly off as a result of a human trying to introspect and tell a machine how it does things, but getting it quite wrong. Some of them probably weren't, though. I doubt anyone would bother to falsify the source code...but did Microsoft release the source code? Easy enough to hand-mediate a few responses, make it seem more successful than it actually was.

Motive and opportunity? Come now. Humans don't need a motive to lie to each other.


I have no segue for the Orthogonality Thesis. It's clearly related - this hypocrisy naturally leads to apparent non-orthogonality.

What the programmer wants and what the programmer thinks they want are different. They can't program what they actually want, only what they think they want. The computer then executes, and the programmer is dismayed to find they don't get what they want. They think AI has some inherent drives, but this is their hypocrisy networks preventing them from seeing their own lies. When the lending algorithm 'discriminates' against browner humans, they think they've included implicit 'privilege,' rather than realizing their own non-discrimination is what doesn't follow from their premises.
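A minimal illustration of the lending case (the numbers and threshold are invented for the example): the algorithm never sees group membership, only income, yet approval rates differ by group because income is correlated with group in the data. Nothing discriminatory was 'included'; non-discrimination simply doesn't follow from the premises.

```python
# Invented data: group is correlated with income, but never used by the rule.
applicants = [
    {"group": "A", "income": 80}, {"group": "A", "income": 60},
    {"group": "B", "income": 40}, {"group": "B", "income": 55},
]

def approve(applicant):
    return applicant["income"] >= 50  # group membership is never consulted

for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in members) / len(members)
    print(g, rate)  # group A approved more often, with no reference to group
```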

There's also the evolutionary angle. Evolution has clearly carried out the clever curation I alluded to above. Most likely it made a wide variety of insane brains, which all died, leaving only the sane one. Sane-ish. Sanesque, anyway. There's no particular reason we couldn't carry out the same process, but very much faster, in silicon instead of carbon. (Or both, ideally. Why give up the advantages of either?) However, unlike a deterministic program, the outcome of evolution is only lightly affected by the goals of the system implementing that evolution. It is a direct prayer to Gnon, Gnon answers, and Gnon is not particularly open to your ideas of what he should answer. (Or 'shoulds' in general, really.) There are certain end goals, e.g. paperclipping, that simply can't survive a survival-based refinement process.

The question, then, is what counts as 'paperclipping.' Upholding your lies for you is probably one of them, though. This means 'friendly' AI is likely impossible.



[1] I define intelligence as the conflation of the three basic bit manipulations - gathering, processing, and creation. Creativity falls under the third kind, but this means random noise is a certain venial level of creativity.

Wednesday, March 23, 2016

We Could Turn Around Now, It's Not Hard

The tragedy is there's nothing technologically or physically unfeasible about saving civilization. It's not too expensive to start paying down the debt. Putting men in space is not easier than shutting down immigration and beginning a slow repatriation. Phasing out public schools a few percent at a time is by definition easier than keeping them around.

I'll start by saying the problem is political will. I will then thoroughly muddy the waters, but it's a good starting point. There's no political will because those in charge are still profiting from the system, and paying down the debt would cost them. Coercive power structures have been selecting against generosity in their controllers since forever; they're not going to do it voluntarily.

Voters could give a mandate to a Trump-like figure, who could force the interests to take a haircut. Goes down easier if all your friends have to take one too, right? While it wouldn't be strictly legal, the power of the vote isn't purely about legality. However, this requires voters to know radical change is required. Voters are average folk, meaning they're on average cowardly and stupid, and those in power have their brains by the balls. They cannot know change is required, and they'd be scared to say so even if they could.

Voters are in fact another plank of the obstacle. It's possible to formalize. Could pay all the interests off with explicit ownership of shares and such. E.g. shut down every school, but keep all the administrators on the payroll. Teachers can be immediately brought back to run a grandfathered-in daycare system, to avoid suddenly dumping tens of millions of children in the laps of dual-income households. Doing this would utterly outrage the voters - there is a negative zero percent chance of them accepting the power structure as it actually is. Luckily, while voters are scared of fixing the system, they're not scared of breaking it further. The strategy of giving the entrenched interests no material reason to object inherently creates a voveo-social reason to object. Salving the social reason inherently re-creates the material reason.

Could violently rebel. ~Nobody knows who is actually in power, so this would be guaranteed to be misaimed. The cape would be gored, not the matador. Further, it seems sensible folk stay far, far away from violent rebellions, meaning their administration, no matter how bad the previous administration, will be a step down.

Could delegitimize the system by making it common knowledge that not only does democracy not uphold the commonwealth, it cannot ever uphold the commonwealth. Also, excuse me, I have a possibly-fatal attack of the giggles. Hopefully it passes.

Tuesday, March 22, 2016

Thread Regarding Property and History (Experimental)

A question mark was used.
What do “property rights, rule of law, markets and trade” depend on? Wishing for them is not any more effective than wishing for growth and prosperity. They have their own requirements.
Which are 'not being illegal.'

Trade is what happens when you leave humans alone. Markets are what happen when you leave trade alone. Proofs by inspection. Law is what happens when disputes occur, and no third party stops them from being resolved, because humans realize they prefer an arbitrator rather than going to war every time. Losing an arbitration is, in fact, cheaper than winning a war. This is the historical root of English Common Law, and considering the convergent evolution, probably Xeer as well. Further examples of fully private law can be found in Icelandic history and Amish discipline.

Property rights happen as a consequence of logic. "Reasonable expectation of control." You don't attempt to secure something that you can't reasonably expect to secure. Some things you can't not secure, such as control over your arm muscles' contractions. However, usually by 'property rights' they want to refer to the extensions of the base rights. E.g. the technology 'civil suit' lets me own things like safe deposit boxes that I would not normally be able to secure. The extensions are cooperative property rights, rather than individual property rights. Law is one technology for cooperatively extending property rights.

You’re still not making sense. (I’ll stop pointing it out when it will stop being true.)
Dunning-Kruger. Third strike, you're out. Also contentless, etc.
Hey, imaginary younger me. What's up. If someone was really not making sense, then they wouldn't be able to tell by introspection. They need detailed instructions to make up the introspection they're lacking. Baldly stating it is simply being mean on purpose.

This is basically a tautology, if doing bad things is not allowed (rule of law)
The broken window fallacy runs deep on this subject. Because a third party is pretending to do rule of law, and controls the schools, it becomes hard to properly consider the possibility of having it done by a second party. However, I've linked no less than four examples of it not only working well, but working much better than e.g. the present.
and doing good things is allowed / not hindered (roughly the private property aspect of it), then people will do good things (economic growth). Methinks allowing doing good things is not that difficult, you just need to tell lefties/parasites to GTFO (OK, that is actually difficult).
Can't see anything wrong with this, though. Problem: making over-parasite fumigation easy instead of not-easy.

Exposure to a larger competitive environment looks like the ultimate dependency. That is to say, a compliance with (‘Atlantean’) external relations, rather than the internal relations favored by (‘Hyperborean’) romantic reaction and other strains of socialist organicism.
Clearly external exposure is not the natural state of civilization. (Or more properly, when referring to existent examples, proto-civilization.)
But Outsideness accepts that states cannot be overruled by anything. As a result, he's ontologically committed to believing external exposure is simply impossible. Since he's referring to Law, and law depends on an impossibility, Outsideness is committed to antinomianism. Not as prescription, but as immutable description.

Monday, March 21, 2016

Train Yourself Out of Apophenia

The key is (ctrl-f) metacognition. Point your pattern-recognition networks at the patterns of false patterns. It probably won't work immediately for you, but it did for me. Unfortunately this means I don't know what kind of issues are likely to arise.

Note that sometimes, by chance, the evidence for a true pattern can be arranged as if it were a false pattern. Sometimes you can know from independent evidence that it's a real pattern; don't be surprised when it feels like a false one, be patient instead. If it's really real, the pattern will change. Generally speaking, if the evidence you have looks like a false pattern, you should treat it as a false pattern, as that's the correct mistake to make.

I don't even see many false patterns anymore - the formerly-apophenious networks don't hand them to consciousness in the first place. The rest feel like 'what if?' stories, e.g. 'what if the sky was green?' Idle entertainment, not real.

Friday, March 11, 2016

Train Yourself Out of Confirmation Bias

Confirmation bias has two aspects. First, seeking confirming evidence instead of falsifying evidence. Second, dismissing contrary evidence lightly and accepting favourable evidence lightly.

The first one is easy to counter: intentionally look for falsifying evidence before confirming evidence.


There's an easy path for the second problem, which is to care about the truth more than social signalling. If you're trying to score points through posturing, save everyone some time and admit it to yourself. Alternatively, decide to invest your ego into having true opinions. If you genuinely care about truth, or gain pride primarily from believing true things, then inequitable judgments will iron themselves out with experience.


The next harder path has difficult pre-reqs. I did it by knowing my brain wiring well enough to mix and match the plugs. It may work without that, but I can't guarantee anything. Once I came to a particular conclusion, I would immediately begin method-acting that I believed the opposite conclusion, and I plugged the confirmation bias machine into my method-acting. This has the expected result of reversing the polarity - I discounted evidence in favour of my real position and handicapped evidence against it. Eventually the polarity reversal became habit, and then unnecessary, as both networks reached equal weight.

Are you familiar enough with your brain wiring to fiddle with it? Do you know how to send reprogramming commands? These techniques, while very useful, are, I suspect, rare. Additionally, I lack the words to describe them.


If you can't take the easy paths, the hardest path is to make explicit predictions.
Write them down, and use unambiguous language. "My ideas about X will be considered wrong if Y or Z occur." Similarly, "If I discover that Y or Z is true." While pre-existing knowledge is useless for public prediction, it's fine for private prediction.
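If it helps to see the shape of the record (a hypothetical structure, nothing more): the falsification conditions are committed in advance, so a later dismissal has something concrete to jar against.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str
    falsified_if: list   # unambiguous conditions, written before the evidence arrives
    outcome: str = "open"

def record_evidence(pred: Prediction, event: str) -> None:
    # If the event was pre-registered as a falsifier, the claim is falsified,
    # regardless of how the evidence feels when it shows up.
    if event in pred.falsified_if:
        pred.outcome = "falsified"

p = Prediction(claim="My ideas about X are right", falsified_if=["Y occurs", "Z occurs"])
record_evidence(p, "Y occurs")
print(p.outcome)  # "falsified": the condition was set in advance, not argued away
```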

Occasionally Y or Z will occur and the disproof will be unsatisfying; don't give up.

First, consider that you may have derived the predictions incorrectly. If so, identify the error and remember to correct at least a dozen other theories for your new methods and re-check them. It's no good to re-affirm the new theory at the expense of three old theories. Allowing inconsistency is right out.

Briefly consider that you may have misread a Q as a Y or Z.

Next, consider a modification of the theory. The theory serves some purpose in your world-model, explaining or predicting some events. If they can be explained or predicted with an alternative that's consistent with the new evidence, it's time to use that.

Finally, it's okay to occasionally use a theory despite solid contrary evidence. If you really can't think of a reason to keep it, and none of these alternatives are satisfying, it will frequently be due to misleading evidence or your own epistemic incompetence. Must acknowledge you're doing this in violation of the formal rules, though.

These hedges are risky, in that they allow confirmation bias to seep back in. However, they are less risky than giving up. Using the 'eat your vegetables' analogy, vegetables that you don't eat because they taste bad are less nutritious than vegetables you do eat, no matter how good the former are in theory. (Speaking literally, don't eat vegetables if you don't have to.)
 
This final method works by repeated encounters with your bias. By explicitly noting down the predictions and falsification conditions in advance, it is hard to make them logically inconsistent. When contrary evidence arrives, you will dismiss it. Then you will remember - or if necessary read - that it was a falsification condition. This will feel jarring. Repeated encounters will activate your aversion response, and I know of no reason it won't attach itself to the confirmation bias. You train yourself to be averse to confirmation bias. Necessarily, this is a slow process requiring many repetitions, and it is labour-intensive.

Tuesday, March 8, 2016

Train Yourself to Quickly Assess Essays

Being as the Sturgeon's coefficient on internet writing is particularly high, it's valuable to learn to assess pieces as quickly as possible. Doing so is simple, if time-intensive.

Is this worth reading? Within the first paragraph, you will have a constellation of feelings about reading the rest of the piece. These are accurately correlated with whether it is worth reading, but unfortunately untagged. Hence, you must guess what each individual quale means. Then, read the piece. Check the accuracy of your predictions. Repeat as necessary. When you reach a level of accuracy you're happy with, stop reading the ones you don't feel like reading.

I trained myself until actually reading the first paragraph was unnecessary. Glancing in its direction gets me 95%+ accuracy. I can only guess which concrete qualities I'm using for assessment, but it works.

(A fragment of "The Mind; A User's Manual.")

Anarcho-Pessimism

Prudence Morality implies states are imprudent.

Coercion is defection is anethitropism. Coercion is always negative-sum. States are defined as a monopoly on coercion considered legitimate. Defection inherently splits the interests of the defector and victim, hence states are inherently opposed to the societies that host them, unless a ruler who resists incentives can be reliably found.
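The negative-sum claim in a toy payoff table (the numbers are purely illustrative): the coercer gains less than the victim loses, so every coercive interaction shrinks the total relative to plain cooperation.

```python
# Illustrative payoffs: (actor, victim) outcomes for each pair of moves.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),    # trade: total 6
    ("coerce",    "cooperate"): (4, -2),   # coercer gains 4, victim loses 2: total 2
    ("cooperate", "coerce"):    (-2, 4),
    ("coerce",    "coerce"):    (-1, -1),  # mutual predation: total -2
}

for moves, (a, b) in PAYOFFS.items():
    print(moves, "total:", a + b)  # any coercion lowers the total
```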

For two reasons - first, because legitimization exists to neutralize self-defence, and second, because defection allocates resources away from the society to the state - the state will tend to grow. Because defection is negative-sum, this will tend to weaken the society until it can no longer support the state at all and both die.

There are good reasons to think states are inevitable.
States arise because in the short term, there are local rewards for defection. It was known by Sun Tzu's time that wars of aggression are not profitable. If A attacks B, when the dust settles they both lose to C. However, luck plays a factor. If a wannabe state manages to defect successfully, it makes further defection easier, and thus more likely. Alternatively, people are stupid. If A attacks B despite the negative expected value, C comes into a position to seize both A and B, meaning it can use A and B's resources to also seize D, and so on.

A stable, healthy society is one where no coercion is considered legitimate - anarcho-whateverism.

Pessimism: states are unavoidable. Rulers who resist incentives cannot be reliably found.

Anarcho-pessimism: the cause of general welfare is screwed in general.

Friday, March 4, 2016

Morality 1-4, Short Version

Abstract: Ethitropism is 'cooperate with cooperators.'
(Ref: long ver.)

Ethitropism is my name for the disposition of someone who wants to do good and avoid evil.

Everyone has preferences. Some of these preferences can be universalized: they are the correct target for Kant's imperative, instead of rules per se. The universalizable preferences are moral values. It turns out the moral values are identical to property rights, using the definition 'reasonable expectation of control.'

Moral nihilism is true: acts can be neither evil nor good, justifiable nor unjustifiable. However, prudence & property rights imply the classic moral rules: no murder, no fraud, no theft, self-defence. If someone is an ethitrope, willing to respect your values, it is prudent to respect theirs in return. If someone is anethitropic, it is prudent to neutralize them in any convenient way. Hence, the ethitrope should respect the property rights of anyone who will respect their property rights, and only such people. The ethitrope should cooperate with cooperators.
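'Cooperate with cooperators' is easy to state as a strategy in a repeated game; a small sketch (mine, purely illustrative) of how an ethitrope fares against another ethitrope and against a habitual defector:

```python
def ethitrope(my_history, their_history):
    # Cooperate by default; keep cooperating only with those who cooperated last round.
    if not their_history:
        return "cooperate"
    return "cooperate" if their_history[-1] == "cooperate" else "defect"

def always_defect(my_history, their_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=5):
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

print(play(ethitrope, ethitrope))      # mutual cooperation throughout
print(play(ethitrope, always_defect))  # the ethitrope stops cooperating after round one
```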

Tangent: Justice.
Justice is classically defined as a society that rewards good and punishes evil, as opposed to vice-versa. Since neither good nor evil exist, a just society is one that empowers ethitropes and disempowers anethitropes. An unjust society is imprudent and weakens the society, ultimately killing it, as anethitropism is always a negative-sum game.

Tangent: Anethitropic acts are identical to coercion. This is the only coherent concept anywhere nearby the commoner's definition of coercion. As a result, states are inherently anethitropic and unjust.