Saturday, November 16, 2013

Less Wrong vs. Evidence and Scholarly Virtues

It is not surprising LW went off the rails, because respect for authority and voting are both scholarly anti-virtues. Authority is a time-saving hack for non-scholars, and the only vote that counts is Reality.

Global warming is a better test of irrationality than theism. (No quote necessary.)

Trusting Expert Consensus. (Via: 1, 2.)
"Sometimes I regret not knowing the climate controversy as well as I know the evolution controversy, but what I've seen so far makes me think it's extremely likely that if I did study it in greater depth, doing so would just confirm the evolution-climate change parallel."
Meanwhile, because Reality loves the skeptics,
Carbon emissions have been higher than expected. Measured temperatures have falsified every model. The measurements have simplified the argument: it is no longer necessary to think hard enough to understand why the models could never have worked. In retrospect, this was inevitable. There is only one way to get any kind of match with measurement out of the crude things it pleases climate 'scientists' to call models: overfitting. Every overfitted climate model will reliably diverge in the short term, and due to the pro-warming bias, they will all diverge upward.
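To see why overfitting guarantees divergence, here is a toy sketch - mine, not anyone's actual climate code - fitting a high-degree polynomial to a short noisy series with a weak trend. It tracks the calibration window almost perfectly, then blows up as soon as it is asked to extrapolate.

    # Toy overfitting demo (illustrative only; not a real climate model).
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 20)                         # calibration window
    observed = 0.3 * t + rng.normal(0, 0.05, t.size)      # weak trend plus noise

    model = np.poly1d(np.polyfit(t, observed, deg=9))     # overfit: degree-9 polynomial

    in_sample = np.sqrt(np.mean((model(t) - observed) ** 2))
    print("in-sample RMS error   :", round(in_sample, 3))   # tiny
    print("extrapolation at t=1.5:", round(model(1.5), 2))  # typically far from the ~0.45 the trend implies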

Being wrong is not a scholarly anti-virtue. The usual scientific practice is to shower false models with jeers - see phlogiston, etc. This is not only unnecessary, it is counter-productive; a large part of the secular denial of consciousness is due to consciousness smelling like vitalism.

Failing to correct, however, is not just an anti-virtue, it is a crime against the laws of epistemology.

Carbonic warming's spectacular flaming crash has undermined not only LW's argument for AGW, but its respect for evolution, and doubtless other fields as well. To be clear: the current correct answer for Hallquist, if asked whether he believes in evolution, is "I don't know." He needs more data or analysis.

The scholarly virtues are deriving things for yourself and voting via experiment. Just as macroevolution follows from things like thermo #2, the impossibility of modelling climate follows from chaos theory. (Have you done the derivations? If not, then your correct response is either "I don't know" or to renounce scholarhood.)
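For the chaos half of that claim, the standard toy is the logistic map. The sketch below only illustrates sensitive dependence on initial conditions - the generic obstacle to forecasting any chaotic system from imperfect measurements - and is not a claim about any particular climate code.

    # Two trajectories of the chaotic logistic map (r = 4), started 1e-10 apart,
    # become unrecognizably different within a few dozen steps.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-10
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: |difference| = {abs(a - b):.6f}")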

If there was ever a time for drastic correction, it is now.

We will not see drastic correction. We probably won't even see slow correction. Even hoping for an embarrassed evasion of the topic feels optimistic to me.

I'm posting this to challenge Reality to prove me wrong. Show me that LW converges on the truth. Perhaps it is I who will have to drastically correct.

Tuesday, November 12, 2013

Two Envelopes Paradox

Just for spice, let's look at academic philosophy getting it right.
"We want to compare the amount that we would gain by switching if we would gain by switching, with the amount we would lose by switching if we would indeed lose by switching. However, we cannot both gain and lose by switching at the same time. We are asked to compare two incompatible situations." 
And yet,
"God help us if, after the fourth round of drinks, someone brings up the two envelopes paradox." (Here, via
La Wik's phrasing needs work, so in my own words:

When first looking at switching, it seems I must switch, as the expected payoff is 5/4 A. However, now that I've switched, I can't re-define the amount in the new envelope as A - it's still 5/4 A, and thus switching back must be a loss. At first this feels like I've unjustifiably broken the symmetry - before I pick an envelope, the expected value must be equal - but since I've added an assumption asymmetrically, it would instead be weird if the symmetry remained.
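A quick simulation makes the same point from the other direction (a sketch; the assumption that the smaller amount is drawn uniformly from some range is mine): if switching really paid 5/4 A on average, always switching would beat never switching, and it doesn't.

    # Monte Carlo check on the naive 5/4-A argument.
    import random

    def trial():
        small = random.uniform(1, 100)     # unknown amount; the other envelope holds double
        envelopes = [small, 2 * small]
        random.shuffle(envelopes)
        return envelopes[0], envelopes[1]  # (kept, other)

    N = 200_000
    keep = switch = 0.0
    for _ in range(N):
        kept, other = trial()
        keep += kept
        switch += other

    print("average if you never switch :", round(keep / N, 2))
    print("average if you always switch:", round(switch / N, 2))  # statistically identical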



The second version, where I actually open the envelope, also breaks the symmetry. Sadly this one is still best analyzed by throwing all the academics in a lake.

Your daughter has some terrible eye-melting but curable disease, and you are poor. The opened envelope has $10 000 in it. Do you switch? Obviously not - it's not risking five grand, it's risking your daughter.

Your daughter is fine. You're thinking of buying a new car. The opened envelope has $10 000 in it. Do you switch? Obviously so - no matter what, you get a nicer car, we're just haggling about whether you also get to pad your retirement fund.



Seems clear to me. But why the lake? Have fun trying to find even one academic who will make this clear. The real world and applications are low status, don't ye know.

Second, I've tried to respect intuition. Nobody analyzes these things entirely abstractly. It is passed to the subconscious and the subcon uses concrete examples, usually whatever the availability bias spits out. (E.g. think of a cat - no no, not a cat with fur, an abstract cat. [Mine's an adult tabby. {Side view, facing left, tail erect, front right paw (white) lifted, looking a bit surprised.}]) This will naturally lead to very different intuitions based on things like how rich you currently are or how secure you currently feel.

Additionally, expected value is not the whole story, and one other consideration is whether you can absorb the possible loss. Your intuition will consider this and you can't tell it not to without training.
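A toy decision sketch of those two cases (the $8,000 cure cost and the naive 50/50 split on the other envelope are my assumptions for illustration): the same opened $10,000 produces opposite answers once the cost of the possible loss is allowed to differ.

    # Whether to switch depends on what losing would do to you, not just on expected value.
    def expected_utility(utility, keep=10_000, low=5_000, high=20_000):
        stay = utility(keep)
        swap = 0.5 * utility(low) + 0.5 * utility(high)
        return stay, swap

    # The cure costs $8,000: money below that threshold is nearly worthless to you.
    cure = lambda m: (1_000_000 if m >= 8_000 else 0) + m
    # Car shopping: every dollar is worth roughly a dollar.
    car = lambda m: m

    for name, u in [("sick daughter", cure), ("new car", car)]:
        stay, swap = expected_utility(u)
        print(f"{name:13s}: keep={stay:>12,.0f}  switch={swap:>12,.0f}  ->",
              "keep" if stay >= swap else "switch")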



Completely unrelated, this article taught me a new rule of thumb. "Though Bayesian probability theory ..." The translation: "We are now going to talk out of our ass." Any paragraph or section explicitly calling itself Bayesian is not worth reading. Presumably one calling itself frequentist would be just as bad, but I haven't seen one of those yet.

Sunday, November 10, 2013

Self-Reference, Logical Positivism, and Existence

It would appear Reality wanted to spot-confirm my hunch, as per my last post, that respect for self-reference is too high.

Logical positivism has been discredited by feeding itself to itself.
"Logical Positivism is the view: "The only meaningful statements are those that can in principle be verified empirically.  Logical Positivism fell because it cannot itself be verified empirically thus it is meaningless by it's own standard."
Or what I found:
"Therefore, LP is meaningless. I don't know what you mean when you say that meaning can only be ascertained by the possibility of an experiential proof as that statement has no possibility of an experiential proof." [1]


Let LP be the rule f(proposition) = 1 if the proposition is meaningful, and 0 otherwise.
Solve for x: f(LP) = x.
Substitute LP's definition for LP - but the proposition in that definition is LP itself, so f(f(LP)) = x. Et cetera.
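As a loose sketch - treating 'meaningful' as a procedure is my gloss, not anything in Green or Brown - the evaluation simply never terminates:

    # The regress, run literally: to decide whether LP is meaningful you must expand LP,
    # but LP is defined in terms of the very evaluation being performed.
    import sys
    sys.setrecursionlimit(100)   # keep the inevitable blow-up small

    def meaningful(statement):
        return statement()       # "resolve the statement" - for LP, this re-invokes the question

    LP = lambda: meaningful(LP)  # LP's status is defined via meaningful(LP)

    try:
        meaningful(LP)
    except RecursionError:
        print("f(f(f(...LP...))) never bottoms out - the evaluation does not terminate")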
In other words, Green, Brown, and everyone who thinks like them have made a critical logical misstep when they concluded that LP self-disproves. Properly appreciated, the argument looks like this:
  • Assume LP
  • biowhjtn4ali;buah.wevkjask;rbhafb
  • Therefore, f(LP) = 0. 
  • Therefore, by contradiction, LP is false. 
This doesn't prove that LP is not meaningless. Similarly, Godel's second incompleteness theorem is true, despite having a faulty proof. A system cannot prove its own consistency, as that would be circular reasoning - and it cannot prove its own inconsistency, either.

I'm not a logical positivist and had to look this stuff up. Instead I believe something, generalized from my formal study of physics, that can apparently be confused with logical positivism. I believe existence is defined by interaction.

As it can be confused with logical positivism, I can be certain LP should have been repaired, not discarded; and because it was discarded instead, this corner of philosophy has been stuck in a blind alley for several decades.



It has come to my attention that philosophers don't understand existence. Let's pretend I can change that. (Again. [2])

Because a system cannot prove or disprove itself, we can be certain that we need an external framework to evaluate any system, and the framework will necessarily be strictly more powerful, as it contains the system in question. The problem is there's a strictly most powerful framework: existence itself.

We can be certain of this even using only diction. Look at how I must start the proof: if a strictly more powerful framework than existence existed... I call this the principle of existence. It seems to have the property of being self-justifying, indeed it seems that a self-justifying framework can be defined as existence.

Though proving this is impossible.

First problem, existence is the most powerful framework. Second problem, since everything is subject to the principle of existence, logic is inherently less powerful than existence. Existence is the framework by which I evaluate logic; it is not valid to use logic to evaluate the statement 'existence is the most powerful framework,' or that 'existence is self-justifying.' (It bemuses me that I can even state or communicate the idea.) Conversely, I can't use logic to invalidate the ideas either, and both these restrictions also apply to evidence.

So, have I tried to put 'existence is interaction' through the wringer of itself? I have. I found it's a bit stupid and felt silly.

Despite this, I will pretend to argue for it. If you can figure out how this relates to the fact that arguing is invalid, kindly let me know. (I'll probably understand eventually, but it would be nice not to have to do it all myself.)

Because it isn't an argument, it isn't unreasonable to hold a contradictory axiom. However, it is unreasonable to think any alternative can be logically defended.



The purpose of truth is prediction. I don't really care about truth per se, I care that when I go to make tea, the tea leaves and hot water are where and how I think they are. The purpose of prediction is to control my own subjective state. (How this invalidates caring about BIVs and the external world is left as an exercise.) For example, being able to experience tea when I want to experience tea. As another, experiencing nourishment so that I don't experience starving to death and thus termination of experience. Truth allows me to predict and thus fulfil these goals.

Something that cannot interact with me cannot affect my goals. If it cannot affect my goals, as far as I'm concerned, it does not exist.

Nevertheless, existence is the ultimately axiomatic axiom. As it's not an argument, this is a suggestive story. (Mmm, tea.)



Ironically, Brown says, "it is the hope and intent of this work that once people come to really understand Hume and the Bullshit Nature of Rational Philosophy, they can start working on an axiomatic philosophy," apparently unable to see that LP only functions as an axiom.

Rack this up as another place where philosophy has gone off into the weeds so deeply that a single individual can outflank the entire academic class. Of course, I don't actually think I have some superhuman insight. I think anyone can do this if they have enough dedication. It's a process of discarding chains more than acquiring skills, and of not giving up by concluding it must be impossible. Having a PhD in philosophy is such a massive millstone that even the middlebrow can overtake its holders if they set their mind to it, and lesser qualifications are merely lesser millstones to be overcome.



In this case, I apparently independently re-derived a better version of logical positivism. (I certainly didn't read any Wittgenstein or Russell.) I can tell because my phrasing is different and the logical geometry is just a bit off. Sadly you can't tell - I might have deliberately rephrased it specifically for the purposes of foisting this metric off on you. But, were you to embark on a similar expedition, you know what to look for in yourself.

By the geometry, I mean I don't seem to need empiricism or meaning as concepts. If they appear, they appear as consequences of the principle, rather than having to be injected. Interaction-existence and LP as concepts can't quite be superimposed neatly. I don't use the idea of statements either; though I tend to have to add it to get any use out of interaction-existence, because it isn't native it stays modular - if you don't like statements and can come up with an alternative, it works fine.



[1] Careful, Brown's mean free path between contradictions in this piece is maybe 20. Go here if you want to see him inside his domain of expertise. It's the detail view of what Moldbug mentions about models. In the LP piece, roughly all the implications he ascribes to LP are embarrassing misunderstandings. Or sheer mendacity - I found one insult he flings that he later straight-up contradicts the basis of. Unfortunately, this sort of thing seems typical of modern academics.

[2] I like repeating myself, because if I've made a mistake, every repetition increases the likelihood I'll make a different mistake which will show up as a detectable contradiction. It also allows the reader to do logical diffraction if I'm correct: each slightly different iteration has slightly different interpretations, and you can eliminate the possibilities that don't overlap and reinforce.

Tuesday, November 5, 2013

Self-Referential Sentences such as the Liar Paradox

I appear to have a novel solution.

In mathematics, some numbers may be consistently defined recursively, though they must pass convergence tests. In logic, it seems convergence is impossible.

Let f(x) = 0 represent 'this statement is false.' To resolve what it means, I must substitute the statement 'this statement is false' into 'this statement,' giving f(f(x)) = 0. Trying to resolve this new statement, I get f(f(f(x))) = 0. And so on.
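A minimal numeric sketch of the contrast (the oscillating truth value is only an analogy for the failed substitution, not a formal semantics of the liar):

    # A recursive numeric definition that passes its convergence test...
    x = 1.0
    for _ in range(40):
        x = 1.0 + 1.0 / x          # x = 1 + 1/x -> the golden ratio
    print("numeric recursion settles at:", round(x, 6))   # ~1.618034

    # ...versus the liar-style substitution, where each expansion flips the value
    # and no limit exists.
    liar, history = True, []
    for _ in range(8):
        liar = not liar
        history.append(liar)
    print("liar substitutions oscillate:", history)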

The liar paradox is not a statement. It is nonsense. It can be neither true nor false.



As I've mentioned before, I should not be able to outflank the whole of professional philosophy. Their combined brainpower should find all the solutions my single brain can, just by chance. Their mistakes alone should outweigh my contribution, even if they had systematic bias against it.

Considering that both Godel's incompleteness theorems and the halting problem proof depend on these kinds of non-resolvable statements, I expected scholars to at least address the objection in passing. I expected to find I took the idea more seriously, not that I am apparently the first it has occurred to.

Explaining this bungle demands some strange ideas. Do I have superhuman insight, or are professional philosophers capable of seeing the real solution well enough to avoid it?