I've begun to notice that a person's philosophy is almost entirely determined by their feelings, feelings which rarely or never change after early childhood. (The concept is used in the mathematical sense; we don't know what happens in early childhood.)
It would appear that a person adopts the philosophy that most closely matches these feelings.
The match could also be to some other internal, non-logical criterion.
In either case, it would appear that logic is better used to expand the person's existing philosophy, rather than attempting to replace it.
So I would like to do an experiment. Tell me a little bit about your philosophy, and about a problem it has. I will attempt to find a solution to that problem that is acceptable to you.
I believe this is a win-win proposition for you - if I fail, you will at least understand your problem better, while if I succeed, you will have a solution.
Still, this is a brand new technique. It will most likely have problems and I won't succeed on the first try. For example, I may simply try to adapt my philosophy to your problem, which would defeat the purpose.
Of course I'm going to do this anyway, and tell you about it. You should know what I think of a problem, so that you can put what I eventually find for you in context.
I will not attempt to convince you either way. I will only construct a solution, which you may accept or reject on the merits you ascribe to it. Counterpoints will be taken as a request for refinement.
Like all technology, philosophy is a tool that should serve the user, not the other way around.
Ask not what you can do for philosophy; ask what philosophy can do for you.
11 comments:
I can't remember who said it, but someone put it well when they said something like "our precepts, while not necessarily constructed by reason, should nevertheless be defensible by reason".
Be more exacting. What do you mean by my "philosophy"? Moral philosophy, ethical, metaphysical/spiritual, existential, epistemological? Or do you mean just a general life philosophy?
Give me a specific realm of philosophy to ask you about (I'm sure I've run into nuances in all of them, heh).
I agree with Peter, that you need to be a bit more specific about what you mean by "philosophy". Also, you need a better word for "feeling". My feelings for peas, for example, have changed dramatically since I was a child. Maybe "temperament" is what you were looking for.
To Jose Gosdin: What you should do is start seeing other people. You can't see it now, but if she can treat you like that, then she is definitely bad news. In a couple of years you'll look back and wonder what the heck you saw in her, other than a shared taste in burritos. The faster you move on, the better your life will be.
Peter;
Honestly, just pick whatever definition you like, then give me a problem you personally would like solved.
Dan;
Yes, I could more accurately describe the preamble to my experiment. Yet, you seem to know what I'm talking about anyway, so what's the problem?
I shall ask a question pertaining to the very issue this post addresses!
The philosophical ideals which I see constructed, and which I construct myself, and which seem to be constructed by sound reasoning, are often at odds with what I otherwise believe should be the case.
Rigor, and moral courage, would seem to require that we first develop some sound argument as to the way a life should be lived, and then live our lives by those ideals.
And yet, what we (and I am certain that it is not just I who do this) often do is attempt to construct a sound argument which reinforces the platform that we already believe is correct, or that we want to be correct.
For instance, often I hear moral arguments which are countered to the effect of "But then you've just justified X, you've said that X is acceptable! Obviously, then, your system of morals is flawed."
An assumption of the truth of some statement X is present, and because a "sound" argument contradicts the truth of X, the argument must not be sound.
And yet, intellectual honesty demands that if the argument is coherent and sound, and if it disproves X, then X must not really be true... and yet, it is so tempting.
It is philosophically unjustifiable to say "You have made a sound argument, but the conclusion is absurd! We will not accept it." But it seems personally unjustifiable to admit that the conclusion is not absurd, that it is actually true.
It's quite different for empirical facts. Rational people would have a really hard time saying "I really need the Earth to be flat... let me come up with a deductive argument, so that I can prove that and be able to defend this platform."
But philosophical arguments are different - sometimes they are accepted as true, because they are overwhelmingly convincing, and sometimes they are dismissed, because they conflict with background knowledge, prior assumptions, "common sense", etc.
For instance, your proof that consciousness is non-physical is absurd. It is easier for me to know that you have used fallacious reasoning than to actually spot the fallacy itself. I'm sure that I could, though, if I went to the trouble of deconstructing the argument and examining it rigorously.
I recently witnessed an argument to the effect that, if you are literally dying from dehydration, and I have an abundance of water, I am under no moral obligation to ease your thirst - to save your life. Oh yes, the author wrote, you certainly have a "right to struggle" for water, a right to try to convince me to give it to you; but I have a right to let you thirst to death.
I read the author's argument - it was actually a valid extension of "property rights". I didn't think twice about it: the guy is an asshole, he doesn't understand morality, he probably deserves to starve to death himself. I don't really care if his argument was sound.
This was a long comment, but the issue is an essential nuance of my life philosophy. When is it acceptable to dismiss an argument outright, uncritically and uncharitably?
"The philosophical ideals which I see constructed, and which I construct myself, and which seem to be constructed by sound reasoning, are often at odds with what I otherwise believe should be the case."
You should more precisely define 'otherwise believe.' Are you talking about specific bits of data? Or is it an explanation of some data that you reached by non-logical means?
"For instance, often I hear moral arguments which are countered to the effect of 'But then you've just justified X, you've said that X is acceptable! Obviously, then, your system of morals is flawed.'"
You can repair this argument by requiring that the moral system be internally consistent - it's likely that it can both justify and condemn X.
If not, look for facts as yet unconsidered, which may contradict the system's basic axioms.
If you still can't find a reason to condemn X, then if you haven't made a mistake, X isn't wrong.
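To spell out why internal consistency does the work here, a minimal sketch in propositional terms (the notation is mine, not the commenter's): an inconsistent system doesn't just justify X, it justifies everything.

1. Permitted(X) (derivable from the axioms)
2. ¬Permitted(X) (also derivable, since the system is inconsistent)
3. Permitted(X) ∨ Q (from 1, disjunction introduction; Q is any claim whatsoever)
4. Q (from 2 and 3, disjunctive syllogism)

So "you've just justified X" only counts as an objection once the system is shown to be consistent; until then, a derivation of X is exactly as cheap as a derivation of anything else.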
"It's quite different for empirical facts. Rational people would have a really hard time saying 'I really need the Earth to be flat... let me come up with a deductive argument, so that I can prove that and be able to defend this platform.'"
Again, we can repair this argument. Assume I have a desirable outcome that requires that I take the Earth to be flat. That means I have an argument that I should act as if the Earth were flat, for some particular purpose, not that I should believe it is. I hope that later I'll find out the real reason I had to act so, but it isn't necessary.
"But philosophical arguments are different - sometimes they are accepted as true, because they are overwhelmingly convincing, and sometimes they are dismissed, because they conflict with background knowledge, prior assumptions, 'common sense', etc."
What this means on its face is that non-logic sometimes trumps logic. Since physics is purely logical, and cannot make up non-logical things, this gets interesting.
"For instance, your proof that consciousness is non-physical is absurd."
Unfortunately it is only intricate, and generally a parallel rather than a serial argument. I've listed the argument - it should be easy to see if I've used a fallacious form. The structure is resistant to disproving the premises. The only real attack is to show a possibility I haven't considered, and no one has been able to do so.
"I read the author's argument - it was actually a valid extension of 'property rights'. I didn't think twice about it: the guy is an asshole, he doesn't understand morality, he probably deserves to starve to death himself. I don't really care if his argument was sound."
If you unbundle the hard moral obligations from aesthetic or social moral obligations, again we can repair the argument. (Really we need two words for 'moral' here.)
Assuming you can coherently define 'deserve' (I can't) then yes, he does deserve to starve, like so. He has an aesthetic moral obligation to give water. Similarly, we have the hard moral right to ostracize or boycott him if he does not, which is where the social obligation comes in.
"This was a long comment, but the issue is an essential nuance of my life philosophy. When is it acceptable to dismiss an argument outright, uncritically and uncharitably?"
When you cannot repair it, which is to say never, because repair requires both criticism and charity. Another criterion is the converse of the flat-Earth repair - if it specifically breaks your goals instead of allowing them. This requires experiment, however, and thus also means 'never.'
Nonetheless, it is impossible in practice to examine every argument you come across. (This is an example of not following philosophy because it breaks a goal. Also, it's an example of facts outside the purview of the original theory. Because life is so intricate, it's hard to come up with really useful general principles - and if they're not useful they're probably not correct.)
This is why, in this post, I suggested solving a problem instead of doing philosophy the usual way. Instead of having to evaluate every claim you come across, you can use philosophy as you use all other technology - to make your life easier. To solve problems as they come up, rather than to try to know 'the truth' at all times.
Hopefully I've also managed to suggest a few things that I haven't said outright.
I imagine a flock of professional thinkers, each with a different blurb showing their basic axioms and other crucial information. You would get a philosopher like you get a family doctor or a therapist.
Instead of a therapist specializing in children's issues, there would be philosophers specializing in schools, like a Christian philosopher. You would pick one that's not going to challenge things you won't accept challenges to, like axioms.
For your curiosity, I have solved it in an entirely different way. I have trained my instincts to be able to tell an argument that leads to truth from one that leads to ridiculousness, and also I've learned to feel a self-contradiction the way one feels a dangerous situation. Unfortunately I'm not sure exactly how I did this, nor how I'm supposed to convince anyone else that I've in fact done so.
"Unfortunately it is only intricate, and generally a parallel rather than serial argument. I've listed the argument - it should be easy to see if I've used a fallacious form. The structure is resistant to disproving the premises. The only real attack is to show a possibility I haven't considered, and no one has be able to do so."
I didn't mean "fallacious" as in a fallacious form of argument. I was using the term informally, as one might refer to any argument that fails to convince.
But it occurs to me that you haven't really used a deductive form at all. You call it a "parallel" form, and I can't really imagine what that means (the premises, rather than being connected to each other, simply converge on the conclusion?).
In any case, maybe you could explain how disproving any of the premises (and I believe this can be done) wouldn't kill the argument? Or reconstruct it in some sort of standard (valid) form?
It's parallel because I have to eliminate many remaining possibilities, which don't directly lead to the next possibility to be eliminated.
Yet, the final conclusion depends on every one of the eliminated possibilities - to realize it, I have to hold all of the impossibilities in my mind at once. It follows from many separate premises, rather than from one or two related premises.
Because of the way I've constructed it, contradicting any of the premises leads to the general conclusion regardless, though for different reasons, and thus with different consequences.
For instance, if it turns out that consciousness is just some kind of equation, like the wave equation, then consciousness still isn't physical. Specifically, qualia do not follow from it, which means that qualia are still outside physics, even though all of consciousness's actions and effects would now be predictable through the model.
This is why it's called the 'hard' problem.
Similarly, I attempt to avoid nonphysical consciousness at every possibility, but inevitably fail.
If I haven't used a deductive form, I would call that fallacious, or at least dishonest.
Well, say for instance the following premises were incorrect:
# There is no action you must experience to perform.
# There is no action you could perform better by experiencing it.
# Ergo, consciousness cannot physically do anything.
How would the conclusion be maintained?
Okay, so let's assume there's some action that you must experience to perform.
What that literally means is that there's a conscious equation, like the wave equation, that I mentioned above.
The difference is that, by assumption, we can somehow prove that you cannot execute this equation except by experiencing it.
So what we have is a magical equation. This equation, and not other equations, has a 'consciousness' constant, which is not zero or something. This number is a magic number that makes things conscious because we say so.
There's no actual difference between this equation and other equations other than this magic number, plus when we stick it in our pocket calculators, the calculator crashes. It's not conscious and can't do the computation. The electrons in the circuit 'know' not to actually stay in the circuits when we try to compute it non-consciously.
Alternatively, consciousness could be not a number, but an operator, like plus (+) or d/dy. Analogous to being not an atom, but a force, an interaction between atoms.
So we discover a new operator, the consciousness operator.
Unfortunately this still means we have magic numbers, because the equation would still be of the general form Ax+By+C=0. That C, or my favorite, that zero, would be a conscious zero, because we used the special 'conscious' plus sign to get to it. Since we can add to C or to zero to get any number (it's arbitrary!), this would literally mean one of two things (sketched below):
A. We now have to distinguish 'conscious' numbers from 'unconscious' numbers, to keep track of them. Would we have to generalize complex algebra into x+iy+&z? (There's no way for algebra to generalize like this.) Perhaps they'd behave exactly like normal numbers until you tried to operate with a normal number, and then we'd get madness. Also, this would essentially be a new force that can 'tell' that it's acting in a brain. Those brain-electrons would know not to act this way outside brains. (There's probably no room for a new force, either.)
B. All numbers are conscious. Which naturally means everything is conscious. Stuff for everyone!
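To make the arbitrariness explicit (the symbol ⊕, standing in for the 'conscious plus', is mine, not part of the original argument): if ⊕ agrees with ordinary + in value, then for any number n,

n = C ⊕ (n - C)

so every number is one conscious operation away from C, which is option B. Blocking that spread requires ⊕ to disagree with + in value somewhere, which is the crashing-calculator madness of option A.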
>
So instead let's assume there's an action you can perform better by experiencing it.
Again, we have an equation of motion where one of the variables is input from consciousness. This variable makes the whole equation more efficient or something when it's not zero.
So we build two robots who, for the sake of argument, play catch. They are identical, except one is conscious. It catches the ball more often. See, because it can 'feel' the ball, not just compute the trajectory, it's more efficient. It would mean it's somehow capable of gathering information it cannot gather, like eddies in the air that nudge the ball. Or it can watch its footing while not looking at the ground. Something like that.
Alternatively we have two circuits, one of which is conscious. It computes faster. Its electrons can 'tell' it's conscious and thus feel less resistance, or have to perform fewer steps, or see some kind of Lorentz contraction.
>
I hope I was clear about the converse assumption: that we have an equation, which we can run like the wave equation, and produce consciousness.
This is simply not a solution. It passes the buck, but the buck's still around.
Unless, of course, the equation happens to be the equation of nondeterminism, or something else that produces a causal hole in physics. Then we have a qualitative difference on which to base consciousness. But also then my conclusion is the same.
The fact is, the very existence of consciousness necessarily means that it isn't physical. The only question is how, exactly, it manifests.
"So we build two robots who, for the sake of argument, play catch. They are identical, except one is conscious."
Not much time, but I did want to point out a flaw here. You're using an implicit assumption that consciousness is nonphysical (which is what you're trying to show).
If these two robots were physically identical, and consciousness is physical, then either both of them would be conscious or neither of them would.
If they were physically identical, but only one of them were conscious, then of course consciousness would be nonphysical - you've set up an example where it is a necessity.
If consciousness is physical, then the robots cannot be identical. If they are not identical, then the places where one does his computations and the other does his thinking are structurally different. And, if they are structurally different, then why must the one which is set up to be conscious necessarily be inferior to the "unconscious" (non?) one?
To be clear, I do imagine that the one robot has a small brown pack attached at the shoulder, letting it experience the computation in the way that our cerebral cortex does for us.
I did use the word 'except' in my descriptions of identical.
So, assume consciousness is physical. Thus, it is an event with causes and effects. "Causes" can always be seen as costs.
Thus, for consciousness to evolve, plus be retained and expanded, it must have specifically positive effects that outweigh the costs.
Thus I can safely assume that consciousness is superior in some way to unconsciousness.
The thing is we cannot assume any particular benefit, as memory, emotion, attention, and so on have been ruled out. (See my post on the cognitive scientists for a link.)
That, plus the fact that if we do assume a particular benefit, such as better hand-eye coordination, all we have done is assign a particular equation the magic of consciousness. The only evidence to verify this assertion is consciousness itself, which is a test we've already established is useless.
I usually mention here how it also can't be an epiphenomenon, but you've done that for me.
The gedanken experiment is to attempt to discover what possible difference, in particular or in kind, the conscious robot can have, but all we discover, again, is that it can have no difference.
Unless you can think of something, of course. Someone suggested that 'either free or confers some advantage' is a false dichotomy. I don't think he managed to back this point up, but I certainly hadn't considered it before.
Yet, we know it does have a difference, because it exists, and our assumption that it is physical is the only one left to challenge.
Also, I can back this up. Happiness is not numerical and can't be totaled or averaged. Thus it isn't physical. (It's not a hormone, for instance, nor a particular pattern of brain activity.)
Examining the mere addition paradox, I can show that the only unsupported assertion is that happiness is numerical. I won't do so here, to save space. (Sadly, they no longer mention that it becomes circular, with B worse than A, which is worse than A+...)
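For readers who haven't met the paradox, here is its standard shape, with illustrative numbers that are mine, not part of the comment:

A: 100 people at happiness 10 (total 1000, average 10.0)
A+: those 100, plus 100 more at happiness 5 (total 1500, average 7.5)
B: 200 people at happiness 8 (total 1600, average 8.0)

Mere addition says A+ is no worse than A; total, average, and equality all say B is better than A+; yet intuition says B is worse than A. Iterating the step leads to the repugnant conclusion. Every comparison in the chain presupposes that happiness is a quantity you can total and average, which is exactly the assertion in question.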