(Spoilers: morality is binary, the trolley problem as normally stated is meaningless, letting die must be qualitatively different, and I think the only honest answer to the trolley problem is ignorance. I don't know, and neither do you, which means anyone who thinks they have an answer is wrong about something. It should be about playing God, which is a really tough problem.)
Spandrell put the trolley problem in front of me, and therefore I have to try to solve it again.
This is additionally a warm-up for the next post I'll write.
I started with a couple new-to-me thoughts, that it's about playing God and it's about deciding who deserves to live. I googled the former, to see how standard it was, but instead I found a superb illustration of the standard analysis. Which will hopefully help clarify my explanation of how broken the approach is.
"Does an assessment of whether some act is morally worse than another act, settle the question of what it is morally permissible for an agent to do?"

The offered answers: yes, no.
My answer: if your morality ever puts you in a position where you must choose the least-worst, your morality is broken.
I found this by asking myself, "What if morality is binary? Actions are wrong or not-wrong and there's no meaningful magnitude?" What if acts are either evil or not-evil and that's it for morality?
Put this way, it became clear to me that even if your morality does have gradations, it should approximately boil down to two categories: 'not do' and 'do.' Commit an act from the former category and you're evil; from the latter, not. Which means every morality functionally reduces to binary morality.
Which means this whole question is meaningless - there is no such thing as morally worse.
Verification: put another way, neither circumstance nor human agency can force you to perform an evil act, because you cannot be held responsible for circumstance or other people. Therefore, there must always be a pure moral way out of any situation. If your morality disagrees, your morality is broken and you probably want to fix that.
So yes, it settles the question: it settles it as 'both are impermissible.'
"Important: You don't need to worry that if you respond "Yes" you're going to get caught out because of some trickery to do with bad consequences. In other words, if you respond "Yes", you're not going to get into difficulties because of thoughts about situations such as where inflicting more rather than less pain, for example, avoids some horrible outcome, because one can quite consistently argue that in such a situation inflicting less pain would be the morally worse action (and, therefore, to be avoided), precisely because it would have horrible consequences."

Having said the above, this seriously revolts me. They were intuitively aware that they were asking a broken question, that they were engaged in sophistry, then salved their conscience by avoiding the real problem. That their analysis on the off-topic point is correct only makes it worse - they could have got the real analysis right.
Though I require myself to admit that avoiding this kind of sophistry can be difficult. I'd share but I want to minimize the tangent. Anyway, I demand that appointed philosophers know or figure it out without me having to explain.
"If you are confronted by a situation where you will inevitably cause pain, but you can choose whether it's a small amount or a large amount, then, it does rather seem as if the fact that it is morally worse to inflict a large amount of pain as opposed to a small amount (assuming it is a fact) settles what it is morally permissible to do - namely, only inflict a small amount of pain."

Err...no? If this quiz were a person they'd have earned a facepalm.
Your first problem is that, without coercion, you should never get into a situation where you must inevitably cause pain. And if you're coerced - by circumstance or another human - then morally it's all on their shoulders.
Second, they haven't established that inflicting pain is morally bad. Sure we generally agree - general agreement is not a proof. 'Rather seem' is not a proof! We could all be wrong. That's indeed the point. We don't understand the trolley problem. We must be wrong about something. The issue may well lie in the morality of inflicting pain.
Although I didn't find anything when I checked pain per se, I did find a problem with the word 'inflict.' As the trolley problem is identical here, I'll hold off and do them both at once.
"Is killing five innocent people morally worse than killing one innocent person?"

I quote this to note that 'kill' has the same problem as 'inflict.'
"Without these organs, his five patients will definitely die; or, to put this another way, it will turn out [the surgeon accidentally] killed them by administering the chemical."

Depends on whether you think this is negligence or luck. I, at least, can't prove it either way.
"Assuming (a) that the backpacker doesn't consent to giving up his life to save five other people, (b) that the lives of the five people will be saved if, and only if, the organs are transplanted, and (c) that nobody will ever find out what the surgeon has done, is it morally permissible for the surgeon to take matters into his own hands, and operate?"

Is murder ever morally permissible? No.
But that's the question, isn't it? Is it murder to coerce someone out of their life to save other lives which are at risk through no fault of their own?
"An interesting thing about this class of killings is that in particular circumstances it represents a challenge to what most people will take to be a moral rule that killing more people is worse than killing fewer people."

That's not the rule. This is formal equivocation.
The rule is murder is always wrong.
But letting die can't be wrong. This can be proven by taking the limit. Kel thinks that
"Yet isn't walking away from making a choice, making a choice? Choosing not to pull the lever when you had the power to do otherwise is making a decision that would kill 5. If you could have done otherwise, you're responsible either way."

So you must save a child from a burning building, at the cost of your dog.
But where does this end? I could (I'm told) save a child's life in Africa by paying for their food. Am I responsible for their death?
I could also train myself in firearms and patrol the streets of the Bronx, defending the innocent. Am I responsible for those murders, because I choose not to?
Perhaps I could save yet others, but I don't know how. Am I negligent for not aggressively searching out these methods and victims?
Take the limit, you find it's absurd. Where's the line? What is the qualitative moral difference between saving someone emotionally nearby and someone emotionally distant? By this logic, I'm responsible for just about everything. (Especially in the minds of those who think I could start a social movement and change the world.)
Take the limit, and I'm God. So are you. We're all fully responsible for everything. This logic sucks and I'm taking it back to the store.
"It seems, then, that the proposition that killing more people is morally worse than killing fewer people must be false."

And indeed it is. Because of the equivocation.
At first, 'kill' is used as 'murder.' Obviously doing more wrong can't be better than doing less wrong.
Perhaps this would be best explained by reversing the moral valence. So let's multiply by negative one...
"Is it more wrong to euthanize five innocent, consenting agents, or to euthanize one innocent, consenting agent?"
Answer: it depends on whether you think euthanasia is murder or not, doesn't it?
And so the trolley problem - or the murderous surgeon problem - entirely depends on whether you think intentionally killing someone is sufficient to define murder.
But that's exactly the problem the trolley is supposed to address.
Principle: self-referencing statements are invalid. Either they converge, which means they're circular and thus meaningless, or they don't converge, in which case they're paradoxes, and thus meaningless.
Or equivalently, inflicting pain and being responsible for the pain inflicted are different. The question under consideration is which it is.
Luckily the trolley can be righted and put back on its tracks by making it about something else. It seems to me that most honest victims of the problem try to do that.
Kel, for example, tries to make it about whether refraining from action makes you responsible. This, at least, is an answerable problem.
I've seen it transferred to a problem about the worth of human lives. Again, very answerable.
I think a better one is the playing God angle. If you're at the trolley, or you're a surgeon, you decide who lives and who dies. Which means you should, by definition, let live who deserves to live. But doesn't that make it clear that mere innocence cannot be the sole consideration? If they're innocent, all six deserve to live. This is what makes it playing God.
The playing God problem is hard. I can't even find an entry on it.
Luckily, in real life the playing God situation is very rare. The moral solution, as I mentioned previously, is simply to not get into that situation in the first place. Nevertheless, through mistake or luck, it does occur. As a practical matter, some humans must play God. It is therefore necessary for the philosopher to at least look for a solution.
I like the surgeon formulation, because the trolley people are faceless and thus seem identical in character. The surgeon has patients and a backpacker, giving some hint of personality. Which makes clear that the world will be very different, depending on who the surgeon kills. Humans aren't fungible. To take the limit, what if the five patients are all Down's victims and the backpacker is Norman Borlaug? (Assuming Norman's legend somewhat matches his actual actions.) Thing is, there's no reasonable way to know.
I suppose you could play statistics, and say they all have an equal chance at being Borlaug, as far as the surgeon knows. But as long as you're playing statistics, then I might find that cultures that let the surgeon kill the backpacker have more killings than cultures that call it murder.
More importantly, I may be able to show a logical contradiction in one of those rule sets...
This is true even if 'deserve' is meaningless, which I think is a reasonable response. Let it not be said that I also ask broken questions. If 'deserve' is meaningless, no one deserves to live. Or to die. The entire problem is rendered meaningless. That said, I still want a thing which plays the role of 'deserve,' and I'll invent it if I can't find it.