mindstalk wrote: Your attempt to apply "egoism" to me doesn't seem to match how the term is defined in Wikipedia or the Stanford Encyclopedia of Philosophy.
I'm not necessarily calling you an egoist; I'm just saying that if your answer to this or that question were such and such, you would be an egoist. Anyway... to quote Wikipedia on ethical egoism:
Wikipedia wrote: Three different formulations of ethical egoism have been identified: individual, personal and universal. An individual ethical egoist would hold that all people should do whatever benefits him; a personal ethical egoist, that he should act in his own self-interest, but makes no claims about what anyone else ought to do, while universal ethical egoists argue that everyone should act in ways that are in their own interest.
I'm using it in at least the second sense, possibly also the first sense (since the claims of the second are included amongst the claims of the first).
Oh, and in regard to "egoistic empiricist", I'm using "egoism" in the sense synonymous with solipsism, which is sometimes called egoism.
At any rate, I would say dislikes don't need to be justified, they just are.
But your philosophical opinion on ethics *just is* that reflective approval or disapproval of your (and other people's) inclinations. Think about yourself in the third person and ask yourself two questions: "does this person [meaning yourself] like or dislike x [for some x]?", and then, "how do I feel about this person [yourself] liking or disliking x?" If you approve of yourself liking or disliking x, then you feel that those preferences are justified (which is just another way of saying "correct as far as you can tell"); if you disapprove, then you feel that your preferences are unjustified. How you approve or disapprove of your and other people's preferences defines what your ethical position is: if you approve of people (including yourself) caring about other people and disapprove of the opposite, then you're an altruist of some variety, and if not, you're an egoist. If you really don't care one way or another, then you're a moral nihilist.
I'm asking "should or should not x be?" and you're replying "x is", which is the answer to a different question entirely. If you answer "I don't care", then you're a nihilist. If you answer "I'd prefer x to be and I'm ok with that, though I can't explain why or convince you to agree", then you're an x-ist.
Reality is what we can touch, or what can touch us: what can hurt us, what can eat us, what we can eat or mate with, the parents who hold us and the children we (well, women) give birth to. Sight and sound are just a heads-up on what's out there to touch; the yogurt you see is there if you can reach out and eat it. The fact that what you see probably indicates the presence of yogurt is supported by unconscious induction based on previous experience with things that looked like yogurt and turned out to be yogurt.
But that's just pushing the question back a stage; saying that sight is justified by its instrumental connection to touch is just like when you say that altruistic behavior is justified by its propensity to help accomplish your own goals or prolong your life or pass on your genes. Why *should* I believe my sense of touch any more than I *should* believe my sense of sight? I know, I know: because I'll die if I don't. Why should I care if I die? Obviously, I *do* care if I die, and I *do* believe my sense of touch; but then, I *do* believe my eyes and I *do* care about other people for their own sakes. The latter two do not need to be justified in terms of the former two; they are all equally axiomatic to me.
And since someone who disagrees with one's axioms cannot be persuaded by reasoning appealing to them, anyone who disagrees with me on those axioms appears to me to be irrational, and thus insane (by my rational = sane definition of sanity). Now, how to tell whether my axioms are the right ones, or whether I'm the crazy person, is that intractable philosophical problem I've been going on about this whole thread. We *call* people who reason in a fundamentally different way than ourselves "irrational", but how can we tell if we're right in calling them that? Maybe we're the ones being irrational instead.
But it's not at all clear what sort of thing a universal moral truth would be, or what work it could do. Suppose someone comes up with some argument that actually proves some universal moral truth exists, and what it is. Why should I care? What relevance is it, sans enforcement? You'd apparently argue that I'd be insane for not caring about it, but that's a POV unique to you; I'd say that an idea of universal moral law, hanging in a vacuum, is incoherent and irrelevant.
I think there's another analogous case to be made with metaphysical issues here, which makes the ethical question seem not so troublesome in comparison.
We all have perceptions: sight, hearing, touch, taste, smell, etc. Most of us create mental models of a perspective-independent world based on these perceptions, extrapolated to what we think other people with other perspectives would perceive, and we call that model that we think our perceptions match up to "reality". Put another way: "reality" is what an all-sensing person (someone who could make absolutely every observation there was to make), who was inclined to believe his senses, would believe in. When we or others have perceptions that don't match that model, we call the things perceived "unreal". Particular statements capturing one part of reality are regular old factual truths.
Likewise, we all have appetites: hunger, thirst, lust, and more subtle emotional appetites too. Most of us (barring sociopaths) create mental models of a perspective-independent moral world based on those appetites, extrapolated to what we think other people with other perspectives would want, and we call that model "morality". Put another way: "morality" is what an omnipathic person (someone who experienced absolutely all appetites there were to be experienced), who was inclined to desire pleasant things and not unpleasant ones, would want. When we or others have appetites that don't match that model, we call the things desired "immoral". Particular statements capturing one part of morality are "moral truths".
Asking why moral truths should shape your desires, or equivalently, why you should desire what is moral, is like asking why facts should shape your beliefs, or why you should believe what is real. I expect that the latter two questions sound ridiculous to you. The former two sound equally ridiculous to me.
And I generalize (I prefer that to "universalize") my hedonism as well, and am also so concerned. But it's not like I made some conscious decision to generalize because it's the provably Right thing to do; rather, I have senses of empathy and fairness that induce the generalizing, or perhaps *are* the generalizing. And if someone just doesn't have those senses, then it's time for argument cum baculum magnum.
Then I would say you are in fact an altruist. I've been saying all along that I can't justify why you should care about other people, any more than I can justify why you should believe your senses; they're just axioms to me, and you can't form a rational argument about axioms, because rational arguments depend on shared axioms. But I still consider people who disagree with those axioms incorrect, regardless of whether I think I can prove that to them. So if you agree that other people are valuable for their own sakes - if you just feel that way, approve of yourself feeling that way, and would like others to feel that way, whether or not you think you could rationally convince others to feel that way - then you are an altruist, and we have no real disagreement. It's the axioms that make the philosophy; everything else is just making explicit what is implicit in the axioms.