2008-04-21 - Yeah, I'm okay wit dat ...

For talking about the plot, the art, the dialogue, the characters, the site, and the individual updates...
BloodHenge
Mage/Priest War Veteran
Posts: 932
Joined: August 20th, 2007, 3:09 am

Re: 2008-04-21 - Yeah, I'm okay wit dat ...

Post by BloodHenge »

Forrest wrote:Think back to my hypothetical robot with no imperative programming: why would it care about itself more than anybody else? It's not programmed to care about anything. And even if it is an intelligent fact-learning robot, what mere facts could cause it to care about something when there is nothing it intrinsically cares about for those facts to be relevant to? e.g. if it cares about itself, learning that X will damage it could cause it to care to avoid X, likewise if it cares about others and learns that X will harm others... but if it cares about nothing, no learning of facts will cause it to spontaneously care about something.
Come to think of it, this sounds like a pretty pointless robot, and I don't see much reason for anyone to care about it. Since it doesn't care about anything, there's no reason for it to share any of the information it collects or the deductions it makes. Of course, since it doesn't care about silence, nothing stops it from producing output, but since it doesn't care about veracity either, nothing guarantees that its output will be accurate or even understandable. So at best, we have a needlessly complex Magic 8-Ball. Everyone's probably better off with it dismantled for spare parts, including the robot.
To conduct any kind of argument, you have to have some broadly agreed-upon premises to argue from, otherwise you're just shouting disagreements at each other.
I think I had an even bigger problem: I was trying to argue from premises I disagreed with. I think we can both agree that wasn't really going anywhere. I think I pretty much have it worked out now...

Sane people lack cognitive and perceptual disorders. (That is, they have no difficulty understanding available information.)
Rational people have a clear understanding of causality. (That is, they have no difficulty forming a plan to achieve a desired result.)
Moral people desire broadly occurring intrinsic good. (This is probably the most complicated and least well-formed concept, but I hope you know roughly what I mean.)

Basically, I view insanity, irrationality, and immorality as three separate problems that may or may not share causes or cause one another. You and I would still expect sane, rational, moral people to behave within the same range, but for me, that description includes no redundancy. Also, when I describe someone as "insane", "irrational", or "immoral", it apparently provides more information than when you say it.
mindstalk
Typo-Seeking Missile
Posts: 916
Joined: November 9th, 2007, 10:05 am


Post by mindstalk »

I endorse that trichotomy.
Forrest
Finally, some love for the BJ!
Posts: 977
Joined: August 21st, 2007, 12:49 pm
Location: The Edge of the Earth


Post by Forrest »

mindstalk wrote:Pragmatically as in that's how I live. Philosophically I don't see a justification for moral truth, but I'll act as if liberal morality is in fact morality, and get exercised about slavery and torture and shit.
Hmm interesting. So what do you do about people who don't share those values? Try to convince them otherwise by appealing to something you hope you have in common, like some sense of empathy? And what if they don't have anything in common with you to appeal to; force them to adhere to your values (those being not enslaving and torturing people) if you can manage and afford to do so, resign yourself to futility otherwise?

I'm not disagreeing with that sequence of tactics, that's pretty much what I do as well. Where I'm going with this is: what about when very powerful people or groups who have different values than you apply those same tactics, and succeed in forcing you to adhere to their values? I assume you would not like this. However, reflecting upon your "I don't like this" feeling, looking upon yourself in the third person and forming an opinion about yourself having that feeling, there are three ways you can go. You can either say that you're not justified in your dislike (i.e. you don't really have any reason to dislike this), and really, you should just be ok with this, like my hypothetical walk-the-walk apathetic moral nihilist. You could say that you are justified in your dislike just because you dislike it and people shouldn't do things you dislike, in which case you are an egoist. Or, you can say that you are justified in your dislike because your values are right and theirs are wrong, in which case you are some sort of universalist.
AFAIK the justification for induction is indeed still an open problem. We use it nonetheless.
I'm not talking about the problem of induction but about justification for particular observations. As I sit here at my desk, I see a cup of yogurt next to me. If you were to ask me if there was a cup of yogurt next to me, I would say yes, and if you asked me how I know that, I'd say I see it, and if you asked me how I know that sight informs me of reality... there's not really any answer to that question. I don't know, but what else am I going to go on?
If you can't know anything for sure, you can't know "there really is no reality". Maybe there is a reality!
There's a difference between "we can't know whether anything is true or false" and "nothing is true or false". Applying this to moral theories, the former would yield a theory which says that there are universal moral truths, we just can't even be certain that what we think are universal moral truths really are. (That's like the moral equivalent of science: science is out there trying to find the truth, so it clearly holds there to be one; but at the same time it's always tentative about its conclusions as to what in particular is the truth.) But moral nihilism claims "there are no moral truths, nothing is really good or bad". Likewise, metaphysical nihilism would claim that there are no truths at all, and therefore there is no reality.
At any rate, there are consistencies in what we observe, and some of what we observe can hurt or please us, and the consistencies we call other people tend to agree with us about other observed consistencies, as well as doing things we can't predict or control, so questions about whether this is *really* reality are kind of irrelevant. It's reality for us. If people are hallucinations, they're hallucinations that do what they want and can hurt us, especially if we treat them like hallucinations.
I actually agree very much with this, and this is sort of the basis for my particular brand of phenomenalism:

(1) If there's no way to ever possibly tell the difference, even in theory, between two states of affairs, then they are the same state of affairs.

(2) There's no way to tell the difference between a solipsistic "world" and being a person in a world ontologically distinct from you like we all presume we are.

Therefore, since there's no difference between "you + your perceptions" and "you + your perceptions + external world", then

(3) this "external world" talk is meaningless. It's not that "there is no external world", it's "wtf are you talking about? there's this stuff I can see and hear and touch, what are you babbling about beyond that?".

Note, however, that this implicitly depends upon empiricism (that bit about telling the difference between this and that state of affairs). So if you're buying that world view - i.e. "who cares about an objective world and morality, just believe your eyes and do whatever feels good" - then you're an egoistic* empiricist/hedonist. And I'm pretty much there with you, except that I universalize my empiricism and my hedonism and concern myself with other people's observations and feelings, too.

([*] Please note that egoistic != egotistic. The former is a purely abstract theoretical term; the latter is an insult.)
Last edited by Forrest on May 23rd, 2008, 6:03 am, edited 2 times in total.

Post by Forrest »

BloodHenge wrote:Come to think of it, this sounds like a pretty pointless robot, and I don't see much reason for anyone to care about it. Since it doesn't care about anything, there's no reason for it to share any of the information it collects or the deductions it makes. Of course, since it doesn't care about silence, nothing stops it from producing output, but since it doesn't care about veracity either, nothing guarantees that its output will be accurate or even understandable. So at best, we have a needlessly complex Magic 8-Ball. Everyone's probably better off with it dismantled for spare parts, including the robot.
The usefulness of the robot is irrelevant; it's a thought experiment intended to make a point. A robot not programmed to care about anything cannot learn to care about anything. (Though it could of course be programmed to care about things).
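To make the thought experiment concrete, here's a toy sketch (the names and the deliberately silly example are mine, not a serious model of a mind): an agent that picks actions by scoring them with a utility function. Give it an empty utility function and no amount of newly learned facts ever produces a preference; ties are just broken arbitrarily.

```python
# Toy agent: rank actions by a utility function applied to known facts.
def choose(actions, facts, utility):
    """Pick the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: utility(a, facts))

# An agent programmed to care about self-preservation:
def cares_about_self(action, facts):
    return -1 if facts.get(action) == "damages me" else 0

# An agent with no imperative programming: every outcome is worth 0.
def cares_about_nothing(action, facts):
    return 0

facts = {"touch fire": "damages me", "wait": "nothing happens"}

print(choose(["touch fire", "wait"], facts, cares_about_self))
# -> "wait": the learned fact matters because it connects to something cared about.
print(choose(["touch fire", "wait"], facts, cares_about_nothing))
# -> "touch fire": arbitrary; max() just returns the first of the tied actions.
```

The point is that the preference lives entirely in the utility function; the facts on their own never do any motivational work.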
I think I had an even bigger problem: I was trying to argue from premises I disagreed with. I think we can both agree that wasn't really going anywhere. I think I pretty much have it worked out now...

Sane people lack cognitive and perceptual disorders. (That is, they have no difficulty understanding available information.)
Rational people have a clear understanding of causality. (That is, they have no difficulty forming a plan to achieve a desired result.)
Moral people desire broadly occurring intrinsic good. (This is probably the most complicated and least well-formed concept, but I hope you know roughly what I mean.)
Ok, that's very useful. I tried asking earlier if you meant something like this and the answer came back no; specifically, I asked if you accepted behavioral disorders as a form of insanity, which by your definition of sane above you must not.

I think your sane corresponds to about half of my sane/rational, and your moral corresponds to the other half of my sane/rational (though "moral" usually seems to have connotations of only interpersonal interaction, I'm using it, for lack of a better word, in a sense that includes merely bad but not wrong things like self-harm, so behavioral disorders like obsessions and compulsions fall under there too, not just things like sociopathy).

Your definition of rational seems unusually narrow; someone holding contradictory beliefs would not be irrational by your definition. (Reason and causality are not the same thing). But besides that, I'm comfortable with your definitions, though I still disagree with them. You're not a relativist, which is what I came here to argue about in the first place.

Post by BloodHenge »

Forrest wrote:and if you asked me how I know that sight informs me of reality... there's not really any answer to that question. I don't know, but what else am I going to go on?
Reproducibility isn't a bad standard for evaluating whether a method of observation informs the observer of reality. If you can look away and then look back and see the yogurt cup again, and if you can call someone else in from another room and he can see the yogurt cup, it's probably there. The more independent verification you get, the more likely it is that you have a valid method of observation. Vision has already been pretty rigorously tested in this manner.
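Put in rough Bayesian terms (toy numbers I made up, not actual reliability figures for human vision): if each independent check confirms the cup with some fixed hit rate and false-alarm rate, confidence climbs quickly with each confirmation.

```python
# Toy numbers only, to illustrate "more independent verification ->
# more confidence": each observer independently reports "yogurt"
# correctly 90% of the time when it's there, and falsely 10% of the
# time when it isn't. Start from a 50/50 prior and update.
def posterior_after(n_confirmations, prior=0.5, hit=0.9, false_alarm=0.1):
    p = prior
    for _ in range(n_confirmations):
        # Bayes' rule for one more independent "yes, it's there" report.
        p = (hit * p) / (hit * p + false_alarm * (1 - p))
    return p

for n in range(4):
    print(n, round(posterior_after(n), 4))
# Confidence after 0, 1, 2, 3 confirmations: 0.5, 0.9, 0.9878, 0.9986
```

The exact rates don't matter much; as long as a method beats chance and the checks are independent, repeated agreement drives the probability toward 1.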
Forrest wrote:([*] Please note that egoistic != egotistic. The former is a purely abstract theoretical term; the latter is an insult.)
A great many insults have legitimate descriptive power under the appropriate circumstances. I imagine that some people could rightly be described as "egotistic" by an impartial observer.

Post by mindstalk »

Forrest wrote:
mindstalk wrote:Pragmatically as in that's how I live. Philosophically I don't see a justification for moral truth, but I'll act as if liberal morality is in fact morality, and get exercised about slavery and torture and shit.
Hmm interesting. So what do you do about people who don't share those values? Try to convince them otherwise by appealing to something you hope you have in common, like some sense of empathy? And what if they don't have anything in common with you to appeal to; force them to adhere to your values (those being not enslaving and torturing people) if you can manage and afford to do so, resign yourself to futility otherwise?

I'm not disagreeing with that sequence of tactics, that's pretty much what I do as well. Where I'm going with this is: what about when very powerful people or groups who have different values than you apply those same tactics, and succeed in forcing you to adhere to their values? I assume you would not like this. However, reflecting upon your "I don't like this" feeling, looking upon yourself in the third person and forming an opinion about yourself having that feeling, there are three ways you can go. You can either say that you're not justified in your dislike (i.e. you don't really have any reason to dislike this), and really, you should just be ok with this, like my hypothetical walk-the-walk apathetic moral nihilist. You could say that you are justified in your dislike just because you dislike it and people shouldn't do things you dislike, in which case you are an egoist. Or, you can say that you are justified in your dislike because your values are right and theirs are wrong, in which case you are some sort of universalist.
Your attempt to apply "egoism" to me doesn't seem to match how the term is defined in Wikipedia or the Stanford Encyclopedia of Philosophy. At any rate, I would say dislikes don't need to be justified, they just are. I dislike being coerced and will resist it, and I expect many other people to dislike it as well, and hopefully resist. (Looking at the state of US civil liberties, not nearly enough...) And yes, I'll try various levels of appealing to common nature, values they hold that I might be able to exploit, and resistance/running away/giving in depending on power.
AFAIK the justification for induction is indeed still an open problem. We use it nonetheless.
I'm not talking about the problem of induction but about justification for particular observations. As I sit here at my desk, I see a cup of yogurt next to me. If you were to ask me if there was a cup of yogurt next to me, I would say yes, and if you asked me how I know that, I'd say I see it, and if you asked me how I know that sight informs me of reality... there's not really any answer to that question. I don't know, but what else am I going to go on?
Reality is what we can touch, or can touch us. What can hurt us, what can eat us, what we can eat, or mate with. The parents who hold us and the children we (well, women) give birth to. Sight and sound are just a heads-up on what's out there to touch; the yogurt you see is there if you can reach out and eat the yogurt. The fact that what you see probably indicates the presence of yogurt is supported by unconscious induction based on previous experience with things that looked like yogurt and turned out to be yogurt.
There's a difference between "we can't know whether anything is true or false" and "nothing is true or false". Applying this to moral theories, the former would yield a theory which says that there are universal moral truths, we just can't even be certain that what we think are universal moral truths really are. (That's like the moral equivalent of science: science is out there trying to find the truth, so it clearly holds there to be one; but at the same time it's always tentative about its conclusions as to what in particular is the truth.) But moral nihilism claims "there are no moral truths, nothing is really good or bad". Likewise, metaphysical nihilism would claim that there are no truths at all, and therefore there is no reality.
But it's not at all clear what sort of thing a universal moral truth would be, or what work it could do. Suppose someone comes up with some argument that actually proves some universal moral truth exists, and what it is. Why should I care? What relevance is it, sans enforcement? You'd apparently argue that I'd be insane for not caring about it, but that's a POV unique to you; I'd say that an idea of universal moral law, hanging in a vacuum, is incoherent and irrelevant.

It's a cousin of the Euthyphro dilemma, I think.
So if you're buying that world view - i.e. "who cares about an objective world and morality, just believe your eyes and do whatever feels good" - then you're an egoistic* empiricist/hedonist. And I'm pretty much there with you, except that I universalize my empiricism and my hedonism and concern myself with other people's observations and feelings, too.
And I generalize (prefer that to universalize) my hedonism as well and am also so concerned. But it's not like I made some conscious decision to generalize because it's the provably Right thing to do; rather I have senses of empathy and fairness that induce the generalizing, or perhaps *are* the generalizing. And if someone just doesn't have those senses then it's time for argument cum baculum magnum [*]

[*] with a big stick

Post by Forrest »

mindstalk wrote:Your attempt to apply "egoism" to me doesn't seem to match how the term is defined in Wikipedia or the Stanford Encyclopedia of Philosophy.
I'm not necessarily calling you an egoist, I'm just saying that if your answer to this or that question was such and such, you would be an egoist. Anyway... to quote Wikipedia on ethical egoism:
Wikipedia wrote:Three different formulations of ethical egoism have been identified: individual, personal and universal. An individual ethical egoist would hold that all people should do whatever benefits him; A personal ethical egoist, that he should act in his own self-interest, but makes no claims about what anyone else ought to do, while universal ethical egoists argue that everyone should act in ways that are in their own interest.
I'm using it in at least the second sense, possibly also the first sense (since the claims of the second are included amongst the claims of the first).

Oh, and in regards to "egoistic empiricist", I'm using it in the sense synonymous with solipsism, which is sometimes called egoism.
At any rate, I would say dislikes don't need to be justified, they just are.
But your philosophical opinion on ethics just is that reflective approval or disapproval of your (and other people's) inclinations. Think about yourself in the third person and ask yourself two questions: "does this person [meaning yourself] like or dislike x [for some x]?", and then, "how do I feel about this person [yourself] liking or disliking x?" If you approve of yourself liking or disliking x, then you feel that those preferences are justified (which is just another way of saying "correct as far as you could tell"); if you disapprove, then you feel that your preferences are unjustified. How you approve or disapprove of your and other people's preferences defines what your ethical position is; if you approve of people (including yourself) caring about other people and disapprove the opposite, then you're an altruist of some variety, and if not, you're an egoist. If you really don't care one way or another, then you're a moral nihilist.

I'm asking "should or should not x be?" and you're replying "x is", which is the answer to a different question entirely. If you answer "I don't care", then you're a nihilist. If you answer "I'd prefer x to be and I'm ok with that, though I can't explain why or convince you to agree", then you're an x-ist.
Reality is what we can touch, or can touch us. What can hurt us, what can eat us, what we can eat, or mate with. The parents who hold us and the children we (well, women) give birth to. Sight and sound are just a heads-up on what's out there to touch; the yogurt you see is there if you can reach out and eat the yogurt. The fact that what you see probably indicates the presence of yogurt is supported by unconscious induction based on previous experience with things that looked like yogurt and turned out to be yogurt.
But that's just pushing the question back a stage; saying that sight is justified by its instrumental connection to touch is just like when you say that altruistic behavior is justified by its propensity to help accomplish your own goals or prolong your life or pass on your genes. Why should I believe my sense of touch any more than I should believe my sense of sight? I know, I know, because I'll die if I don't. Why should I care if I die? Obviously, I do care if I die, and I do believe my sense of touch; but then, I do believe my eyes and I do care about other people for their own sakes. The latter two do not need to be justified in terms of the former two; they are all equally axiomatic to me.

And since someone who disagrees with one's axioms cannot be persuaded by reasoning appealing to them, anyone who disagrees with me on those axioms appears to me to be irrational, and thus insane (by my rational = sane definition of sanity). Now, how to tell whether my axioms are the right ones, or whether I'm the crazy person, is that intractable philosophical problem I've been going on about this whole thread. We call people who reason in a fundamentally different way than ourselves "irrational", but how can we tell if we're right in calling them thus? Maybe we're the ones being irrational instead.
But it's not at all clear what sort of thing a universal moral truth would be, or what work it could do. Suppose someone comes up with some argument that actually proves some universal moral truth exists, and what it is. Why should I care? What relevance is it, sans enforcement? You'd apparently argue that I'd be insane for not caring about it, but that's a POV unique to you; I'd say that an idea of universal moral law, hanging in a vacuum, is incoherent and irrelevant.
I think there's another analogous case to be made with metaphysical issues here, which makes the ethical question seem not so troublesome in comparison.

We all have perceptions; sight, hearing, touch, taste, smell, etc. Most of us create mental models of a perspective-independent world based on these perceptions, extrapolated to what we think other people with other perspectives would perceive, and we call that model that we think our perceptions match up to "reality". Put another way: "reality" is what an all-sensing person (someone who could make absolutely every observation there was to make), who was inclined to believe his senses, would believe in. When ourselves or others have perceptions that don't match that model, we call those things they or we perceive "unreal". Particular statements capturing one part of reality are regular old factual truths.

Likewise we all have appetites; hunger, thirst, lust, and more subtle emotional appetites too. Most of us (barring sociopaths) create mental models of a perspective independent moral world based on those appetites, extrapolated to what we think other people with other perspectives would want, and we call that model "morality". Put another way: "morality" is what an omnipathic person (someone who experienced absolutely all appetites there were to be experienced), who was inclined to desire pleasant things and not unpleasant ones, would want. When ourselves or others have appetites that don't match that model we call those things they or we desire "immoral". Particular statements capturing one part of morality are "moral truths".

Asking why moral truths should shape your desires, or equivalently, why you should desire what is moral, is like asking why facts should shape your beliefs, or why you should believe what is real. I expect that the latter two questions sound ridiculous to you. The former two sound equally ridiculous to me.
And I generalize (prefer that to universalize) my hedonism as well and am also so concerned. But it's not like I made some conscious decision to generalize because it's the provably Right thing to do; rather I have senses of empathy and fairness that induce the generalizing, or perhaps *are* the generalizing. And if someone just doesn't have those senses then it's time for argument cum baculum magnum
Then I would say you are in fact an altruist. I've been saying all along that I can't justify why you should care about other people, any more than I can justify why you should believe your senses; they're just axioms to me, and you can't form a rational argument about axioms because rational arguments depend on shared axioms. But I still consider people who disagree with those axioms incorrect, regardless of whether I think I can prove that to them. So if you agree that other people are valuable for their own sakes - if you just feel that way, and approve of yourself feeling that way, and would like others to feel that way, whether or not you think you could rationally convince others to feel that way - then you are an altruist, and we have no real disagreement. It's the axioms that make the philosophy; everything else is just making explicit what is implicit in the axioms.

Post by BloodHenge »

Forrest wrote:And since someone who disagrees with one's axioms cannot be persuaded by reasoning appealing to them, anyone who disagrees with me on those axioms appears to me to be irrational, and thus insane (by my rational = sane definition of sanity). Now, how to tell whether my axioms are the right ones, or whether I'm the crazy person, is that intractable philosophical problem I've been going on about this whole thread. We call people who reason in a fundamentally different way than ourselves "irrational", but how can we tell if we're right in calling them thus? Maybe we're the ones being irrational instead.
I don't think that's necessarily accurate. If two people start with the same information and independently reach the same conclusion, it seems likely that they'd view each other as rational, regardless of the thought process that leads to the conclusion. The more often it happens, the more rational each is likely to consider the other.

Post by mindstalk »

Forrest wrote:
But your philosophical opinion on ethics just is that reflective approval or disapproval of your (and other people's) inclinations. Think about yourself in the third person and ask yourself two questions: "does this person [meaning yourself] like or dislike x [for some x]?", and then, "how do I feel about this person [yourself] liking or disliking x?" If you approve of yourself liking or disliking x, then you feel that those preferences are justified (which is just another way of saying "correct as far as you could tell"); if you disapprove, then you feel that your preferences are unjustified. How you approve or disapprove of your and other people's preferences defines what your ethical position is; if you approve of people (including yourself) caring about other people and disapprove the opposite, then you're an altruist of some variety, and if not, you're an egoist. If you really don't care one way or another, then you're a moral nihilist.

I'm asking "should or should not x be?" and you're replying "x is", which is the answer to a different question entirely. If you answer "I don't care", then you're a nihilist. If you answer "I'd prefer x to be and I'm ok with that, though I can't explain why or convince you to agree", then you're an x-ist.
I'm skeptical that my approval of liking is what most philosophers would accept as justification. As for the question, I guess it was ambiguous; I took it as "should X be, as universal moral truth", not "should X be, as my personal preference".
But it's not at all clear what sort of thing a universal moral truth would be, or what work it could do. Suppose someone comes up with some argument that actually proves some universal moral truth exists, and what it is. Why should I care? What relevance is it, sans enforcement? You'd apparently argue that I'd be insane for not caring about it, but that's a POV unique to you; I'd say that an idea of universal moral law, hanging in a vacuum, is incoherent and irrelevant.
I think there's another analogous case to be made with metaphysical issues here, which makes the ethical question seem not so troublesome in comparison.

We all have perceptions; sight, hearing, touch, taste, smell, etc. Most of us create mental models of a perspective-independent world based on these perceptions, extrapolated to what we think other people with other perspectives would perceive, and we call that model that we think our perceptions match up to "reality". Put another way: "reality" is what an all-sensing person (someone who could make absolutely every observation there was to make), who was inclined to believe his senses, would believe in. When ourselves or others have perceptions that don't match that model, we call those things they or we perceive "unreal". Particular statements capturing one part of reality are regular old factual truths.

Likewise we all have appetites; hunger, thirst, lust, and more subtle emotional appetites too. Most of us (barring sociopaths) create mental models of a perspective independent moral world based on those appetites, extrapolated to what we think other people with other perspectives would want, and we call that model "morality". Put another way: "morality" is what an omnipathic person (someone who experienced absolutely all appetites there were to be experienced), who was inclined to desire pleasant things and not unpleasant ones, would want. When ourselves or others have appetites that don't match that model we call those things they or we desire "immoral". Particular statements capturing one part of morality are "moral truths".
I disagree strongly with this analogy; I don't think it works at all, as stated. The assumption of a consistent reality means that omniscience -- making every observation -- is coherent. Experiencing all appetites strikes me as inviting contradictions, since appetites can be opposed. And "desire pleasant things and not unpleasant ones" is flawed; I'm not sure whether it's more tautological or incoherent. Things aren't inherently pleasant, but pleasant only relative to someone they give pleasure to. And desiring pleasant things is almost tautological, apart from people desiring unpleasant things as stepping stones to some greater pleasure or satisfaction.

"When ourselves or others have appetites that don't match that model we call those things they or we desire "immoral"." -- taking your words literally, someone who didn't like sweet food would be immoral. Though it may capture what many people actually do, hence "gay people are immoral", "BDSM people are twisted".

Also I am now officially pissed with trying to write lots of text in a browser window, and not with a real text editor. This is why I always considered mailing lists superior to web forums.

Post by Forrest »

BloodHenge wrote:I don't think that's necessarily accurate. If two people start with the same information and independently reach the same conclusion, it seems likely that they'd view each other as rational, regardless of the thought process that leads to the conclusion. The more often it happens, the more rational each is likely to consider the other.
If you infer the same conclusions from the same information as me, that implies that you're reasoning from the same principles as me.
Mindstalk wrote:I'm skeptical that my approval of liking is what most philosophers would accept as justification.
I'm not saying that your approval constitutes actual justification, but that your approval constitutes you considering it justified. If you approve of someone's opinion, then you hold their opinion to be correct as far as you can tell, meaning that it follows from the information available to you and the principles you hold to be correct. If your principles are correct and it really does follow given all the information, then it is in fact justified; which is just saying that if a perfectly informed (which to an empiricist means all-sensing) rational person would approve of it, then it is justified.
As for the question, I guess it was ambiguous; I took it as "should X be, as universal moral truth" not "should X be, as my personal preference"
That's just the difference between "do I approve of all people, myself and others alike, liking x?" and "do I approve of myself liking x, just me personally?" If you agree with the former for some value of x, then I'd say you consider x a universal moral truth, whether or not you think you can prove that to anyone.
I disagree strongly with this analogy; I don't think it works at all, as stated. The assumption of a consistent reality means that omniscience -- making every observation -- is coherent. Experiencing all appetites strikes me as inviting contradictions, since appetites can be opposed.
I think maybe you're misunderstanding what I mean by appetites. I'm referring to the feelings you have which compel you to want this or that thing. You might generalize them as "pains", though that's not entirely accurate because I doubt we'd call the feeling of longing we have toward pleasant things a "pain"; but it gets the idea across that I'm talking about a motivating phenomenal experience: a sensation that tells you that this ought to be (in a very visceral sense), rather than that this is, as the normal five senses do.

So if you have two people who are both hungry, an omnipathic person would desire that both of them eat. If there is only enough food there for one of them, then it's (warning: oxymoron ahead) contingently impossible for both of them to eat, and the omnipathic person would feel that that was bad. He would disapprove of someone bringing about that situation, and be rather indifferent to who eats the food since either way he feels the hunger of one person (and therefore, using his desires as a standard for morality, we would conclude that morality was indifferent as to who ate the food).

But that's little different from there being two hungry people and nothing to eat; oh well, life is hard sometimes. But there's nothing incoherent about him wanting both people to eat; it is a conceivable, logically possible state of affairs for both of them to eat, and that's what the omnipathic person wants. That he can't get it just means that that situation is irreparably bad.

If an omnipathic person were also omnipotent, then he could always satisfy everyone's appetites, no matter what their appetites were; if nothing else, by making each and every person the king of their own private universe where everything goes their way. That there is not such an omnipotent omnipath is just a sad fact of the world; and likewise with our own impotency regarding some bad situations.
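The decision rule Forrest is describing -- the omnipath wants every appetite satisfied, and is indifferent between outcomes that leave equally many appetites unmet -- can be put as a toy model. To be clear, this is just an illustrative sketch of the rule as I read it, not anything from the thread; all the names are made up:

```python
# Toy model of the "omnipathic" standard: score an outcome by how many
# appetites it satisfies; outcomes with equal scores are morally indifferent.
# All names here are illustrative.

def omnipath_score(outcome):
    """Score = number of satisfied appetites (each unmet one is felt as bad)."""
    return sum(outcome.values())

def morally_indifferent(a, b):
    """Two outcomes are indifferent iff they satisfy equally many appetites."""
    return omnipath_score(a) == omnipath_score(b)

# Two hungry people, one portion of food: either allocation feeds exactly one.
alice_eats = {"alice_fed": True, "bob_fed": False}
bob_eats   = {"alice_fed": False, "bob_fed": True}
both_eat   = {"alice_fed": True, "bob_fed": True}

# The omnipath is indifferent as to *who* eats...
assert morally_indifferent(alice_eats, bob_eats)
# ...but strictly prefers the (here unattainable) outcome where both eat.
assert omnipath_score(both_eat) > omnipath_score(alice_eats)
```

Which is just Forrest's point in miniature: "both eat" being unreachable doesn't make the preference for it incoherent; it just makes the reachable outcomes worse.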
And "desire pleasant things and not unpleasant ones" is flawed; I'm not sure whether it's more tautological or incoherent. Things aren't inherently pleasant, but pleasant only relative to someone they give pleasure to. And desiring pleasant things is almost tautological, apart from people desiring unpleasant things as stepping stones to some greater pleasure or satisfaction.
Well, I'm actually kind of inclined to agree with you, but that's just because I'm a hedonist like you. But there are plenty of people out there who (at least claim to) disagree with hedonism, so I threw that in there to specify that this hypothetical omnipath has to be a hedonist for his desires to follow properly from the appetites he experiences, just like the omniscient being needs to be an empiricist for his beliefs to follow properly from the perceptions he has.
"When ourselves or others have appetites that don't match that model we call those things they or we desire 'immoral'." -- taking your words literally, someone who didn't like sweet food would be immoral.
Not quite, because it's not other people's appetites matching our appetites; it's other people's appetites matching our model of what would be in accord with everyone's appetites. And actually, now that I say that, I realize that I misspoke: it is not perceptions or appetites that can be out of accord with reality or morality, but our interpretations thereof, namely our beliefs and desires. Since reality is based in perception and morality in appetites, it's impossible for one to run against the other. But you can, spurred by a legitimate appetite, come to desire some state of affairs which runs afoul of others' appetites, and that's bad; just as, if you are spurred by your perceptions to believe something that runs afoul of other perceptions, your belief is false.

Anyway, back to the point: our models of reality account for the idea that some people have different sensory equipment and thus perceive the same state of affairs differently, even from the same perspective as us. If I, someone who is red-green color blind, and an alien who can see into the far infrared all stand in the same place and look at the same scene, we will have different perceptions, but my model of reality would expect that, so it's not a problem. Likewise, I may like the taste of sweet food, but I can understand that others might be differently constituted such that they don't, and my model of morality needs to account for that in order to be accurate.
Also I am now officially pissed with trying to write lots of text in a browser window, and not with a real text editor. This is why I always considered mailing lists superior to web forums.
Hear, hear! Or Usenet. Ah, I miss the old days...