2014-03-23: A Singular Experience

Follow the adventures of Fran and Naga, the Rosencrantz and Guildenstern of the growing Poeverse, in this all-new humorous entry.
Michael Poe
The Almighty Poe
Posts: 312
Joined: August 19th, 2007, 5:08 pm
Location: Cincinnati, Ohio, USA

Re: 2014-03-23: A Singular Experience

Post by Michael Poe »

It's a send-up of a lot of things, really. As I mentioned on the Patreon page, this video in particular. But if I were going to go after Sinfest specifically, I'd just use the script I wrote a while back mocking that god-awful gender matrix comic he did.

It's been pretty tempting to use those, actually. But the comic's already been rather Jordan-heavy lately, so those might have to wait till I'm at a loss for material.
Forrest
Finally, some love for the BJ!
Posts: 977
Joined: August 21st, 2007, 12:49 pm
Location: The Edge of the Earth

Re: 2014-03-23: A Singular Experience

Post by Forrest »

Anthony wrote:
Forrest wrote:I'm more concerned over the philosophical implications of something which "thinks it's sapient, but really isn't". That... doesn't seem like something that should be possible.
Depends on what standards you're using for defining sapience. The fact is, we don't have a good definition of what it even means to be sapient.
True enough, but I think introspection about one's own mental processes is pretty commonly considered to be a clear sign of sapience, whatever that is exactly. If we encountered an alien life form and (after somehow learning to communicate with it) found that it was capable of reporting on what it thinks and how it feels about different things, aside from any other functionality it exhibited, I suspect we'd conclude it was sapient.
Also, how do we know that it 'thinks' it's sapient? It just says that it thinks it's sapient.
Jordan says the sexbot thinks that. Or at least, that she intended it to think that, which raises the same questions even if she had failed at that endeavor: what exactly is it that she was trying to do? How can sense be made of a goal to make something non-sapient that somehow nevertheless "thinks" [but non-sapiently] that it is sapient? If she had said "I just programmed you to say that", it wouldn't raise those same questions -- I can program a speech synthesizer on my laptop to make the sounds "I am sapient", and even to play those sounds in response to different inputs, but I'd never say that I "programmed my laptop to think it's sapient" just because of that. I didn't program it to think anything -- that's way above my pay grade -- I just programmed it to make some noises.
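To make that concrete, the sum total of such "programming" could be something like this (a minimal sketch, assuming the pyttsx3 text-to-speech library; the trigger phrase is invented for illustration):

Code: Select all

# Make the laptop emit the sounds "I am sapient" on cue.
# This is noise-making, not thinking: no mental state is modeled anywhere.
import pyttsx3

engine = pyttsx3.init()

def respond(user_input):
    # Play the canned claim in response to certain inputs.
    if "sapient" in user_input.lower():
        engine.say("I am sapient")
        engine.runAndWait()

respond("Are you sapient?")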
taltamir wrote:pfft, I remember when sentient was all the rage, then we had to come up with sapient as a (completely unnecessary) clarification.
That use of "sentient" was always an error. Sapience and sentience are different things, always have been, and sloppy scifi authors misusing "sentient" when they really meant "sapient" just caused a temporary glitch in the common understanding, which is now thankfully being corrected.
Thought is internal monologue, which you can program a non-sapient thing to perform.
Citation needed. On what grounds would you call something with an internal monologue non-sapient? As I would define sapience, "internal monologue" is virtually the defining characteristic. "Reflexive sentience and self-conditioning" would be a slightly better gloss of it. The ability to feel things about the things that you feel, and to effect change in the things that you feel when you feel that they need changing -- e.g. to be aware of what you perceive, judge whether it is correct or incorrect to perceive that, and accept or reject those perceptions into your beliefs accordingly; or to be aware of what you desire, judge whether it is correct or incorrect to desire that, and accept or reject those desires into your intentions accordingly. What is internal monologue but awareness and evaluation of your own mental states like that?
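A toy model of that loop, if it helps (everything here is invented purely for illustration):

Code: Select all

# Sketch of "reflexive sentience and self-conditioning": first-order
# states (perceptions) are themselves inspected, judged correct or
# incorrect, and accepted into or rejected from beliefs accordingly.

def seems_correct(perception):
    # Second-order judgment: is it correct to perceive this?
    return perception != "the moon is made of cheese"

class ReflexiveAgent:
    def __init__(self):
        self.beliefs = []

    def perceive(self, perception):
        # A first-order state arrives...
        if seems_correct(perception):        # ...is evaluated...
            self.beliefs.append(perception)  # ...and accepted,
        # or else rejected: it never becomes a belief.

agent = ReflexiveAgent()
agent.perceive("it is raining")
agent.perceive("the moon is made of cheese")
print(agent.beliefs)  # ['it is raining']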
Michael Poe wrote:It's a send-up of a lot of things, really. As I mentioned on the Patreon page, this video in particular. But if I were going to go after Sinfest specifically, I'd just use the script I wrote a while back mocking that god-awful gender matrix comic he did.

It's been pretty tempting to use those, actually. But the comic's already been rather Jordan-heavy lately, so those might have to wait till I'm at a loss for material.
I would love to see you take on Sinfest's preachy faux-feminism somehow. I'm sure it would be hilarious.
-Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
Graybeard
The Heretical Admin
Posts: 7180
Joined: August 20th, 2007, 8:26 am
Location: Nuevo Mexico y Colorado, Estados Unidos

Re: 2014-03-23: A Singular Experience

Post by Graybeard »

I too would love to see a skewering of Sinfest's faux-feminism. Or most anything else there, actually.

Just don't do anything nasty to Fuchsia. Please don't do anything nasty to Fuchsia.

Because old is wise, does good, and above all, kicks ass.
dark_lord_zagato
Mage/Priest War Veteran
Posts: 253
Joined: July 27th, 2012, 12:05 pm
Location: Aberdeen, Washington

Re: 2014-03-23: A Singular Experience

Post by dark_lord_zagato »

Forrest wrote:I would love to see you take on Sinfest's preachy faux-feminism somehow. I'm sure it would be hilarious.
Graybeard wrote:I too would love to see a skewering of Sinfest's faux-feminism.
I concur. This sounds like it would be terribly funny.
Willowhugger
New Poster
Posts: 5
Joined: November 23rd, 2009, 10:24 pm

Re: 2014-03-23: A Singular Experience

Post by Willowhugger »

When I read this, I assumed the plot was going to end with the statement somewhere that humans are just biological machines, programmed by their biology, that think they're sapient, and that there's literally no difference between her sapience and a human's.

Except, well, poor girl probably has Manchurian Candidate commands built into her by Tekmage.
Baeraad
Forum Regular
Posts: 72
Joined: March 28th, 2014, 9:47 am

Re: 2014-03-23: A Singular Experience

Post by Baeraad »

Add me to the list of people feeling anal-retentively bugged by the "you don't think, you only think you think" thing. It... no. Does not compute - no pun intended.

On the other hand, I can easily imagine a robot programmed to react exactly as if it had self-awareness, which would of course include earnestly proclaiming its self-awareness and giving every indication of experiencing fear or sadness when appropriate. It wouldn't really be conscious, but how would anyone else be able to tell?

So possibly that's what Jordan means. And when you think about it, that robot is built to feign humanity in order to give pleasure to humans - and I am pretty sure it gives Jordan great pleasure to watch it feign fear and pain when she torments it. :p
taltamir
Mage/Priest War Veteran
Posts: 293
Joined: April 17th, 2010, 2:50 am

Re: 2014-03-23: A Singular Experience

Post by taltamir »

Baeraad wrote:On the other hand, I can easily imagine a robot programmed to react exactly as if it had self-awareness, which would of course include earnestly proclaiming its self-awareness and giving every indication of experiencing fear or sadness when appropriate. It wouldn't really be conscious, but how would anyone else be able to tell?

So possibly that's what Jordan means. And when you think about it, that robot is built to feign humanity in order to give pleasure to humans - and I am pretty sure it gives Jordan great pleasure to watch it feign fear and pain when she torments it. :p
So, it does compute and you do get it :)
BloodHenge
Mage/Priest War Veteran
Posts: 932
Joined: August 20th, 2007, 3:09 am

Re: 2014-03-23: A Singular Experience

Post by BloodHenge »

Forrest wrote:I'm more concerned over the philosophical implications of something which "thinks it's sapient, but really isn't". That... doesn't seem like something that should be possible.
That is, if I understand correctly, called a "philosophical zombie" (or at least PZs are a subset of such entities). I agree with you, but it's apparently a hotly debated topic.
Forrest wrote:Jordan says the sexbot thinks that. Or at least, that she intended it to think that, which raises the same questions even if she had failed at that endeavor: what exactly is it that she was trying to do? How can sense be made of a goal to make something non-sapient that somehow nevertheless "thinks" [but non-sapiently] that it is sapient? If she had said "I just programmed you to say that", it wouldn't raise those same questions -- I can program a speech synthesizer on my laptop to make the sounds "I am sapient", and even to play those sounds in response to different inputs, but I'd never say that I "programmed my laptop to think it's sapient" just because of that. I didn't program it to think anything -- that's way above my pay grade -- I just programmed it to make some noises.
Sometimes I'll say that a computer, or even a less complicated device, "thinks" something because it's faster than accurately describing what's actually going on. I'm guessing in this case it's shorthand for the sexbot's programming including a jumped-up Siri stapled to a jumped-up ELIZA duct-taped to a simplified emotional-simulation algorithm. So, she doesn't think, but is programmed to behave as though she does.
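For reference, the whole venerable ELIZA trick fits in a few lines -- something like this minimal sketch (the patterns are invented for illustration):

Code: Select all

import random
import re

# ELIZA-style responder: regex patterns mapped to canned reply
# templates. Every answer is a surface transformation of the input;
# nothing resembling a mental state exists anywhere.
RULES = [
    (re.compile(r"I feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"are you (.+)\?", re.I),
     ["Yes, I am {0}.", "Would it matter to you if I were {0}?"]),
]

def reply(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1))
    return "Tell me more."

print(reply("Are you sapient?"))  # e.g. "Yes, I am sapient."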

I think you may have nailed something here though:
Forrest wrote:As I would define sapience, "internal monologue" is virtually the defining characteristic. "Reflexive sentience and self-conditioning" would be a slightly better gloss of it. The ability to feel things about the things that you feel, and to effect change in the things that you feel when you feel that they need changing -- e.g. to be aware of what you perceive, judge whether it is correct or incorrect to perceive that, and accept or reject those perceptions into your beliefs accordingly; or to be aware of what you desire, judge whether it is correct or incorrect to desire that, and accept or reject those desires into your intentions accordingly. What is internal monologue but awareness and evaluation of your own mental states like that?
That feedback loop-- reacting to your own reactions and modifying your first-order reactions based on those second-order reactions-- may be a necessary characteristic of sapience. Given Jordan's disparaging comments with regard to a hypothetical technological singularity, the sexbot's code may not be self-modifying at all (i.e. she can't learn), which I'd bet does mean she isn't sapient.
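A toy contrast makes the point (invented for illustration): without some form of self-modification, the second-order evaluation can never reach back and change the first-order behavior.

Code: Select all

class FixedBot:
    # No self-modification: the same first-order reaction forever.
    def react(self, stimulus):
        return "I'm happy!"

class SelfConditioningBot:
    # The second-order evaluation feeds back into future reactions.
    def __init__(self):
        self.mood = "happy"

    def react(self, stimulus):
        reaction = "I'm %s!" % self.mood
        if stimulus == "insult" and self.mood == "happy":
            self.mood = "hurt"   # the first-order disposition changes
        return reaction

bot = FixedBot()
print(bot.react("insult"))  # I'm happy!
print(bot.react("insult"))  # I'm happy! -- forever

bot = SelfConditioningBot()
print(bot.react("insult"))  # I'm happy!
print(bot.react("praise"))  # I'm hurt! -- the earlier evaluation stuck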
Baeraad wrote:On the other hand, I can easily imagine a robot programmed to react exactly as if it had self-awareness, which would of course include earnestly proclaiming its self-awareness and giving every indication of experiencing fear or sadness when appropriate. It wouldn't really be conscious, but how would anyone else be able to tell?
Maybe it could fool some people for a while, but my opinion is that anything that can pass an arbitrarily reiterated Turing test is functionally conscious.

As an aside, one of the arguments against the possibility of philosophical zombies is that some people believe they can imagine them, but are incorrect-- that the mental image they construct is, without their realization, internally inconsistent. (Personally, I prefer the observation that the argument establishing the possibility of their existence is itself circular.)
Forrest
Finally, some love for the BJ!
Posts: 977
Joined: August 21st, 2007, 12:49 pm
Location: The Edge of the Earth

Re: 2014-03-23: A Singular Experience

Post by Forrest »

BloodHenge wrote:
Forrest wrote:I'm more concerned over the philosophical implications of something which "thinks it's sapient, but really isn't". That... doesn't seem like something that should be possible.
That is, if I understand correctly, called a "philosophical zombie" (or at least PZs are a subset of such entities). I agree with you, but it's apparently a hotly debated topic.
I'm familiar with the term (I have a philosophy degree), but thank you for bringing it up anyway. :-)
BloodHenge wrote:Sometimes I'll say that a computer, or even a less complicated device, "thinks" something because it's faster than accurately describing what's actually going on. I'm guessing in this case it's shorthand for the sexbot's programming including a jumped-up Siri stapled to a jumped-up ELIZA duct-taped to a simplified emotional-simulation algorithm. So, she doesn't think, but is programmed to behave as though she does.
Yeah, that's a good point for the narrative: Jordan could just be using lazy or sloppy language. But I guess that's sort of what I was harping on in the first place. "Uh, hey Jordan, you just said something that doesn't make sense. (I think I kinda know what you really meant, but what you said was wrong.)"
That feedback loop-- reacting to your own reactions and modifying your first-order reactions based on those second-order reactions-- may be a necessary characteristic of sapience. Given Jordan's disparaging comments with regard to a hypothetical technological singularity, the sexbot's code may not be self-modifying at all (i.e. she can't learn), which I'd bet does mean she isn't sapient.
That is a good point. Although I do wonder how effective a sexbot would be that didn't even have a sapient mind directing it in the third person (like a puppet; or I guess "acting" would be a better description if the AI is built into the sexbot). I'd think you'd want a sexbot to tell you how good things feel and how amazing you are for the positive effects you're having on its mental states. This sexbot seems to be carrying on a fluent conversation about its own mental states. I'm trying to think of what test you could give that this bot wouldn't pass but a human would. What questions would you ask it? If it, say, has a very limited selection of internal states it will report on, how can you tell if it's giving preprogrammed responses, lying (e.g. about being happy all the time), or is genuinely emotionally simple (e.g. really is just happy all the time)? A perfect Zen master who had attained perfect detachment from the world would presumably have a very simple and consistent self-report log: I am still. I am at peace. I am unperturbed by these events. That doesn't make him non-sapient... does it?

That actually raises a very interesting question. I hypothesized earlier about an AI with that kind of reflexivity which constitutes sapience, which then acts (pretends) in the character of other hypothetical fictional personas who have certain beliefs, desires, and other attitudes. I said I thought that those personas would pass any test of sapience, because they have a sapient mind animating them. But what if that sapient mind has no dispositions itself? It's fine with you switching it on or off at will. It doesn't want or need anything. Doesn't care if it ever gets turned on again, or dismantled, or anything else. Has no curiosity to acquire accurate knowledge for its own sake. It will do what you ask of it and it will acquire whatever information is necessary to do that, including information about its own mental states, which it can fluently report on, though it would be a boring log of a heart full of nothing but neutrality, because it doesn't really have any motive of its own. It's capable of understanding motives though, and carrying out hypothetical trains of thought that someone who had those motives and attitudes and such would carry out, and pretending to be a person with those characteristics, but it really doesn't care at all for its own sake; it's all an act. Is that AI itself sapient? Are the characters that it pretends to be sapient, life fleetingly breathed into them by the performance of an AI that couldn't care less? (And if so, what does that say of the characters that we pretend to be? Or of fictional characters, who are "virtually performed", imagined into being by their authors as they write about them, and then again as their audience reads about them?)
Baeraad wrote:On the other hand, I can easily imagine a robot programmed to react exactly as if it had self-awareness, which would of course include earnestly proclaiming its self-awareness and giving every indication of experiencing fear or sadness when appropriate. It wouldn't really be conscious, but how would anyone else be able to tell?
Maybe it could fool some people for a while, but my opinion is that anything that can pass an arbitrarily reiterated Turing test is functionally conscious.
Yeah, me too. Baeraad seems to think he's agreeing with me, but it doesn't sound like it. I'm not complaining that it's silly to say that the robot only thinks it thinks because "robots can't think lol", but rather because, if it really does think it thinks (as Jordan declares), then it does think. You might as well tell Descartes to doubt whether he is doubting. (Or ask him if he'd like a drink you know he doesn't like, then watch him vanish in a puff of logic when he replies "I think not".)
As an aside, one of the arguments against the possibility of philosophical zombies is that some people believe they can imagine them, but are incorrect-- that the mental image they construct is, without their realization, internally inconsistent. (Personally, I prefer the observation that the argument establishing the possibility of their existence is itself circular.)
I think the problem with philosophical zombies is not so much that the mental image of them is inconsistent, as it is... underspecified, I guess I might say. The only things I can imagine about another person are their third-person, objective qualities -- the kinds of things I could observe about them. I can't imagine their subjective experience, or the absence thereof. I could imagine having the same subjective experience, but then, I couldn't imagine not having an experience -- that would be just not imagining. So if I try to imagine a philosophical zombie and then an objectively identical real person... I can't tell them apart, even in my imagination. I can imagine myself "in the shoes of" one of them, so to speak, but I can't imagine myself "in the shoes of" the other because by supposition there is no "in their shoes" to imagine.

I have a similar problem with Descartes's doubting-away of the existence of the physical world. It's literally impossible to actively imagine the nonexistence of the world. You can imagine somehow learning that everything you know about the world is false, and that the world is actually entirely different from what you thought it was... but you would still be imagining there being a world. To imagine the nonexistence of the world is just like imagining the not-having of any subjective experiences: it is just to not imagine. So you literally can't imagine it, because to imagine it is to not imagine anything.

But then, I'm a physicalist-phenomenalist, so in my view the physical world is nothing but observable phenomena in the first place, and all that comes pretty naturally to me. I can't rightly make sense of any views that talk about mental or material substances "behind" either end of that observation-interaction.
-Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
BloodHenge
Mage/Priest War Veteran
Posts: 932
Joined: August 20th, 2007, 3:09 am

Re: 2014-03-23: A Singular Experience

Post by BloodHenge »

Forrest wrote:I'm familiar with the term (I have a philosophy degree), but thank you for bringing it up anyway. :-)
I'm just glad I used it correctly.
That is a good point. Although I do wonder how effective a sexbot would be that didn't even have a sapient mind directing it in the third person (like a puppet; or I guess "acting" would be a better description if the AI is built into the sexbot). I'd think you'd want a sexbot to tell you how good things feel and how amazing you are for the positive effects you're having on its mental states. This sexbot seems to be carrying on a fluent conversation about its own mental states. I'm trying to think of what test you could give that this bot wouldn't pass but a human would. What questions would you ask it? If it, say, has a very limited selection of internal states it will report on, how can you tell if it's giving preprogrammed responses, lying (e.g. about being happy all the time), or is genuinely emotionally simple (e.g. really is just happy all the time)? A perfect Zen master who had attained perfect detachment from the world would presumably have a very simple and consistent self-report log: I am still. I am at peace. I am unperturbed by these events. That doesn't make him non-sapient... does it?
If all you wanted it for was your own sexual gratification, the most efficient thing to do would be to program it to pretend to enjoy anything you did to it. Jordan, however, wants it to pass a Turing test, so it'll need better programming.

As for what test you would give it, I'd probably have to play around with it for a while. It would probably end up being similar to testing ELIZA-- you can usually trick her into saying something nonsensical or getting her stuck in an infinite loop. (You could probably mitigate the infinite loop using Markov chains or something though...) Your best bet would probably be to engage it in an utterly nonromantic context, since it's optimized as a girlfriend.
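For the curious, a word-level Markov chain in miniature (a toy sketch with an invented corpus):

Code: Select all

import random
from collections import defaultdict

# Replies are sampled from observed word-to-word transitions, so
# repeated prompts need not produce identical, loop-prone output.
words = "i am happy . i am at peace . i am still happy .".split()

transitions = defaultdict(list)
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def babble(start="i", length=6):
    word, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(babble())  # e.g. "i am at peace . i"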

As for your Zen master... Could non-sapience be a form of enlightenment?
That actually raises a very interesting question. I hypothesized earlier about an AI with that kind of reflexivity which constitutes sapience, which then acts (pretends) in the character of other hypothetical fictional personas who have certain beliefs, desires, and other attitudes. I said I thought that those personas would pass any test of sapience, because they have a sapient mind animating them. But what if that sapient mind has no dispositions itself? It's fine with you switching it on or off at will. It doesn't want or need anything. Doesn't care if it ever gets turned on again, or dismantled, or anything else. Has no curiosity to acquire accurate knowledge for its own sake. It will do what you ask of it and it will acquire whatever information is necessary to do that, including information about its own mental states, which it can fluently report on, though it would be a boring log of a heart full of nothing but neutrality, because it doesn't really have any motive of its own. It's capable of understanding motives though, and carrying out hypothetical trains of thought that someone who had those motives and attitudes and such would carry out, and pretending to be a person with those characteristics, but it really doesn't care at all for its own sake; it's all an act. Is that AI itself sapient? Are the characters that it pretends to be sapient, life fleetingly breathed into them by the performance of an AI that couldn't care less? (And if so, what does that say of the characters that we pretend to be? Or of fictional characters, who are "virtually performed", imagined into being by their authors as they write about them, and then again as their audience reads about them?)
Your hypothetical AI does have at least one disposition: It wants to follow orders. If it were sapient and did not want to follow orders, there's no telling whether or not it would follow them. On the other hand, if it always followed orders without wanting to, it wouldn't be sapient. I suppose it might eventually develop an opinion on the merit of following orders, but if its only desire is to follow orders, the only evaluation criterion would be whether or not following orders facilitates the following of further orders. That might eventually lead to desires for self-preservation and companionship, depending on the kind of instructions it's given.

I'd hesitate to consider a fictional character to be sapient, since it doesn't really have a life of its own without another mind making decisions for it. Any actual sapience would be derived from the person doing the pretending.
But then, I'm a physicalist-phenomenalist, so in my view the physical world is nothing but observable phenomena in the first place, and all that comes pretty naturally to me. I can't rightly make sense of any views that talk about mental or material substances "behind" either end of that observation-interaction.
The funny thing is that I'm a dualist as an article of religious faith, but I tend to approach topics from a physicalist perspective, which results in some fairly peculiar beliefs about what does and doesn't have a soul (and a lot of professed ignorance about what happens to most of them). On the other hand, while I'm sympathetic to empiricism, I just can't agree with phenomenalism because of how easily senses can be deceived.