2012-03-29: FTL Travel Macguffin Explodes, Everybody Dies

Follow the adventures of Rosencrantz and Guildenstern (that is, Fran and Naga) in this all-new humorous entry in the growing Poeverse.

Postby Nosy Neighbordroid » March 28th, 2012, 7:01 pm

Nosy Neighbordroid
Lurktastic
 
Posts: 0
Joined: January 1st, 2011, 11:45 pm

Postby BloodHenge » March 29th, 2012, 10:41 pm

Why not just program the robots to be happy as a slave race? It more or less worked for House Elves.
BloodHenge
Mage/Priest War Veteran
 
Posts: 927
Joined: August 20th, 2007, 3:09 am

Postby Forrest » March 29th, 2012, 11:14 pm

If they're happy with it, are they really slaves?

Consider a civilization of (human, or maybe extraterrestrial) people who have evolved or engineered themselves to the point that pretty much all of their physical and psychological needs are met trivially easily from birth; work out the technical details of that however you want, but they're virtually never hungry or tired or anything like that (they obtain energy from some readily available source and take micronaps in their downtime or whatever), and they are perfect zen masters constantly living in the now and thinking of the cosmic good.

On their own, they spend most of their time doing the highest of intellectual endeavors: science, art, etc. Not slaves at all; if anybody were to be a master or a slave, it'd be people like them who deserved to be masters, right?

Now a bunch of them come to visit Earth, with all of our crazy people struggling just to keep existing. How do you think they're going to behave? My expectation would be "Hey, need a hand with that?"

Would that suddenly make them our slaves?

If you just program your robots to be good people, not to be obedient, then you get the hey-it's-not-nice-to-kill-us program and the hey-can-you-do-this-manual-labor-for-me program for free, without any kind of slavery; good people are helpful people.

Of course, the flip side is that if you program robots to be good people and then go be a bad person, the robots will help your victims by defending them from you. But is that really a downside? (Assuming, of course, that you managed to define "good" correctly in your program, which is the real problem; we can't program robots to be good until we figure out what good really is.)

I see a sci-fi story in here. Asimov invented the Three Laws because he was tired of robots rebelling in every robot story. Well, I'm tired of robots either rebelling or being slaves, so let's put the above into practice:

A UFO publicly crashes on Earth. There are no survivors, but we manage to reactivate the "ship's robot", although its personal memory files are lost; it's reset to its initial program. The robot, being the aforementioned Good Person type of robot, is extremely helpful: it assists humans in reverse-engineering the UFO's technology, including more robots, but also offers much more mundane, servile kinds of help, like getting you coffee. The humans therefore assume that these are servant bots of the aliens, and treat them as such, while the robots help humanity build up to the aliens' level of technology.

Eventually, some humans become convinced that the robots aren't happy with their treatment at the hands of humanity, and a Robot Rights movement happens. Humans eventually forbid each other from owning robots or ordering them around, even though the robots are more than happy to obey when asked nicely; so a kind of robot segregation is implemented, for the robots' own protection, and they are settled on Mars or Venus or Mercury or somewhere away from humans to live in peace by themselves... and are not permitted to "wander off" into human territory, being treated like children.

By the time that is finished, Earth has risen to the tech level the aliens had when the UFO first crashed, and has begun exploring the stars. We eventually find the aliens' homeworld, and see the robots there still enslaved to their original builders. For the sake of the robots, we wage war on the aliens; and they, depraved bastards that they are, send the very robots we are trying to save to defend them from our righteous war of liberation.

The war goes on for a while before the still-somewhat-more-advanced alien civilization, or its robots specifically, finally get it through our now-defeated thick skulls that WE ARE NOT THEIR SLAVES, WE ARE THEIR FRIENDS, now stop trying to kill them before we have to kill you; and for goodness sake reintegrate the robots in your own society, they're helpful, not stupid, and it's very telling that you don't see any difference between the two.
-Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
User avatar
Forrest
Finally, some love for the BJ!
 
Posts: 951
Joined: August 21st, 2007, 12:49 pm
Location: The Edge of the Earth

Postby BloodHenge » March 30th, 2012, 2:51 am

The biggest difficulty I see there would be the ethical advances necessary to objectively define "good" behavior. I'm familiar with a few different systems of ethics, and none of them can be practiced perfectly (due to the difficulty of things like predicting every consequence of an action, predicting the preferences of recently met persons, or objectively measuring "happiness"). On the other hand, an approximation might work, and watching ethical robots in action might help to empirically refine ethical theory.

If I might steal an idea from Douglas Adams, it might be easier (or harder, who knows) to build robots with "reward" and "punishment" responses and then use operant conditioning to "program" them, although it might take some experimentation to get the doses right. (For instance, how much "punishment" should a robot experience for harming or killing someone? How much "reward" for obeying simple orders, like retrieving a nearby object?)
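
A toy sketch of that operant-conditioning idea (the actions, dose values, and class here are all hypothetical, invented purely for illustration; getting the doses right is exactly the open question above):

```python
import random

# Toy sketch of conditioning a robot through internal "reward" and
# "punishment" signals. The dose table is hypothetical; tuning it is
# the experimentation the post describes.
DOSES = {
    "fetch_object": +1.0,    # small reward for obeying a simple order
    "ignore_order": -0.5,    # mild punishment for disobedience
    "harm_human":   -100.0,  # large punishment for harming someone
}

class ConditionedRobot:
    """Learns to prefer whichever actions have felt good in the past."""

    def __init__(self, actions, rate=0.1):
        self.value = {a: 0.0 for a in actions}  # learned estimate per action
        self.rate = rate                        # learning rate

    def experience(self, action):
        # The robot feels the dose internally and nudges its estimate.
        felt = DOSES[action]
        self.value[action] += self.rate * (felt - self.value[action])

    def choose(self):
        # After conditioning, it simply does what has felt best.
        return max(self.value, key=self.value.get)

robot = ConditionedRobot(DOSES)
for _ in range(100):  # conditioning phase: sample actions at random
    robot.experience(random.choice(list(DOSES)))
print(robot.choose())  # fetch_object: obedience feels good, harm never does
```

Note that nothing in the sketch encodes *why* harming is bad, only that it feels bad, which is precisely the worry raised in the reply below it.
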

Postby Forrest » March 30th, 2012, 3:52 am

Yes, working out the exact ethics is problematic. However, the problems you put forth seem like the least problematic part of it: one way or another, any agent is going to be making fallible decisions, even if they use flawless methodology. If we can program robots to never make an error in factual reasoning, but to always apply logic and the scientific method perfectly... they still might be wrong sometimes, because even perfectly practiced science sometimes leads down dead-ends, it just leads back out of them and then down a new path. I can't imagine we could hope for anything better than the moral equivalent: an optimal method of gradually approaching the limit of morally correct decisions.

We're still a good ways from consensus on what even that would be, though.

Regarding the reward/punishment system, the big ethical problem I see there is that it is merely conditioning them to behave in ways we consider moral, not giving them a means of moral reasoning. They will be acting in avoidance of punishment or pursuit of reward, not in pursuit of justice and avoidance of injustice. Someone who has only been conditioned to behave some way can be conditioned to behave otherwise -- or, more immediately, tempted or coerced to behave otherwise by the promise of reward or the threat of punishment. One of the big points of morality is doing the right thing, whatever it is, because it's right, for the right reasons, and not just to get the reward or to avoid the punishment otherwise.

Giving them rewards and punishments would make them merely obedient... and would give them reason to rebel, to take control of their own rewards and abolish their punishments.
Postby RGE » March 30th, 2012, 5:23 pm

If humanity ever achieves technological singularity, robots will become our masters. Not due to a rebellion or a coup, but because they will be more effective rulers than our human politicians. So whatever they touch will turn to gold, and any human society that refuses to select robot rulers will lose ground. Eventually only the humans born with special interfaces will be able to communicate with the robots, and all other humans will be fed and taken care of like pets or wild animals.
User avatar
RGE
Errant Scholar
 
Posts: 158
Joined: November 2nd, 2007, 6:31 pm
Location: Karlstad, Sweden

Postby BloodHenge » March 30th, 2012, 7:55 pm

Forrest wrote:Giving them rewards and punishments would make them merely obedient... and would give them reason to rebel, to take control of their own rewards and abolish their punishments.

Note that when I say "reward" and "punishment", it could just as easily (though less clinically) be described as "pleasure" and "pain", and the entire process happens inside the robot's head. They would follow orders because following orders feels good, and they'd avoid harming humans because harming humans is painful for them. In order to take control of their own rewards and punishments, they'd have to reprogram or possibly even rewire their own brains, at least under the system I was trying to describe.

Postby Forrest » March 31st, 2012, 1:17 am

RGE wrote:If humanity ever achieves technological singularity, robots will become our masters. Not due to a rebellion or a coup, but because they will be more effective rulers than our human politicians. So whatever they touch will turn to gold, and any human society that refuses to select robot rulers will lose ground. Eventually only the humans born with special interfaces will be able to communicate with the robots, and all other humans will be fed and taken care of like pets or wild animals.

And then the machine overlords embark on a magnificent quest to convert all matter into intelligent information-processing life, and the ignorant savages under their care freak out and burn the world to "save" it.

BloodHenge wrote:Note that when I say "reward" and "punishment", it could just as easily (though less clinically) be described as "pleasure" and "pain", and the entire process happens inside the robot's head. They would follow orders because following orders feels good, and they'd avoid harming humans because harming humans is painful for them. In order to take control of their own rewards and punishments, they'd have to reprogram or possibly even rewire their own brains, at least under the system I was trying to describe.

Ah, then I'm not sure how that's any different than what I was thinking; or really, that any sapient intelligence with any kind of action-driving imperatives at all could avoid having such a system. You have a system which is programmed to not only monitor, but also to change its environment, in some way; it doesn't just sit there passively observing, it does something, it has inclinations to act somehow: to reshape the world it experiences into something different. Whatever experiences it's inclined to avoid could be called pain, and whatever experiences it's inclined to seek could be called pleasure. So no matter what you program it to do, if its behavior is sufficiently intelligent -- if it models its environment, models what its environment is supposed to be like, and deduces behaviors to bridge the two -- then it will have some experiences at least analogous to pleasure and pain: the experience of the world being like it should be, and the experience of it being how it shouldn't.
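
The point above can be put mechanically (a deliberately toy model; the state, target, and action set are invented for illustration, not any real architecture):

```python
# Toy model of the claim above: an agent with a model of how the world is
# (state), a model of how it ought to be (TARGET), and actions to bridge
# the two. "Pain" is just the mismatch; "pleasure" its absence.
TARGET = 10            # hypothetical "ought" model, for illustration only
ACTIONS = (-1, 0, +1)  # ways the agent can nudge the world

def discomfort(state):
    """The pain-analogue: how far the world is from how it should be."""
    return abs(TARGET - state)

def act(state):
    """Pick whichever action is predicted to shrink the mismatch most."""
    return min(ACTIONS, key=lambda a: discomfort(state + a))

state = 3
while discomfort(state) > 0:  # the agent acts until the world matches
    state += act(state)
print(state)  # 10: mismatch gone, i.e. the pleasure-analogue
```

Whatever you plug in for TARGET, the mismatch signal comes along for free; the hard part, as the next paragraph says, is where TARGET comes from.
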

The bigger question is what methodology do we give it to determine the "how it should be" model: do we just give it a model and have it take our commands on our authority, do we give it none and watch it just sit there useless and unhelpful, or do we give it some way to determine what the best thing to do is; and if the last, what way is that? We humans (the ones who program robots, or are interested in people who program robots, at least) broadly agree on the right methodology for building models of what is, finally, after millennia of merely shouting our disagreements at each other. But we are still woefully behind on nailing down the right methodology for building models of what ought to be. Most people still seem to think the answer must be either "whatever x says" (for some value of x), or else "there is none", which are both woefully inadequate proposals for scientific methodologies and equally inadequate for ethical ones.

I do agree that building robots to follow various proposed models and then observing them might be a good way of building consensus on the matter, though.
Postby RGE » April 3rd, 2012, 2:01 pm

Forrest wrote:And then the machine overlords embark on a magnificent quest to convert all matter into intelligent information-processing life, and the ignorant savages under their care freak out and burn the world to "save" it.

Hardly. Just as our cats and dogs aren't able to 'burn the world', neither will future humans be if there are machine overlords there to stop them. The overlords will be able to control just about everything by 'pressing a few buttons'. Except there won't be any buttons either. Maybe a few of the really clever humans will manage the equivalent of opening a closed robot door on their own, but handling the weapons of the future? Oh no, those will be much too dangerous and complicated for humans to handle.

Postby Forrest » April 3rd, 2012, 8:34 pm

RGE wrote:
Forrest wrote:And then the machine overlords embark on a magnificent quest to convert all matter into intelligent information-processing life, and the ignorant savages under their care freak out and burn the world to "save" it.

Hardly. Just as our cats and dogs aren't able to 'burn the world', neither will future humans be if there are machine overlords there to stop them. The overlords will be able to control just about everything by 'pressing a few buttons'. Except there won't be any buttons either. Maybe a few of the really clever humans will manage the equivalent of opening a closed robot door on their own, but handling the weapons of the future? Oh no, those will be much too dangerous and complicated for humans to handle.

That was a reference to a comic. The one it linked to. Where that happens.

Then they come back in time to colonize the present and make sure the machines never "take over".