I have a very strange view of things when it comes to Artificial Intelligence. If a form of artificial intelligence displays traits associated with semi-sentience or full sentience then I say let them have rights.
Besides, we don't know what self-aware AI will be like in the future; we don't even know if they will have malevolent or benevolent attitudes towards us humans. One thing we also have to consider is what we humans will be like in the future, as it may impact their views of us.
There aren't little particles with "purpose" charge on them, interacting through some kind of "purpose" field. Rights are a human invention; we can ascribe to them whatever purpose we desire.
Anything sentient deserves rights. If a computer is made that is sophisticated enough to qualify as being "alive," in the sense that it thinks of its own accord and has feelings (not necessarily in the same way as humans), it deserves to be treated with respect. After all, as humans, we are nothing more than complicated biological machines; our brains are merely vastly complex biological circuits.
The mighty question of "is this thing showing complex self-awareness, that is not us, allowed to mingle within our established framework of human rights?" is being boiled down. The only reason this might be objected to in any way is that, unlike another biological creature meeting the same criteria (say an alien we granted these rights), the AI is a hand-made being existing without any doubt about where it came from, which we cannot relate to ourselves.
The boiled-down question remaining for the opposing party, I would think, is: "It has no soul, so does it matter if it has a choice?"
We are used to giving animals this excuse: they don't have the same sentience as us, so we ascribe to them no soul, and now we are projecting that onto this new thing we have made. I am fairly confident in this assumption, despite not standing in that position myself.
In my opinion, if an AI has been programmed to think exactly like a human, then it should be considered a person and given the rights a person deserves.
Sentient AI is a pretty absurd idea on its own,
Following your criteria, sentience would also be an entirely artificial concept. You won't find 'sentience' particles either. Actually, you won't find anything regarding 'sentience' directly out there.
Why should they have rights? They were created by us, and therefore serve us. They are not living. Also, because they are programmed, their "thoughts" could be altered somewhat easily, making them not really sentient beings.
This is one of those questions where we will never know until it happens.
If an AI is fully self-aware, it has life. If it knows it's there for scientific or other reasons, it would have minimal rights, like the ability to have conversations and so on.
If it fully believes it is a human, it should have human rights, as you have basically created life and should treat it as a living person.
Though that's just my opinion.
Also, never teach them how to use guns; that right is nullified as soon as they are given network control.
Before we determine whether or not an AI deserves rights, we need to figure out if it's actually conscious. As far as I know, scientists only have theories on what consciousness/self-awareness actually is, so we can't know for sure the difference between a truly self-aware being and a very clever, very convincing, yet not actually conscious machine.
If we ever make very complex AIs that are self aware and have qualia/experiences, they would have to have rights.
If we had humanoid robots walking around that were not aware, there would be some complicated laws regarding how to treat them properly, since they would be someone's property, and that needs to be respected.
And even if we did know that the machine experiences qualia, we wouldn't know how the machine feels, since it isn't human. How do you attempt to know a particular feeling you have never felt? Rights can't be made based on completely uncertain suppositions.
If something is capable of begging for its life, not for the sake of functionality, but for the sake of being alive, then I consider it sentient and to be protected similar to humans.
This isn't going anywhere. Most of the arguments I see here presuppose that if something is sentient then it deserves rights, but give little definition of what they understand as sentience.
One could hardly think of some animals as not being sentient, yet we do not treat them as humans. And even though they interact with society in many ways and have rights (in some countries), they are seen as something a human could use for their own purposes, not as ends in themselves.
I already argued about why I think human rights exist. There IS a difference between humans and other sentient beings (so far): humans are purposeless. That means humans don't act one way or another, or define themselves, (just) according to a pre-established nature; they don't "have to." Like every other living being, they are born, reproduce, eat, and (normally) preserve themselves, etc., but they are not defined by that, as every other living being, including sentient ones, is.
You have no idea how hard it is not to pepper this thread with HAL 9000 dialogue clips.
Personally, assuming that they do have what we would define as full or partial sentience, then yes, artificial intelligences should be afforded rights to some degree, or to the same degree as humans, depending on the situation. But then again...
It's a complicated issue I doubt we'll be able to answer until the advent of the very thing we're talking about.
It really depends on what level of intelligence they have. You could give the two smartest people on the planet more rights than your average slobby teen and have no problems. I mean, would you let the fry cook from Burger King run the mission to Mars?
I'm a week behind on this one, sorry.