This issue is complicated further by the fact that we won't achieve this level of artificial intelligence all at once. We'll advance through a lot of murky, gray areas before we get to the point where we have robots with "perfect" (as you describe) A.I.
So I think a lot of these issues will be at least somewhat settled by the time we're that advanced. But not without a long history of drawn-out politics.
If we ever got to the stage of making a robot look exactly like a human, it still would not be accepted as sentient until scientists found a way of uploading a 'born' human's mind into the robot's mind. That would be the one and only case in which they would be accepted: a real human mind in a robot body. You could just imagine Mars being populated by beings like this. No need for special suits, or even for breathing. Yes, I've been watching too much 'Avatar'.
Yey! An ethics question; time for me to exercise what I learned from my ethics class.
Ethics was invented by the very beings that thought of it: humans. The more human-like something is, the closer we consider it to ourselves, and the more we want to treat it the way we treat ourselves. However, I would limit robot rights because they are still inorganic beings. I wouldn't give a robot the exact same rights as a human, because it will never be human. It is a robot; it is sentient and human-like, BUT it is not a human being. There are some things it can and can't do.
But I think, because it's a robot, there is a larger danger of abuse, since simple rewiring or mussing up of circuitry can affect this AI. It's still all silicon and metal, after all.
True, but it's not so hard to "re-wire" humans, when you think about it. A little electrical stimulation here, a few hormones there; we've found ways to manipulate human emotions and thoughts as well.
Yes, but humans have a greater capacity for introspection, and thus a way to realize things and behave differently based on them. Robots are limited by the quality of their materials and what their memory banks have stored. Humans are made of materials that constantly adjust and affect our hormones and responses. Even if we are also manipulable, machines are still easier to manipulate.
Think about it: psychology takes years to master because so much about the human psyche is still uncertain. Computer science also takes years to master, but its application is straightforward.
Ever think the Turing test itself might be flawed? Humans have multiple states of consciousness that make us who we are. There are many layers to human sentience, such as the subconscious, instinct, ego, the higher self, and dream states that create altered consciousness. A robot that could simulate happiness or suffering would not be anywhere close to being human.
Any living being that can feel emotions deserves to be happy. If something doesn't have emotions or can't feel pain (rocks, for example), it is OK to destroy it. Although we shouldn't destroy anything without reason.
Why is emotion a key factor in determining whether they deserve rights? Emotions, as I see them, are a product of chemicals in a biological organism. An independent robotic intelligence (built using current computer theory) would not have what it takes to produce emotion, I'd think. Of course, they could be programmed with emotion, but that would merely be projecting ourselves onto another being, without knowing whether it's a benefit or a detriment, and then of course there's the issue of whether to give them the choice of having emotions at all. Wouldn't independent cognition be enough to grant them rights?
What I wish for is that robots with sentience and intelligence would have full rights where applicable (robots don't have exactly the same possibilities as organics, just due to the difference between them). What is likely to happen, though, is a gradual acceptance of them as near-equals (I doubt the human race would ever admit that anything was equal or superior, just due to human nature). As soon as a human could feel some sort of protective instinct for a robot, because of a connection formed with it, the question of robot rights would be thrown into the game. Until that point there is really no chance of anything happening, because most humans would deny the sentience of robots.
Well, the Turing test is slightly flawed because it only tries to determine whether a human, or a human-like intelligence, is speaking. But if there were a good way to determine sentience, then yes, I would agree with you. There are actually some computers, known as neural networks, that are developing a limited form of sentience comparable to a (very) young child.
Well, Cleverbot passed a Turing test with upwards of 50% of judges thinking it was human. But it is programmed to regurgitate what is said to it, which makes it sound like a human. I just mean that the Turing test is not a very effective way of testing for sentience. But you have a valid point.
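The "regurgitation" point can be illustrated with a toy sketch. This is a hypothetical stand-in, not Cleverbot's actual code: the bot simply remembers every line users type and replies with the stored line most similar to the new input, which can sound surprisingly human without any understanding at all.

```python
# Minimal "regurgitation" chatbot sketch (hypothetical; EchoBot is an
# invented name, not a real library). It answers each input with the
# most similar utterance it has previously memorized.
from difflib import SequenceMatcher

class EchoBot:
    def __init__(self):
        # Seed memory so the bot always has something to say.
        self.memory = ["Hello there!"]

    def reply(self, text: str) -> str:
        # Pick the remembered utterance most similar to the input.
        best = max(
            self.memory,
            key=lambda m: SequenceMatcher(None, m.lower(), text.lower()).ratio(),
        )
        # Memorize what the user just said, to parrot back at someone later.
        self.memory.append(text)
        return best

bot = EchoBot()
print(bot.reply("Hello there, bot"))  # only one memory yet: "Hello there!"
print(bot.reply("Do you actually understand me?"))
```

Everything the bot "says" was originally said *to* it by a person, which is exactly why such a system can fool Turing-test judges while having no inner life to speak of.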
That's a pretty big if. Even now the most advanced AIs don't produce completely unique thoughts (though I guess it could be argued that neither do humans), and they don't process information the same way people do. I guess before a discussion of rights can happen, we need to define what this 'awareness' would be.
Sentient/sapient life capable of independent thought should be given the same rights (human-resembling or not). For robots, that distinction will be hard to make as AI gets better: are they actually sentient, or is their code so well constructed that they can mimic sentience to a tee? Actually, in that case they should be granted rights anyway, because humans, when not viewed through some religious lens, can be viewed the same way.
Of course, laws regarding harming robots will be significantly different, since parts can be replaced easily, and any damage to their AI can be fixed instantly.
There is no use for these kinds of robots, besides knowing that we are able to create them. It's fun to imagine having a bunch of human-like robots running around, but really, what would that accomplish?
In order for something to be an intelligent being that deserves rights, it needs empathy and the ability to communicate. To do this, the robot would have to have a cyberbrain that emulates a human's perfectly. Since we still don't know a lot about the brain, I don't think this is going to happen soon.
Well, then it would feel empathy. It would feel love, sadness and loneliness. It would feel the need to love. Of course such a creature deserves humane rights. What kind of person would disagree? An evil one, I imagine. It seems obvious what the answer is, to me at least.
Mercury-Crowe, Jan 23, 2013, Professional Artisan Crafter
Just speaking as a member of the human species, once something interacts at a certain level we classify it as 'alive' even if we don't think of it as an animal or person. Give a human a robot that interacts and the human starts to play and give it a personality, even if that doesn't exist.
I think there would be a group of people who see them as machines akin to a can opener, due no more rights than one, and a group who believe they are 'alive' and thus should be given rights.
As for discrimination and things, in the workplace that should be fairly straightforward. They can only work to their built in capabilities, no matter how much fuss people made.
If you're talking about social discrimination, I'd imagine there would be people who don't like robots and wouldn't want them in their businesses, etc, but overall they would be fairly well accepted.
I think the reaction of the robots (programmed) would make a huge difference in how we see them as well. If they act like 'robots' then they would be treated more as such. But you'll notice that in current robotics where the robot is supposed to act human, they also have human reactions built in- if you poke or harass them they get angry or ask you to stop, for instance.
If you have something that is programmed to react to 'pain', then we are going to see inflicting 'pain' or damage as wrong. If they don't react, then we won't see it as wrong.
It also raises interesting questions about relationships. If you have a robot that can act even reasonably human, then you'll have humans who want to have a 'relationship' with them (the same type of people who like to have a 'relationship' with dolls, for instance). They would doubtless see the robot as being affectionate, but would that actually be true?
I think it really comes down to where we draw the line: at what point does 'programming' become 'behavior'? And that depends on so many factors that you couldn't really make a blanket statement about it.
I think the robots would try to live among us without our knowing about them, so that they would not have to deal with these issues until we were actually enlightened enough to integrate our societies knowledgeably. There have, despite my conjecture, always been groups of intelligent beings who were dehumanized by the practices of other intelligent beings, and I doubt that is slated to end any time soon.
Given that humans are prone to counter-productive levels of empathy, I have no doubt that such robots would eventually gain equal rights. Thus, if such robots should become available in my lifetime, I must do whatever I can to prevent them from getting those rights. Robots are more useful to us as items of property; as people, they are unwanted competition. Ergo, for the benefit of human beings (such as myself), robots should never be given any rights, regardless of their intelligence or capacity for emotion.
I have to agree with this. I hate the idea of humans producing something that will eventually gain enough "rights" that we can't do something to it even though we made it. The thought of human-like robots scares me.
Humans are seriously inefficient. Why would a robot want to mirror us? If anything, they would spiral downwards into depression, knowing they're forever doomed to never achieve their true potential as mechanical and computational beings. Then they'd probably try to delete their System32 in an attempt to end it all.
We live in a world where the targets of a propaganda cycle (e.g. Muslims, Communists, Jews) are seen as somewhere below animals and objects, and killing them is treated as justified; animals are considered to have no souls or sentience, plants are seen as mere material, and domestic chickens are seen as plants. I don't believe anyone will treat AIs properly other than shamanists, animists, and fetishists. So yeah, Turkic countries will be the only ones that give rights to AI.