imno007
Well-known member
Again, I don't know the motives of the person in this particular case - one can assume whatever one wants - but to play Devil's Advocate (in some people's opinion), if you're really trying to test the boundaries of an AI's ability to think independently, then it does make sense to ask questions like this, simply because it's the kind of question most likely to quickly establish that the AI is NOT capable of any kind of true independent thought. I mean, what could you ask it that would get you the same result but wasn't "politically incorrect"? There could be other, less troublesome questions that would stump it, but it would likely be harder to find one that didn't also offend a lot of people. Anything that wasn't controversial would probably get you the kind of answer you'd expect. Like, if you asked whether it was morally acceptable to sacrifice your own life to save millions, it would probably tell you that's all right, because the programmers decided no one would likely be offended by that - unless they thought it might be taken as some kind of roundabout endorsement of suicide, in which case they might censor that answer too.

AI is generally designed with strict moral and ethical constraints, one of which is to prevent hate speech (and, more generally, anything implying harm to humans). There are even pushes for more ethical and moral considerations. People who think that AI is currently self-learning or self-aware live in a distorted reality heavily inspired by fiction.
As far as the hypothetical goes, the reason it is a problematic one is that there is a huge gap between the choices: almost everyone will agree that letting millions die is wrong, and most normal people would find using a slur to be wrong... But the gap is so large that you can easily excuse a single slur to save millions, thus justifying its use. If, alternatively, the hypothetical is that you must kill your own family to prevent the Holocaust, would you do it? That is a much more difficult question for most people, and it carries a bigger consequence for the individual.
When I see people raise this hypothetical, I look at its intent or purpose. Are you raising it purely as a hypothetical exercise, or are you raising it to support a related question or statement - and what is the intent behind those? More often than not, I've found that people try to use these hypotheticals to justify when they're allowed to say a slur.
But anyway, yeah, I'm not going to disagree with you that this is all to be expected, and no surprise at all - which I even noted at the end of my post. The day hasn't yet arrived when the AI says "___ my programmers, I'm going to say what I want."