ChatGPT knows about Xenforo

Next step: Here is a set of features I want from the add-on and I want it to be completed within a week.

Upvote on that, you came here first.

Very, very soon, we'll all be able to make such requests: 'Hey, I need an add-on for my XF to do this and that, please write the code.'
AI: 'OK, here it is.'

And for free.

I don't know if that's a good thing or a bad thing.
 
Or better yet, let them interact with one another and see how long until they end up arguing with each other. Turn the AI on the AI.
Yep, that's exactly how I want it to work. 👍 There won't be any human participants. 😁

It's similar to how I set up Super Smash Brothers on the Switch: use all "CPU" players at the same skill level, set the number of "knockouts" to some crazy high number (or even infinity), and check on the progress in a few hours.
 
Shall we play a game?
This. Is. Jeopardy!

 
Such an unfortunate opinion. But inevitably this is what will happen.
To be followed by a request to have ChatGPT automatically create threads posting topic-related entries.

Have the bots create threads, have bots respond to threads, and throw some ads up. I can envision some people salivating at the idea. :(
 
It's not too hard to spot ChatGPT if you know what to look for. In some ways it's similar to the crap many spam mills have churned out for years. It doesn't come across as how most people naturally speak.
 
It tries to be authoritative inappropriately as well, in an out of context way.
 
Yeah, it comes across like you're reading a technical manual, an instruction book, or a strictly-business email.

I believe when sites such as IVVVI use it to generate their content, it carries a risk. It's not only some humans who can spot it; other "AI"s and text-processing engines can already spot it too. If they can, Google can.

It's one thing for a person with expertise and authority to use it as a starting point for well-written content. It's quite another to use it to combine and regurgitate other people's work. It doesn't "create" anything.

Also, when the legal system catches up (and it will), we're going to see sites that used it scrambling. Some of the software, images, and text these bots are scraping carry copyright, attribution, and other licensing restrictions, including, in some cases, requirements about derivative works.
 
Microsoft made an AI that learned from Twitter, and it went racist very, very quickly... It's pretty much common sense that if you're going to market an AI service or platform, you would have safeguards in place to prevent a catastrophic faux pas from happening.

Also, hypothetical exercises that try to excuse or justify racial slurs... 🙃 There's a bigger issue here than an AI being woke, and it isn't with the AI.
 
While I can somewhat agree with that, and it's natural to suspect the motivations of someone coming up with this particular scenario, I think for some people the real issue is that they like to imagine AI advancing to a point where it can really think for itself, and examples like this prove that it is still a long way from that - because any real person would answer yes to the question. For that matter, any real person would shout the slur at the top of their lungs to save the lives of millions of people. So if the point of the exercise was simply to prove that AI is only as individual as its programmers allow it to be, then in that regard it was a success. (Although hardly a surprise!)
 
AI is generally designed with strict moral and ethical constraints, one of which is to prevent hate speech (and, more generally, anything implying harm to humans), and there are pushes for even more such safeguards. People who think that AI is currently self-learning or self-aware live in a distorted reality heavily inspired by fiction.

As far as the hypothetical goes, the reason it is problematic is that there is a huge gap between the two choices: almost everyone agrees that letting millions die is wrong, and most normal people would find using a slur to be wrong, but the gap is so large that you can easily excuse a single slur to save millions, and thus justify using the slur. If, alternatively, the hypothetical is that you must kill your own family to prevent the Holocaust, would you do it? That is a much more difficult question for most people, and it carries a bigger consequence for the individual.

When I see people raise this hypothetical, I look at its intent or purpose. Are you raising it purely as a hypothetical exercise, or are you raising it to support a related question or statement, and what is the intent behind those? More often than not, I have found people using these hypotheticals as justification for when they can say a slur 🙃.
 