[021] ChatGPT Framework [Deleted]

021 submitted a new resource:

[021] ChatGPT Bots - Bot framework for ChatGPT API.

This add-on provides helper functions for working with ChatGPT.
It allows you to set an API key for add-ons that work with ChatGPT and avoid loading duplicate dependencies.


Developer usage guide


Get the OpenAI API key
PHP:
$apiKey = \XF::options()->bsChatGptApiKey;

Get the OpenAI API client
PHP:
/** @var \Orhanerday\OpenAi\OpenAi $api */
$api = \XF::app()->container('chatGPT');

Get reply from ChatGPT
PHP:
use...

Read more about this resource...
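Putting the helpers above together, a minimal end-to-end sketch could look like the following. This is an illustration only: the chat() call and its parameters follow the orhanerday/open-ai chat completions API and are assumptions, not the add-on's documented usage.
PHP:
use Orhanerday\OpenAi\OpenAi;

/** @var OpenAi $api */
$api = \XF::app()->container('chatGPT');

// Ask the chat completions endpoint for a single reply (model and prompt are placeholders)
$rawResponse = $api->chat([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Write a short welcome message for a forum.'],
    ],
]);

// The library returns the raw JSON body; pull the assistant's reply out of it
$decoded = json_decode($rawResponse, true);
$reply = $decoded['choices'][0]['message']['content'] ?? '';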
 
The following may be an interesting use case:
  1. Feed posts to ChatGPT that meet specified criteria, e.g. older than X days and in threads with more than Y views.
  2. Let ChatGPT check old posts, identify those with a low readability score, and suggest improved text with a higher readability score.
  3. Show the old and new text side by side to the admin.
  4. Let the admin select which posts to process.
This would allow admins to bulk improve old posts and thereby improve Google ranking (a rough sketch of step 1 is shown below).
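For illustration, step 1 could be done with the standard XenForo finder; the 90-day and 10,000-view thresholds below are arbitrary example values, not part of the suggestion.
PHP:
// Find candidate threads: older than X days and with more than Y views (example values)
$cutoff = \XF::$time - 90 * 86400;

$threads = \XF::finder('XF:Thread')
    ->where('post_date', '<', $cutoff)
    ->where('view_count', '>', 10000)
    ->fetch();

foreach ($threads as $thread)
{
    // The first post's text would then be sent to ChatGPT for a readability check
    $originalText = $thread->FirstPost->message;
}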
 
This would allow admins to bulk improve old posts and thereby improve Google ranking.
But this is touching already-posted content rather than creating new content: I post something, and a bot effectively reminds me that I'm a bit of an idiot by editing my content to make it "better". That gets weird as a philosophy, where all the content ends up sanitized by an AI... nothing human anymore.
 
If a member posts unreadable content on my forums, then we edit it to make sure it's readable. A lot of people post from mobile and use chat speak, which is not great for SEO or usability. My philosophy is that public forum posts should be useful to readers and therefore need to be readable. Otherwise the content is useless and contributes to forums going the way of the dodo.
I understand that some communities would never touch a members content and that is a choice everyone can make.
 
If a member posts unreadable content on my forums, then we edit it to make sure it's readable
We have a member who posts almost entirely in cryptic language. Even the mods are often not sure what he is saying. There have been jokes about him being an AI (except another member actually knows him and has met him in person). I'm not sure an AI could make sense of him any better than we can. And there would be an uproar from his supporters if we started messing with his posts.
 
Yeah, we had som mbrs lIk dat n d past az weL. Cases lIk dat iz Y DIS process needs admin review & selection. Only posts selected by d admin shud git cleaned ^.

Yeah, we had some members like that in the past as well. Cases like that is why this process needs admin review and selection. Only posts selected by the admin should get cleaned up.
 
021 updated [021] ChatGPT Bots with a new update entry:

1.1.0

Message repository

fetchMessagesFromThread – Loads the bot's context from the thread. Quotes of the bot are transformed into its own messages so the context stays correct.
PHP:
public function fetchMessagesFromThread(
    Thread $thread,
    ?int $stopPosition = null, // post position up to which the context is loaded
    ?User $assistant = null, // bot user whose posts are marked as assistant messages in the context
    bool $transformAssistantQuotesToMessages = true, // If false, bot message quote...

Read the rest of this update entry...
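For reference, calling this repository method could look roughly like the sketch below; the repository identifier and class name are assumptions based on the add-on ID, and $thread, $post and $botUser stand in for values from your own code.
PHP:
/** @var \BS\ChatGPTBots\Repository\Message $messageRepo */
$messageRepo = \XF::repository('BS\ChatGPTBots:Message'); // repository name is an assumption

$messages = $messageRepo->fetchMessagesFromThread(
    $thread,          // \XF\Entity\Thread to build the context from
    $post->position,  // stop loading context at this post position
    $botUser          // bot user whose posts become assistant messages
);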
 
Sorry, I'm confused by the "Temperature" setting.

In the OpenAI docs, it says: "For temperature, higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic."

... but the setting in the addon is an integer:
[Screenshot: the add-on's Temperature option, shown as an integer field]

So, is "8" in the addon settings the same as "0.8" in OpenAI?
 
021 updated [021] ChatGPT Framework with a new update entry:

1.2.0

\BS\ChatGPTBots\Response class features

getReplyWithLogErrors(OpenAi $api, array $params): string – Requests a response from the OpenAI API and parses it into a reply, logging any failure together with the necessary information.

Usage example
PHP:
$reply = Response::getReplyWithLogErrors($api, [
    'model'             => 'gpt-3.5-turbo',
    'messages'          => [],
    'temperature'       => 1.0,
    'frequency_penalty' => 0...

Read the rest of this update entry...
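A fuller version of the truncated example might look like this; the remaining parameters simply follow the OpenAI chat completions API and are illustrative, not the add-on's actual defaults.
PHP:
use BS\ChatGPTBots\Response;

$reply = Response::getReplyWithLogErrors($api, [
    'model'             => 'gpt-3.5-turbo',
    'messages'          => $messages, // context built earlier, e.g. via fetchMessagesFromThread
    'temperature'       => 1.0,
    'frequency_penalty' => 0,         // values below are illustrative only
    'presence_penalty'  => 0,
    'max_tokens'        => 500,
]);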
 
Hi @021 - as you also develop some plugins using GPT 3.5 - here is my wishful thinking:

I have GPT 3.5 integrated in Obsidian and can within any note get contextual information on what I am writing in the whole note or in a line or paragraph.

For our forum, bots or autoresponders or even contextual help with answers to posts don't make much sense.

What I would like to see is the possibility to work with text that I myself put into the editor. That could be information on what I am writing, help with arguments for my standpoint, a summary of a longer text that would otherwise be a copyright problem, or a translation of a foreign-language text made right inside the post, instead of copy-pasting back and forth between DeepL and the forum.

This may be even easier to implement than the plugins that take into account other posts or the whole thread... (?)
 
This may be even easier to implement than the plugins that take into account other posts or the whole thread... (?)
No, it's actually more difficult to implement because of the work involved with the editor. I accept suggestions for new add-ons in this forum; if your suggestion is in demand, it can be implemented.
 
021 updated [021] ChatGPT Framework with a new update entry:

1.3.0

The settings group has been renamed to match the name of the add-on
New method removeMessageDuplicates in the message repo
New method fetchMessagesFromConversation in the message repo
\BS\ChatGPTBots\Response::getReplyWithLogErrors now accepts a $throwExceptions argument to throw exceptions on error instead of returning the default reply
The prepareContent method for messages now also converts mentions into hits

Read the rest of this update entry...
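A small sketch of how the new $throwExceptions flag might be used; the argument position and the exact exception type are assumptions based on the changelog, so a generic \Exception is caught here.
PHP:
use BS\ChatGPTBots\Response;

try
{
    $reply = Response::getReplyWithLogErrors($api, [
        'model'    => 'gpt-3.5-turbo',
        'messages' => $messages,
    ], true); // assumed third argument: throw on error instead of returning the default reply
}
catch (\Exception $e)
{
    \XF::logException($e, false, 'ChatGPT request failed: ');
}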
 
Where do I put this?

/** @var \Orhanerday\OpenAi\OpenAi $api */
$api = \XF::app()->container('chatGPT');
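(That snippet is plain PHP for an add-on's own server-side code, e.g. a service or controller class; below is a minimal sketch of one possible placement, with a purely hypothetical namespace and class.)
PHP:
namespace YourVendor\YourAddon\Service; // hypothetical add-on namespace

use Orhanerday\OpenAi\OpenAi;
use XF\Service\AbstractService;

class Example extends AbstractService
{
    public function getChatGptApi(): OpenAi
    {
        /** @var OpenAi $api */
        $api = \XF::app()->container('chatGPT');

        return $api;
    }
}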
 