XF 2.3 AI agents

I’ve been using them for a while in Cursor / Windsurf, and Claude Code since it was released in beta a few months back. I’m now on the Claude Max plan, and it’s mind-blowing how good it is. I’ve shown @Naz some of the things I’ve been doing with it over the last couple of weeks.
Tempted to upgrade from Claude Pro to Max plan as well. Updated to Claude Max just now 🤓 A bit late to the party, but I started using Claude Code about three weeks ago and I'm loving it. I wrote a Gemini CLI MCP server wrapper and added it to Claude Code as an MCP server, so the Claude Sonnet 4 and Gemini 2.5 models can be friends and collaborate on coding tasks/reviews and verify each other's generated code: https://github.com/centminmod/gemini-cli-mcp-server :D
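For anyone wanting to try the same setup: wiring an external MCP server into Claude Code is typically just a config entry. A sketch of a project-level `.mcp.json` registration follows; the command, script path, and env var name here are assumptions for illustration, so check the wrapper's README for the real entry point:

```json
{
  "mcpServers": {
    "gemini-cli": {
      "command": "python",
      "args": ["/path/to/gemini-cli-mcp-server/server.py"],
      "env": {
        "GEMINI_API_KEY": "your-key-here"
      }
    }
  }
}
```

Claude Code also supports registering servers from the command line (`claude mcp add`), which writes an equivalent entry for you.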

 
I really hope I get to the point I can make use of that.
 
I'm doing the same: got an MCP server set up for Gemini (though I've been using Gemini CLI as well), and one for OpenAI so it can use the recently discounted o3 model along with the image models. I "almost" hit the limit on the Max plan today; it was 11:40am and the limit would have reset at 12pm, so only a 20-minute wait.
 
I'm doing the same, got an MCP server set up for Gemini (but I've been using Gemini CLI as well), and OpenAI so it can use the recently discounted O3 model as well along with the image models.
Yeah, I'm using Gemini CLI and just installed OpenCode to see what the fuss is about. I'll look at creating MCP servers for Claude and OpenCode too, so other MCP-client-supported apps can access other LLM models :)
I "almost" hit the limit on the Max Plan today, was at 11:40am, and it would have reset at 12pm, so only 20 minutes.
20 mins, nice! Good to know I won't have to wait that long :)
Also, have a look at ccusage now that you're on the Max plan:
Haven't used Opus 4 much yet, but you can see when I upgraded to Claude Max today with a bit of Opus 4 usage :D

 
Claude Max $100
Wow, that is a bit pricey!
Yes, I resisted for a while, but I kept running into usage limits, especially when combining Claude and Gemini 2.5 via MCP, which increased usage further. The subscription works out well given the fixed cost versus paying Claude's per-token API prices. Just look at my ccusage output above from before July 4th: the Claude Pro $20/month + GST plan had already paid for itself, and in token-equivalent value it even covered the Claude Max $100/month plan :D
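The "paid for itself" claim is easy to sanity-check yourself. Here's a back-of-envelope sketch; the per-million-token prices are assumptions roughly matching Claude Opus 4's published API list prices at the time, and the token counts are made-up, so plug in current prices and your own ccusage numbers:

```python
# Back-of-envelope check of when a flat-rate plan beats per-token API billing.
# Prices below are assumptions for illustration (approximate Claude Opus 4
# API list prices); substitute current pricing and your real token counts.

OPUS_INPUT_PER_M = 15.00   # USD per 1M input tokens (assumed)
OPUS_OUTPUT_PER_M = 75.00  # USD per 1M output tokens (assumed)

def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
    """What the same usage would have cost via pay-per-token API billing."""
    return (input_tokens / 1_000_000 * OPUS_INPUT_PER_M
            + output_tokens / 1_000_000 * OPUS_OUTPUT_PER_M)

# Hypothetical month of heavy daily use: 20M input, 1.5M output tokens.
cost = api_equivalent_cost(input_tokens=20_000_000, output_tokens=1_500_000)
print(f"API-equivalent cost: ${cost:.2f}")
print("Max plan cheaper" if cost > 100 else "API cheaper")
```

With those made-up numbers the API-equivalent cost lands around $412, so a $100 flat rate wins comfortably; light users would see the opposite.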
Not really when you actually use the product enough to get the ROI for it.
Yeah. I'm going to consolidate and review my hosting costs and server deals to see if I can fit the Claude Max $100 + $10 GST into the budget. I've also been training Claude Code on XenForo and Centmin Mod LEMP stack data for future projects/tools/features ;) Preparing Centmin Mod to handle local server AI/LLM software stacks for myself and users too :D
 
I've been using it to help build out ideas I've had and wanted to implement for ages. Yesterday I started building a dashboard for central monitoring of my JetBackup jobs so I can watch them all, see what's happening, and have intelligent scheduling so backups of multiple servers to the same location don't clash.
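The clash-avoidance part of that scheduling idea boils down to an interval-overlap check on jobs sharing a destination. A minimal sketch, with made-up job names, destinations, and times:

```python
# Sketch of backup-schedule clash detection: two jobs clash if they write to
# the same destination and their time windows overlap. All data is made up.
from dataclasses import dataclass

@dataclass
class BackupJob:
    name: str
    destination: str   # shared backup target, e.g. a NAS or S3 bucket
    start_min: int     # start time, minutes since midnight
    duration_min: int

    @property
    def end_min(self) -> int:
        return self.start_min + self.duration_min

def clashes(a: BackupJob, b: BackupJob) -> bool:
    """Same destination and overlapping windows means a clash."""
    return (a.destination == b.destination
            and a.start_min < b.end_min
            and b.start_min < a.end_min)

def find_clashes(jobs):
    """Return every clashing pair of jobs in the schedule."""
    return [(a.name, b.name)
            for i, a in enumerate(jobs)
            for b in jobs[i + 1:]
            if clashes(a, b)]

# Hypothetical schedule: web1 and web2 both target the same NAS.
jobs = [
    BackupJob("web1-nightly", "backup-nas", start_min=60, duration_min=90),
    BackupJob("web2-nightly", "backup-nas", start_min=120, duration_min=60),
    BackupJob("db1-nightly", "backup-s3", start_min=60, duration_min=45),
]
print(find_clashes(jobs))
```

Running it flags the web1/web2 pair (1:00–2:30 overlaps 2:00–3:00 on the same NAS), while db1 is fine because it targets a different destination. A real dashboard would pull these windows from JetBackup's job history instead of hard-coding them.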
 
I'm a bit confused with the Claude pricing.

I'm experimenting with agent mode in VS Code after buying a small number of API credits (I'm currently on the Claude free plan).
I think because I'm on the free plan, my API key only gives me access up to Claude 3.7.

If I sign up for Pro, do you get an API key to use with it? Or do you still have to buy API credits?

I keep hitting rate limits on the API too as I'm on Tier 1; to hit Tier 2 I'd need to top up with $40 of credit, but I don't want to do that if I don't get access to the Claude 4 models.

What's my best option for light to medium usage on the best models to test for a month or so?
 
I came across this in the Batch Processing document. Looks like it's half price and useful for content moderation 🤔


How the Message Batches API works

When you send a request to the Message Batches API:

  1. The system creates a new Message Batch with the provided Messages requests.
  2. The batch is then processed asynchronously, with each request handled independently.
  3. You can poll for the status of the batch and retrieve results when processing has ended for all requests.
This is especially useful for bulk operations that don’t require immediate results, such as:

  • Large-scale evaluations: Process thousands of test cases efficiently.
  • Content moderation: Analyze large volumes of user-generated content asynchronously.
  • Data analysis: Generate insights or summaries for large datasets.
  • Bulk content generation: Create large amounts of text for various purposes (e.g., product descriptions, article summaries).
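The three steps above map onto a few SDK calls. Here's a sketch of the content-moderation case, building the batch payload in the shape the Message Batches API documents; the model id, prompts, and comments are made-up examples, and the actual submit/poll calls need an API key, so they're only indicated in comments:

```python
# Sketch of a moderation batch in the shape the Message Batches API expects.
# Each request gets a custom_id so results can be matched back to the item,
# since batch results are not guaranteed to come back in submission order.

user_comments = [
    "Great post, thanks for sharing!",
    "Buy cheap watches at totally-not-spam dot example",
]

# Step 1: build one Messages request per comment to moderate.
batch_requests = [
    {
        "custom_id": f"comment-{i}",
        "params": {
            "model": "claude-sonnet-4-20250514",  # example model id
            "max_tokens": 16,
            "messages": [{
                "role": "user",
                "content": f"Reply SPAM or OK for this forum comment:\n{text}",
            }],
        },
    }
    for i, text in enumerate(user_comments)
]

# Step 2: submit the batch (requires an API key):
#   batch = client.messages.batches.create(requests=batch_requests)
# Step 3: poll batch.processing_status until it is "ended", then iterate
#   client.messages.batches.results(batch.id) and match on custom_id.

print(len(batch_requests), batch_requests[0]["custom_id"])
```

At half the per-token price, queuing a day's worth of reported posts this way and reading the verdicts back later is a decent fit for forum moderation, where nothing needs an instant answer.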
 
I don't know how to program; I only customize CMSs (I've been doing that for over 20 years). With ChatGPT, in one week I managed to create three scripts to import an old forum that is 20 years old. Over the years, instead of migrating the CMS, it had been put in read-only mode and a new forum was opened on another CMS...
snitz2000
pnphpbb2 (a fork of phpbb2 by postnuke)
bbpress
ChatGPT prepared the scripts for me; I gave it the errors or the results it asked me to check, and in the end I managed to migrate everything...
I did hundreds of tests, but with ChatGPT I succeeded...
I would have liked to make the scripts public, but unfortunately they were built through continuous file updates and rely on many things that I knew and that ChatGPT didn't tell me, so they wouldn't work for others.
Still, I imported a forum with more than 1 million posts that would otherwise have been lost.
I forgot...
I also prepared all the scripts for the redirects from the old forums to the new one, with mapping of the old IDs to the new ones.
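ID-mapped redirects like these usually come down to a lookup table plus a 301 rule. A minimal sketch, assuming a phpBB2-style old URL scheme and a XenForo-style new one (both patterns and the ID map are made-up examples), that turns the mapping into nginx `map` entries:

```python
# Sketch: given old forum topic ids and the new ids they were imported as,
# emit nginx "map" entries that a "return 301" rule can consume. The old
# (phpBB2-style) and new (XenForo-style) URL patterns are illustrative only;
# adjust them to the real forums' URL schemes.

old_to_new = {101: 1, 102: 2, 250: 17}  # hypothetical id map from the import

def nginx_map_lines(mapping):
    """One regex-keyed map entry per migrated topic."""
    return [
        f"~^/viewtopic\\.php\\?t={old}$ /threads/{new}/;"
        for old, new in sorted(mapping.items())
    ]

for line in nginx_map_lines(old_to_new):
    print(line)
```

The generated lines would sit inside an nginx `map $request_uri $new_thread { ... }` block, with `if ($new_thread) { return 301 $new_thread; }` in the server block doing the redirect; the same table works just as well from a small PHP router if nginx config isn't an option.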
 
I have been looking at this for about a month or so now. Was even thinking about starting a forum (AI&XF) on the subject.
Does anyone have a good plan/script for creating XenForo add-ons?
I was wondering as well.
 
Here is a crazy little ten-hour ChatGPT session I had today trying to refine a page node's HTML code with JS. The code is not overly complex, and as you can see there were quite a few iterations. I wanted to test the AI on changing a form reload button into a table clear function. The AI would alter the code and something would break. The parameters were clearly defined at the start. I had to re-upload the original code several times for a fresh start, and the final version still does not work correctly. The page can be seen HERE (Interactive Jeep CJ VIN Decoder - select 1981~1986 for best demonstration). I tried this using ChatGPT 4o (standard AI) and am going to test o3 (advanced reasoning) and o4-mini-high (coding) next.

 