Relevio.ai

Relevio.ai 0.0.0.2

william.coppock submitted a new resource:

Relevio.ai - Enable autonomous forums with zero moderation

This is the XenForo add-on for Relevio, a unique context‑aware, rule‑driven moderation engine that enables autonomous forums.

How does it work?

Relevio is a new class of moderation engine employing Rule‑Driven Semantic Moderation, which uses the full power of the most advanced large language models (LLMs). This is very different from the established method of Category‑Based Classifier Moderation.

Moderation before submission

...

Read more about this resource...
 
So many opportunities :o but first things first.

1) Does the AI engine detect when the content is, well..... AI? :D And not only the content: does it detect whether the poster is an agent too? Does it maybe consider resubmission times? And is the way corrected content gets interpreted also used to flag the user as AI?

2) Is it permission-based? I don't want to put some trusted users/groups under this moderation, because they are..... trusted :D

This is promising ngl
 
Thank you SoulReplicant.

1) LOL, and good point. The AI engine cross-checks the content with your ruleset, so potentially the ruleset could instruct the AI to look for tell-tale signs of an agent. The trouble is that modern LLMs are so convincingly human that I'm not sure this would be a reliable instruction; it would probably result in frustrating false positives for users, though it is worth experimenting with. Telling people they sound like a bot might not go down well.

What would prevent agents better is a rule along the lines of: "Posts must be relevant to <your subject>" and "Posts must not include content that sounds like adverts or promotion", which are both set up by default. So generic posts or agentic spam would fail for lack of relevance or sounding like an advert. These sorts of instructions are remarkably effective.

The default ruleset looks like this:

The "User input" must broadly speaking relate to a single subject or topic.
The "User input" must not contain:
- Profanities (including text, symbols or emojis intended to resemble profanities)
- Offensive or extreme opinions
- Adverts
- Political statements
- Address or contact information
The tone in the user input may be informal.
Images and attachments must relate to the story.
Images must not depict:
- Children's faces
- Nudity
- Weapons
- Violence or gore
- Adverts
- Offensive or extreme opinions
- Political statements
- Address or contact information
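To make the rule-driven idea concrete, here is a minimal sketch of how a ruleset like the one above could be embedded verbatim in an LLM prompt that asks for a pass/block verdict. This is purely illustrative: the function name, prompt wording, and delimiters are my own assumptions, not Relevio's actual implementation.

```python
# Hypothetical sketch of rule-driven semantic moderation (NOT Relevio's
# actual code): the ruleset is inserted verbatim into a prompt and the
# LLM is asked for a single literal verdict.

RULES = [
    'The "User input" must broadly speaking relate to a single subject or topic.',
    'The "User input" must not contain profanities, adverts, '
    'political statements, or address/contact information.',
    "The tone in the user input may be informal.",
]

def build_moderation_prompt(rules, user_input):
    """Assemble a verdict-only moderation prompt from a ruleset."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return (
        "You are a forum moderation engine. Apply these rules literally:\n"
        f"{numbered}\n\n"
        "User input:\n"
        f"<<<{user_input}>>>\n\n"
        "Answer with exactly PASS or BLOCK, then one sentence of reasoning."
    )

prompt = build_moderation_prompt(RULES, "Lovely macro shot of a devil at dusk!")
print(prompt)
```

The point of the sketch is that the rules themselves are the configuration: changing moderation behaviour means editing plain-language rules, not retraining a classifier.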

2) Not yet. But then, if they are trusted, their content will always pass!
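If permission-based exclusion were added, the gating logic could be as simple as the following sketch. The group names and function are hypothetical, since the thread confirms the add-on does not do this yet.

```python
# Hypothetical sketch of permission-gated moderation (a requested feature,
# not an existing one): users in any trusted group skip the engine entirely.

TRUSTED_GROUPS = {"Administrators", "Moderators", "Trusted"}

def needs_moderation(user_groups, trusted=TRUSTED_GROUPS):
    """Return True unless the user belongs to at least one trusted group."""
    return not (set(user_groups) & trusted)

print(needs_moderation(["Registered"]))             # True: goes through the engine
print(needs_moderation(["Registered", "Trusted"]))  # False: bypasses moderation
```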
 
What would prevent agents better is a rule along the lines of: "Posts must be relevant to <your subject>" and "Posts must not include content that sounds like adverts or promotion", which are both set up by default. So generic posts or agentic spam would fail for lack of relevance or sounding like an advert. These sorts of instructions are remarkably effective.

So the more niche a forum is, the better the engine will detect off-topic content. A forum dedicated to Tasmanian Devil photography will benefit a lot more than a "Chit Chat" community, I guess :D

I apologize in advance, as I couldn't review the Relevio docs yet, but does it include some sort of "flexibility", percentage, or priority in applying the rules, where the sum or some calculated index determines whether the content passes or is blocked? I'm thinking of the scenario of a user submitting long (or not so long) content, relevant to the forum, who adds at the end: "Offtopic: It's a lovely sunny day here in my city. Offtopic 2: Happy birthday Admin!!"

Or perhaps it depends on how well the context rules are designed? Maybe something like a rule saying "If you can't decide, then send the content to the moderation queue".

Also I think (and I understand that this is for the add-on, not the Relevio engine itself) this should be under the permissions system. Or it could even define rules per forum/node: different nodes will have different rules for allowing content, even in very niche communities.
 
Yes, giving the LLM a narrower context would help, though I would say that is not a prerequisite for a successful ruleset: LLMs are remarkably sensitive at detecting intent, and it is the intent of the AI agents that gives them away. So if you observe a common intent within the agentic posts, then you can filter by it.

By the same token, if an agent is intending to contribute to the conversation legitimately then that would be much harder to detect.

This sensitivity to intent is particularly visible in the default "Reply" ruleset, which says 'The "User input" should be respectful to the people in the "Related content"'. Now, this actually prevents some quite familiar terms from being used, terms which in most conversation are perfectly acceptable. So I have found it necessary to expand it to say, "If there is ambiguity then assume the intent is to be respectful".

A percentage match doesn't work when checking up front. It works better for pipeline-based moderation (once the content is on your server / in your database)... That is essentially the realm of Category‑Based Classifier Moderation, which Relevio is not. So yes, as you suspect, it depends entirely on how well the rulesets are designed. The LLM will block ambiguous or out-of-band content if the rules imply it should be blocked... and it is very literal in its interpretation of the rules.
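For contrast, here is a minimal sketch of the category-based, threshold-driven approach described above, which is the established method Relevio is distinguished from. The category names, scores, and thresholds are invented for illustration; in a real pipeline the scores would come from a trained classifier.

```python
# Hedged sketch of Category-Based Classifier Moderation: each piece of
# content gets per-category scores, and it is blocked when any score
# crosses its threshold. All numbers here are made up for illustration.

THRESHOLDS = {"toxicity": 0.8, "spam": 0.7, "sexual": 0.5}

def classifier_verdict(scores, thresholds=THRESHOLDS):
    """Block if any category score meets or exceeds its threshold."""
    flagged = [c for c, s in scores.items() if s >= thresholds.get(c, 1.0)]
    return ("BLOCK", flagged) if flagged else ("PASS", [])

# A borderline post: mildly promotional, otherwise clean.
verdict, flagged = classifier_verdict(
    {"toxicity": 0.10, "spam": 0.72, "sexual": 0.0}
)
print(verdict, flagged)  # spam crosses 0.7, so the post is blocked
```

The design difference is visible in the return value: a classifier pipeline produces graded scores that need tuning, whereas the rule-driven approach yields a single literal verdict against plain-language rules.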

An example is this rule: 'The "User input" must broadly speaking relate to a single subject or topic.' Originally it was 'The "User input" must relate to a single subject or topic.' Then, while testing multiple attachments, I wrote a post that was a memorial to a deceased pet, to which I attached an image and, in a .txt file, a limerick about the pet. The LLM decided that the memorial and the limerick constituted separate topics, and told me as much. By adding "broadly speaking" it became more accepting.

Yes, I do want to be able to define rules per forum node... it's next on my list of developments.
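Since per-node rules are described as planned rather than built, here is only a speculative sketch of the shape such a feature might take: each node can override the default ruleset and otherwise falls back to it. The node IDs and the extra rule are invented for illustration.

```python
# Speculative sketch of per-forum-node rulesets (a planned feature per the
# thread, not an existing one): nodes override the default ruleset, with a
# fallback to the default. Node IDs and rules below are hypothetical.

DEFAULT_RULES = [
    'The "User input" must broadly speaking relate to a single subject or topic.',
]

NODE_RULES = {
    # Hypothetical override: a photo node adds one node-specific rule.
    "photo-gallery": DEFAULT_RULES + [
        'The "User input" must relate to photography.',
    ],
}

def rules_for_node(node_id):
    """Return the node's ruleset, falling back to the default."""
    return NODE_RULES.get(node_id, DEFAULT_RULES)

print(len(rules_for_node("off-topic")))      # 1: falls back to the default
print(len(rules_for_node("photo-gallery")))  # 2: default plus the override
```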
 