Review resources before approving them (XF Community)

People are more likely to use the system to complain than to provide balanced feedback. We've had lots of customers; most are satisfied, but the ones that aren't leave negative reviews. Our specific audience doesn't tend to leave reviews after a positive experience, so ratings aren't really the best metric for functionality.

Isn't the purpose of a code review to find problems?

I doubt a code review would go "ooh, they've coded that nicely" or "that's a great function". The point is whether it works or is buggy. User reviews point out if something is buggy, which is all a code review would do. Yes, there's no feedback, but do you really expect the XF team to do the developers' job for them, not only reviewing buggy code but also suggesting fixes for it?
 
Whilst a code review seems a great idea, even at a basic level it requires time to do. It's not a 5-minute job to code review an add-on (skimming over code isn't, in general, going to identify buggy code except in something very badly written). In addition, some bugs aren't spotted until the add-on is running and being tested (many good developers have had to release bug fixes for things found after release, and even the XenForo developers have bugs in the core software that get fixed with updates).

With just 3 developers, there isn't the manpower to continue developing XenForo 2.x, XFMG, XFRM and XFES at the sort of pace we expect and for them to also code review add-ons (both new ones and updates, which would also have to be checked). Whilst the answer of "hire more staff" seems obvious, the staff would need to be up to the coding standard that the XenForo developers would expect. The implication of hiring more staff is that XenForo software and add-ons would have to go up in price (there's been no price rise since release), as the new staff would have to be paid. Whilst some of us would happily pay more, many wouldn't be happy with this (seeing as we often get pre-sales posts asking for a cheaper licence, and people, including existing owners, asking for Black Friday and other holiday deals).
 
WoltLab does it for their resources and they say it really does not take them much time. There is a self-regulating factor that comes into play once developers know their work will be reviewed.

Especially if some basic developer guidelines were released. Currently the only add-on development guidelines that exist are five points relating to callbacks. There are no standards, except common sense.

There is no official developer documentation, which means that developers have to figure a lot out themselves. For senior developers this is no problem, but for some developers it is. I know that the best approach is to analyse existing add-ons and take it from there, but that is not clear to everyone. This may mean they work outside of the intended methods and apply hacky approaches. I have seen well-known add-on developers having the most basic, ground-level things explained to them that apparently were not clear, like using an editor with syntax highlighting or how basic XF principles work. I mean, I am a total idiot when it comes to coding, but some basic things even I understand. So it would be nice to lay out some basic guidelines on what is wanted and unwanted.

Isn't the add-on rating system the best crowd-sourced review that's possible? OK, people aren't reviewing code, but they're still reviewing whether the add-on works as intended.
Some of the worst add-ons, with raw queries or horrible effects on performance, have 5-star ratings because the developer is nice, responds quickly, and the price is low or free. Ratings have little bearing on code quality because there is no rating for code quality. And most importantly: please read the quote from HWS, who actually posted code audits and stopped due to the adverse reactions.
 
If the add-on was so bad that it affected performance, it'd get low ratings. "Code quality" isn't actually that important to the end user. The add-on's usability (and security) are all that really matter.
 
WoltLab does it for their resources and they say it really does not take them much time. There is a self-regulating factor that comes into play once developers know their work will be reviewed.
WoltLab doesn't have half as many plugins being submitted as XF does add-ons. It costs $60 (once) to be able to submit paid files to WoltLab's Plugin-Store, and then they keep 25% of what a plugin sells for. The plugins available for WoltLab are very simple compared to the add-ons available for XenForo. I have waited up to 3 weeks to have a very simple style approved in the Plugin-Store.

Add-ons should be checked but XF probably shouldn't follow WoltLab's way of doing it.
 
WoltLab does it for their resources and they say it really does not take them much time.
Big difference though between the staff of WoltLab and XF:
Our team consists of three full-time and three part-time employees. Five of these employees are responsible for development and quality management.
Yours sincerely,

Andrea Berg
Customer Support WoltLab GmbH
5 staff who work on development and quality management.
 
Big difference though between the staff of WoltLab and XF

5 staff who work on development and quality management.
I can tell you from experience that they only have one person checking the files submitted to their Plugin-Store. That person also has other duties, and when they go on vacation or have time off for other reasons, there is no one to take their place checking files.
 
I beg to differ. Poor code quality is important to all end users, as it affects the core and other costly add-ons as well as usability and security.
Technically you are right. In reality, though, some admins of small/medium forums are only interested in getting an add-on that does what they want and doesn't cost them anything, so they will take whatever free add-on is available. If it does what they want and there's friendly support from the creator, they are overjoyed; it's of no concern to them (and chances are they wouldn't know anyway) what the code quality is like, as long as it's not causing any issues on their forum.
 
I'll drop my 2 cents, as this is an interesting topic.

The intent is understandable, and you don't name anyone, but I wouldn't be taking much of a risk by betting on who you are focusing on ;)

Though I do also sometimes feel some add-ons are pointless, as they just run a simple query as you say, and that some of them are poorly coded, this is how people get to learn! This is how I started myself; I bet all my first add-ons would have been rejected by such a system, but it helped me improve my skills.

I think it's up to the forum admin to filter the add-ons down to those that are really necessary for their forums and to think each one over carefully. There is a good reason for that: if you install something without knowing what you are doing, you are most certainly going to be fooled and most certainly going to have issues...

It's like trojans on a PC or phishing emails: you've got to know what you're doing; otherwise, don't do it and ask.

Clément
 
I bet all my first add-ons would have been rejected by such a system, but it helped me improve my skills.
By REJECT I don't mean they tell the developer "Your resource has been rejected. It's none of your business why, go and improve yourself, and when you become a software engineer, come back and we will review your work again" ;)

Let me bring up again the example I provided in the first post: Moodle! They review plugins properly:
  • You submit the plugin.
  • It goes into a moderation queue.
  • A robot will first check the plugin against the pre-defined standards and provide you with a list of improvements.
  • You improve it and resubmit (the robot will then recheck it), or you comment below it asking why.
  • If you comment, then a real staff member will step in and start explaining the items reported by the robot.
  • You improve it, and when everything is fine, it is approved.
  • Yes, it happens a lot that approved plugins still have bugs, but at least most potential bugs are fixed before approval.
  • (And some of those reviewers are volunteers: professional Moodle third-party plugin developers who decided to help improve the repository and community in their spare time.)
A real example of mine from Moodle is attached. Look at it: my plugin was fully working without any problem, but lots of small issues were reported and they asked me to fix them before publishing. After checking the reports, I saw that they were right and that, in future, those issues might have caused problems.
 

Attachments

  • a.webp (78.9 KB)
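To give a feel for what such a robot's first pass could involve, here is a minimal sketch in PHP. To be clear, this is not Moodle's actual checker (their rule set is far larger); the banned-function list and the report format are invented for the example.

    <?php
    // Sketch: walk a submitted plugin's PHP files and report calls to
    // constructs a reviewer would likely question.
    $banned = ['base64_decode', 'exec', 'shell_exec', 'system'];
    $issues = 0;

    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($argv[1], FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        if ($file->getExtension() !== 'php') {
            continue;
        }
        // Tokenizing instead of grepping means comments and string
        // literals are not falsely matched.
        foreach (token_get_all(file_get_contents($file->getPathname())) as $token) {
            if (!is_array($token)) {
                continue;
            }
            // eval is a language construct with its own token type;
            // ordinary function names arrive as T_STRING tokens.
            if ($token[0] === T_EVAL
                || ($token[0] === T_STRING && in_array(strtolower($token[1]), $banned, true))
            ) {
                printf("%s:%d uses %s\n", $file->getPathname(), $token[2], $token[1]);
                $issues++;
            }
        }
    }
    printf("%d potential issue(s) found\n", $issues);

A pass like this only flags candidates for a human to look at; it cannot judge whether a given call is actually dangerous, which is presumably why Moodle still puts a person behind the bot's report.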
Posing a question here, playing devil's advocate if you will.

Where do we draw the line on code quality if we were going to review add-ons?

Well, at this moment there is only one rule that needs to be honoured, and that is the one about callbacks. Is it so hard to say that an add-on may not contain base64-encoded code, for example, or that an add-on needs to follow XF security measures? I agree that when you start this you will have lots of work, but after a few months developers will adapt and learn. The line can be drawn wherever it ends up; it can be strict or loose.

I know of a case where a developer accessed the database without going through the XF way of doing it. This would be a prime example of something not to allow in the RM.
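For illustration, the difference being described might look like the snippet below in XF 2.x terms. This is only a sketch: \XF::db() and the entity manager are part of the stock XF 2 framework, but the query and variables are invented for the example.

    <?php
    // What a review would flag: a hand-rolled connection and a query
    // built by concatenating unescaped user input.
    $mysqli = new mysqli('localhost', 'user', 'pass', 'xenforo');
    $result = $mysqli->query("SELECT * FROM xf_user WHERE user_id = " . $_GET['id']);

    // The XF way: the framework's adapter reuses XF's own connection
    // and binds parameters safely.
    $db = \XF::db();
    $user = $db->fetchRow('SELECT * FROM xf_user WHERE user_id = ?', $userId);

    // Or, one level higher, via the entity system:
    $user = \XF::em()->find('XF:User', $userId);

The raw version bypasses everything the framework provides (connection configuration, parameter binding, entity behaviour), which is exactly the kind of pattern even a fairly simple review rule could catch.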
 
A robot will first check the plugin against the pre-defined standards and provide you with a list of improvements.
I don't believe that it's a robot that does the checking. I think Plugins bot is simply an automated user that feeds back the results of the checks and tests carried out by real people.

This page details the process that Moodle goes through for checking plugins:

https://docs.moodle.org/dev/Plugin_contribution

where it shows that real people are involved with checking at all stages.

This page also shows the approval queue stats for the last 180 days:

https://moodle.org/plugins/queue.php

There were a total of 83 plugins in that period, with approval times ranging from 1 to 71 (!!) days and a median of 30 days.
 
Security is being brought up quite often, which is quite possibly painting the currently available add-ons in a bad light, perhaps unnecessarily.

We have a documented process in place for the reporting of vulnerabilities, you can see it here:
https://xenforo.com/community/help/resource-vulnerabilities/

I can count on one hand the number of times I have witnessed this procedure being called upon, either through seeing a resource update state that it is a security release or through Mike or me getting involved in such a report.

What this means, quite simply, is that either:
  1. In spite of a lack of code review, very few vulnerable add-ons actually exist or
  2. They exist and that isn't being reported in the appropriate way
Security issues are certainly not the only issues that exist, but they are by far the most impactful, and this is the area that should be focused on more than any other. In lieu of any new process, I just want to urge people: please follow the above guidelines if issues are noticed and don't keep them quiet, encourage them to be reported responsibly, and, as per the guidelines, please contact us if the issues go unresolved.

As for everything else that has been discussed, I just wanted to make sure that people know that we're watching the feedback in this thread very closely, and although we don't have any direct answers right now, I'm certain this will be a point of consideration and discussion in the future. I think the previous several pages have demonstrated that it is a complex issue with no clear solution, so bear with us.
 
carried out by real people.
Not sure about now, but when I submitted my plugin in 2015, I got the bot's report of 191 errors/22 warnings, with a long list of every single file with problems plus the pieces of code and suggested changes, shortly after submitting. If that was done manually by staff, hats off!
 
Creating an AI to analyze add-on code would almost certainly be a massive undertaking :p

It's not AI; it's just a tool that knows what needs to be checked. In the case of XF2, for example, add-ons need to have an XML file with certain things in it (as I read it), so the first step would be: is the XML compliant? The tool also knows about the coding standard: if you address the database, it needs to be done in a particular way, and if you do it differently you get a warning. These tools have been out there for a few years now and they are being used, so the first check is automated. After that you can say: we check manually, or we let it pass, or we stop there.
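As a rough illustration of that first automated step, a structural check can be very small. The sketch below assumes an XF 2.x style addon.json manifest (XF 1.x add-ons declared themselves in an XML file, which may be what the post refers to), and the required-key list is a guess at a minimal rule set, not an official one.

    <?php
    // Sketch: first-pass structural check on a submitted add-on directory.
    // Fail early if the manifest is missing, unparseable or incomplete;
    // only then is a human (or a deeper tool) worth involving.
    $manifest = rtrim($argv[1], '/') . '/addon.json';

    if (!is_file($manifest)) {
        exit("FAIL: no addon.json found\n");
    }

    $data = json_decode(file_get_contents($manifest), true);
    if (!is_array($data)) {
        exit("FAIL: addon.json is not valid JSON\n");
    }

    foreach (['title', 'version_id', 'version_string'] as $key) {
        if (empty($data[$key])) {
            echo "WARN: addon.json is missing '$key'\n";
        }
    }
    echo "Basic manifest checks passed\n";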
 