
GoodForNothing Shell Defer 1.0.4


Mr. Goodie2Shoes

Well-known member
Mr. Goodie2Shoes submitted a new resource:

GoodForNothing Shell Defer - Move XenForo's defer task to shell!

I actually made this a long time ago for a friend who wanted to make sure the cron entries run properly even when the forum has no visitors for a day.

The script will be executed in a loop until all the deferred processes are completed, and a 'defer lock' will be created to make sure that only one instance of the defer process is running.
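For context, a minimal sketch of what such a shell runner could look like (the lock-file path, the bootstrap, and the assumption that XenForo_Deferred::run() reports whether more work remains are illustrative, not the add-on's actual code):

Code:
<?php
// Illustrative sketch only -- not the add-on's actual source.
// Assumes the script sits in the XenForo root and that
// XenForo_Deferred::run() returns whether more work remains
// (verify against your XenForo version).

$fileDir = dirname(__FILE__);

require($fileDir . '/library/XenForo/Autoloader.php');
XenForo_Autoloader::getInstance()->setupAutoloader($fileDir . '/library');
XenForo_Application::initialize($fileDir . '/library', $fileDir);

// The 'defer lock': bail out if another instance already holds it.
$fp = fopen($fileDir . '/internal_data/defer.lock', 'c');
if (!$fp || !flock($fp, LOCK_EX | LOCK_NB))
{
    exit(0);
}

// Run deferred tasks in a loop until nothing is left to do.
do
{
    $moreDeferred = XenForo_Deferred::run(false);
    sleep(10);
}
while ($moreDeferred);

flock($fp, LOCK_UN);
fclose($fp);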

And after seeing this thread, I wanted to post it to the public. :)

To make the add-on work, all you have to do is upload the files from the upload directory....

Read more about this resource...
 
Please define 'too many', in your opinion. Is 600 visitors too many? And does 'visitors' here include guests and search engines?
 
@Mr. Goodie2Shoes I dug into why the performance impact was happening.

What was happening is that a cron deferred task was coming up and then instantly requeuing itself for now + some seconds. Except 'now' is cached at the start of the script, no matter how long the defer script runs.

IMO, after the sleep(10), it would be a good idea to add:

Code:
XenForo_Application::$time = time();
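In context, the runner's loop would then look something like this (the loop itself is paraphrased; only the time refresh is the suggestion):

Code:
do
{
    // Run the next batch of deferred tasks.
    $moreDeferred = XenForo_Deferred::run(false);

    sleep(10);

    // Refresh the cached timestamp so a cron task that requeues
    // itself for 'now + N seconds' sees the real current time,
    // not the time the script started at.
    XenForo_Application::$time = time();
}
while ($moreDeferred);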
 
@Mr. Goodie2Shoes

Rather than use a lock file, how about using the database to ensure only one instance is running at a time:
Code:
$db = XenForo_Application::getDb();

// GET_LOCK() returns 1 on success and 0 on timeout (1 second here).
// Note that fetchOne() returns the value as a string, so cast it
// before a strict comparison.
$lockResult = $db->fetchOne("SELECT GET_LOCK(?, ?)", array('cli-defer-lock', 1));
if ((int)$lockResult !== 1)
{
    exit(0);
}

This has the advantage of automatically cleaning up if the process goes away, and it works across multiple worker nodes.
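A natural complement (illustrative, reusing the same lock name) is to release the lock explicitly once the runner finishes cleanly, rather than relying on the connection closing:

Code:
// MySQL releases the lock automatically if the connection dies,
// but an explicit release at the end of a clean run is tidier.
$db->fetchOne("SELECT RELEASE_LOCK(?)", array('cli-defer-lock'));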
 
Why do you need a "defer lock"?

If you don't start defer from the shell but leave it in the PAGE_CONTAINER, it can also be triggered more than once at the same time.
 
Why do you need a "defer lock"?

If you don't start defer from the shell but leave it in the PAGE_CONTAINER, it can also be triggered more than once at the same time.
The deferred task system's design is quite basic. It really doesn't scale well without a lot of superfluous DB reads & writes (due to speculative fetches + discards, and then lock contention on discarding).

And if the deferred tasks themselves aren't designed properly, they are prone to deadlocks and to being executed many times at once.
 
The deferred task system's design is quite basic. It really doesn't scale well without a lot of superfluous DB reads & writes (due to speculative fetches + discards, and then lock contention on discarding).

Our forum is not very small and we have never had problems with defer (we also trigger it manually from the shell, because it would run too often from PAGE_CONTAINER).

@Mike: What do you think about it? Does defer need a lock?
 
Well, my intention is purely something like:
There's already one deferred process taking place and it's trying its best to finish the task ASAP, so there's no need to waste server resources on additional instances.
I might be wrong though :p as concurrent processes mean the total task time will be reduced. Oh well...
 
@Mike: What do you think about it? Does defer need a lock?
The deferred job runner is atomic within a particular job. The process that successfully updates the job is the one that runs it; others that have tried to grab that job in the interim will have their update fail so they will skip the job. So in that regard, another lock isn't explicitly needed. Multiple simultaneous runners shouldn't be an issue unless it starts overloading the server (if you have multiple cores available then multiple jobs could run simultaneously with minimal slowdown).
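In other words, each runner attempts a conditional UPDATE and only runs the job if its update actually changed the row. A rough sketch of that claim pattern (the exact query, the 600-second re-trigger window, and the variables $deferredId / $originalTriggerDate are illustrative, not XenForo's real internals):

Code:
$db = XenForo_Application::getDb();

// Claim the job by pushing its trigger date forward, but only if
// nobody else has updated the row since we read it.
$claimed = $db->query(
    'UPDATE xf_deferred
        SET trigger_date = ?
        WHERE deferred_id = ?
            AND trigger_date = ?',
    array(XenForo_Application::$time + 600, $deferredId, $originalTriggerDate)
)->rowCount();

if ($claimed)
{
    // This process won the race: run the job.
}
else
{
    // Another runner claimed the job first: skip it.
}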
 
The deferred job runner is atomic within a particular job. The process that successfully updates the job is the one that runs it; others that have tried to grab that job in the interim will have their update fail so they will skip the job. So in that regard, another lock isn't explicitly needed. Multiple simultaneous runners shouldn't be an issue unless it starts overloading the server (if you have multiple cores available then multiple jobs could run simultaneously with minimal slowdown).
So that means it won't hurt to add an explicit lock, just to prevent any more unnecessary executions? :)
 