
GoodForNothing Shell Defer 1.0.4

Move XenForo's defer task to shell!

  1. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    Mr. Goodie2Shoes submitted a new resource:

    GoodForNothing Shell Defer - Move XenForo's defer task to shell!

    Read more about this resource...
     
    SneakyDave likes this.
  2. dono

    dono Member

    Do we need to remove the deferred trigger from the PAGE_CONTAINER template?
     
  3. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    Totally optional; if you have too many visitors, you'll probably want to remove it :)
     
    dono likes this.
  4. dono

    dono Member

    Please define 'too many'. Is 600 visitors too many? And do visitors here include guests and search engines?
     
  5. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    hmmm... you'll save some resources if you remove the trigger from PAGE_CONTAINER
     
  6. Xon

    Xon Well-Known Member

    Your addon already contains a template modification which does that :p
     
  7. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    I've mentioned in the description that importing the XML is optional to remove the public call of deferred.php and @dono wanted to clear it up I guess ;)
     
    Xon likes this.
  8. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

  9. Xon

    Xon Well-Known Member

    @Mr. Goodie2Shoes I dug into why the performance impact was happening.

    What was happening is that a cron deferred task was coming up and then instantly requeuing itself for 'now' + some seconds. Except 'now' is cached at the start of the script, no matter how long the defer script runs for.

    IMO; After the sleep(10), it would be a good idea to add:

    Code:
    // refresh the cached timestamp so requeued tasks see the real 'now'
    XenForo_Application::$time = time();
    
     
    Last edited: Jan 12, 2015
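    (Editor's note: a minimal, hypothetical sketch of the stale-timestamp pitfall described above, in plain PHP with no XenForo dependency — the `App` class here just stands in for `XenForo_Application`:)

```php
<?php
// Hypothetical sketch: XenForo caches time() once at startup in
// XenForo_Application::$time; a long-running shell loop keeps reusing
// that stale value when tasks requeue themselves for "now + N seconds".
class App
{
    public static $time; // stands in for XenForo_Application::$time
}

App::$time = time(); // cached once at script start

sleep(2); // the defer loop runs for a while

// A cron task requeuing itself for "now + 10" uses the stale cache:
$staleTarget = App::$time + 10;

// Refreshing the cache (the suggested fix) yields the correct target:
App::$time = time();
$freshTarget = App::$time + 10;

// The stale target lags the fresh one by the elapsed run time.
var_dump($freshTarget - $staleTarget >= 2); // bool(true)
```

    Without the refresh, the requeued trigger date is always computed from the script's start time, so the task keeps coming due immediately.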
    Mr. Goodie2Shoes likes this.
  10. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    Xon likes this.
  11. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    If this was the issue (which I think it is), I think it's safe to remove sleep() from the script...
     
  12. Xon

    Xon Well-Known Member

    @Mr. Goodie2Shoes

    Rather than use a lock file, how about using the database to ensure only one instance is running at a time:
    Code:
    $db = XenForo_Application::getDb();
    
    // GET_LOCK returns 1 if the lock was acquired within the 1-second timeout.
    // Note fetchOne() may return the value as a string, so compare loosely.
    $lock_result = $db->fetchOne("SELECT GET_LOCK(?, ?)", array('cli-defer-lock', 1));
    if ($lock_result != 1)
    {
      exit(0);
    }
    
    This has the advantage of automatically cleaning up if the process goes away, and it works across multiple worker nodes.
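    (Editor's note: for comparison, a minimal, hypothetical sketch of the lock-file approach that GET_LOCK replaces. flock() is likewise released automatically when the process exits, but a file lock only guards a single machine, whereas a MySQL GET_LOCK is shared across all worker nodes:)

```php
<?php
// Hypothetical lock-file alternative: only one instance on this
// machine can hold an exclusive, non-blocking lock on the file.
$fp = fopen(sys_get_temp_dir() . '/cli-defer.lock', 'c');
$acquired = flock($fp, LOCK_EX | LOCK_NB);

if (!$acquired)
{
    // another instance already holds the lock
    exit(0);
}

// ... run the deferred tasks ...

flock($fp, LOCK_UN);
fclose($fp);
```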
     
  13. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    ah... learned something new today :D
     
    Xon likes this.
  14. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    Xon likes this.
  15. HWS

    HWS Well-Known Member

    Why do you need a "defer lock"?

    If you don't start defer from the shell but leave it in PAGE_CONTAINER, it can also be triggered more than once at the same time.
     
  16. Xon

    Xon Well-Known Member

    The design of the deferred task system is quite basic. It really doesn't scale well without a lot of superfluous DB reads & writes (due to speculative fetches + discards, and then lock contention on discarding).

    And if the deferred tasks themselves aren't designed properly, they are prone to deadlock and the task being executed many times at once.
     
  17. HWS

    HWS Well-Known Member

    Our forum is not very small and we never had problems with defer (also manually triggered from the shell because it would run too often from PAGE_CONTAINER).

    @Mike : What do you think about it? Does defer need a lock?
     
  18. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    Well my intention is purely something like:
    I might be wrong though :p as concurrent processes mean the task timing will be reduced. Oh well...
     
  19. Mike

    Mike XenForo Developer Staff Member

    The deferred job runner is atomic within a particular job. The process that successfully updates the job is the one that runs it; others that have tried to grab that job in the interim will have their update fail so they will skip the job. So in that regard, another lock isn't explicitly needed. Multiple simultaneous runners shouldn't be an issue unless it starts overloading the server (if you have multiple cores available then multiple jobs could run simultaneously with minimal slowdown).
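    (Editor's note: a minimal, hypothetical sketch of the claim-by-update pattern Mike describes, using SQLite via PDO so it runs standalone — the table and column names here are illustrative, not XenForo's actual schema:)

```php
<?php
// Hypothetical sketch: only the runner whose conditional UPDATE actually
// matches a row "claims" the job; a concurrent runner's UPDATE affects
// zero rows, so it skips the job.
$db = new PDO('sqlite::memory:');
$db->exec("CREATE TABLE deferred (id INTEGER PRIMARY KEY, trigger_date INTEGER)");
$db->exec("INSERT INTO deferred (id, trigger_date) VALUES (1, 0)");

function claimJob(PDO $db, $jobId, $oldDate, $newDate)
{
    $stmt = $db->prepare(
        "UPDATE deferred SET trigger_date = ? WHERE id = ? AND trigger_date = ?"
    );
    $stmt->execute(array($newDate, $jobId, $oldDate));
    return $stmt->rowCount() === 1; // claimed only if our UPDATE matched
}

$first = claimJob($db, 1, 0, time() + 60);  // this runner claims the job
$second = claimJob($db, 1, 0, time() + 60); // a second runner's update fails

var_dump($first, $second); // bool(true) bool(false)
```

    Because the claim is a single atomic UPDATE, no separate lock is needed for correctness; an explicit lock only limits how many runners burn cycles attempting claims.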
     
  20. Mr. Goodie2Shoes

    Mr. Goodie2Shoes Well-Known Member

    So that means it won't hurt to add an explicit lock just to prevent any more unnecessary executions? :)
     
