Facebook makes PHP 9x faster via virtual machine (HHVM?)

I gave this a try on one of my test installs: an otherwise clean CentOS 6.5 install with Percona DB and XF 1.2.1 (I still need to renew that particular license).
Code:
hhvm --version
HipHop VM 3.0.1 (rel)
Compiler:
Repo schema: e69de29bb2d1d6434b8b29ae775ad8c2e48c5391

nginx -v
nginx version: nginx/1.5.13

Off the bat, page generation times were noticeably faster compared to php-fpm running on the same server via a socket config - and this was without giving HHVM a chance to "warm up" or to start using JIT. I would love to be able to use it as my daily driver, but there was some undesirable behaviour that I couldn't shake and cannot put up with on a production server...

TaigaChat stopped working. Fairly trivial, but it would annoy my community. More importantly for me, though, the ACP became inaccessible. The CSS just plain wouldn't load, and any attempt to log in would result in "This action could not be completed because you are no longer logged in."

[Attached screenshot: Screen Shot 2014-04-26 at 14.04.11]

So, it's almost there. Just needs a bit of bug-fixing.
 
I'm not having any issues on debian with it

Code:
root@debian:~/Databases# hhvm --version
HipHop VM 3.0.1 (rel)
Compiler: tags/HHVM-3.0.1-0-g97c0ac06000e060376fdac4a7970e954e77900d6
Repo schema: a1146d49c5ba0d6db903beb3a4ed8a3766fef182

root@debian:~/Databases# nginx -v
nginx version: nginx/1.7.0

This is my HHVM config
Code:
location ~ \.(hh|php)$ {
    fastcgi_keep_conn on;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    #fastcgi_param  SCRIPT_FILENAME    $request_filename;
    include        fastcgi_params;
}

which I include in my server block for the site
Code:
server {
    listen       80;
    server_name  localhost;

    access_log  /var/log/nginx/log/host.access.log  main;


    root   /usr/share/nginx/html;
    index  index.html index.htm;

        location / {
                index index.php index.html index.htm;
                try_files $uri $uri/ /index.php?$uri&$args;
                location /internal_data {
                        location ~ \.(data|html|php)$ {
                                internal;
                        }
                        internal;
                }
                location /library {
                        location ~ \.(default|html|php|txt|xml)$ {
                                internal;
                        }
                        internal;
                }
        }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    include /etc/nginx/hhvm.conf;
}
 
If you really want me to, I can get it working, but I can't support it, because I don't know enough about it yet.
Thanks. I don't mind that you can't support it. I will try to learn as much as I can. I just need help trying to figure out which Linode package I should pay for.
 
Have had a site (XF 1.3.2) running on HHVM 3.0.1 for a few days, no issues at all with Xenforo so far, no tweaks needed. This is using nginx and fastcgi.

Can't say the same about building the initial box - I went for an Amazon Linux 14.03 distro. The bloody thing would get halfway through compiling HHVM and crash - two hours into the compile. Tried a few things but it wouldn't work. Switched to Ubuntu 14.04 and it worked without issue. No working package for it, but at least the compile went through without incident, even if it did take five hours.

Am planning on getting an instance up and putting it behind the load balancer alongside a traditional setup. My main XF site is certainly busy enough to compensate for the slow JIT ramp-up time, which may make this a poor option for quieter XF sites.
 
Okay, so I gave this another shot on a clean Debian 7 VPS with the latest HipHop 3.1 and Nginx 1.7.1

I fixed the content issues I mentioned back in April by making sure the Nginx `root` was set at server scope rather than inside a location block or by hand. This change not only allowed me to use HHVM successfully, but also let me take advantage of Nginx's static content caching properly.
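For anyone hitting the same broken-CSS/login-loop symptoms, the fix above amounts to this kind of layout - a sketch only, with hypothetical hostname and docroot paths:
Code:
# Working: declare root ONCE at server scope so every location,
# including the HHVM fastcgi one, inherits the same document root.
server {
    listen      80;
    server_name example.com;       # hypothetical hostname
    root        /var/www/xenforo;  # hypothetical docroot - set here, not per-location

    location / {
        try_files $uri $uri/ /index.php?$uri&$args;
    }

    location ~ \.(hh|php)$ {
        fastcgi_keep_conn on;
        fastcgi_pass  127.0.0.1:9000;   # HHVM's FastCGI listener
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include       fastcgi_params;
    }
}
With root duplicated (or hard-coded) inside location blocks, $document_root can differ between the static and fastcgi paths, which is consistent with the CSS failing to load while PHP pages half-worked.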

Testing this on a non-production server has led me to believe that HHVM isn't worth the effort if your website doesn't get more than a few concurrent users. Its strengths only become apparent once there are enough concurrent visitors to keep the JIT engine "warmed up".

For lighter XenForo installs, I'd recommend you just stick with a well-tuned nginx + php-fpm setup. If you get more than 10 concurrent users, though, you'll definitely want to consider putting in the effort to use HHVM.
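For reference, "well-tuned" php-fpm mostly comes down to the pool's process manager settings. A sketch with illustrative numbers only - size pm.max_children to roughly (RAM available for PHP) / (average per-worker memory):
Code:
; /etc/php-fpm.d/www.conf (path may differ by distro) - example values only
[www]
listen = /var/run/php-fpm.sock   ; unix socket avoids TCP overhead
pm = dynamic
pm.max_children = 20             ; hard cap on workers
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500            ; recycle workers to contain memory leaks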

TL;DR - HHVM works if your Nginx is set up properly and is a good option if your website gets enough traffic.
 
Considering we can get up to 100 page views a second, it suits us. However, we have hit a major hurdle with Elasticsearch, which simply hangs on the API calls - trying to work that one out.
 
Was having one big problem with this: Enhanced Search wouldn't work. Ended up tracking the problem down to an issue with the way Zend_Http_Client behaved, replaced the Zend call with a straight open stream connection - works a charm.
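The post doesn't share the actual patch, but a "straight open stream connection" to Elasticsearch could look roughly like this in PHP - a hedged sketch, with a hypothetical endpoint and query body, not the poster's exact code:
Code:
<?php
// Hypothetical Elasticsearch endpoint and query - examples only.
$endpoint = 'http://127.0.0.1:9200/xf/_search';
$body     = json_encode(['query' => ['match' => ['message' => 'hhvm']]]);

$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/json\r\n"
                   . 'Content-Length: ' . strlen($body) . "\r\n",
        'content' => $body,
        'timeout' => 5,   // fail fast rather than hang indefinitely
    ],
]);

// file_get_contents() opens a plain HTTP stream, sidestepping
// Zend_Http_Client's own socket handling.
$response = file_get_contents($endpoint, false, $context);
if ($response === false) {
    throw new RuntimeException('Elasticsearch request failed');
}
$result = json_decode($response, true);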
 