Anyone hosting XF via Docker?

DevOops

Member
I would like to host XF in Docker so I don't need to start another VPS for it. It really is standard practice to have an official Docker image these days - and considering how long it took to upgrade PHP correctly using Homebrew, I'm sure it would save time for many. It is also another layer of security with respect to the host system and its data.
I noticed several docker-XF environments for development posted online but none for production/hosting - is there a good reason for that?
Thanks!
 
One of the biggest issues with running XenForo via Docker in production is the complexity of the configuration. I've attached a sample Docker Compose file that should put you on the right path to getting XenForo configured via Docker (similar to how we run Fellowsfilm).

DISCLAIMER: The Docker Compose file listed below does not provide a copy of XenForo; you will need to extract your legally acquired copy of XenForo to XENFORO_PATH as defined in the ENV file below.

ENV File
Code:
DOMAIN=mycoolforum.com
XENFORO_PATH=/home/user/xenforo
CERT_EMAIL=mycool@forum.email
NGINX_CONF_PATH=/home/user/xenforo-config/xenforo.conf
PHPFPM_CONF_PATH=/home/user/xenforo-config/php-fpm.conf
PHPFPM_INI_PATH=/home/user/xenforo-config/php.ini
MYSQL_ROOT_PASSWORD=
MYSQL_USERNAME=
MYSQL_USER_PASSWORD=
MYSQL_DATABASE=
REDIS_PASSWORD=

Files you need to create before running the Compose file:
  • xenforo.conf
    • Provide an nginx configuration file that includes the server block for XenForo.
  • php-fpm.conf
    • Provide a full php-fpm configuration.
  • php.ini
    • You only need to specify the ini variables that the forum requires for operation.
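For orientation, here is a rough sketch of what the xenforo.conf server block might contain. This is my assumption, not the configuration the post uses: the /app root and the fpm:9000 upstream match the Compose file below, but everything else should be adapted to your setup.

```nginx
# Hypothetical server block for XenForo - adapt before use.
server {
    listen 80;
    server_name mycoolforum.com;

    root /app;        # XenForo is mounted at /app in the container
    index index.php;

    location / {
        # XenForo's friendly-URL rewrite
        try_files $uri $uri/ /index.php?$uri&$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass fpm:9000;   # "fpm" is the phpfpm link alias
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

You will also want TLS (listen 443 ssl plus certificate paths) once Certbot has run; this sketch only covers the plain-HTTP case.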
Some Notes:
  • Certbot (Caveat)
    • Certbot does not obtain certificates on the first run; you will need to expose port 80 on the container and start it BEFORE executing the Compose file in order to get the initial certificates.
    • I recommend using one of the DNS validation Certbot images to avoid the above caveat.
  • Alternatively: Removing Certbot
    • Ensure you have provided certificates by changing - ssl_certificates:/var/www/certbot/:ro under nginx to point at your SSL certificate path: - <your_path_here>:/var/www/certbot/:ro
  • Redis
    • A password is required for Redis; however, the Bitnami image does support removing the password via ALLOW_EMPTY_PASSWORD=yes
  • Path Exposure in Docker
    • /var/www/certbot: Your Certificates
    • /app: XenForo
    • /opt/bitnami/nginx/conf/server_blocks/xenforo.conf: XenForo NGINX Server Conf
    • /opt/bitnami/php/etc/php-fpm.conf: Your PHP-FPM Configuration
    • /opt/bitnami/php/etc/conf.d/xenforo.ini: Your INI File for PHP
  • DNS Name Resolution:
    • Nginx
      • nginx or web
    • PHP
      • phpfpm or fpm
    • MySQL
      • mysql or database
    • Redis
      • redis or cache
    • Elasticsearch
      • elasticsearch or elastic
  • Renewing Certificates (it's not automated; I suggest a crontab entry running a command similar to this once a week) - this will fetch updated certs and reload nginx.
    • docker compose run --rm certbot renew && docker compose exec nginx nginx -s reload
      • This will reload nginx every time the cron runs.
    • If you're using Cloudflare, I highly suggest using an origin certificate and authenticated origin pulls.
  • Additional Configuration Notes
    • You may need to tinker with this Compose file to get it working right for you.
    • There are two networks:
      • frontend
        • Connects nginx, php, and certbot.
        • Only php can talk to the backend.
      • backend
        • Connects mysql, redis, and elasticsearch.
        • php is the only frontend service that can reach these services.
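To expand on the Certbot caveat above, one way to handle the first run (a sketch, assuming the service names from the Compose file below and that nothing else is bound to port 80 yet) is a one-off standalone run before starting the stack:

```shell
# One-off bootstrap: obtain the initial certificate while port 80 is free,
# then bring up the full stack.
docker compose run --rm -p 80:80 certbot
docker compose up -d
```

After that, the weekly renew-and-reload cron covers subsequent renewals.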
While this Compose file will allow you to spin up essentially everything you need, it is worth noting that it does not contain the resources required to use ffmpeg for video conversion. If you require this functionality, you will need to build your own phpfpm image with ffmpeg added.
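Such a custom image can be fairly small. As a sketch (untested, extending the same Bitnami base tag used below; install_packages is the apt helper script shipped in Bitnami images):

```dockerfile
# Hypothetical php-fpm image with ffmpeg for XenForo MG video conversion.
FROM bitnami/php-fpm:8.1.7

# install_packages is Bitnami's wrapper around apt-get with cleanup
RUN install_packages ffmpeg
```

Point the phpfpm service's image at your built tag and the rest of the Compose file stays the same.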

Compose File
YAML:
version: '3.9'

services:

  nginx:
    container_name: nginx
    image: bitnami/nginx:1.22.0
    volumes:
      - ${NGINX_CONF_PATH}:/opt/bitnami/nginx/conf/server_blocks/xenforo.conf:ro
      - ${XENFORO_PATH}:/app
      - ssl_certificates:/var/www/certbot/:ro
    ports:
      - 80:80
      - 443:443
    depends_on:
      - phpfpm
      - certbot
    links:
      - "phpfpm:fpm"
    networks:
      - frontend
 
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    command: sh -c "certbot certonly --standalone -d ${DOMAIN} --text --agree-tos --email ${CERT_EMAIL} --server https://acme-v02.api.letsencrypt.org/directory --rsa-key-size 4096 --verbose --keep-until-expiring --preferred-challenges=http"
    entrypoint: ""
    environment:
      - TERM=xterm
    depends_on:
      - nginx
    networks:
      - frontend
    volumes:
      - ssl_certificates:/var/www/certbot/:rw
 
  phpfpm:
    container_name: phpfpm
    tty: false
    image: bitnami/php-fpm:8.1.7
    restart: always
    ports:
      - 9000:9000
    volumes:
      - ${XENFORO_PATH}:/app
      - ${PHPFPM_CONF_PATH}:/opt/bitnami/php/etc/php-fpm.conf:ro
      - ${PHPFPM_INI_PATH}:/opt/bitnami/php/etc/conf.d/xenforo.ini:ro
    networks:
      - frontend
      - backend
    links:
      - "nginx:web"
      - "mysql:database"
      - "redis:cache"
      - "elasticsearch:elastic"
    depends_on:
      - mysql
      - redis
      - elasticsearch
 
  mysql:
    container_name: mysql
    image: mysql:8
    restart: always
    ports:
      - 3306:3306
    networks:
      - backend
    links:
      - "phpfpm:fpm"
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_USER_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
 
  redis:
    container_name: redis
    image: bitnami/redis:7.0
    restart: always
    environment:
      - ALLOW_EMPTY_PASSWORD=no
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    ports:
      - 6379:6379
    networks:
      - backend
    links:
      - "phpfpm:fpm"
    volumes:
      - redis_data:/bitnami/redis/data
 
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:8.2.2
    restart: always
    environment:
      - cluster.name=xenforo
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks:
      - backend
    links:
      - "phpfpm:fpm"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
 
volumes:
  mysql_data:
    driver: local
  redis_data:
    driver: local
  elasticsearch_data:
    driver: local
  ssl_certificates:
    driver: local
 
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

I'll leave nginx and fpm configurations up to you :)

If there happens to be enough interest, I will consider publishing docker images to make this process easier and include additional features needed.
 
Thanks! Yes, it seems overly complex, but this is exactly what I was looking for from the XF devs - which they claim they couldn't provide because the software is proprietary. Obviously that's not really an issue, since you can just drop in the proprietary parts afterwards. Having a maintained docker-compose file that has eyes on it and that many people use would be a good time saver: 1) more people to notice security issues, 2) more devs to simplify the config, 3) battle tested.

1) I'd use the 7.0.2-alpine Redis image.
2) Are you sure you want to expose the Redis DB to the internet like that?
3) Unconventional to use bridge networking in production - maybe I'm wrong though.
4) I'm using Traefik over nginx as it is more docker friendly. It could help simplify this - and will definitely help simplify the certbot aspect.
5) Why does it need both MYSQL & Redis?
6) I would build an image like this instead of using the more complicated and likely bloated Bitnami image for XF: https://github.com/jonz94/docker-php-fpm-alpine/blob/main/Dockerfile
7) Why not add the elasticsearch stuff to the image running XF?

Seems like we're on the right track now - thanks!
 
1) I'd use the 7.0.2-alpine Redis image.

As I said, this is nothing but a template for you to improve upon; it will put you on the right track to a fully deployed setup. I personally would not use Bitnami or standard packages - I would use RapidFort images instead.

2) Are you sure you want to expose the Redis DB to the internet like that?

It's worth noting that my servers are not exposed to the internet, as we have two different firewalls that only allow access to the ports we've specified - in this case 443 and 80, and even then ONLY to Cloudflare's proxy. If you intend to use this on a server that does not have a local firewall (or even a network firewall from your provider), you should adjust your settings to prevent port exposure.

3) Unconventional to use bridge networking in production - maybe I'm wrong though.

Networks are bridged like this so that if you do choose to add network rules or a firewall for your server-side connections, you can effectively isolate the frontend traffic from the backend traffic. This also makes it easier to review your traffic (fe/be) when using something like nettop, bashtop, or nload.

4) I'm using Traefik over nginx as it is more docker friendly. It could help simplify this - and will definitely help simplify the certbot aspect.

I personally am not a fan of Traefik; NGINX has been stable and very robust for me for years, and I have zero intention of changing it.
Additionally, Certbot was only added to this Compose file because in my setup I do not use a certificate manager - rather, Cloudflare provides me with certificates that can be validated using their authenticated origin pulls feature.

5) Why does it need both MYSQL & Redis?

Redis is used for the page cache (see Xon's add-ons and the XenForo docs). While it does not use "a lot" of memory or storage, Redis can decrease the time it takes to generate pages for guests, and can also be used to store things like sessions.
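For context, pointing XenForo at that Redis container is a few lines in src/config.php. This is a sketch based on XenForo's stock Redis cache provider - the host matches the Compose service name, and the password is whatever you put in REDIS_PASSWORD:

```php
<?php
// Sketch only - adjust keys and credentials to your setup.
$config['cache']['enabled']  = true;
$config['cache']['provider'] = 'Redis';
$config['cache']['config']   = [
    'host'     => 'redis',   // or 'cache', the link alias
    'port'     => 6379,
    'password' => 'your-redis-password',
];
$config['cache']['sessions'] = true; // optionally store sessions in Redis too
```

Page caching for guests on top of this is what Xon's add-ons handle.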

6) I would build an image like this instead of using the more complicated and likely bloated Bitnami image for XF: https://github.com/jonz94/docker-php-fpm-alpine/blob/main/Dockerfile

Again, Bitnami was only used for ease of the Docker Compose file - not everyone is experienced with Docker, and minimized images can sometimes skip over entire configuration options that can otherwise be defined via the environment. The goal here was to provide as minimal a configuration as needed (.env and Compose file).

I would recommend the RapidFort images, as they have been minimized and the bloat has been removed; however, they are a copy of the Bitnami images (pre-debloat). The downside here is that they have no php-fpm image.

Personally, I compile my own PHP image: to use the video functions in XenForo MG you need ffmpeg, so my php-fpm image contains ffmpeg for video conversion. Though I'd argue conversions should be piped to a dedicated ffmpeg Docker image, that would require modification of the code.

7) Why not add the elasticsearch stuff to the image running XF?

Not everyone uses Elasticsearch. I just didn't include it because I didn't feel like it.

Code:
  elasticsearch:
    container_name: xenforo_elasticsearch
    image: bitnami/elasticsearch:latest
    networks:
      - backend
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - ELASTICSEARCH_HEAP_SIZE=1g
    volumes:
      - elasticsearch_data:/bitnami/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -fsSL 'http://localhost:9200/_cat/health?h=status' | grep yellow"]
      interval: 1s
      timeout: 3s
      retries: 5
 
volumes:
  elasticsearch_data:
    driver: local

Add the above to include Elastic. Please note that this example includes no replication and will therefore report a yellow status, so the health check has been modified to check for "yellow" and mark the container as healthy if so.

-----

It is worth noting that my configuration looks nothing like the above: my Docker Compose file contains replication for SQL, Redis, and Elastic; I use MariaDB for better performance than MySQL; and on top of that I use 1Password Connect for secrets automation, plus backup programs that snapshot everything a few times a day and back it up to B2.

Like I said, if there is enough "demand" for a more "official" Docker file that can bring up an entire XenForo configuration, I'd happily evaluate publishing images to make it happen. But the big thing here is that no matter what I do, I cannot include XenForo in the distribution.

So really, a well-formatted Docker Compose stack with a well-formatted "Get Started" guide would likely be the better approach (minus building an fpm image capable of ffmpeg).
 
Having a docker compose file for XenForo would be fantastic. I am currently struggling to get it running. It would be awesome if you did that.
 
I don't see why it wouldn't be good for business to provide an official xf-docker repo with compose files we can iterate on, etc. I'd add SSO auth and traefik to it as well.
 
I noticed several docker-XF environments for development posted online but none for production/hosting - is there a good reason for that?
The fact that the Compose file example you got just above is already this long for a dev-only setup is exactly why no one shares one for production - not because it's secret, but because it's too specific to how you run things.

For example, you mentioned Elasticsearch, but in production you would almost certainly run a 3-node (or 5-node) ES cluster on a completely different machine. Maybe with common extra plugins in there, and maybe with Kibana as well.
Same for Redis or Memcached.
You also would probably not have your XenForo's nginx directly exposed to the internet; instead you'd have Cloudflare or your load balancers in front - and in the latter case, a WAF somewhere in there.
Then you'd probably also have a lot of extra stuff going on to back up data, both at the DB level and at the filesystem level for XF uploads.

And all of this is quite specific to the infra you already have and how you like to run things.

At the end of the day, even an official XenForo Docker image would be hard to design. Everyone would disagree on what exactly goes into it, because it's a PHP image and no one agrees on what counts as bloat:
  • is installing all php extensions XF marks as optional bloat or not?
  • what should the default php.ini look like? should it differ between php and php-fpm?
  • do you prefer apache or nginx?
  • if nginx do you prefer 1 supervisor-style container for nginx+fpm all-in-one or 2 separate ones?
  • ...

For example, you mention wanting to use Alpine, but a lot of us aren't fans of it at all because of its slightly different CLI tools and DNS behaviour. So even the base OS flavour, before even getting into Debian vs RH bases, would be a problem.

Frankly, I get why the XF devs don't publish any guide for this, and I wouldn't either if I were them. It would probably not be used by the more experienced crowd, and would likely mislead the novice crowd into very unsafe defaults while they believed they were doing the right thing because "it's the official Docker image and Compose file" - until they lose all their data...
 
I would prefer the leanest setup for the official docker image / docker-compose setup.
Somebody who needs an ES cluster has the resources to modify everything to his personal need.
Others (like me) who want to run a small community are thankful for a turnkey-solution :)
 
Others (like me) who want to run a small community are thankful for a turnkey-solution
Honestly... in most respects (on a shared hosting environment or a VPS/dedi with a quality panel) it doesn't get much easier to set up.
In most cases, the hardest part (setting up the panel and hosting instance) is done by the provider. All you have to do is create a simple database with credentials, copy some files to your domain root directory, and then run an install routine. There may be some modules you have to install for your specific instance, but even that is easy enough to resolve with help from here.
 
Docker helps reduce dependency issues at the OS level and centralizes config data.
Seems I'll have to create it without official help.
 
would prefer the leanest setup for the official docker image / docker-compose setup.
I get your point, but if you were an XF dev, where exactly would you draw the line between "lean" and "just a demo, don't do this"?
It is really difficult...

If the goal is "what is the minimal number of components needed to run XenForo", then you would only have:
1. MySQL/MariaDB single node
2. Xenforo container with both nginx and php-fpm inside it (or apache2)

Because adding Redis/Memcached and Elasticsearch is clearly not technically necessary, even if it makes a big difference.
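For illustration, that bare-minimum stack might look like this. A sketch only: the xenforo image is hypothetical (you would have to build an all-in-one nginx + php-fpm image yourself), and the paths and passwords are placeholders.

```yaml
services:
  mysql:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=change-me
      - MYSQL_DATABASE=xenforo
    volumes:
      - mysql_data:/var/lib/mysql

  xenforo:
    image: my/xenforo-web:latest   # hypothetical nginx + php-fpm image
    ports:
      - 80:80
    volumes:
      - /home/user/xenforo:/app
    depends_on:
      - mysql

volumes:
  mysql_data:
```

Everything beyond these two services is an optimization, which is exactly where the disagreements start.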

And then, even for a potential official Docker image, "lean" can take many forms.
Does it mean "simplest"? In that case you'd likely use an Ubuntu image. Or does it mean "lightweight" in terms of image size? Then you might use Alpine, or something like Red Hat UBI nano, which doesn't even have a package manager.
Is having curl inside your image a reasonable necessity, or is that too much? Technically only libcurl is really necessary for PHP apps, after all.

Also, this is a bit off topic, but the only people with a good reason to want Alpine are those running a lot of containers with different base layers on their systems, where saving bandwidth for image deployment and local extraction matters. Certainly not a concern (beyond wanting pretty numbers) unless you're pretty deep into all this.
And I mention the base-layer point because, once you account for layer caching, it's even more nuanced: an 80 MB Ubuntu base layer is still only 80 MB once, even when 200 containers use it. So a 30 MB Alpine image is only saving you 50 MB total with 200 containers on that host. Would that still be a good argument for lightweightness? (All other things being equal; things are more complicated in practice.)

Docker helps reducing dependency-issues on OS level and centralizing config data.
Seems that I have to create it without official help.
I can understand the feeling, but for the sake of example, here is our base PHP image: php-mainline (essentially just the Sury PHP distribution on top of Debian, with our standard user/group IDs, Snuffleupagus, and a script to test the FPM pool's health independently of the HTTP layer).
And attached is the Dockerfile for XF itself, which is a child of that image, alongside its entrypoint script, which supports running either XF via PHP-FPM or a perpetual shell loop of https://xenforo.com/community/resources/cli-job-cron-runner.7931/ (by Sim, just above) for CLI triggering of jobs instead of relying on /jobs.php being hit by users.
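To illustrate the shape of such an entrypoint (this is not the attached script, just a sketch; the xf:run-jobs command name is my assumption about what the linked add-on provides):

```shell
#!/bin/sh
# Hypothetical entrypoint: serve XF via FPM by default,
# or run a perpetual CLI job loop instead.
case "$1" in
  jobs)
    while true; do
      # command assumed to be provided by the CLI job runner add-on
      php /app/cmd.php xf:run-jobs
      sleep 60
    done
    ;;
  *)
    exec php-fpm -F
    ;;
esac
```

You would then run one container with no argument for the web tier and a second one with `jobs` for the cron loop, both sharing the same image.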

Aside from this, you will need an nginx container that passes requests through to FPM - in our case via a shared Unix socket (a volume shared between the two containers). And our setup for that nginx is even funkier, because we went the non-root + immutable filesystem + ModSecurity route...

If you find inspiration there, great, but you will probably quickly realize that it is not super helpful to you...

The real turnkey option as others have mentioned is XF Cloud or other SaaS offerings. If you want to go the selfhosting route with a clean Docker setup, it will require digging quite deeply into it all.
 

Attachments

First, thanks for the detailed answer!
By 'simplest' I mean it in the context of running one instance of the forum - so super-lightweight does not matter.
If anybody wants to host many instances for customers, a specific custom setup will be made in any case.
My question was more about getting a privately hosted instance up.
 