
Anyone running their InnoDB tables with ROW_FORMAT=COMPRESSED?

Discussion in 'Server Configuration and Hosting' started by jeffwidman, Jun 12, 2015.

  1. jeffwidman

    jeffwidman Active Member

    Just stumbled across the InnoDB Row_format setting.

    Anyone running row_format=compressed?

    row_format=compact is the default for legacy reasons, and it looks like there's nothing but upside to switching to 'Dynamic'.

    For VPS servers that are tight on RAM but have plenty of CPU, with read-heavy workloads and text-heavy tables, it seems like it might be beneficial to switch to 'Compressed', because you can fit more of your DB in the buffer pool.

    Curious if anyone's actually tried this in production though?
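    A rough way to see which tables would benefit most--i.e. which are the biggest consumers of buffer pool space--is to sort by data size (a sketch; swap in your own database name for 'xenforo'):
    Code:
    SELECT TABLE_NAME,
           ROUND(DATA_LENGTH / 1024 / 1024) AS data_mb,
           ROUND(INDEX_LENGTH / 1024 / 1024) AS index_mb
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'xenforo'
    ORDER BY DATA_LENGTH DESC
    LIMIT 10;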
     
  2. Xon

    Xon Well-Known Member

    Tables which are less used but text-heavy, such as xf_conversation_message or xf_edit_history, are ideal candidates. You can also save a lot of disk space by compressing the xf_post table. This doesn't hurt performance as much as you would expect, as posts tend to be modified rarely and access frequency drops off markedly as posts grow older.

    You should avoid compressing tables which are heavily accessed and composed of non-text data, especially if they are heavily used in joins between tables, e.g. the xf_node table.
     
  3. jeffwidman

    jeffwidman Active Member

    Thanks for the detailed response @Xon. That makes a lot of sense.

    A couple of questions:

    1) Do compressed tables use RAM more efficiently? I've read conflicting reports--sounds like sometimes the InnoDB buffer pool stores both a compressed and an uncompressed copy of the data, sometimes just the compressed copy--and I couldn't find much detailed documentation on this.

    2) Is the performance hit from compressed tables primarily on writes, or both reads and writes?

    3) What block size are you using for your compressed tables?
    The default is 16k in MariaDB 10; I'm not sure how much is actually gained by dropping to 8k, and I've read that insert times for most workloads are horribly slow with 4k.

    4) Any idea how much of the request processing time for XenForo is generally in the DB vs the PHP interpreter?
    I haven't looked into this; I've just been assuming that as a forum grows, DB query time increases while PHP time stays relatively constant.
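    For context on question 3, the block size is specified per table via KEY_BLOCK_SIZE when enabling compression--e.g. (a sketch; xf_post is just an example table name):
    Code:
    ALTER TABLE xf_post ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;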

    My use case: I run Nginx/PHP-FPM/MariaDB 10 with several forums hosted on the same VPS. Most are 10+ years old, so they've accumulated a lot of posts that generate a long tail of search traffic, but overall concurrent activity isn't that high on any one forum. So it's a read-heavy workload for sure.

    I have plenty of SSD space on this box, so saving disk space doesn't matter, and plenty of CPU headroom, so enabling compression shouldn't cause any issues from a total-load perspective. I assume it still has a minor impact on the latency of each request, though, so all else being equal I normally wouldn't enable it.

    It's RAM that I'm tight on, and with those forums each having a long tail of posts, the more I can fit into the InnoDB buffer pool, the better--I assume any CPU hit will be outweighed by the decreased time to access RAM rather than SSD. Along these lines, I've read in a few places that compression can actually increase performance because it lets more data fit in RAM: https://signalvnoise.com/posts/3571-scaling-your-database-via-innodb-table-compression

    Also, for anyone who stumbles across this post later, @Xon was nice enough to list which tables he compresses here: https://xenforo.com/community/threads/scaling-database.98161/#post-945458 -- plus a good tip about saving RAM by switching to Elasticsearch, now that it uses significantly less RAM than the `xf_search_index` table.

    Also, be sure you've fulfilled the requirements for using Compressed or Dynamic... MySQL silently falls back to Compact otherwise: https://www.percona.com/blog/2014/01/14/innodb-file-formats-here-is-one-pitfall-to-avoid/

    An easy query to check the row_format of your tables:
    Code:
    SELECT TABLE_NAME, ROW_FORMAT FROM INFORMATION_SCHEMA.TABLES WHERE ROW_FORMAT='Compact';
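    And to verify the prerequisites themselves (a sketch; both settings are covered in the Percona post above):
    Code:
    SHOW GLOBAL VARIABLES
    WHERE Variable_name IN ('innodb_file_format', 'innodb_file_per_table');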
    
     
    Last edited: Jun 12, 2015
    Mouth and eva2000 like this.
  4. eva2000

    eva2000 Well-Known Member

    Haven't done much with innodb compressed tables myself.

    Only way to know is to test ;) If you do, make sure you have a high ratio of successful compression operations (compress_ops_ok) to total compression operations (compress_ops):

    Code:
    mysql -e "SELECT * FROM INFORMATION_SCHEMA.INNODB_CMP" 
    +-----------+--------------+-----------------+---------------+----------------+-----------------+
    | page_size | compress_ops | compress_ops_ok | compress_time | uncompress_ops | uncompress_time |
    +-----------+--------------+-----------------+---------------+----------------+-----------------+
    |      1024 |            0 |               0 |             0 |              0 |               0 |
    |      2048 |            0 |               0 |             0 |              0 |               0 |
    |      4096 |            0 |               0 |             0 |              0 |               0 |
    |      8192 |            0 |               0 |             0 |              0 |               0 |
    |     16384 |            0 |               0 |             0 |              0 |               0 |
    +-----------+--------------+-----------------+---------------+----------------+-----------------+
    FYI, compressed tables can end up requiring larger InnoDB buffer pool memory allocations, as both compressed and uncompressed copies are kept. So probably not what you're looking for if memory reduction is the goal.
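    To put a number on that ratio, something like this works (a sketch; NULLIF avoids dividing by zero for unused page sizes):
    Code:
    SELECT page_size,
           compress_ops_ok / NULLIF(compress_ops, 0) AS ok_ratio
    FROM INFORMATION_SCHEMA.INNODB_CMP;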
     
    jeffwidman likes this.
  5. jeffwidman

    jeffwidman Active Member

    So I know that's true if the buffer pool is big enough, but it also looks like the uncompressed copy gets evicted when there isn't enough space, so you can possibly store more of the DB in compressed form: https://www.facebook.com/notes/mysq...-pool-work-with-compression/10150184845380933

    I'm unclear, though, whether this is a net performance benefit, or just a tradeoff that comes out about even--more of the table is in RAM, but the buffer pool is cycling more frequently because you're doing more evictions. I suspect you're right that the net benefit isn't there... not sure whether @Xon is storing compressed to save space on disk or because he's seeing performance benefits on a memory-constrained box.

    I've also read that MariaDB is experimenting with page-level compression, which would store tables uncompressed in the buffer pool and compress/decompress on the way to and from disk. So no memory savings there.
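    On the compressed-vs-uncompressed copies question, INFORMATION_SCHEMA also exposes how much buffer pool memory compressed pages are occupying, which might help settle it empirically (a sketch):
    Code:
    SELECT page_size, pages_used, pages_free, relocation_ops
    FROM INFORMATION_SCHEMA.INNODB_CMPMEM;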
     
  6. Xon

    Xon Well-Known Member

    One of the definite aims is the disk space saving when compressing tables like xf_post or xf_edit_history.

    When SpaceBattles was at ~16 million posts, the xf_post table was about 9GB on disk compressed, versus about 21GB uncompressed.

    I'm looking forward to that when it goes live with the 10.1.x line.
     
    eva2000 and Mouth like this.
  7. Mouth

    Mouth Well-Known Member

    So, steps for those wanting to implement ...

    1. Make your forum inactive via /admin.php?options/list/boardActive
    2. Check your MySQL config (eg. my.cnf) to ensure you have these 2 settings:
    • innodb-file-per-table = 1
    • innodb_file_format = Barracuda
    3. Restart MySQL ( eg. /etc/init.d/mysql restart ) if you needed to add/change either of the above config settings
    4. Convert the most beneficial XF tables to DB compression with:
    Code:
    alter table xf_edit_history ROW_FORMAT=COMPRESSED;
    alter table xf_conversation_message ROW_FORMAT=COMPRESSED;
    alter table xf_profile_post ROW_FORMAT=COMPRESSED;
    alter table xf_post ROW_FORMAT=COMPRESSED;
    
    5. Make your forum active (undo step #1)

    "SELECT TABLE_NAME, ROW_FORMAT FROM INFORMATION_SCHEMA.TABLES;" to check your tables now have the compressed format.
     
    Last edited: Jun 14, 2015
    RoldanLT, Marcus, jeffwidman and 3 others like this.
  8. Solidus

    Solidus Well-Known Member

    Is this a long process for 1 million posts and 250k conversations?
     
  9. Mouth

    Mouth Well-Known Member

    Few mins at most.
     
    Solidus likes this.
  10. RoldanLT

    RoldanLT Well-Known Member

    1.3M Forum
     
  11. Solidus

    Solidus Well-Known Member

    xf_post caused phpMyAdmin to time out with that command; running it via mysqladmin took about 4 minutes.
     
  12. Mouth

    Mouth Well-Known Member

    Hmmm ... you might want to take a look at your my.cnf for optimisation opportunities; that does seem higher than expected for 1M records.
    Although server performance, and resource limits if you're on a VPS, can be the impacting factor.
    That said, if you're otherwise happy with your server/MySQL performance, I wouldn't worry about it. ~4 minutes is still reasonable for a once-off process.
     
  13. Solidus

    Solidus Well-Known Member

    It's a dedi; I constantly tune my.cnf, so it may not have been optimal at the time.
    Still very happy with performance though.
     
    Mouth likes this.
