WordPress/MySQL (InnoDB) tables get fragmented near instantaneously after optimizing tables?

So I originally ran mysqltuner and it kept saying (on a fresh WordPress install, using Percona InnoDB with random dummy data for WordPress posts) that there were fragmented tables. I don’t know whether this is the proper way to check for fragmented tables:

SELECT TABLE_SCHEMA,
       TABLE_NAME,
       CONCAT(ROUND(data_length / (1024 * 1024), 2), 'MB') AS DATA,
       CONCAT(ROUND(data_free / (1024 * 1024), 2), 'MB') AS FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND data_free > 0;

but that spits out:

+----------------+-------------+--------+--------+
| TABLE_SCHEMA   | TABLE_NAME  | DATA   | FREE   |
+----------------+-------------+--------+--------+
| db_wordpress   | wp_postmeta | 2.52MB | 4.00MB |
| db_wordpress   | wp_posts    | 1.52MB | 4.00MB |
+----------------+-------------+--------+--------+

So I’m unsure whether those tables are truly fragmented or not. I’ve run:

ALTER TABLE wp_postmeta ENGINE='InnoDB';

which is supposedly the correct way to “optimize” InnoDB tables. That then made the above query show:

+----------------+-------------+--------+--------+
| TABLE_SCHEMA   | TABLE_NAME  | DATA   | FREE   |
+----------------+-------------+--------+--------+
| db_wordpress   | wp_postmeta | 0.02MB | 4.00MB |
| db_wordpress   | wp_posts    | 1.52MB | 4.00MB |
+----------------+-------------+--------+--------+

Now, mysqltuner was still saying those two tables were fragmented, so I tried:

OPTIMIZE TABLE wp_posts;

Running the above query again then showed the “data” column back at the original 2.52MB…

So I’m unsure what’s going on exactly. And why would these tables (particularly wp_posts and wp_postmeta) be fragmented at all? My understanding was that deletes were the primary cause of fragmentation. If inserts cause it too, would I have fragmentation issues on pretty much every new post made in WordPress, given that inserts seem to be what fragmented the tables in the first place?

Either way: is the query above the correct way to check for fragmentation? If so, is ALTER TABLE wp_postmeta ENGINE='InnoDB' or OPTIMIZE TABLE wp_posts the correct statement to optimize the table, and why would the query still report the table as fragmented afterwards?

EDIT:

Percona’s Config Wizard gave me:

# MyISAM #
key-buffer-size                = 32M
myisam-recover                 = FORCE,BACKUP

# SAFETY #
max-allowed-packet             = 16M
max-connect-errors             = 1000000
skip-name-resolve
#sql-mode                       = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY # I disabled this due to problems with WP/Plugins
sysdate-is-now                 = 1
innodb                         = FORCE
#innodb-strict-mode             = 0 # I disabled this due to problems with WP/Plugins

# DATA STORAGE #
datadir                        = /var/lib/mysql/

# BINARY LOGGING #
log-bin                        = /var/lib/mysql/mysql-bin
expire-logs-days               = 14
sync-binlog                    = 1

# CACHES AND LIMITS #
tmp-table-size                 = 96M # I tweaked this due to mysqltuner recommendation
max-heap-table-size            = 96M # I tweaked this due to mysqltuner recommendation
query-cache-type               = 1 # I tweaked this due to mysqltuner recommendation
query-cache-size               = 96M # I tweaked this due to mysqltuner recommendation
max-connections                = 100
thread-cache-size              = 16
open-files-limit               = 65535
table-definition-cache         = 1024
table-open-cache               = 256

# INNODB #
innodb_stats_on_metadata = 0 # I added this when testing
innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
innodb-log-file-size           = 64M
innodb-flush-log-at-trx-commit = 2
innodb-file-per-table          = 1
innodb-buffer-pool-size        = 400

2 comments

  1. I would guess that you are storing all your tables in the central tablespace ibdata1, because when you do that, the data_free for all tables is reported as the amount of unused space in the whole ibdata1 file.

    In other words, comparing data_length to data_free is not a very useful way to measure fragmentation unless you use innodb_file_per_table=1.
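    A quick way to confirm which layout is in effect (this diagnostic is my addition, not from the original post):

    ```sql
    -- Is each InnoDB table stored in its own .ibd file?
    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- With file-per-table enabled, data_free reflects unused space inside
    -- that one table's tablespace, so this comparison becomes meaningful:
    SELECT TABLE_NAME,
           ROUND(data_length / (1024 * 1024), 2) AS data_mb,
           ROUND(data_free / (1024 * 1024), 2) AS free_mb
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'db_wordpress';
    ```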

    ALTER TABLE wp_postmeta ENGINE=InnoDB will indeed do a table restructure and rebuild secondary indexes. It makes sense that the pages will be filled better, and so your 2.52MB was rewritten down to 0.02MB (which is probably one or two 16KB data pages).
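    Note that OPTIMIZE TABLE on an InnoDB table performs essentially the same rebuild; the server maps it to a recreate plus analyze, and on MySQL 5.5/5.6 its result set typically tells you so:

    ```sql
    OPTIMIZE TABLE wp_posts;
    -- Msg_text typically reads:
    -- "Table does not support optimize, doing recreate + analyze instead"
    ```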

    At the scale you’re working with, I wouldn’t worry about fragmentation at all. Even storing the data in the fragmented form only took two and a half MB, so your buffer pool is still only a fraction full. The reason to attend to fragmentation is when you are trying to stretch your buffer pool to fit more of your data. In your case, it’s going to fit no matter what.


    Re your comment and additional info:

    innodb-buffer-pool-size        = 400
    

    This is not a good value to set. The unit for this variable is in bytes, so you have requested an awfully tiny buffer pool. In fact, InnoDB will disregard your config and instead use the minimum value of 5MB. If you check your MySQL error log, you may see this:

    [Note] InnoDB: Initializing buffer pool, size = 5.0M
    

    This is way too small for any typical production web site. The size of the buffer pool is a crucial tuning parameter if you want to get good performance. The buffer pool should be large enough to hold your working set, that is the pages of data that are most frequently used by your application. See MySQL Performance Blog’s post “10 MySQL settings to tune after installation”.
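    To get a rough ceiling for the working set, you can sum InnoDB data and index sizes (a sketch of mine; the true working set is usually smaller than the total):

    ```sql
    -- Total InnoDB data + index size, as an upper bound for buffer pool sizing
    SELECT ROUND(SUM(data_length + index_length) / (1024 * 1024), 2) AS total_mb
    FROM information_schema.TABLES
    WHERE ENGINE = 'InnoDB';
    ```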

    That said, your database might be so small that 5.0MB is adequate — for now. But keep in mind that the default size of the buffer pool in MySQL 5.6 is 128MB.

    You have allocated 32MB to your key buffer, which is used only for MyISAM indexes. I recommend against using MyISAM for any tables in most cases. If you have no MyISAM tables, then memory would be better allocated to your InnoDB buffer pool.
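    To verify there are no MyISAM tables left, a query along these lines works (note that in MySQL 5.5 and earlier the `mysql` system schema itself uses MyISAM, hence the exclusions):

    ```sql
    -- Any user tables still on MyISAM?
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM information_schema.TABLES
    WHERE ENGINE = 'MyISAM'
      AND TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema');
    ```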

    You also allocated 96MB to your query cache. The query cache has some downsides to it — depending on your site’s traffic, it could actually hurt more than help. Unless you’re getting a lot of bang for the buck from it, I would disable it (query_cache_type=0 and query_cache_size=0), and use that RAM for the buffer pool. See MySQL Performance Blog’s post “MySQL Query Cache” for more information on this.
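    In your my.cnf that would look something like this (illustrative values; the freed 96MB goes to the buffer pool instead):

    ```ini
    # CACHES AND LIMITS #
    query-cache-type        = 0
    query-cache-size        = 0

    # INNODB #
    innodb-buffer-pool-size = 256M
    ```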


    Re your second comment:

    No, innodb_buffer_pool_size is not in MB, it’s in bytes. It is interpreted as MB only if you add the “M” suffix to the number.
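    For example, these two settings request the same 400MB buffer pool:

    ```ini
    innodb-buffer-pool-size = 400M
    # equivalently, in bytes:
    # innodb-buffer-pool-size = 419430400
    ```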

    The MySQL 5.6 manual on innodb_buffer_pool_size says:

    The size in bytes of the buffer pool, the memory area where InnoDB caches table and index data. The default value is 128MB.

    I just tested it with MySQL 5.5.36 — not Percona Server, just stock MySQL community edition. I confirm that when I set innodb_buffer_pool_size=400, what I get is 5.0MB, which is the documented minimum size.

    I also tested with MySQL 5.1.70, and I see this in the error log when I start with the buffer pool size set to 400:

    140203  9:46:20 [Warning] option 'innodb-buffer-pool-size': signed value 400 adjusted to 1048576
    140203  9:46:20  InnoDB: Initializing buffer pool, size = 1.0M
    

    MySQL 5.1’s minimum buffer pool size is documented to be 1.0MB.

  2. Hmm, very interesting Bill, as my setting is definitely in MB: mysqltuner shows max memory used as ~510MB (100 connections at 1.1MB each plus the 400MB buffer pool). Also, if I adjust it to something like 900, MySQL fails to start, with the error log stating it can’t allocate 900MB to the InnoDB buffer.

    There must definitely be a difference between Percona Server and MySQL then.