
Mariadb eating disk - help!

Posted: Mon Feb 22, 2016 1:15 pm
by imadsani
I've reworked my setup to eliminate nginx and work solely with Apache. This time I decided to use MariaDB 10.1 (InnoDB), and boy, was that a mistake.

The application is a news website. Apart from the standard news inserts into the db, article read analytics are also recorded, which means one SQL insert for each page view (30-50 visits/second). Before all this the db was on MyISAM; we were seeing table crashes, so we decided to switch to InnoDB.
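
To give an idea of the write pattern, each page view results in a single small insert, roughly along these lines (the table and column names here are just illustrative, not our actual schema):

Code:

-- illustrative only: one small row written per page view
INSERT INTO article_views (article_id, viewed_at, client_ip)
VALUES (12345, NOW(), '203.0.113.7');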

The problem: since the move, MariaDB has started to eat up all the space in /tmp. I don't have /tmp on a separate partition, so it ends up consuming the whole disk. I've searched for anything about this online but haven't been able to find anything that helps.
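
My guess is that queries are spilling implicit temporary tables or sort files to disk. If that's what's happening, the standard status counters and the tmpdir variable should show it (just a diagnostic sketch; the spill would be going to wherever tmpdir points, which is /tmp by default):

Code:

-- how many implicit temp tables / temp files have gone to disk since startup
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- where MariaDB writes on-disk temp tables and sort files (defaults to /tmp)
SHOW VARIABLES LIKE 'tmpdir';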

CPU consumption sits around 40-50%, and overall server RAM usage has stayed firmly at 5GB; it hasn't gone above that all day.

Server specs:
Xeon(R) CPU E5-1650 v3
64GB DDR4 ECC RAM
2 x 240GB SSDs (Software Raid 1)
CentOS 7.2

server.conf

Code:

[server]

[mysqld]

skip_name_resolve
default_storage_engine = InnoDB
tmp_table_size = 512M
max_heap_table_size = 512M
max_connect_errors = 10000
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_buffer_pool_size = 15G
innodb_buffer_pool_instances = 15
innodb_log_buffer_size = 128M
innodb_write_io_threads = 8
innodb_read_io_threads = 8

# LOGGING #

log_error = /var/log/mysql/mysqld.log
slow_query_log = 0
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
#log_queries_not_using_indexes = 1

[galera]

[embedded]

[mariadb]

[mariadb-10.1]

[mysqld_safe]
Here's a screenshot from New Relic covering the past 3 hours:

[New Relic screenshot]


What am I doing wrong? Please help!

Re: Mariadb eating disk - help!

Posted: Tue Feb 23, 2016 9:33 am
by prupert
Assuming the metrics provided are correct, it seems that the disk was incredibly slow between 9:10 and 9:20. This caused processes waiting on the disk, temporarily slowing down everything. Perhaps a RAID rebuild? Hardware issues? Overbooked public cloud?