I have a users table with a datetime field last_seen_at. Updating this field takes around 120ms, and I'd like it to be much quicker, since I do it on pretty much every page load on my site. I can't work out why it's so slow: there are around 55,000 records, which shouldn't be problematically large (I'd have thought).
Here's the table info:
mysql> show table status like 'users';
+-------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+
| Name  | Engine | Version | Row_format | Rows  | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time | Check_time | Collation       | Checksum | Create_options | Comment |
+-------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+
| users | InnoDB |      10 | Compact    | 55609 |            954 |    53051392 |               0 |     43352064 |  26214400 |          67183 | 2015-09-22 13:12:13 | NULL        | NULL       | utf8_general_ci |     NULL |                |         |
+-------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+

mysql> desc users;
+--------------+----------+------+-----+---------+----------------+
| Field        | Type     | Null | Key | Default | Extra          |
+--------------+----------+------+-----+---------+----------------+
| id           | int(11)  | NO   | PRI | NULL    | auto_increment |
| last_seen_at | datetime | YES  | MUL | NULL    |                |
+--------------+----------+------+-----+---------+----------------+

mysql> show indexes from users;
+-------+------------+-----------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name                    | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| users |          0 | PRIMARY                     |            1 | id           | A         |       57609 | NULL     | NULL   |      | BTREE      |         |               |
| users |          1 | index_users_on_last_seen_at |            1 | last_seen_at | A         |       57609 | NULL     | NULL   | YES  | BTREE      |         |               |
+-------+------------+-----------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
As you can see, I've already got an index on the last_seen_at column. I've omitted all other columns (apart from id) for clarity's sake.
When I update last_seen_at I do it like so:
update users set last_seen_at = '2015-10-05 12:34:45' where id = 1182;
MySQL server info:
Server version: 5.5.44-0ubuntu0.12.04.1 (Ubuntu)
Is there anything I can do to speed up the update?
EDIT - I'd previously said the query was taking 700ms. It's actually more like 120ms; sorry, I was looking at the wrong query. This still feels a bit too long, though. Is this actually a reasonable write time after all?
EDIT - all my timings come from manually entering SQL queries in the mysql shell client. I do use MySQL in my Ruby on Rails web app, but that app is not involved for the purposes of this question: I'm purely looking at the database level.
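To pin down where the 120ms goes on the server side (as opposed to client round-trip time), MySQL 5.5's statement profiler can break the update down by stage. A minimal sketch, run in the mysql shell; the id and timestamp are just the example values from the question above:

```sql
SET profiling = 1;
UPDATE users SET last_seen_at = '2015-10-05 12:34:45' WHERE id = 1182;
SHOW PROFILES;             -- total duration of each profiled statement
SHOW PROFILE FOR QUERY 1;  -- per-stage breakdown of the update
```

If most of the time lands in a stage like "query end", the cost is commit/flush overhead rather than the row lookup or index maintenance.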
Well, you appear to be performing the update in the most efficient manner - i.e. using the primary key on the table, so there is not much that can be done there. Assuming the 120ms to update is purely the time taken by the db server (as opposed to the round trip in the web page), I can only think of a few things that might help:
You have indexed the column being updated - that typically adds a little time to the update as the index has to be maintained. I see that you need to use that column, so you can't get rid of the index; but if you could, you might well see better performance.
Batching updates is sometimes a good way of avoiding the real-time performance hit, but still achieving what you want.
You could have the web-triggered insert go into a holding table with a timestamp field, then (offline) batch update the real data. See https://dba.stackexchange.com/questions/28282/whats-the-most-efficient-way-to-batch-update-queries-in-mysql for an example batch update statement.
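As a sketch of that holding-table pattern (the table and column names here are hypothetical, not from the question): each page load does a cheap append-only insert, and a periodic job folds the newest timestamp per user back into users in one statement:

```sql
-- Hypothetical holding table: one narrow row per page load, no secondary indexes.
CREATE TABLE last_seen_queue (
  user_id INT NOT NULL,
  seen_at DATETIME NOT NULL
) ENGINE=InnoDB;

-- On each page load (fast: no index on users is touched):
INSERT INTO last_seen_queue (user_id, seen_at) VALUES (1182, NOW());

-- Offline, apply the latest timestamp per user, then clear the queue:
UPDATE users u
JOIN (SELECT user_id, MAX(seen_at) AS seen_at
      FROM last_seen_queue
      GROUP BY user_id) q ON q.user_id = u.id
SET u.last_seen_at = q.seen_at;

TRUNCATE TABLE last_seen_queue;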
DB optimisation may help, but only if the db is not in good shape already - so things like memory allocation, tablespace fragmentation, buffer pools etc.
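On the optimisation point, two InnoDB settings are worth checking first on a 5.5 box, since both directly affect single-row update latency: the buffer pool size (whether the working set fits in memory) and whether every commit forces a redo-log flush to disk. A sketch of how to inspect them:

```sql
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- innodb_flush_log_at_trx_commit:
--   1 = flush (fsync) the log on every commit: safest, slowest
--   2 = write on commit, flush once per second: faster, small risk on OS crash
```

Changing the flush setting trades durability for speed, so it is a deliberate decision rather than a free win.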
Good luck!
There is not much you can do about this. You already have an index on your column, and it just takes some time to find the row using the index and update it.
The index might be fragmented, which will slow down your lookup. You can refresh the index statistics with ANALYZE TABLE, or actually rebuild the table and its indexes with OPTIMIZE TABLE.
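A minimal sketch of both commands; note that on InnoDB (and on 5.5 in particular) OPTIMIZE TABLE is implemented as a full table rebuild, so it is expensive and locks the table while it runs:

```sql
ANALYZE TABLE users;   -- refreshes index statistics (cheap)
OPTIMIZE TABLE users;  -- rebuilds the table and its indexes (expensive)
```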
An option might be to delay the update, or to keep it from blocking page rendering, by using an asynchronous/background task in the programming environment you are using (i.e. fire-and-forget).