
MySQL Server has gone Away - Error 2006


By default, the server closes the connection after eight hours if nothing has happened. You can change the time limit by setting the wait_timeout variable when you start mysqld. See Section 5.1.3, “Server System Variables”.
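To check or raise the timeout, you can do something like this (a minimal sketch; 28800 seconds is just the eight-hour default, shown for illustration, and the new global value only affects connections opened after the change):

mysql> SHOW VARIABLES LIKE 'wait_timeout';
mysql> SET GLOBAL wait_timeout = 28800;

Or persistently, in my.cnf:

[mysqld]
wait_timeout = 28800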
If you have a script, you just have to issue the query again for the client to do an automatic reconnection. This assumes that you have automatic reconnection in the client enabled (which is the default for the mysql command-line client).
Some other common reasons for the MySQL server has gone away error are:
  • You (or the db administrator) have killed the running thread with a KILL statement or a mysqladmin kill command.
  • You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.
  • A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.
  • You got a timeout from the TCP/IP connection on the client side. This may happen if you have been using the commands: mysql_options(..., MYSQL_OPT_READ_TIMEOUT,...) or mysql_options(..., MYSQL_OPT_WRITE_TIMEOUT,...). In this case increasing the timeout may help solve the problem.
  • You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
  • You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB; see the example after this list. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet Too Large”.
    An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
  • You can also get a lost connection if you send a packet of 16MB or larger when your client is older than 4.0.8 and your server is 4.0.8 or newer, or the other way around.
  • It is also possible to see this error if host name lookups fail (for example, if the DNS server on which your server or network relies goes down). This is because MySQL is dependent on the host system for name resolution, but has no way of knowing whether it is working—from MySQL's point of view the problem is indistinguishable from any other network timeout.
    You may also see the MySQL server has gone away error if MySQL is started with the --skip-networking option.
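Raising the packet limit can look like this (a minimal sketch; 16MB is an arbitrary example value, and remember the client has its own limit, set with the mysql client's --max_allowed_packet option):

mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
mysql> SET GLOBAL max_allowed_packet = 16777216;

Or persistently, in my.cnf:

[mysqld]
max_allowed_packet = 16M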

How To Resize InnoDB Data/Log files


When you use the InnoDB (transactions-capable) storage engine, you can resize InnoDB data and log files by following the instructions below.

Resizing the data files.

The easy way is to first use mysqldump to dump the data from all InnoDB tables. Then shut down MySQL cleanly and recreate the InnoDB data and log files. As the last step, restore the data from the dump files back into MySQL; a sketch of the whole procedure follows.
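A minimal sketch of that procedure (the paths, the dump filename, and the my.cnf location are assumptions; adjust them for your system):

shell> mysqldump -u root -p --all-databases > /tmp/alldb.sql
shell> mysqladmin -u root -p shutdown
( edit innodb_data_file_path in /etc/my.cnf to the new size, then remove the old files )
shell> rm /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile*
shell> /etc/init.d/mysql start
shell> mysql -u root -p < /tmp/alldb.sql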

Resizing the log files.

First, shut down MySQL cleanly. Back up the log files and move them out of the log directory, then change the log file size setting and finally restart MySQL; a sketch follows. Please note that if you do not move the old log files away, InnoDB will refuse to start and will complain about the log file size.
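A minimal sketch (assuming the default /var/lib/mysql data directory and the usual two log files; adjust paths for your system):

shell> mysqladmin -u root -p shutdown
shell> mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
( edit my.cnf and set innodb_log_file_size to the new size; see the next section for choosing it )
shell> /etc/init.d/mysql start
( once the server starts cleanly and creates the new logs, the old files in /tmp can be deleted )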

How to calculate a good InnoDB log file size

The basic point is that your log file needs to be big enough to let InnoDB optimize its I/O, but not so big that recovery takes a long time. That much Peter covered really well. But how do you choose that size? I'll show you a rule of thumb that works pretty well.


In most cases, when people give you a formula for choosing a configuration setting, you should look at it with skepticism. But in this case you can calculate a reasonable value, believe it or not. Run these queries at your server's peak usage time:

mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G
Log sequence number 84 3836410803
1 row in set (0.06 sec)

1 row in set (1 min 0.00 sec)

Log sequence number 84 3838334638
1 row in set (0.05 sec)
Notice the log sequence number. That's the total number of bytes written to the transaction log. So, now you can see how many MB have been written to the log in one minute. (The technique I showed here works on all versions of MySQL. In 5.0 and newer, you can just watch Innodb_os_log_written from SHOW GLOBAL STATUS, too.)
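If you go the SHOW GLOBAL STATUS route instead, the sampling looks like this (a sketch; subtract the first reading from the second, exactly as with the log sequence numbers):

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written'; SELECT SLEEP(60); SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';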

mysql> SELECT (3838334638 - 3836410803) / 1024 / 1024 AS MB_per_min;
+------------+
| MB_per_min |
+------------+
| 1.83471203 |
+------------+
As a rough rule of thumb, you can make the log big enough that it can hold at most an hour or so of logs. That's generally plenty of data for InnoDB to work with; an hour's worth is more than enough so that it can reorder the writes to use sequential I/O during the flushing and checkpointing process. At this rate, this server could use about 110 MB of logs, total. Round it up to 128 for good measure. Since there are two log files by default, divide that in half, and now you can set

innodb_log_file_size=64M
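Spelled out, the arithmetic behind that setting:

1.83 MB/min x 60 min = ~110 MB of log written per hour
110 MB rounded up to 128 MB, divided across 2 log files = 64 MB per file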
Does that look surprisingly small? It might. I commonly see log file sizes in the gigabyte ranges. But that's generally a mistake. The server I used for the measurements above is a big one doing a lot of work, not a toy. Log file sizes can't be left at the default 5MB for any real workload, but they often don't need to be as big as you might think, either.

If this rule-of-thumb calculation ends up showing you that your log file size ought to be many gigabytes, well, you have a more active write workload. Perhaps you're inserting a lot of big rows or something. In this case you might want to make the log smaller so you don't end up with GB of logs. But also realize this: the recovery time depends not only on the total log file size, but the number of entries in it. If you're writing huge entries to the log, fewer log entries will fit into a given log file size, which will generally make recovery faster than you might expect with a big log.

However, most of the time when I run this calculation, I end up finding that the log file size needs to be a lot smaller than it's configured to be. In part that's because InnoDB's log entries are very compact. The other reason is that the common advice to size the logs as a fraction of the buffer pool size is just wrong.

One final note: huge buffer pools or really unusual workloads may require bigger (or smaller!) log sizes. This is where formulas break down and judgment and experience are needed. But this "rule of thumb" is generally a good sane place to start.

Quick and Dirty MySQL Performance Troubleshooting

Sooner or later, you’re going to get that phone call (or email, SMS, instant message, tweet, or whatever): The database is slow! Fix ASAP! And you’ll be expected to very quickly figure out what’s slowing things down–often with very little context, background, or sense of what may have changed recently at the application layer.

It’s just a server. How hard could it be?

Well, as we all know “that depends.” It depends on a lot of things that you likely have little time to investigate in the middle of a crisis. So where should you focus your time and attention?

This article assumes that you have little to no good monitoring on the server and virtually no idea of what “typical” usage looks like either. Sadly, that’s a real-world situation for a lot of MySQL administrators–especially those in smaller organizations who are expected to wear half a dozen different hats on any given day.

In other words, let’s go back to basics and talk about high-level performance troubleshooting–not fancy high end hardware or new releases of software that you haven’t had a chance to try yet.

Hardware Bottlenecks

The first thing I tend to do is check for hardware bottlenecks. That means logging into the box and running a small list of commands to get a quick idea of what’s happening. Mainly I’m looking for stress points. Which resources are most constrained right now? CPU? Memory? Disk I/O? Something else?

There are three main utilities I’ll turn to in a situation like this:

  • top
  • vmstat
  • iostat

First I’m going to use top to see if anything is hogging CPU on the machine. If there are non-mysql processes using a substantial percentage of the CPU cores, I’m going to want to have a look at what that is and see about limiting its use or moving it to a dedicated server. If I see mysqld using up a lot of CPU, I know it’s working hard and will have to drill into what’s happening inside of MySQL (maybe some poorly written queries). If nothing is apparently chewing up the CPU time, I know that the problem is likely elsewhere.
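A quick way to spot the hogs (these are the standard interactive keys in procps top; other versions may differ):

$ top
( press shift-p to sort by CPU usage, shift-m to sort by memory usage )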

Next I’ll run vmstat over a fairly short interval–typically 5 or 10 seconds.

$ vmstat 5

I’ll generally run this for at least two or three minutes to get a sense of what the CPU and memory use are like. I’m also watching to see how much time the CPU is stalled waiting for I/O requests. Doing this for several minutes will make the occasional spikes really stand out and also allow for more time to catch those cron jobs that fire up every few minutes.

The last thing I’ll check before poking at MySQL itself is iostat. Just as with vmstat, I’m going to run it with a short interval (5 or 10 seconds) and do so for several minutes. I’ll likely filter the output so that I only see the output for the most active disk or array (the one where all of MySQL’s data lives).

$ iostat -x 5 | grep sdb

I’m looking closely at the % busy and tps to get a qualitative feel for how “busy” the I/O subsystem is, and I’ll watch both Blk_wrtn/s and Blk_read/s to figure out how many blocks are being written and read every second.

Once I have a basic feel for what the system is doing, I’ll start digging into MySQL itself a bit.

Probing MySQL

Looking inside MySQL is a sort of two-layer problem for me. First I want a high-level picture of what it seems to be doing and then I want to dig into the storage engine(s) doing all the I/O work, since I/O is a very common bottleneck.

For the highest level view, I want to see a number of things quickly:

  • how many queries per second is the server handling
  • how many clients are connected (and active)
  • are there many “slow” queries being executed
  • what, if any, unusual errors are being logged

The first few items can be answered by looking at the results of a few SHOW PROCESSLIST commands along with some SHOW GLOBAL STATUS. Or, better yet, by using a tool that is able to summarize and present that data in a more friendly and efficient manner. innotop and mytop both do that. (I wrote mytop but fully admit that innotop is more feature-rich and more frequently maintained. But either will handle the basics.) Sorting the running queries by execution time is often revealing.
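For a rough numeric picture without any extra tools, the raw counters are easy to pull (a sketch; Questions divided by Uptime gives only a lifetime average, so sample twice and diff for a current rate):

mysql> SHOW GLOBAL STATUS LIKE 'Questions';
mysql> SHOW GLOBAL STATUS LIKE 'Uptime';
mysql> SHOW GLOBAL STATUS LIKE 'Threads_connected';
mysql> SHOW GLOBAL STATUS LIKE 'Threads_running';
mysql> SHOW FULL PROCESSLIST;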

To find out about slow queries, I’m going to hope that the slow query log is enabled and the server has a sane long_query_time. But even the default of 10 seconds is helpful in truly bad situations.
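Enabling it looks like this in my.cnf (a sketch; slow_query_log and slow_query_log_file are the 5.1+ option names (older servers use log-slow-queries), and the path and two-second threshold are just example values):

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2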

I’ll also want to glance through MySQL’s error log to make sure nothing bad-looking has started to appear.

With that out of the way, I’ll generally dive into the storage engine statistics. And nowadays that almost always means asking InnoDB to cough up some information with SHOW ENGINE INNODB STATUS. While there’s a wealth of information to process in that output, I’m mainly interested in a few high-level stats at the end.

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 23904405026; in additional pool allocated 37084160
Buffer pool size 1310720
Free buffers 1
Database pages 1274443
Modified db pages 770518
Pending reads 1
Pending writes: LRU 0, flush list 1, single page 0
Pages read 733227814, created 65128628, written 1679994934
98.63 reads/s, 3.09 creates/s, 227.34 writes/s
Buffer pool hit rate 999 / 1000

I’m usually keen to see that the buffer pool hit rate is good. The closer to 1000 / 1000 things get, the happier I am. I also want to see how many pages are being read and written per second. If either of those seems high, it should correspond to a high volume of I/O seen earlier in iostat.
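The same hit rate can also be derived from the status counters if you prefer (a sketch; Innodb_buffer_pool_reads counts only the logical reads that missed the buffer pool and had to go to disk):

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
( hit rate = 1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests )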

The Culprit?

In nine out of ten cases, the culprit is obvious by now–or was obvious halfway through this exercise. The reality is that most of the time, changes that are very easy to spot are responsible for a sudden decline in performance. The trick is this: you have to be looking in order to see them!

Silly as that sounds, it’s been my experience that most problems go undetected until someone complains–especially in smaller IT groups where there isn’t someone regularly looking after the MySQL server that runs just fine 99.9% of the time. And since it fails so infrequently, nobody bothers to set up decent monitoring or performance logging infrastructure to detect problems before end users are impacted.

Sound familiar?

Just a Start…

The tasks I’ve presented here are just the beginning. Once you’ve got a good sense of what’s happening on a MySQL server, you can really start to dig in and think about how to improve the situation. Do you add or change hardware? Modify the application? Adjust some of MySQL or InnoDB’s run-time or start-time parameters?

Re-syncing broken MySQL database replication

How do I configure a slave if the master is running and I do not want to stop it?
There are several possibilities. If you have taken a snapshot backup of the master at some point and recorded the binary log filename and offset (from the output of SHOW MASTER STATUS) corresponding to the snapshot, use the following procedure:

  1. Make sure that the slave is assigned a unique server ID.

  2. Execute the following statement on the slave, filling in appropriate values for each option:
    mysql> CHANGE MASTER TO
    ->     MASTER_HOST='master_host_name',
    ->     MASTER_USER='master_user_name',
    ->     MASTER_PASSWORD='master_pass',
    ->     MASTER_LOG_FILE='recorded_log_file_name',
    ->     MASTER_LOG_POS=recorded_log_position; 

  3. Execute START SLAVE on the slave.
If you do not have a backup of the master server, here is a quick procedure for creating one. All steps should be performed on the master host.

  1. Issue this statement to acquire a global read lock:
    mysql> FLUSH TABLES WITH READ LOCK; 

  2. With the lock still in place, execute this command (or a variation of it) from a second login console; the moment you exit the mysql session holding the lock, the read lock is released:
    shell> tar zcf /tmp/backup.tar.gz /var/lib/mysql 

  3. Issue this statement and record the output, which you will need later:
    mysql> SHOW MASTER STATUS; 

  4. Release the lock:
    mysql> UNLOCK TABLES; 
An alternative to using the preceding procedure to make a binary copy is to make an SQL dump of the master. To do this, you can use mysqldump --master-data on your master and later load the SQL dump into your slave. However, this is slower than making a binary copy.
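A sketch of that alternative (the dump filename is an assumption; --master-data writes the matching CHANGE MASTER TO coordinates into the dump for you):

shell> mysqldump --master-data -u root -p --all-databases > /tmp/master_dump.sql
( copy the dump to the slave, then load it there )
shell> mysql -u root -p < /tmp/master_dump.sql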
Regardless of which of the two methods you use, afterward follow the instructions for the case when you have a snapshot and have recorded the log filename and offset. You can use the same snapshot to set up several slaves. Once you have the snapshot of the master, you can wait to set up a slave as long as the binary logs of the master are left intact. The two practical limitations on the length of time you can wait are the amount of disk space available to retain binary logs on the master and the length of time it takes the slave to catch up.

Case Study:

In this case study, I assume that replication was set up correctly beforehand, and that user permissions and MySQL settings have been properly configured.

I have a single master-slave replication setup, and the slave's data is no longer synchronized with the master's. To fix this, we need to perform the steps below to re-sync the data.

Step 1. ( on master server )

# mysql -p -u root
> use exampledb;
> flush tables with read lock;
> show master status;
( you need to write down the 'File' and 'Position' values displayed, for later use. )

( now open another console to dump the database. Make sure the binary log file and position are the same before and after the dump. )
# mysqldump -u root -p exampledb > exampledb.sql

> unlock tables;
> quit;

Step 2. ( on slave server )

( you will need to copy over the exampledb.sql file from master to this slave server first. )

# mysql -u root -p
> stop slave;
> show slave status;
> quit;

# mysqladmin -u root -p create exampledb
# mysql -u root -p exampledb < exampledb.sql

# mysql -u root -p
> change master to master_host='192.168.0.100', master_user='slave_user', master_password='some_password', master_log_file='mysql-bin.006', master_log_pos=1123;
( use the 'File' and 'Position' values you recorded in Step 1 for master_log_file and master_log_pos )
> start slave;
> show slave status;
> quit;

Now you need to try to update some data from the master and check whether it was replicated to the slave server. The easiest way is to create a new test database.

Step 3. ( on master server )
# mysqladmin -u root -p create test1

Step 4. ( on slave server )
Check the data directory for the newly created database, and check the MySQL error log.

If everything replicated, then your job is done!


Reference:
http://homepage.mac.com/kelleherk/iblog/C711669388/E226281480/index.html
http://www.howtoforge.com/mysql_database_replication_p2
http://dev.mysql.com/doc/refman/5.0/en/replication-faq.html