Error: Lost connection to MySQL server during query

I got the Error Code: 2013. Lost connection to MySQL server during query error when I tried to add an index to a table using MySQL Workbench.
I also noticed that it appears whenever I run a long query.

Is there a way to increase the timeout value?

asked May 12, 2012 at 12:14 by user836026

Newer versions of MySQL Workbench have an option to change specific timeouts.

For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600

I changed the value to 6000.

I also unchecked "limit rows", since adding a limit every time I want to search the whole data set gets tiresome.

answered Oct 8, 2012 at 22:49 by eric william nord

If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:

[mysqld]
max_allowed_packet=16M

By default, this will be 1M (the allowed maximum value is 1024M). If the supplied value is not a multiple of 1024K, it will automatically be rounded to the nearest multiple of 1024K.

While the referenced thread is about the MySQL error 2006, setting the max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.

For WAMP users: you’ll find the flag in the [wampmysqld] section.
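If editing my.ini is inconvenient, the value can also be inspected and raised at runtime; a sketch (SET GLOBAL needs sufficient privileges and does not survive a server restart):

```sql
-- Check the current value (in bytes)
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it to 16M for new connections; not persisted across restarts
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;
```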

answered Jul 3, 2014 at 13:36 by Harti

Start the DB server with the command-line option net_read_timeout / wait_timeout set to a suitable value (in seconds), for example: --net_read_timeout=100.

For reference see here and here.

answered May 12, 2012 at 12:17 by Yahia

SET @@local.net_read_timeout=360;

Warning: The following will not work when you are applying it in remote connection:

SET @@global.net_read_timeout=360;

Edit: 360 is the number of seconds

answered Apr 20, 2015 at 3:58 by user1313024

Add the following to the /etc/mysql/my.cnf file:

innodb_buffer_pool_size = 64M

For example:

key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
innodb_buffer_pool_size = 64M

answered Apr 17, 2015 at 15:11 by MysqlMan

In my case, setting the connection timeout interval to 6000 or something higher didn’t work.

I just did what Workbench says I can do:

The maximum amount of time the query can take to return data from the DBMS. Set 0 to skip the read timeout.

On Mac
Preferences -> SQL Editor -> Go to MySQL Session -> set connection read timeout interval to 0.

And it works 😄

answered Nov 26, 2019 at 3:55 by Thet Htun

There are three likely causes for this error message:

  1. Usually it indicates network connectivity trouble; check the condition of your network if this error occurs frequently.
  2. Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries.
  3. More rarely, it can happen when the client is attempting the initial connection to the server.


Cause 2: increase net_read_timeout from its default of 30 seconds to 60 seconds or longer:

SET GLOBAL net_read_timeout=60;

Cause 3: increase connect_timeout:

SET GLOBAL connect_timeout=60;

answered Dec 8, 2016 at 6:30 by Nanhe Kumar

You should set the ‘interactive_timeout’ and ‘wait_timeout’ properties in the mysql config file to the values you need.
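As a sketch, the corresponding config-file entries might look like this (values in seconds; pick values that fit your workload and restart mysqld afterwards):

```ini
[mysqld]
# 28800 seconds = 8 hours (the usual default); raise or lower as needed
interactive_timeout = 28800
wait_timeout        = 28800
```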

answered May 12, 2012 at 12:19 by Maksym Polshcha

If you experience this problem while restoring a big dump file and can rule out a network problem (e.g. execution on localhost), then my solution could be helpful.

My mysqldump held at least one INSERT that was too big for mysql to compute. You can view this variable by typing show variables like "net_buffer_length"; inside your mysql CLI.
You have three possibilities:

  • increase net_buffer_length inside mysql -> this would need a server restart
  • create the dump with --skip-extended-insert, so one line is used per INSERT -> although these dumps are much nicer to read, this is not suitable for big dumps > 1GB because it tends to be very slow
  • create the dump with extended inserts (which is the default) but limit the net_buffer_length, e.g. with --net-buffer-length=NR_OF_BYTES where NR_OF_BYTES is smaller than the server's net_buffer_length -> I think this is the best solution; although slower, no server restart is needed

I used the following mysqldump command:
mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer-length=4096 DBX > dumpfile

answered Jan 8, 2016 at 11:07 by Matt V

From what I understand, this error is caused by the read timeout, and the default max allowed packet is 4M. If your query file is larger than 4 MB, you get the error. This worked for me:

  1. Change the read timeout. Go to Workbench Edit → Preferences → SQL Editor.
  2. Change max_allowed_packet manually by editing the my.ini file at "C:\ProgramData\MySQL\MySQL Server 8.0\my.ini". The ProgramData folder is hidden, so if you don't see it, enable showing hidden files in the view settings. Set max_allowed_packet = 16M in the my.ini file.
  3. Restart MySQL: press Win+R, run services.msc, and restart the MySQL service.

answered Mar 24, 2022 at 6:15 by Avinash

Just perform a MySQL upgrade, which re-builds the InnoDB engine along with many tables required for the proper functioning of MySQL, such as performance_schema and information_schema.

Issue the below command from your shell:

sudo mysql_upgrade -u root -p

answered May 19, 2014 at 20:16 by Shoaib Khan

Sometimes your SQL server gets into deadlocks; I've run into this problem around 100 times. You can either restart your computer/laptop to restart the server (the easy way), or go to Task Manager > Services > YOUR-SERVER-NAME (for me it was something like MySQL785), right-click, and restart.
Then try executing the query again.

answered Feb 10, 2021 at 13:28 by oshin pojta

I know it's old, but on Mac:

1. Control-click your connection and choose Connection Properties.
2. Under the Advanced tab, set the Socket Timeout (sec) to a larger value.

answered Mar 27, 2015 at 6:53 by Aamir Mahmood

Change the «read time out» value in Edit → Preferences → SQL Editor → MySQL Session.

answered Apr 21, 2016 at 9:25 by user6234739

Please try unchecking "limit rows" in Edit → Preferences → SQL Queries,

because you should also set the ‘interactive_timeout’ and ‘wait_timeout’ properties in the mysql config file to the values you need.

answered Jul 24, 2014 at 9:59 by user2586714

I got the same issue when loading a .csv file.
I converted the file to .sql.

Using the command below, I managed to work around this issue:

mysql -u <user> -p -D <DB name> < file.sql

Hope this helps.

answered Sep 8, 2016 at 6:19 by VinRocka

Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read timeout and raise it, e.g. up to 3000.
The error no longer occurred.

answered Sep 1, 2018 at 2:50 by Kairat Koibagarov

I faced this same issue. I believe it happens when you have foreign keys to larger tables (which takes time).

I tried to run the create table statement again without the foreign key declarations and found it worked.

Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query.
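A sketch of that workaround, with hypothetical table and column names:

```sql
-- 1. Create the table without the foreign key declaration
CREATE TABLE orders (
  id          INT PRIMARY KEY,
  customer_id INT NOT NULL
);

-- 2. Add the constraint separately once the table exists
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customers (id);
```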

Hope this will help someone.

answered Dec 23, 2016 at 7:22 by Nimeshka Srimal

This happened to me because my innodb_buffer_pool_size was set larger than the RAM available on the server. Things were getting interrupted because of this, and MySQL issued this error. The fix is to update my.cnf with a correct value for innodb_buffer_pool_size.
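As a quick sanity check, you can inspect the configured pool size and compare it with the server's physical RAM (the variable name is standard; how much headroom to leave is your call):

```sql
-- Current buffer pool size in MiB; it should fit comfortably below physical RAM
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mib;
```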

answered Feb 26, 2017 at 15:35 by Phyllis Sutherland

Go to:

Edit -> Preferences -> SQL Editor

In there you can see three fields in the «MySQL Session» group, where you can now set the new connection intervals (in seconds).

answered May 5, 2017 at 13:23 by Max

It turns out a firewall rule was blocking my connection to MySQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.

answered May 11, 2017 at 15:38 by wuro

I had the same problem, but for me the solution was a DB user with too-strict permissions.
I had to allow the Execute ability on the mysql table. After allowing that, I had no dropped connections anymore.

answered Aug 31, 2017 at 17:35 by naabster

Check if the indexes are in place first:

SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>';

answered Sep 22, 2017 at 3:58 by Gayan Dasanayake

I ran into this while running a stored procedure that was creating lots of rows in a table in the database.
I could see the error come right after the time crossed the 30-second boundary.

I tried all the suggestions in the other answers. I am sure some of them helped; however, what really made it work for me was switching from Workbench to Sequel Pro.

I am guessing it was some client-side connection issue that I could not spot in Workbench.
Maybe this will help someone else as well?

answered Dec 19, 2017 at 21:19 by RN.

If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click on the wrench (spanner) symbol on the table, which opens the table setup; below, click on the index view, type an index name, and set the type to INDEX. In the index columns, select the primary column in your table.

Do the same for the other primary keys on other tables.
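The same index can also be created in plain SQL instead of the Workbench UI; a sketch with hypothetical table and column names:

```sql
-- Index the column(s) your long-running query filters or joins on
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```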

answered Jun 25, 2018 at 8:21 by Matthew E

There seems to be an answer missing here for those using SSH to connect to their MySQL database. You need to check two places, not one as suggested by other answers:

Workbench Edit → Preferences → SQL Editor → DBMS

Workbench Edit → Preferences → SSH → Timeouts

My default SSH timeouts were set very low and were causing some (but apparently not all) of my timeout issues. Afterwards, don't forget to restart MySQL Workbench!

Last, it may be worth contacting your DB admin and asking them to increase the wait_timeout and interactive_timeout properties in MySQL itself via my.cnf plus a MySQL restart, or doing a global SET if restarting MySQL is not an option.

Hope this helps!

answered May 6, 2019 at 17:36 by NekoKikoushi

Three things to check and make sure of:

  1. Do multiple queries show a lost connection?
  2. How do you use SET queries in MySQL?
  3. Do you run DELETE and UPDATE queries simultaneously?

Answers:

  1. Always try to remove the DEFINER, as MySQL creates its own definer; and if multiple tables are involved in the update, try to combine them into a single query, since multiple queries sometimes show a lost connection.
  2. Always SET values at the top, but after DELETE if its condition doesn't involve the SET value.
  3. Use DELETE first, then UPDATE, if both operations are performed on different tables.

answered Sep 22, 2019 at 16:10 by Koyel Sharma

I had this error message due to a problem after upgrading MySQL. The error appeared immediately after I tried to run any query.

Check the MySQL error log files in /var/log/mysql (Linux).

In my case, reassigning the MySQL owner to the MySQL system folder fixed it:

chown -R mysql:mysql /var/lib/mysql

answered Jan 23, 2021 at 19:29 by franciscorode

Establish a connection first:
mysql --host=host.com --port=3306 -u username -p
then select your DB: use dbname
then source the dump: source C:\dumpfile.sql
After it's done, quit with \q.

answered Oct 29, 2021 at 5:32 by Swaleh Matongwa


When you run MySQL queries, sometimes you may encounter an error saying you lost connection to the MySQL server as follows:

Error Code: 2013. Lost connection to MySQL server during query

The error above commonly happens when you run a long or complex MySQL query that runs for more than a few seconds.

To fix the error, you may need to change the timeout-related global settings in your MySQL database server.

Increase the connection timeout from the command line using the --connect-timeout option

If you’re accessing MySQL from the command line, then you can increase the number of seconds MySQL will wait for a connection response using the --connect-timeout option.

By default, MySQL will wait for 10 seconds before responding with a connection timeout error.

You can increase the number to 120 seconds to wait for two minutes:

mysql -uroot -proot --connect-timeout=120

You can adjust the number 120 above to the number of seconds you’d like to wait for a connection response.

Once you’re inside the mysql console, try running your query again to see if it’s completed successfully.

Using the --connect-timeout option changes the timeout seconds temporarily. It only works for the current MySQL session you’re running, so you need to use the option each time you want the connection timeout to be longer.

If you want to make a permanent change to the connection timeout variable, then you need to adjust the settings from either your MySQL database server or the GUI tool you used to access your database server.

Let’s see how to change the timeout global variables in your MySQL database server first.

Adjust the timeout global variables in your MySQL database server

MySQL database stores timeout-related global variables that you can access using the following query:

SHOW VARIABLES LIKE "%timeout";

Here’s the result from my local database. The highlighted variables are the ones you need to change to let MySQL run longer queries:

+-----------------------------------+----------+
| Variable_name                     | Value    |
+-----------------------------------+----------+
| connect_timeout                   | 10       |
| delayed_insert_timeout            | 300      |
| have_statement_timeout            | YES      |
| innodb_flush_log_at_timeout       | 1        |
| innodb_lock_wait_timeout          | 50       |
| innodb_rollback_on_timeout        | OFF      |
| interactive_timeout               | 28800    |
| lock_wait_timeout                 | 31536000 |
| mysqlx_connect_timeout            | 30       |
| mysqlx_idle_worker_thread_timeout | 60       |
| mysqlx_interactive_timeout        | 28800    |
| mysqlx_port_open_timeout          | 0        |
| mysqlx_read_timeout               | 30       |
| mysqlx_wait_timeout               | 28800    |
| mysqlx_write_timeout              | 60       |
| net_read_timeout                  | 30       |
| net_write_timeout                 | 60       |
| replica_net_timeout               | 60       |
| rpl_stop_replica_timeout          | 31536000 |
| rpl_stop_slave_timeout            | 31536000 |
| slave_net_timeout                 | 60       |
| wait_timeout                      | 28800    |
+-----------------------------------+----------+

To change the variable values, you can use the SET GLOBAL query as shown below:

SET GLOBAL connect_timeout = 600; 

The above query should adjust the connect_timeout variable value to 600 seconds. You can adjust the numbers as you see fit.
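You can confirm the new value is in place (a SET GLOBAL change applies to new connections; an existing session may still report its old session-level value):

```sql
-- Global value, as seen by new connections
SHOW GLOBAL VARIABLES LIKE 'connect_timeout';
```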

Adjust the timeout variables in your MySQL configuration files

Alternatively, if you’re using a MySQL configuration file to control the settings of your connections, then you can edit the my.cnf file (Mac) or my.ini file (Windows) used by your MySQL connection.

Open that configuration file using the text editor of your choice and try to find the following variables under the [mysqld] section:

[mysqld]
connect_timeout = 10
net_read_timeout = 30
wait_timeout = 28800
interactive_timeout = 28800

The wait_timeout and interactive_timeout variables shouldn’t cause any problem because they usually have 28800 seconds (or 8 hours) as their default value.

To prevent the timeout error, you need to increase the connect_timeout and net_read_timeout variable values. I'd suggest setting them to at least 600 seconds (10 minutes).

If you’re using GUI MySQL tools like MySQL Workbench, Sequel Ace, or phpMyAdmin, then you can also find timeout-related variables configured by these tools in their settings or preferences menus.

For example, in MySQL Workbench for Windows, you can find the timeout-related settings in Edit > Preferences > SQL Editor.

If you’re using a Mac, the menu is in MySQLWorkbench > Preferences > SQL Editor.

If you’re using Sequel Ace like me, you can find the connection timeout option in the Preferences > Network menu.
For other GUI tools, you need to find the option yourself. You can try searching the term [tool name] connection timeout settings in Google to find the option.

And those are the four solutions you can try to fix the MySQL connection lost during query issue.

I hope this tutorial has been helpful for you 🙏

Judging by:

Error in `/usr/sbin/mysqld': malloc(): memory corruption: 0x00007fcbfc124080

1. Make a copy of /var/lib/mysql to another drive.

2. Investigate and fix:
2.1. Option 1: faulty RAM. Run memtest; could the system be overheating? Fix whichever applies, and if neither problem is confirmed, move on. This looks like a third-party virtual server, but the problem is still possible.
If the RAM is faulty, you can test it from inside the OS by creating compressed archives and verifying their integrity; with bad RAM, checksum errors will show up sooner or later.
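A rough sketch of that in-OS RAM check (the file size here is a token value; for a real test use multi-gigabyte files and repeat the cycle many times):

```shell
# Create a test file, compress it, then verify the archive's integrity.
# With faulty RAM, repeated runs will eventually produce checksum errors.
dd if=/dev/urandom of=/tmp/ramtest.bin bs=1M count=8 2>/dev/null
gzip -cf /tmp/ramtest.bin > /tmp/ramtest.bin.gz
gzip -t /tmp/ramtest.bin.gz && echo "archive OK"
```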

2.2. Option 2: the data files are corrupted and mysql chokes on them. Files can be corrupted by an unclean server shutdown or by block-device problems:

2018-08-20T05:10:47.359613Z 0 [ERROR] InnoDB: Could not find a valid tablespace file for `kubium/game`. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting-datadict.html for how to resolve the issue.
2018-08-20T05:10:47.359626Z 0 [Warning] InnoDB: Ignoring tablespace `kubium/game` because it could not be opened.

This can be an indirect sign of file problems.
2.2.1. Check the state of the block devices with smartctl; offline uncorrectable or reallocated sectors can cause data corruption, in which case replace the drive. On third-party hosting this is unavailable, but you can check indirectly by reading the block device /dev/vda.
2.2.2. Run fsck on the file system; file-system errors can indicate corruption of the DB file contents. Repair and hope the critical files were untouched.
2.2.3. Check the structure of the innodb/myisam files using the built-in diagnostics or helper utilities, for example "Percona Data Recovery Tool for InnoDB can help recover corrupted or deleted InnoDB tables. https://launchpad.net/percona-data-recovery-tool-f…"; if problems turn up, try to repair them.
A simple, old way to fix some problems is to dump the database to an .sql file and import it into a fresh database; the old one can be renamed.
2.2.4. The problem may be caused by corrupted index files, in which case rebuilding the indexes may fix everything.

2.3. Option 3: similar problems can occur when binary database files from a newer mysql version are dropped in; check that version.
You can try upgrading mysql or switching to mariadb; some of these problems may already be fixed there.

The machine has little memory (1 GB). When free RAM runs out, the system starts the OOM Killer, which kills processes; it could well have killed the mysql process right in the middle of a critical change to the DB files. You can find this in the logs.
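To check whether the OOM Killer was involved, grep the kernel log for its signature. The sample line below is hypothetical, standing in for a real syslog entry; on a live system run the same grep against /var/log/syslog, /var/log/messages, or the output of dmesg:

```shell
# Hypothetical log line standing in for a real kernel message
printf 'Aug 20 05:10:41 host kernel: Out of memory: Kill process 1234 (mysqld)\n' > /tmp/sample.log

# The same pattern works against /var/log/syslog or /var/log/messages
grep -iE 'out of memory|oom-killer|killed process' /tmp/sample.log
```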

I got the error Error Code: 2013. Lost connection to MySQL server during query when I tried to add an index to a table using MySQL Workbench.
I also noticed that it appears whenever I run a long query.

Is there a way to increase the timeout value?


Answer 1

New versions of MySQL Workbench have an option to change specific timeouts.

For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600

I changed the value to 6000.

I also unchecked Limit Rows, since putting a limit in every time I want to search the whole data set gets tiresome.

Answer 2

Start the DB server with the command-line option net_read_timeout / wait_timeout and a suitable value (in seconds), for example: --net_read_timeout=100.

For reference, see here and here.

Answer 3

If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:

[mysqld]
max_allowed_packet=16M

By default this is 1M (the maximum allowed value is 1024M). If the supplied value is not a multiple of 1024K, it is automatically rounded to the nearest multiple of 1024K.
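Assuming the rounding goes downward (as with most MySQL size variables), the rule can be sketched with shell arithmetic:

```shell
# Round a requested packet size down to a multiple of 1024K (1048576 bytes)
bytes=5000000
step=$((1024 * 1024))                 # 1024K
rounded=$(( bytes / step * step ))
echo "$rounded"                       # prints 4194304, i.e. 4M
```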

While the referenced thread is about MySQL error 2006, raising max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.

For WAMP users: you’ll find the flag in the [wampmysqld] section.

Answer 4

Add the following to the /etc/mysql/cnf file:

innodb_buffer_pool_size = 64M

Example:

key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
innodb_buffer_pool_size = 64M

Answer 5

SET @@local.net_read_timeout=360;

Warning: the following will not work when you apply it over a remote connection:

SET @@global.net_read_timeout=360;
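A sketch of the scope difference (the statements assume sufficient privileges; @@local is a synonym for @@session):

```sql
-- Applies only to the current connection:
SET @@session.net_read_timeout = 360;

-- Applies to connections opened after this statement
-- (requires an administrative privilege such as SUPER):
SET @@global.net_read_timeout = 360;

-- Verify both scopes:
SELECT @@session.net_read_timeout, @@global.net_read_timeout;
```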

Answer 6

There are three causes for this error message:

  1. Usually it indicates network connectivity trouble, and you should check the condition of your network if this error occurs frequently.
  2. Sometimes the "during query" form happens when millions of rows are being sent as part of one or more queries.
  3. More rarely, it can happen when the client is attempting the initial connection to the server.

For more details >>

Cause 2:

SET GLOBAL interactive_timeout=60;

increase it from the default of 30 seconds to 60 seconds or longer.

Cause 3:

SET GLOBAL connect_timeout=60;

Answer 7

You should set the interactive_timeout and wait_timeout properties in the mysql configuration file to the values you need.

Answer 8

Thanks, that worked.
But with the mysqldb updates the settings became:

max_allowed_packet

net_write_timeout

net_read_timeout

mysql doc

Answer 9

Just run a MySQL upgrade, which will rebuild the innoDB engine along with many of the tables MySQL needs to work correctly, such as performance_schema, information_schema, etc.

Issue the following command from your shell:

sudo mysql_upgrade -u root -p

Answer 10

I know it’s old, but on a Mac:

1. Control-click your connection and choose Connection Properties.
2. Under Advanced tab, set the Socket Timeout (sec) to a larger value.

Answer 11

Change the read timeout in Edit → Preferences → SQL Editor → MySQL session.

Answer 12

Try unchecking the row limit in Edit → Preferences → SQL Queries,

because you should set the interactive_timeout and wait_timeout properties in the mysql configuration file to the values you need.

Answer 13

If you run into this problem while restoring a large dump file and can rule out a network issue (e.g. execution on localhost), my solution may be helpful.

My mysqldump contained at least one INSERT that was too big for mysql to process. You can view this variable by typing show variables like "net_buffer_length"; inside your mysql-cli.
You have three options:

  • increase net_buffer_length inside mysql → this requires a server restart
  • create the dump with --skip-extended-insert, so one line is used per INSERT → although these dumps are much nicer to read, it is not suitable for large dumps > 1 GB because it tends to be very slow
  • create the dump with extended inserts (the default) but limit the net-buffer_length, e.g. with --net-buffer_length NR_OF_BYTES where NR_OF_BYTES is smaller than the server’s net_buffer_length → I think this is the best solution; although slower, no server restart is needed

I used the following mysqldump command:  mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer_length 4096 DBX > dumpfile

Answer 14

I had the same problem when loading a CSV file.
I converted the file to .sql.

Using the command below, I managed to work around the problem:

mysql -u <user> -p -D <DB name> < file.sql

Hope this helps.

Answer 15

If all the other solutions here fail, check your syslog (/var/log/syslog or similar) to see whether your server ran out of memory during the query.

I had this problem when innodb_buffer_pool_size was set too close to physical memory without a swap file configured. MySQL recommends setting innodb_buffer_pool_size to at most about 80% of physical memory on a dedicated database server; I had set it to about 90%, and the kernel was killing the mysql process. Moving innodb_buffer_pool_size back down to about 80% fixed the problem.
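For example, on a hypothetical server with 8 GB of physical RAM, the ~80% guideline would look roughly like this in my.cnf (the sizes are illustrative, not a recommendation for your hardware):

```ini
[mysqld]
# ~80% of 8 GB physical RAM; leave headroom for the OS and other processes
innodb_buffer_pool_size = 6G
```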

Answer 16

I ran into this same problem. I believe it happens when you have foreign keys to large tables (which takes time).

I tried running the CREATE TABLE statement again without the foreign key declarations and found that it worked.

Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query.
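A sketch of that two-step approach, using invented table and column names (orders/customers are purely illustrative):

```sql
-- 1. Create the table without the foreign key declaration:
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT
);

-- 2. Add the constraint afterwards with ALTER TABLE:
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```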

Hope this helps someone.

Answer 17

This happened to me because my innodb_buffer_pool_size was set larger than the RAM available on the server. Everything kept getting interrupted because of it, and this error appeared. The fix is to update my.cnf with a correct innodb_buffer_pool_size setting.

Answer 18

Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read timeout: set it up to 3000. The error no longer occurred.

Answer 19

Go to:

Edit → Preferences → SQL Editor

There you can see three fields in the "MySQL Session" group, where you can now set new connection intervals (in seconds).

Answer 20

It turned out our firewall rule was blocking my connection to MYSQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.

Answer 21

I had the same problem, but for me the solution was a DB user with overly strict permissions.
I had to allow the Execute ability on the mysql table. After allowing that, I had no more dropped connections.

Answer 22

Check whether the indexes are in place.

SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>'

Answer 23

I ran into this while running a stored proc that was creating lots of rows in a table in the database. I could see the error came right after the time crossed the 30-second boundary.

I tried all the suggestions in the other answers. I’m sure some of them helped; however, what really made it work for me was switching from Workbench to SequelPro.

My guess is that it was some client-side connection issue that I couldn’t spot in Workbench. Maybe this will help someone else too?

Answer 24

If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click the wrench (spanner) icon on the table; it should open the settings for the table. Below, click on the index view, type a name for the index, and set the index on a column; in the index columns, select the primary column of your table.

Do the same for the other primary keys on the other tables.

Answer 25

There doesn’t seem to be an answer here for those using SSH to connect to their MySQL database. You need to check two places, not one as the other answers suggest:

Workbench Edit → Preferences → SQL Editor → DBMS

Workbench Edit → Preferences → SSH → Timeouts

My default SSH timeouts were set very low and were causing some (but apparently not all) of my timeout problems. Afterwards, don’t forget to restart MySQL Workbench!

Finally, it may be worth contacting your DB admin and asking them to increase the wait_timeout and interactive_timeout properties in mysql itself via my.conf plus a mysql restart, or to do a global SET if restarting mysql is not an option.

Hope this helps!

Answer 26

Three things to follow up on and make sure of:

  1. Do multiple queries show the lost connection?
  2. How are you using SET in your query in MySQL?
  3. How do you run a DELETE + UPDATE query at the same time?

Answers:

  1. Always try to remove the definer, since MySQL creates its own definer; and if several tables are involved in the update, try to make it a single query, since multiple queries sometimes show a lost connection.
  2. Always SET the value at the top, but after the DELETE if its condition does not involve the SET value.
  3. Use DELETE FIRST, THEN UPDATE IF BOTH OF THESE OPERATIONS ARE BEING DONE ON DIFFERENT TABLES.

Answer 27

Check:

OOM on /var/log/messages ,
modify innodb_buffer_pool_size value ; when load data , use 50% of os mem ; 

Hope this helps.

Answer 28

This usually means that you have "incompatibilities with the current version of MySQL Server"; see mysql_upgrade. I ran into this problem and simply had to run:

mysql_upgrade --password

The documentation states that "mysql_upgrade should be executed each time you upgrade MySQL."
