Upstream request timeout error

I have Puma running as the upstream app server and Riak as my background db cluster. When I send a request that map-reduces a chunk of data for about 25K users and returns it from Riak to the app, I get an error in the Nginx log:

upstream timed out (110: Connection timed out) while reading
response header from upstream

If I query my upstream directly without nginx proxy, with the same request, I get the required data.

The Nginx timeout occurs once the proxy is put in.
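
(A quick way to compare the two paths, sketched here with placeholder values: the /report path stands in for the slow endpoint, and the ports come from the config below.)

    # time the same request against Puma directly and through the nginx proxy
    time curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:3000/report
    time curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:81/report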

**nginx.conf**

http {
    keepalive_timeout 10m;
    proxy_connect_timeout  600s;
    proxy_send_timeout  600s;
    proxy_read_timeout  600s;
    fastcgi_send_timeout 600s;
    fastcgi_read_timeout 600s;
    include /etc/nginx/sites-enabled/*.conf;
}

**virtual host conf**

upstream ss_api {
  server 127.0.0.1:3000 max_fails=0  fail_timeout=600;
}

server {
  listen 81;
  server_name xxxxx.com; # change to match your URL

  location / {
    # match the name of upstream directive which is defined above
    proxy_pass http://ss_api; 
    proxy_set_header  Host $http_host;
    proxy_set_header  X-Real-IP  $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache cloud;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    proxy_cache_bypass $http_authorization;
    proxy_cache_bypass http://ss_api/account/;
    add_header X-Cache-Status $upstream_cache_status;
  }
}

Nginx has a bunch of timeout directives. I don’t know if I’m missing something important. Any help would be highly appreciated….

asked Sep 11, 2013 at 12:01 by user2768537

This happens because your upstream takes too long to answer the request, so NGINX decides the upstream has already failed to process it and responds with an error.
Just add and increase proxy_read_timeout in the location config block.
The same thing happened to me, and I used a 1-hour timeout for an internal app at work:

proxy_read_timeout 3600;

With this, NGINX will wait for an hour (3600s) for its upstream to return something.
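
For completeness, a minimal sketch of where the directive goes (the upstream address here is just a placeholder for your own proxy_pass target):

    location / {
        proxy_pass http://127.0.0.1:3000;   # placeholder upstream
        proxy_read_timeout 3600;            # wait up to an hour for the response
    }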

answered Sep 13, 2017 at 19:51 by Sergio Gonzalez

You should refrain from increasing the timeouts; I doubt your backend server's response time is the issue here in any case.

I got around this issue by clearing the Connection keep-alive header and specifying the HTTP version, as per the answer here:
https://stackoverflow.com/a/36589120/479632

server {
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;

        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_pass http://localhost:5000;
    }
}

Unfortunately I can't explain why this works, and I didn't manage to decipher it from the docs mentioned in the linked answer either, so if anyone has an explanation I'd be very interested to hear it.

answered Apr 13, 2016 at 5:17 by Almund

First figure out which upstream is slow by consulting the nginx error log file, and adjust the read timeout accordingly.
In my case it was FastCGI:

2017/09/27 13:34:03 [error] 16559#16559: *14381 upstream timed out (110: Connection timed out) while reading response header from upstream, client:xxxxxxxxxxxxxxxxxxxxxxxxx", upstream: "fastcgi://unix:/var/run/php/php5.6-fpm.sock", host: "xxxxxxxxxxxxxxx", referrer: "xxxxxxxxxxxxxxxxxxxx"

So I had to adjust fastcgi_read_timeout in my server configuration:

 location ~ \.php$ {
     fastcgi_read_timeout 240;
     ...
 }

See: original post

answered Sep 27, 2017 at 14:19 by Ruberandinda Patience

In your case a little optimization in the proxy helps, or you can use the "# time out settings" below:

location / 
{        

  # time out settings
  proxy_connect_timeout 159s;
  proxy_send_timeout   600;
  proxy_read_timeout   600;
  proxy_buffer_size    64k;
  proxy_buffers     16 32k;
  proxy_busy_buffers_size 64k;
  proxy_temp_file_write_size 64k;
  proxy_pass_header Set-Cookie;
  proxy_redirect     off;
  proxy_hide_header  Vary;
  proxy_set_header   Accept-Encoding '';
  proxy_ignore_headers Cache-Control Expires;
  proxy_set_header   Referer $http_referer;
  proxy_set_header   Host   $host;
  proxy_set_header   Cookie $http_cookie;
  proxy_set_header   X-Real-IP  $remote_addr;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

answered Dec 19, 2013 at 10:13 by Dimitrios

I would recommend looking at the error logs, specifically at the upstream part, which shows the specific upstream that is timing out.

Then based on that you can adjust proxy_read_timeout, fastcgi_read_timeout or uwsgi_read_timeout.

Also make sure your config is loaded.

More details here Nginx upstream timed out (why and how to fix)
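
One way to confirm the directives are actually in the loaded configuration (a hedged sketch; assumes the nginx binary is on your PATH):

    nginx -t                            # syntax-check the configuration files
    nginx -T | grep read_timeout        # dump the effective config and confirm the directives are present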

answered Apr 22, 2017 at 17:36 by gansbrest

I think this error can happen for various reasons, but it can be specific to the module you're using. For example, I saw this using the uwsgi module, so I had to set uwsgi_read_timeout.
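
An illustrative sketch of where that directive goes (the socket path is a placeholder for your own uwsgi_pass target):

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/app.sock;   # placeholder socket
        uwsgi_read_timeout 300s;
    }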

answered Oct 10, 2013 at 10:50 by Richard

As many others have pointed out here, increasing the timeout settings for NGINX can solve your issue.

However, increasing your timeout settings might not be as straightforward as many of these answers suggest. I myself faced this issue and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This did not help me a single bit; there was no apparent change in NGINX's timeout settings. Now, many hours later, I finally managed to fix this problem.

The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn’t exist, you should create it). I used the same settings as suggested in the thread:

proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
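
This only takes effect on setups where the main nginx.conf already pulls in that directory; if yours doesn't, a hedged sketch of the include line inside the http block looks like this:

    http {
        ...
        include /etc/nginx/conf.d/*.conf;   # makes timeout.conf above take effect
    }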

answered Feb 9, 2019 at 9:54 by Andreas Forslöw

Please also check the keepalive_timeout of the upstream server.

I got a similar issue: random 502s, with Connection reset by peer errors in the nginx logs, happening when the server was under heavy load. Eventually I found it was caused by a mismatch between nginx's and the upstream's (gunicorn in my case) keepalive_timeout values. Nginx was at 75s and the upstream only a few seconds. This caused the upstream to sometimes hit its own timeout and drop the connection, while nginx didn't understand why.

Raising the upstream server's value to match nginx's solved the issue.
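
A minimal sketch of matching the two sides, assuming a gunicorn upstream (the values are examples only):

    # nginx side: default keepalive_timeout is 75s
    keepalive_timeout 75s;

    # gunicorn side: keep idle connections open at least as long
    gunicorn app:app --keep-alive 75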

answered Jul 9, 2021 at 15:29 by Eino Gourdin

If you're using an AWS EC2 instance running Linux like I am, you may also need to restart Nginx for the changes to take effect after adding proxy_read_timeout 3600; to /etc/nginx/nginx.conf. I did: sudo systemctl restart nginx

answered Jul 15, 2022 at 18:17 by Amon

I had the same problem, and it turned out to be an everyday error in the Rails controller. I don't know why, but on production, Puma runs into the error again and again, causing the message:

upstream timed out (110: Connection timed out) while reading response header from upstream

Probably because Nginx tries to get the data from Puma again and again. The funny thing is that the error caused the timeout message even if I'm calling a different action in the controller, so a single typo blocks the whole app.

Check your log/puma.stderr.log file to see if that is the situation.
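
For example (the log path is the one mentioned above, relative to the Rails app root):

    # watch Puma's stderr log for a repeating exception while reproducing the request
    tail -f log/puma.stderr.log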

answered Dec 26, 2016 at 19:28 by aarkerio

Hopefully it helps someone:
I ran into this error and the cause was wrong permissions on the log folder for php-fpm. After changing them so php-fpm could write to it, everything was fine.

answered Jan 3, 2019 at 1:08 by Maurício Otta

On our side it was using SPDY with the proxy cache. When the cache expires we get this error until the cache has been updated.

answered Jun 18, 2014 at 21:26 by timhaak

For the upstream proxy timeout, I tried the above settings but they didn't work.

Setting resolver_timeout worked for me, knowing it was taking 30s to produce the upstream timeout message, e.g. me.atwibble.com could not be resolved (110: Operation timed out).

http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver_timeout
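
For illustration, a hedged sketch of the resolver directives in a server block (the resolver address is a placeholder for your own DNS server):

    server {
        resolver 8.8.8.8;        # placeholder DNS server
        resolver_timeout 5s;     # fail fast instead of waiting the default 30s
        ...
    }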

answered Nov 25, 2019 at 13:44 by David Mercer

We faced this issue while saving content (custom content type), which gave a timeout error. We fixed it by adding all the above timeouts, setting the HTTP client config to 600s, and increasing memory for the PHP process to 3 GB.

answered Dec 10, 2021 at 5:44 by Jagdish Bhadra

If you are using WSL 2 on Windows 10, check your version with this command:

wsl -l -v

You should see 2 under the version column.
If you don't, you need to install wsl_update_x64.
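
After installing the update, a distribution can be converted to version 2 (illustrative; replace Ubuntu with the distribution name shown by wsl -l -v):

    wsl --set-version Ubuntu 2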

answered Jan 22, 2022 at 6:25 by salman

I tested proxy_read_timeout 100s and found the timeout at 100s in the access log; with 210s set, it appeared at 210s, so set 600s or longer as appropriate for your site.

answered Mar 10 at 5:17 by Cooperd

Add a config line to the location block or to nginx.conf, for example:
proxy_read_timeout 900s;

answered Mar 19, 2021 at 10:57 by leiting.liu


The Nginx “upstream timed out (110: Connection timed out)” error appears when nginx is not able to receive an answer from the web server.

As a part of our Server Management Services, our Support Engineers help webmasters fix Nginx-related errors regularly.

Let us today discuss the possible reasons and fixes for this error.

What causes Nginx “upstream timed out” error

The upstream timeout error generally triggers when the upstream takes too long to answer the request and NGINX decides the upstream has already failed to process it. A typical error message looks like this:

upstream timed out (110: Connection timed out) while reading response header from upstream

Some of the common reasons for this error include:

  • Server resource usage
  • PHP memory limits
  • Server software timeouts

Let us now discuss how our Support Engineers fix this error in each of the cases.

How to fix Nginx “upstream timed out” error

Server resource usage

One of the most common reasons for this error is server resource usage. Often heavy load makes the server slow to respond to requests.

When it takes too much time to respond, in a reverse proxy setup Nginx thinks that the request already failed.

We already have some articles discussing the steps to troubleshoot server load here.

Our Support Engineers also make sure that there is enough RAM on the server. To check that, they use the top, htop, or free -m commands.
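
For example:

free -m     # memory and swap usage in megabytes
top         # live per-process view of CPU and memory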

In addition, we also suggest optimizing the website by installing a good caching plugin. This helps to reduce the overall resource usage on the server.

PHP memory limits

At times, this error could be related only to specific PHP code. Our Support Engineers cross-check the PHP-FPM error log in such cases for a more detailed analysis of the error.

Sometimes, PHP would be using too much RAM and the PHP-FPM process gets killed. In such cases, we recommend making sure that the PHP memory limit is not too high compared to the actual available memory on the Droplet.

For example, if you have 1GB of RAM available your PHP memory limit should not be more than 64MB.
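
A hedged sketch of checking and setting this (the php.ini location varies by distribution and PHP version):

php -i | grep -i memory_limit     # check the current limit
# then in php.ini (or the pool's php_admin_value), e.g.:
# memory_limit = 64M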

Server software timeouts

Nginx upstream errors can also occur when a web server takes more time to complete the request.

By that time, the caching server will reach its timeout values (the timeout for the connection between the proxy and the upstream server).

Slow queries can lead to such problems.

Our Support Engineers will fine-tune the following Nginx timeout values in the Nginx configuration file.

proxy_connect_timeout 1200s;
proxy_send_timeout 1200s;
proxy_read_timeout 1200s;
fastcgi_send_timeout 1200s;
fastcgi_read_timeout 1200s;

Once the timeout values are added, reload Nginx to apply these parameters.
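
For example:

nginx -t && nginx -s reload     # validate the configuration, then reload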

Conclusion

In short, Nginx upstream timed out triggers due to a number of reasons that include server resource usage and software timeouts. Today, we saw how our Support Engineers fix this error.


It has this log associated with the curl.

2019-06-03T23:24:59.708351Z	debug	adapters	HandleAuthorization: &InstanceMsg{Subject:&SubjectMsg{User:,Groups:,Properties:map[string]*istio_policy_v1beta11.Value{api_key: &Value{Value:&Value_StringValue{StringValue:<THE_API_KEY>,},},json_claims: &Value{Value:&Value_StringValue{StringValue:,},},},},Action:&ActionMsg{Namespace:default,Service:helloworld.default.svc.cluster.local,Method:GET,Path:/hello,Properties:map[string]*istio_policy_v1beta11.Value{},},Name:apigee-authorization.instance.istio-system,}	{"adapter": "<project>~dev"}
2019-06-03T23:24:59.708450Z	debug	adapters	HandleAuthorization: Subject: authorization.Subject{User:"", Groups:"", Properties:map[string]interface {}{"api_key":"<THE_API_KEY>...", "json_claims":""}}, Action: authorization.Action{Namespace:"default", Service:"helloworld.default.svc.cluster.local", Method:"GET", Path:"/hello", Properties:map[string]interface {}{}}	{"adapter": "<project>~dev"}
2019-06-03T23:24:59.708462Z	debug	adapters	Authenticate: key: <THE_API_KEY>..., claims: map[string]interface {}{}	{"adapter": "<project>~dev"}
2019-06-03T23:24:59.708801Z	debug	adapters	using api key from request	{"adapter": "<project>~dev"}
2019-06-03T23:24:59.708887Z	debug	adapters	Authenticate success: &{<some_hex_number> <THE_API_KEY>...  hello-istio-app [hello-istio-product] 2019-06-03 23:36:33 +0000 UTC  [] <THE_API_KEY>...}	{"adapter": "<project>~dev"}
2019-06-03T23:25:04.731803Z	debug	adapters	HandleAuthorization: &InstanceMsg{Subject:&SubjectMsg{User:,Groups:,Properties:map[string]*istio_policy_v1beta11.Value{api_key: &Value{Value:&Value_StringValue{StringValue:<THE_API_KEY>,},},json_claims: &Value{Value:&Value_StringValue{StringValue:,},},},},Action:&ActionMsg{Namespace:default,Service:helloworld.default.svc.cluster.local,Method:GET,Path:/hello,Properties:map[string]*istio_policy_v1beta11.Value{},},Name:apigee-authorization.instance.istio-system,}	{"adapter": "<project>~dev"}
2019-06-03T23:25:04.732029Z	debug	adapters	HandleAuthorization: Subject: authorization.Subject{User:"", Groups:"", Properties:map[string]interface {}{"json_claims":"", "api_key":"<THE_API_KEY>..."}}, Action: authorization.Action{Namespace:"default", Service:"helloworld.default.svc.cluster.local", Method:"GET", Path:"/hello", Properties:map[string]interface {}{}}	{"adapter": "<project>~dev"}
2019-06-03T23:25:04.732059Z	debug	adapters	Authenticate: key: <THE_API_KEY>..., claims: map[string]interface {}{}	{"adapter": "<project>~dev"}
2019-06-03T23:25:04.732329Z	debug	adapters	using api key from request	{"adapter": "<project>~dev"}
2019-06-03T23:25:04.732505Z	debug	adapters	Authenticate success: &{<some_hex_number> <THE_API_KEY>...  hello-istio-app [hello-istio-product] 2019-06-03 23:36:33 +0000 UTC  [] <THE_API_KEY>...}	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.709623Z	debug	adapters	HandleAnalytics: [&InstanceMsg{ApiProxy:helloworld.default.svc.cluster.local,ResponseStatusCode:503,ClientIp:&v1beta1.IPAddress{Value:[0 0 0 0 0 0 0 0 0 0 255 255 10 32 2 123],},RequestVerb:GET,RequestUri:/hello,RequestPath:,Useragent:curl/7.58.0,ClientReceivedStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:24:59.705691415Z,},ClientReceivedEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:24:59.705691415Z,},ClientSentStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.706293766Z,},ClientSentEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.706293766Z,},TargetSentStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:24:59.705691415Z,},TargetSentEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:24:59.705691415Z,},TargetReceivedStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.706293766Z,},TargetReceivedEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.706293766Z,},ApiClaims:map[string]string{json_claims: ,},ApiKey:<THE_API_KEY>,Name:apigee-analytics.instance.istio-system,}]	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.709717Z	debug	adapters	HandleAnalytics: 1 instances	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.709766Z	debug	adapters	Authenticate: key: <THE_API_KEY>..., claims: map[string]interface {}{}	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.710456Z	debug	adapters	using api key from request	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.710536Z	debug	adapters	Authenticate success: &{<some_hex_number> <THE_API_KEY>...  hello-istio-app [hello-istio-product] 2019-06-03 23:36:33 +0000 UTC  [] <THE_API_KEY>...}	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.710996Z	debug	adapters	new bucket created: /tmp/apigee-istio/analytics/<project>/dev/temp/<project>~dev/1559604305-749020971	{"adapter": "<project>~dev"}
2019-06-03T23:25:05.711795Z	debug	adapters	1 records written to /tmp/apigee-istio/analytics/<project>/dev/temp/<project>~dev/1559604305-749020971	{"adapter": "<project>~dev"}
2019-06-03T23:25:09.748680Z	debug	adapters	HandleAuthorization: &InstanceMsg{Subject:&SubjectMsg{User:,Groups:,Properties:map[string]*istio_policy_v1beta11.Value{api_key: &Value{Value:&Value_StringValue{StringValue:<THE_API_KEY>,},},json_claims: &Value{Value:&Value_StringValue{StringValue:,},},},},Action:&ActionMsg{Namespace:default,Service:helloworld.default.svc.cluster.local,Method:GET,Path:/hello,Properties:map[string]*istio_policy_v1beta11.Value{},},Name:apigee-authorization.instance.istio-system,}	{"adapter": "<project>~dev"}
2019-06-03T23:25:09.748884Z	debug	adapters	HandleAuthorization: Subject: authorization.Subject{User:"", Groups:"", Properties:map[string]interface {}{"api_key":"<THE_API_KEY>...", "json_claims":""}}, Action: authorization.Action{Namespace:"default", Service:"helloworld.default.svc.cluster.local", Method:"GET", Path:"/hello", Properties:map[string]interface {}{}}	{"adapter": "<project>~dev"}
2019-06-03T23:25:09.748923Z	debug	adapters	Authenticate: key: <THE_API_KEY>..., claims: map[string]interface {}{}	{"adapter": "<project>~dev"}
2019-06-03T23:25:09.749212Z	debug	adapters	using api key from request	{"adapter": "<project>~dev"}
2019-06-03T23:25:09.749350Z	debug	adapters	Authenticate success: &{<some_hex_number> <THE_API_KEY>...  hello-istio-app [hello-istio-product] 2019-06-03 23:36:33 +0000 UTC  [] <THE_API_KEY>...}	{"adapter": "<project>~dev"}
2019-06-03T23:25:10.732370Z	debug	adapters	HandleAnalytics: [&InstanceMsg{ApiProxy:helloworld.default.svc.cluster.local,ResponseStatusCode:503,ClientIp:&v1beta1.IPAddress{Value:[0 0 0 0 0 0 0 0 0 0 255 255 10 32 2 123],},RequestVerb:GET,RequestUri:/hello,RequestPath:,Useragent:curl/7.58.0,ClientReceivedStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.729572118Z,},ClientReceivedEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.729572118Z,},ClientSentStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.729405666Z,},ClientSentEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.729405666Z,},TargetSentStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.729572118Z,},TargetSentEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:04.729572118Z,},TargetReceivedStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.729405666Z,},TargetReceivedEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.729405666Z,},ApiClaims:map[string]string{json_claims: ,},ApiKey:<THE_API_KEY>,Name:apigee-analytics.instance.istio-system,}]	{"adapter": "<project>~dev"}
2019-06-03T23:25:10.732529Z	debug	adapters	HandleAnalytics: 1 instances	{"adapter": "<project>~dev"}
2019-06-03T23:25:10.732557Z	debug	adapters	Authenticate: key: <THE_API_KEY>..., claims: map[string]interface {}{}	{"adapter": "<project>~dev"}
2019-06-03T23:25:10.732742Z	debug	adapters	using api key from request	{"adapter": "<project>~dev"}
2019-06-03T23:25:10.732850Z	debug	adapters	Authenticate success: &{<some_hex_number> <THE_API_KEY>...  hello-istio-app [hello-istio-product] 2019-06-03 23:36:33 +0000 UTC  [] <THE_API_KEY>...}	{"adapter": "<project>~dev"}
2019-06-03T23:25:10.733521Z	debug	adapters	1 records written to /tmp/apigee-istio/analytics/<project>/dev/temp/<project>~dev/1559604305-749020971	{"adapter": "<project>~dev"}
2019-06-03T23:25:15.746881Z	debug	adapters	HandleAnalytics: [&InstanceMsg{ApiProxy:helloworld.default.svc.cluster.local,ResponseStatusCode:503,ClientIp:&v1beta1.IPAddress{Value:[0 0 0 0 0 0 0 0 0 0 255 255 10 32 2 123],},RequestVerb:GET,RequestUri:/hello,RequestPath:,Useragent:curl/7.58.0,ClientReceivedStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.742385115Z,},ClientReceivedEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.742385115Z,},ClientSentStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:14.743136165Z,},ClientSentEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:14.743136165Z,},TargetSentStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.742385115Z,},TargetSentEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:09.742385115Z,},TargetReceivedStartTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:14.743136165Z,},TargetReceivedEndTimestamp:&v1beta1.TimeStamp{Value:2019-06-03T23:25:14.743136165Z,},ApiClaims:map[string]string{json_claims: ,},ApiKey:<THE_API_KEY>,Name:apigee-analytics.instance.istio-system,}]	{"adapter": "<project>~dev"}
2019-06-03T23:25:15.747816Z	debug	adapters	HandleAnalytics: 1 instances	{"adapter": "<project>~dev"}
2019-06-03T23:25:15.747910Z	debug	adapters	Authenticate: key: <THE_API_KEY>..., claims: map[string]interface {}{}	{"adapter": "<project>~dev"}
2019-06-03T23:25:15.748425Z	debug	adapters	using api key from request	{"adapter": "<project>~dev"}
2019-06-03T23:25:15.748549Z	debug	adapters	Authenticate success: &{<some_hex_number> <THE_API_KEY>...  hello-istio-app [hello-istio-product] 2019-06-03 23:36:33 +0000 UTC  [] <THE_API_KEY>...}	{"adapter": "<project>~dev"}
2019-06-03T23:25:15.748981Z	debug	adapters	1 records written to /tmp/apigee-istio/analytics/<project>/dev/temp/<project>~dev/1559604305-749020971	{"adapter": "<project>~dev"}

(I have scrubbed some sensitive information in <>)
It also generates this message every 2 min, independent of what I’m doing:

2019-06-03T23:24:34.327067Z	debug	adapters	Looper work running	{"adapter": "<project>~dev"}
2019-06-03T23:24:34.327258Z	debug	adapters	retrieving products from: https://<project>-dev.apigee.net/istio-auth/products	{"adapter": "<project>~dev"}
2019-06-03T23:24:34.824506Z	error	adapters	unable to unmarshal JSON response '{… <followed by a super long json string>
2019-06-03T23:24:34.827157Z	error	adapters	Error retrieving products: invalid character '"' after array element	{"adapter": "<project>~dev"}
2019-06-03T23:24:34.827192Z	debug	adapters	Looper work scheduled to run in 2m0s	{"adapter": "<project>~dev"}

The third line here is followed by a super long json string which looks like a log of Apigee API product creation (who created what when).

Practice shows that the upstream timed out (110: Connection timed out) error can occur in two cases. The name of the error itself points to the solution: you need to increase the timeout in the web server settings.


Nginx as a proxy or reverse proxy

In this case, the error can occur if the timeout for reading the response from the proxied server has expired.

In other words, Nginx sent the request and did not get a response in time. If you are sure your web application works correctly, increase this timeout in the nginx.conf configuration file in the location section:

location / {

...
proxy_send_timeout 150;
proxy_read_timeout 150;
...

}

Setting the timeouts for sending and reading the response, in seconds

Nginx with FastCGI servers attached

In this case, the error occurs if the timeout for reading the response from the attached services or applications, PHP-FPM for example, has expired.


The solution is as trivial as in the first case: you need to increase the timeout:

location ~* \.php$ {

include fastcgi_params;
...
fastcgi_read_timeout 150;
...

}

The timeout for reading the response, in seconds

The most important thing

Before increasing the timeout, which in this case defaults to 60 s, you should check that all components and modules are working properly. If everything works as it should, increasing the timeout is the simplest solution to the problem.

This text was written several years ago. The tools and software mentioned here may have been updated since then. Please check that they are still current.

upstream timed out (110) is an error that occurs when the web server's limit on waiting for a script to finish is exceeded. On a connection timeout the client will often see a 504 error.

upstream timed out 110 connection timed out in Nginx

In the logs, the errors will look like this:

grep 'upstream timed out' /var/log/nginx/error.log

2018/08/13 17:01:03 [error] 32147#32147: *1197966 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 123.123.123.123, server: example.com, request: "POST /api.php/shop.product.update?id=3268 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7.0-example.sock", host: "example.com"

If such records appear, first of all you need to establish the cause. There are two options: 1) the code should not behave this way, it is waiting for a response from some unavailable resource or database; 2) the long execution time is expected.

In the second case it is enough to increase the limits. This usually has to be done for product uploads to the site, which can run for several hours.

How to fix the error if PHP-FPM is used

If the scripts are run by PHP-FPM, change the value of the fastcgi_read_timeout parameter, e.g. fastcgi_read_timeout 400; (see the directive documentation).

The value is in seconds and can be increased significantly; usually 400 is enough.

location / {
index index.php;
try_files $uri $uri/ =404;

fastcgi_connect_timeout 20;
fastcgi_send_timeout 120;
fastcgi_read_timeout 400;

}

The limit can be exceeded when PHP scripts run for a long time.

How to fix the error if Apache is used

When proxying, you need to add the proxy_read_timeout 150; directive.

As in the first case, the value is given in seconds and means the time limit between read operations.

location / {
index index.php;
try_files $uri $uri/ =404;

proxy_read_timeout 150;

}

After changing the configuration, the web server needs to be reloaded with the nginx -s reload command.

From time to time I get "upstream timed out (110: Connection timed out) while connecting to upstream" errors, especially when some bot is indexing the pages. I understand that this means the server did not return a result in time. But the pages themselves are served very quickly, there are no slow queries on them, and the server is not particularly loaded. What else can cause such errors? Where should I dig?


  • Asked more than three years ago
  • 4150 views

Options:
1) the backend that nginx talks to runs out of connections
2) the server runs out of file descriptors (a quick check is sketched below)

As usual there is too little information, so the problem can't really be diagnosed from this.
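
A hedged sketch of checking the descriptor limits (illustrative commands; adjust to your setup):

ulimit -n                                          # soft limit for the current shell
cat /proc/sys/fs/file-max                          # system-wide maximum
grep "open files" /proc/$(pgrep -o nginx)/limits   # limits of the running nginx master process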

As a rule, the error is cured by increasing
proxy_read_timeout 300s;
Also have a look at these directives:
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
They all default to 60 seconds.
Plus, if you have php-fpm, there may also be
fastcgi_send_timeout
fastcgi_read_timeout
Look at everything, test, verify.



My app runs properly when I open /. I get this error when I open ?url=https://example.com, which triggers the use of Headless Chrome.

I used kubectl get pods and then kubectl logs [POD_NAME] -c user-container, and I see crashes in my pods:

(node:16) UnhandledPromiseRejectionWarning: Error: net::ERR_CONNECTION_CLOSED at https://example.com
    at navigate (/usr/src/app/node_modules/puppeteer/lib/Page.js:592:37)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:188:7)
(node:16) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function wit
hout a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:16) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will
terminate the Node.js process with a non-zero exit code.
(node:16) UnhandledPromiseRejectionWarning: Error: net::ERR_CONNECTION_CLOSED at https://example.com
    at navigate (/usr/src/app/node_modules/puppeteer/lib/Page.js:592:37)    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:188:7)
(node:16) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)

By default, can the code that runs in my container fetch resources from the internet?

In any case, we should improve this upstream request timeout error message.

Upstream timed out (110: Connection timed out) while reading response header from upstream: that was the exact message I saw in my system logs today.

I knew I had already fixed this many times, but I didn't have any documentation about it; that's why I'm writing this post today, to have a definitive and fast way to find the fix for the Nginx upstream timed out error if it happens again in the future.

This error can be seen and searched over the internet in the following ways:

  • nginx upstream timed out error and fix
  • upstream timed out (110: Connection timed out) while reading response header from upstream
  • nginx error while reading response header from upstream

This nginx upstream timed out error was found on the same plain box that I use for the nixcp.com website. After investigating, I found that it was caused by a low value in my php-fpm configuration.

In order to fix it I just altered the fastcgi_read_timeout variable; this is my current configuration:

location ~* \.php$ {
      include fastcgi_params;
      fastcgi_index index.php;
      fastcgi_read_timeout 160;
      fastcgi_pass 127.0.0.1:9001;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

Solution for reverse proxy on cPanel or plain servers

If you have a different configuration, like a reverse proxy (which can be done using Nginx plugins for cPanel), you can alter the timeout variables too by setting a higher value for the proxy_read_timeout directive.

Just add this directive, or modify it to a higher value, inside your nginx vhost configuration, as you see below.

location / {
...
...
proxy_read_timeout 160;
...
...
}

For both solutions restart Nginx to apply changes:

service nginx restart

The proposed value of "160" is a value that worked for me; you may need to increase or decrease it depending on how your apps run.
That's all. At this point you should know how to fix this common Nginx upstream timed out (110: Connection timed out) while reading response header from upstream error.

Do you know other ways to fix this? Please share your knowledge with us.

Further reading:

  • Nginx official docs for ngx_http_upstream_module

