No live upstreams while connecting to upstream (nginx error)

I have a really weird issue with NGINX.

I have the following upstream.conf file, with the following upstream:

upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;

    server mymachine:6006 ;
}

In locations.conf:

location ~ "^/files(?<command>.+)/[0123]" {
        rewrite ^ $command break;
        proxy_pass https://files_1 ;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

In /etc/hosts:

127.0.0.1               localhost               mymachine

When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.

But when I send to the NGINX file server a request, I get the following error:

no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"

But the upstream is OK. What is the problem?

A 502 Bad Gateway error is displayed when switching between site pages, and sometimes on the home page, though never for the first request to the home page; it only happens when another page redirects to it. It also happens for some JavaScript files.

Load balancing is configured across two upstreams, php01 and php02, both of which are Apache servers.

When I checked the error log, I found:

no live upstreams while connecting to upstream

[error] 27212#0: *314 no live upstreams while connecting to   upstream, client: ip_address , server: example.com, request: "GET / HTTP/1.1", upstream: "http://example.com", host: "example.com", referrer: "http://example.com/mypages/"

and this is the load-balancing server configuration:

  upstream example.com  {
    #  ip_hash;
      server php01 max_fails=3 fail_timeout=15s;
      server php02 max_fails=3 fail_timeout=15s;
    }

    server {
      listen IP:80;
      server_name example.com;
      access_log /var/log/nginx/example.com.access;
      error_log /var/log/nginx/example.com.error error;

     location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass  http://$server_name/$uri;
        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
        proxy_no_cache $http_pragma $http_authorization;
      }

    }

I searched for hours and found nothing helpful. My upstreams are up and there are no problems with them.

I am tired of this problem. For 2 weeks I have spent a lot of time trying to fix a proxy that had been in operation for more than 2 years.

I have updated the docker-compose files to the latest versions shown in the documentation, but this has not produced any improvement, so I am asking for your support.

The problem is that when trying to access some sites, nginx returns a 500 or a 503 error.

This is the nginx container configuration:

version: '2'
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge
  docker-gen:
    image: nginxproxy/docker-gen
    container_name: nginx-proxy-gen
    command: -notify-sighup nginx-proxy -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    restart: always
    volumes_from:
      - nginx-proxy
    volumes:
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge
  acme-companion:
    image:  nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    restart: always
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - nginx-proxy
    depends_on:
      - "nginx-proxy"
    network_mode: bridge
    environment:
      DEFAULT_EMAIL: server@example.com
      NGINX_DOCKER_GEN_CONTAINER: nginx-proxy-gen
  whoami:
    image: jwilder/whoami
    restart: always
    expose:
      - "8000"
    environment:
      - VIRTUAL_HOST=whoami.local
      - VIRTUAL_PORT=8000
volumes:
  conf:
  vhost:
  html:
  dhparam:
  certs:
  acme:


networks:
  default:
    external:
      name: nginx-proxy

This is the client container:

version: '2' # version of docker-compose to use
services: # configuring each container
  TS-DB1: # name of our mysql container
    image: mariadb:latest # which image to pull
    volumes: # data to map to the container
      - ./database/:/var/lib/mysql # where our data lives on the host
    restart: always # always restart the container after reboot
    environment: # environment variables -- mysql options in this case
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: password

  TS-WP1: # name of our wordpress container
    depends_on: # container dependencies that need to be running first
      - TS-DB1
    image: wordpress:latest # image used by our container
    restart: always
    environment:
      VIRTUAL_HOST: example.com, www.example.com
      VIRTUAL_PORT: 8003
      LETSENCRYPT_HOST: www.example.com,example.com,cloud.example.com,tienda.example.com
      LETSENCRYPT_EMAIL: server@example.com
      WORDPRESS_DB_HOST: TS-DB1:3306 # default mysql port
      WORDPRESS_DB_NAME: dbname # matches MYSQL_DATABASE in the db container
      WORDPRESS_DB_USER: dbuser # matches MYSQL_USER in the db container
      WORDPRESS_DB_PASSWORD: password # matches the password set in the db container

    volumes: # this is where we tell Docker what to pay attention to
      - ./html:/var/www/html # mapping our custom theme to the container
      - ./php.ini:/usr/local/etc/php/conf.d/uploads.ini
networks:
  default:
    external:
      name: nginx-proxy

When I create the nginx container with docker-compose, it dies after a few seconds with this message:

nginx-proxy       | WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
nginx-proxy       | is being generated in the background.  Once the new dhparam.pem is in place, nginx will be reloaded.
nginx-proxy       | forego      | starting dockergen.1 on port 5000
nginx-proxy       | forego      | starting nginx.1 on port 5100
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: using the "epoll" event method
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: nginx/1.21.0
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: built by gcc 8.3.0 (Debian 8.3.0-6)
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: OS: Linux 4.15.0-144-generic
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker processes
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 40
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 41
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 42
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 43
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:23 Generated '/etc/nginx/conf.d/default.conf' from 4 containers
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:23 Running 'nginx -s reload'
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 1 (SIGHUP) received from 45, reconfiguring
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: reconfiguring
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:23 Watching docker events
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: using the "epoll" event method
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker processes
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 48
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 49
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 50
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: start worker process 51
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 41#41: gracefully shutting down
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 41#41: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 42#42: gracefully shutting down
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 42#42: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 40#40: gracefully shutting down
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 40#40: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 42#42: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 40#40: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 43#43: gracefully shutting down
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 43#43: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 43#43: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 41#41: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 17 (SIGCHLD) received from 40
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: worker process 40 exited with code 0
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: worker process 42 exited with code 0
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 29 (SIGIO) received
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 17 (SIGCHLD) received from 43
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: worker process 43 exited with code 0
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 29 (SIGIO) received
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 17 (SIGCHLD) received from 41
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: worker process 41 exited with code 0
nginx-proxy       | nginx.1     | 2021/06/20 19:33:23 [notice] 34#34: signal 29 (SIGIO) received
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:24 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:24 Received event start for container c61e5ec61dd2
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:24 Received event start for container 9e41f8f62cfd
nginx-proxy       | forego      | sending SIGTERM to dockergen.1
nginx-proxy       | forego      | sending SIGTERM to nginx.1
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 48#48: signal 15 (SIGTERM) received from 1, exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 50#50: signal 15 (SIGTERM) received from 1, exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 48#48: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 50#50: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 48#48: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 50#50: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 34#34: signal 15 (SIGTERM) received from 1, exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 49#49: signal 15 (SIGTERM) received from 1, exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 49#49: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 49#49: exit
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 51#51: signal 15 (SIGTERM) received from 1, exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 51#51: exiting
nginx-proxy       | nginx.1     | 2021/06/20 19:33:24 [notice] 51#51: exit
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:24 Received signal: terminated
nginx-proxy       | dockergen.1 | 2021/06/20 19:33:24 Received signal: terminated

When trying to access a site from port 80 or 443, the following error appears:

On port 80:

nginx-proxy       | nginx.1     | example.com 192.168.1.231 - - [20/Jun/2021:20:38:51 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "example.com-upstream"
nginx-proxy       | nginx.1     | 2021/06/20 20:38:51 [error] 45#45: *42 no live upstreams while connecting to upstream, client: 192.168.1.231, server: example.com, request: "GET / HTTP/1.1", upstream: "http://example.com-upstream/", host: "example.com"

On port 443:

nginx-proxy       | nginx.1     | example.com 192.168.1.231 - - [20/Jun/2021:20:39:31 +0000] "GET / HTTP/2.0" 500 177 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
nginx-proxy       | nginx.1     | example.com 192.168.1.231 - - [20/Jun/2021:20:39:31 +0000] "GET /favicon.ico HTTP/2.0" 500 177 "https://example.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"

When I try to get a new SSL certificate with docker exec nginx-proxy-acme /app/force_renew, I get this:

nginx-proxy       | nginx.1     | cloud.example.com 34.221.255.206 - - [20/Jun/2021:20:44:28 +0000] "GET /.well-known/acme-challenge/EwjFpSqhqkuOcGVcPDwpE1HoPYOr8CFQlmaIUYWVj7g HTTP/1.1" 503 190 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nginx-proxy       | nginx.1     | cloud.example.com 3.142.122.14 - - [20/Jun/2021:20:44:28 +0000] "GET /.well-known/acme-challenge/EwjFpSqhqkuOcGVcPDwpE1HoPYOr8CFQlmaIUYWVj7g HTTP/1.1" 503 190 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nginx-proxy       | nginx.1     | cloud.example.com 66.133.109.36 - - [20/Jun/2021:20:44:28 +0000] "GET /.well-known/acme-challenge/EwjFpSqhqkuOcGVcPDwpE1HoPYOr8CFQlmaIUYWVj7g HTTP/1.1" 503 190 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nginx-proxy       | nginx.1     | cloud.example.com 18.184.29.122 - - [20/Jun/2021:20:44:28 +0000] "GET /.well-known/acme-challenge/EwjFpSqhqkuOcGVcPDwpE1HoPYOr8CFQlmaIUYWVj7g HTTP/1.1" 503 190 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nginx-proxy       | nginx.1     | tienda.example.com 66.133.109.36 - - [20/Jun/2021:20:44:33 +0000] "GET /.well-known/acme-challenge/sv7DLBk-Rp79GWz0oXno8JfRdtDdAQevJ9OumrChdCc HTTP/1.1" 503 190 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nginx-proxy       | nginx.1     | tienda.example.com 52.39.4.59 - - [20/Jun/2021:20:44:33 +0000] "GET /.well-known/acme-challenge/sv7DLBk-Rp79GWz0oXno8JfRdtDdAQevJ9OumrChdCc HTTP/1.1" 503 190 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"

and the force renew results in this:

docker exec nginx-proxy-acme /app/force_renew
Creating/renewal www.example.com certificates... (www.example.com example.com cloud.example.com tienda.example.com)
[Sun Jun 20 20:44:13 UTC 2021] Using CA: https://acme-v02.api.letsencrypt.org/directory
[Sun Jun 20 20:44:13 UTC 2021] Creating domain key
[Sun Jun 20 20:44:21 UTC 2021] The domain key is here: /etc/acme.sh/server@example.com/www.example.com/www.example.com.key
[Sun Jun 20 20:44:22 UTC 2021] Multi domain='DNS:www.example.com,DNS:example.com,DNS:cloud.example.com,DNS:tienda.example.com'
[Sun Jun 20 20:44:22 UTC 2021] Getting domain auth token for each domain
^B[Sun Jun 20 20:44:26 UTC 2021] Getting webroot for domain='www.example.com'
[Sun Jun 20 20:44:26 UTC 2021] Getting webroot for domain='example.com'
[Sun Jun 20 20:44:26 UTC 2021] Getting webroot for domain='cloud.example.com'
[Sun Jun 20 20:44:26 UTC 2021] Getting webroot for domain='tienda.example.com'
[Sun Jun 20 20:44:27 UTC 2021] www.example.com is already verified, skip http-01.
[Sun Jun 20 20:44:27 UTC 2021] example.com is already verified, skip http-01.
[Sun Jun 20 20:44:27 UTC 2021] Verifying: cloud.example.com
[Sun Jun 20 20:44:30 UTC 2021] cloud.example.com:Verify error:Invalid response from http://cloud.example.com/.well-known/acme-challenge/EwjFpSqhqkuOcGVcPDwpE1HoPYOr8CFQlmaIUYWVj7g [IP-PUBLIC]:
[Sun Jun 20 20:44:30 UTC 2021] Please check log file for more details: /dev/null

My environment configuration:

$ docker-compose -version
docker-compose version 1.29.2, build 5becea4c

$ docker -v
Docker version 20.10.7, build f0df350

$ docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
b4295e60714a   bridge        bridge    local
4728bf16f693   host          host      local
fdc61b1b1480   nginx-proxy   bridge    local
2e0bc41b39f7   none          null      local

$ uname -a
Linux serverhttp 4.15.0-144-generic #148-Ubuntu SMP Sat May 8 02:33:43 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

$ free
              total        used        free      shared  buff/cache   available
Mem:        3930672      392052     2879056        4356      659564     3320632
Swap:       2097148           0     2097148

$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             1932496        0   1932496   0% /dev
tmpfs             393068     1492    391576   1% /run
/dev/sda2       76395292 20924768  51546764  29% /
tmpfs            1965336        0   1965336   0% /dev/shm
tmpfs               5120        0      5120   0% /run/lock
tmpfs            1965336        0   1965336   0% /sys/fs/cgroup
/dev/loop0         89088    89088         0 100% /snap/core/4917
/dev/loop1         89984    89984         0 100% /snap/core/5742
/dev/loop2         90368    90368         0 100% /snap/core/5897
overlay         76395292 20924768  51546764  29% /var/lib/docker/overlay2/a9d62074a9ac482884984df110e1a3eea05a34b592f8f6456deb57b85e526391/merged
overlay         76395292 20924768  51546764  29% /var/lib/docker/overlay2/7abe60ee34ecdae6c4a56de63aedd626bcb81ddffec0f1a17d942a39782dca56/merged
tmpfs             393064        0    393064   0% /run/user/1000
overlay         76395292 20924768  51546764  29% /var/lib/docker/overlay2/d27753aa5a6b2234b83a327dd94e02dc26b6813cd5b78b9fc192a44292b327ff/merged
overlay         76395292 20924768  51546764  29% /var/lib/docker/overlay2/4a4fb8b03de81bc4666d93911f0ba31db2b35dcf95c9af304ff30c56c6bbf532/merged
overlay         76395292 20924768  51546764  29% /var/lib/docker/overlay2/47327e53cbd26df9b76c45bafeaef4f95c06726ba0ca1fa51a20e5ae0c6c33db/merged
overlay         76395292 20924768  51546764  29% /var/lib/docker/overlay2/76f276b83f73c21c663d3af5f60c0a00b32c613e1d35db0a08a46e675caf065d/merged

The result of these commands:

docker inspect yournginxproxycontainer
docker exec yournginxproxycontainer nginx -T
docker exec yournginxproxycontainer cat /proc/1/cpuset
docker exec yournginxproxycontainer cat /proc/self/cgroup
docker exec yournginxproxycontainer cat /proc/self/mountinfo

I don’t know what happened, since these services have worked for more than 2 years and now they are no longer working.

I hope you can guide me.

best regards.

On 08.01.2019 21:01, Eugene Toropov wrote:
> Good evening,
>
> Then we end up with a situation where the firewall lets some requests through and drops the rest. Moreover, at night until 9 a.m. nothing is dropped, while in the evening almost everything is. How does nginx determine that an upstream is alive? Any status other than 200?

Look at the description of proxy_next_upstream:

The directive also defines what is considered an unsuccessful attempt of communication with a server. The cases of error, timeout and invalid_header are always considered unsuccessful attempts, even if they are not specified in the directive. The cases of http_500, http_502, http_503, http_504, and http_429 are considered unsuccessful attempts only if they are specified in the directive. The cases of http_403 and http_404 are never considered unsuccessful attempts.
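As a rough illustration of the above, here is a minimal sketch with placeholder names and addresses (not taken from the original posts); only the statuses listed in proxy_next_upstream count as failed attempts:

    upstream backend {
        server 10.0.0.1:8080 max_fails=3 fail_timeout=15s;
        server 10.0.0.2:8080 max_fails=3 fail_timeout=15s;
    }

    server {
        listen 80;
        location / {
            # error, timeout and invalid_header always count as failures;
            # the 5xx statuses count only because they are listed here
            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
            proxy_pass http://backend;
        }
    }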

and the server directive from the upstream block documentation:

max_fails=number
    sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter for the server to be considered unavailable for a duration also set by the fail_timeout parameter. By default, the number of attempts is set to 1. The zero value disables the accounting of attempts. What is considered an unsuccessful attempt is defined by the proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream, scgi_next_upstream, memcached_next_upstream, and grpc_next_upstream directives.

If you really have only one upstream server, set max_fails=0 on it.

In general, look at the requests around the first 502. Most likely timeouts occurred somewhere, the single upstream server was marked as failed, and it dropped out of rotation for fail_timeout (10s by default).
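Applied, for example, to the single-server upstream from the first question, that advice would look roughly like this (a sketch, not a verified fix):

    upstream files_1 {
        # with a single server there is nothing to fail over to,
        # so disable the failure accounting entirely
        server mymachine:6006 max_fails=0;
    }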

/Alexey

_______________________________________________
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

For me, the issue was with my proxy_pass entry. I had

location / {
        ...
        proxy_pass    http://localhost:5001;
    }

This caused the upstream request to use the IPv4 localhost address or the IPv6 localhost address, but every now and again it would use the localhost hostname without a port number, resulting in the upstream error seen in the logs below.

[27/Sep/2018:16:23:37 +0100] <request IP> - - - <requested URI>  to: [::1]:5001: GET /api/hc response_status 200
[27/Sep/2018:16:24:37 +0100] <request IP> - - - <requested URI>  to: 127.0.0.1:5001: GET /api/hc response_status 200
[27/Sep/2018:16:25:38 +0100] <request IP> - - - <requested URI>  to: localhost: GET /api/hc response_status 502
[27/Sep/2018:16:26:37 +0100] <request IP> - - - <requested URI>  to: 127.0.0.1:5001: GET /api/hc response_status 200
[27/Sep/2018:16:27:37 +0100] <request IP> - - - <requested URI>  to: [::1]:5001: GET /api/hc response_status 200

As you can see, I get a 502 status for localhost:

Changing my proxy_pass to 127.0.0.1:5001 means that all requests now use IPv4 with a port.
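For reference, the changed location would look roughly like this (the other directives are unchanged and omitted here):

    location / {
        # ... other proxy settings unchanged ...
        proxy_pass    http://127.0.0.1:5001;
    }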

That StackOverflow answer was a big help in finding this, as it detailed changing the log format so that the upstream address used for each request becomes visible.
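The exact format from that answer isn't reproduced here, but a log_format along these lines (the name upstreamlog is just an example) exposes the upstream address nginx actually connected to, which is what makes the [::1]/127.0.0.1/localhost alternation above visible:

    http {
        log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name'
                               ' to: $upstream_addr: $request response_status $status';

        server {
            access_log /var/log/nginx/access.log upstreamlog;
            # listen / location directives as before
        }
    }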

I have seen this behavior many times during performance tests.

Under heavy workload, the performance of your upstream server(s) may not be enough, and the upstream module may mark the server(s) as unavailable.

The relevant parameters (server directive) are:

max_fails=number

sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter to consider the server unavailable for a duration also set by the fail_timeout parameter. By default, the number of unsuccessful attempts is set to 1. The zero value disables the accounting of attempts. What is considered an unsuccessful attempt is defined by the proxy_next_upstream directive.

fail_timeout=time

sets:

  • the time during which the specified number of unsuccessful attempts
    to communicate with the server should happen to consider the server
    unavailable;
  • and the period of time the server will be considered unavailable.

By default, the parameter is set to 10 seconds.
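A hedged sketch of how these parameters might be relaxed for such a load test, reusing the php01/php02 upstream from the configuration above (the numbers are illustrative only, not a recommendation from the original answer):

    upstream example.com {
        # tolerate more unsuccessful attempts per fail_timeout window before a
        # server is taken out of rotation; max_fails=0 would disable the accounting
        server php01 max_fails=10 fail_timeout=30s;
        server php02 max_fails=10 fail_timeout=30s;
    }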
