"Network is unreachable" problem #470

@VitMosin

Description

Bug Overview

I'm setting up nginx to cache files from S3 and I'm getting an intermittent error: sometimes requests succeed, sometimes they fail.

2025/10/22 08:03:11 [error] 81#81: *4256 connect() to [ipv6]:443 failed (101: Network is unreachable) while connecting to upstream, client: 10.200.186.53, server: , request: "GET /a14feca0-274e-4265-a1c0-c541e50c3b99 HTTP/1.1", upstream: "https://[ipv6]:443/a14feca0-274e-4265-a1c0-c541e50c3b99", host: "nginx-s3-gateway.development"

2025/10/22 08:03:11 [warn] 81#81: *4256 upstream server temporarily disabled while connecting to upstream, client: 10.200.186.53, server: , request: "GET /a14feca0-274e-4265-a1c0-c541e50c3b99 HTTP/1.1", upstream: "https://[ipv6]:443/a14feca0-274e-4265-a1c0-c541e50c3b99", host: "nginx-s3-gateway.development"

Everything runs fine locally in Docker with no errors. The error appears only at the cloud provider, both in a Kubernetes pod and on a standalone Docker node.
The S3 endpoint address is the same locally and at the cloud provider.
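
The log shows nginx resolving the S3 upstream to an IPv6 address and failing to open the connection, which typically means DNS returns AAAA records while the pod or node has no IPv6 egress route. A quick way to confirm, assuming curl and getent are available inside the container (s3.amazonaws.com here is a stand-in for the actual S3 endpoint):

    # Does DNS hand back IPv6 addresses for the endpoint from this host?
    getent ahosts s3.amazonaws.com

    # Compare IPv4-only and IPv6-only connectivity to the same endpoint.
    curl -4 -sS -o /dev/null -w 'IPv4: %{http_code}\n' https://s3.amazonaws.com/
    curl -6 -sS -o /dev/null -w 'IPv6: %{http_code}\n' https://s3.amazonaws.com/

If the IPv6 attempt fails with "Network is unreachable" while the IPv4 one returns an HTTP status, the environment simply has no IPv6 route and nginx should be kept off the AAAA results.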

Expected Behavior

The container logs no errors.

Steps to Reproduce the Bug

When I run the gateway in Docker at the cloud provider, it fails with the "Network is unreachable" error. When I run it locally, everything works.
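
A common workaround in this situation (not from the original report) is nginx's documented resolver parameter ipv6=off, which stops the resolver from returning AAAA records. A minimal sketch, assuming the gateway's nginx config includes extra files from /etc/nginx/conf.d/ and that a resolver set there takes effect; the DNS server address, mount path, and settings file are placeholders to adjust, and if the gateway sets its own resolver at server level, that one wins and must be changed instead:

    # Write a resolver directive that skips AAAA lookups
    # (ipv6=off is a documented parameter of nginx's resolver directive).
    # 10.0.0.2 is a placeholder -- use the DNS server the pod/node actually uses.
    cat > ipv4-resolver.conf <<'EOF'
    resolver 10.0.0.2 valid=30s ipv6=off;
    EOF

    # Hypothetical mount point; adjust to wherever the image includes config.
    docker run --rm -p 8080:80 \
      -v "$PWD/ipv4-resolver.conf:/etc/nginx/conf.d/ipv4-resolver.conf:ro" \
      --env-file ./settings.env \
      nginxinc/nginx-s3-gateway:latest-20251020

If the project exposes a resolver override through its environment variables, that is the cleaner route; check the gateway documentation rather than relying on this mount.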

Environment Details

  • Container image: nginxinc/nginx-s3-gateway:latest-20251020
  • Target deployment platform: Yandex Cloud
  • S3 backend implementation: AWS
  • Authentication method: AWS Credentials

Additional Context

No response
