Nginx Error — “Upstream sent Too Big Header”

Ambidextrous
3 min read · Feb 23, 2023


I recently faced an issue where I started getting 502 Bad Gateway errors when I tried to access the API. A 502 Bad Gateway error usually means that a gateway/proxy server such as Nginx received an invalid response from the upstream server (the destination server). The high-level flow of the end-to-end operation in my scenario was: browser → Nginx gateway server → Nginx Ingress Controller in Kubernetes → API.

As a first step, I checked my gateway servers and found the following error in the error logs:

tail -f /var/log/nginx/error_maviqitsolutions.log
2023/02/20 20:23:03 [error] 83831#83831: *1917032 upstream sent too big header
while reading response header from upstream, client: 10.xxx.yyy.zzz,
server: www.maviqitsolutions.com, request: "POST /app HTTP/1.1",
upstream: "https://10.xxx.yyy.zzz/", host: "www.maviqitsolutions.com",
referrer: "https://id-dev.maviqitsolutions.com/"

From the error message, it is clear that the response headers coming back from the upstream are too big for Nginx to buffer. This error is fairly common with Nginx because the default header buffer size is only 4k or 8k (one memory page), depending on the OS.
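To confirm the culprit, you can measure roughly how large the upstream's response headers actually are. The snippet below is only a sketch: it reuses the masked upstream address and path from the log above, so substitute your real upstream endpoint, and keep in mind that header size can vary per request (large cookies or tokens are the usual cause).

# Dump only the response headers from the upstream and count their bytes.
# -k skips certificate verification since the upstream is addressed by IP,
# -D - writes the headers to stdout, -o /dev/null discards the body.
curl -sk -D - -o /dev/null https://10.xxx.yyy.zzz/app | wc -c

If the byte count is larger than the default buffer (4096 or 8192), Nginx cannot read the headers and responds with a 502.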

To solve this error, we need to increase the buffer space Nginx reserves for response headers by adding these two settings to the existing Nginx configuration:

  • proxy_buffer_size: sets the size of the buffer used for reading the first part of the response received from the upstream server, which typically contains the response headers
  • proxy_buffers: sets the number and size of the buffers used for reading the rest of the response from the upstream server, per connection

Add these directives, sized to your needs, to the relevant location block of your Nginx configuration:

server {
    listen 80;
    server_name www.maviqitsolutions.com;

    location / {
        proxy_pass http://upstreamserverhere;
        ...

        proxy_buffer_size 32k;
        proxy_buffers 8 32k;
    }
}

Then, test and reload Nginx to apply the new configuration:

# Validate the configuration syntax first
nginx -t
# Reload Nginx without dropping existing connections
systemctl reload nginx
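As an optional sanity check, you can dump the full configuration that Nginx actually loaded and confirm the new buffer directives are in place:

# Print the complete merged configuration and filter for the buffer directives
nginx -T 2>/dev/null | grep -E 'proxy_buffer_size|proxy_buffers'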

After this, the error stopped appearing in the gateway's Nginx logs, but I was still getting a 502 Bad Gateway error in the browser. So I traced the flow again and realized the Nginx Ingress Controller in Kubernetes must be hitting the same issue. I checked the logs of the ingress controller and, sure enough, the same error was being logged there.
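If you want to check the controller logs yourself, something along these lines works; the DaemonSet name and namespace are the ones from my cluster, so adjust them to match your installation:

# Tail the ingress controller logs and filter for the same error message
kubectl logs -n ingress-nginx daemonset/nginx-ingress-controller --tail=200 \
  | grep "upstream sent too big header"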

To update the configuration of the Nginx Ingress Controller, go through the following steps:

  1. Find out how the Nginx Ingress Controller is deployed; it could be a Kubernetes Deployment or a DaemonSet. In my case, it was a DaemonSet deployed in the ingress-nginx namespace.
  2. Check the YAML of the DaemonSet to see whether it loads its configuration from a ConfigMap:
kubectl get daemonset.apps/nginx-ingress-controller -n ingress-nginx -o yaml
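The controller reads its ConfigMap from the --configmap container argument, so a quick way to spot the ConfigMap name in that YAML output is to filter for the flag (the flag is standard for ingress-nginx, but verify it against your controller version):

# Show which ConfigMap the controller was started with
kubectl get daemonset.apps/nginx-ingress-controller -n ingress-nginx -o yaml \
  | grep -- --configmap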

3. Update the ConfigMap

In this case, the DaemonSet is using the ConfigMap "ingress-nginx-configuration" for its configuration. Edit that ConfigMap and add the following two entries (note that the keys use hyphens instead of underscores, and the values differ slightly from the Nginx directives):

proxy-buffer-size: "32k"
proxy-buffers-number: "8"
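If you prefer not to edit the ConfigMap interactively, a one-line patch achieves the same thing. This is just a sketch using the ConfigMap name and namespace from my setup, so adjust both for your cluster:

# Merge the two keys into the existing ConfigMap data
kubectl patch configmap ingress-nginx-configuration -n ingress-nginx \
  --type merge \
  -p '{"data":{"proxy-buffer-size":"32k","proxy-buffers-number":"8"}}'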

4. Restart the DaemonSet using kubectl rollout so the pods pick up the new configuration:

# Restart the ingress controller DaemonSet
kubectl rollout restart daemonset.apps/nginx-ingress-controller \
  -n ingress-nginx

# Check the status of the rollout
kubectl rollout status daemonset.apps/nginx-ingress-controller \
  -n ingress-nginx
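Once the rollout finishes, you can optionally verify that the rendered Nginx configuration inside a controller pod contains the new buffer size. The pod label selector below is the one used by the standard ingress-nginx manifests; treat it as an assumption and adjust it if your pods are labelled differently:

# Grab one controller pod (label selector is an assumption; adjust for your install)
POD=$(kubectl get pods -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')

# Confirm the rendered nginx.conf picked up the new buffer size
kubectl exec -n ingress-nginx "$POD" -- grep proxy_buffer_size /etc/nginx/nginx.conf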

This fixed my issue and I was able to access my API again. Hopefully, this helped!
