While working on a client's website based on WooCommerce, I happened to see the checkout page failing with the error message "502 Bad Gateway". I suspected NGINX might be the cause, and that turned out to be true. The NGINX error log read: 'upstream sent too big header while reading response header from upstream, request: "GET /checkout/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock"'. In this tutorial, I'll explain what the error is all about and how to fix it.

What does the error “Upstream sent too big header while reading response header from upstream” mean?

I understood that the upstream sent a header bigger than the receiving end could handle. But what was the header size that was too big for the server? The page that threw the error was the checkout page, with 10 items added to the cart, so the cookies and other per-request data were large, and that could have resulted in a bigger header size. So how do you find out what the response headers include? That's simple.

Launch the chrome browser, right click and select Inspect

Click Network tab

Reload the page

Select any of the HTTP requests from the left panel and view the HTTP headers on the right panel.
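If you prefer the command line, curl can dump the same response headers and even measure their total size. A quick sketch; the URL here is a placeholder, so replace it with your own failing page:

```shell
# -s: quiet, -D -: dump response headers to stdout, -o /dev/null: discard the body
curl -s -D - -o /dev/null https://example.com/checkout/

# Total size of the response headers in bytes
curl -s -D - -o /dev/null https://example.com/checkout/ | wc -c
```

If the second command prints a number well above your web server's header limit, you've found your culprit.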

That's fine, now you know how to view the HTTP headers. But why did the server fail with the error "upstream sent too big header while reading response header from upstream"? The answer is that each web server has a maximum header size set, and the headers sent were bigger than the limit configured on the web server. Below are the maximum header size limits on various web servers.

Apache web server – 8K

NGINX – 4K to 8K

IIS (varies on each version) – 8K to 16K

Tomcat (varies on each version) – 8K to 48K

As the web server I am using is NGINX, the default header size limit is 4K to 8K. By default, NGINX uses the system page size, which is 4K on most systems. You can find it using the below command:
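On Linux, the memory page size can be checked with getconf:

```shell
# Print the system memory page size in bytes (4096 = 4K on most x86 systems)
getconf PAGESIZE
```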

By default, when NGINX starts receiving a response from a FastCGI backend (such as PHP-FPM), it buffers the response in memory before delivering it to the client. Any response larger than the configured buffer size is saved to a temporary file on disk.

The two parameters that are related to FastCGI response buffering are:

fastcgi_buffers
fastcgi_buffer_size

fastcgi_buffers – controls the number and size of the in-memory buffers used for the body of the FastCGI response.

fastcgi_buffer_size – controls the size of the buffer used to hold the first part of the FastCGI response, which typically contains the HTTP response headers.
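For reference, the NGINX documentation states that both directives default to one memory page. Spelled out explicitly, on a 4K-page system the defaults would look like this:

```nginx
# Default FastCGI buffering on a 4K-page system (8k on 8K-page systems)
fastcgi_buffer_size 4k;   # first part of the response: the response headers
fastcgi_buffers 8 4k;     # 8 buffers of 4k each for the response body
```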

According to the NGINX documentation, you normally don't need to adjust the FastCGI buffering defaults: NGINX uses the system page size of 4K by default, which fits most HTTP response headers. However, that did not seem to fit my case. The same documentation notes that some frameworks push large amounts of cookie data via the Set-Cookie HTTP header, which can blow out the buffer and result in an HTTP 500 error. In such cases, you might need to increase the buffer size to 8K/16K/32K to accommodate larger upstream HTTP headers.

How to find the average and maximum FastCGI response sizes received by the web server?

That can be found by analyzing the NGINX access log files. To do that, run the below command, providing the access_log file as input.
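A sketch of such a command, assuming NGINX's default "combined" log format, where the tenth whitespace-separated field is $body_bytes_sent (so this reflects response body sizes rather than headers alone, but it gives a useful upper bound on how large responses run):

```shell
# Average and maximum response sizes from the NGINX access log.
# Field 10 is $body_bytes_sent in the default "combined" log format.
awk '{ total += $10; if ($10 > max) max = $10 }
     END { printf "requests=%d avg=%.0f max=%d\n", NR, total/NR, max }' \
    /var/log/nginx/access.log
```

Adjust the field number if you use a custom log_format.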

From the above output, it's clear that the average response size is more than 21K. So we need to set the buffer size slightly larger than the average, which can probably be 32K. To do that, open the nginx.conf file and add the below lines under the PHP location section – location ~ \.php$ { }

fastcgi_buffers 32 32k;
fastcgi_buffer_size 32k;
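Putting it together, the directives go inside the PHP location block. A minimal sketch; the fastcgi_pass socket path is the one from my error log above and may differ on your setup:

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    # Raise the buffers so responses averaging over 21K fit in memory
    fastcgi_buffer_size 32k;
    fastcgi_buffers 32 32k;
}
```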

Note: You might need to set a smaller buffer value; I set 32K because the average size was over 21K. After editing, test the configuration with 'nginx -t' and reload NGINX for the change to take effect.