405 Not Allowed

I checked the file content (the actual request body) buffered in /tmp and it
contains the expected file name ("upload.txt").
When I comment out the auth_basic* directives in the above config for both
cases (GET and POST), I get "401 Authorization Required".
What did I miss?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270063#msg-270063
From francis at daoine.org Tue Oct 4 19:46:42 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 4 Oct 2016 20:46:42 +0100
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <7598f715feca4d51b9a9d738bd221c68.NginxMailingListEnglish@forum.nginx.org>
References: <7598f715feca4d51b9a9d738bd221c68.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161004194642.GK11677@daoine.org>
On Tue, Oct 04, 2016 at 03:28:18PM -0400, yurai wrote:
Hi there,
> Unfortunately I get "HTTP/1.1 405 Not Allowed" error code all the time.
The "back-end" thing that you POST to must be able to handle the POST.
Right now, you just ask nginx to serve a file from the filesystem,
which does not accept a POST (by default).
> location / {
> root /usr/share/nginx/html/foo/bar;
> autoindex on;
> }
Add something like
return 200 "Do something sensible with $http_x_file\n";
in there and you'll see that it does work.
And then decide what you actually want to do with the file, and make
something do that.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From justinbeech at gmail.com Wed Oct 5 03:29:32 2016
From: justinbeech at gmail.com (jb)
Date: Wed, 5 Oct 2016 14:29:32 +1100
Subject: Safari gets network connection reset over https with very high speed
connection
Message-ID:
Does anyone know how I can debug this issue?
nginx (latest version and 1.9 too) running on iMac
Safari running on macbook
thunderbolt2 cable between the two of them.
Any download of an https file from nginx downloads a random initial part of
the file, then Safari reports in red "The network connection was lost".
No other browser has an issue.
It doesn't happen with http, only https.
It doesn't happen if the connection is throttled down to less than 400 megabit.
It happens sooner, the faster the connection is...
Nothing in the nginx error log.
I've reported the bug to Safari however from their response I believe they
are not going to find the issue.
Can anyone with a fast https connection - maybe to localhost - confirm this
problem? Under Sierra and Safari 10. I don't know if the older version of
Safari also had this.
thanks
From francis at daoine.org Wed Oct 5 06:39:50 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 5 Oct 2016 07:39:50 +0100
Subject: Rate limit per day
In-Reply-To:
References:
Message-ID: <20161005063950.GM11677@daoine.org>
On Mon, Oct 03, 2016 at 02:43:16PM +0530, nitin bhadauria wrote:
Hi there,
> Is it possible to use Module ngx_http_limit_req_module to rate limit
> request / day ?
No. (Unless the request rate is so high that it could also be expressed
per minute.)
The config interface only allows integer requests per second or per
minute.
The implementation seems to allow integer requests-per-thousand-seconds,
but it's not immediately clear to me how exact that can be at low numbers.
But if you wanted to play, minor patching could let you set values there,
which could be of the order of hundreds of requests per day. Perhaps that
would work well enough for you?
Otherwise, it looks like it would require significant patching to
implement what you might want. At that point, you may be better off with
a different starting module.
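(For illustration, a sketch of the stock-module approximation - not from the
original thread; the zone name, size and burst value are made up. A budget of
roughly 1440 requests/day per client address rounds to 1r/m:)

limit_req_zone $binary_remote_addr zone=perday:10m rate=1r/m;

server {
    listen 80;
    location / {
        # allow short spikes; on average one request per minute per address
        limit_req zone=perday burst=50;
    }
}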
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed Oct 5 06:55:23 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 5 Oct 2016 07:55:23 +0100
Subject: location query string?
In-Reply-To:
References:
<20160929213233.GE11677@daoine.org>
<20161002100709.GH11677@daoine.org>
Message-ID: <20161005065523.GN11677@daoine.org>
On Tue, Oct 04, 2016 at 10:12:07AM -0700, Grant wrote:
Hi there,
> > Your later mail suggests that "Keepalive" is involved somehow. If you
> > are still keen to investigate -- can you see that nginx does something
> > wrong when Keepalive is or is not set? Or does upstream do something
> > wrong when Keepalive is or is not set? (If there is an nginx problem,
> > I suspect that people will be interested in fixing it. If there is an
> > upstream problem, then possibly people there will be interested in fixing
> > it, or possibly a workaround can be provided on the nginx side.)
>
>
> Admittedly this is over my head. I would be happy to test and probe
> if anyone is interested enough to tell me what to do.
I'm guessing quite a bit here, but it sounds like there may be an issue
where your nginx believes it makes a http request to upstream without
http-keepalive (HTTP/1.0 without Connection:, or with "Connection:
close") but your upstream processes it as if it had http-keepalive set;
and so upstream does not close the tcp connection after it thinks it
completed the http response.
In that case, if the response did not have Content-Length set and did
not use chunked transfer encoding, then nginx would not know that the
http response was complete and would keep waiting for more input.
(That would be unusual, since the client is nginx and the upstream is
apache, and both are usually reasonable at handling http. Maybe your
specific configuration matters.)
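(If it does turn out to be such a keepalive mismatch, one possible workaround
on the nginx side is to make both ends agree on keepalive explicitly. A
sketch, assuming an HTTP/1.1-capable upstream; the upstream name and port are
placeholders:)

upstream app {
    server 127.0.0.1:8080;
    keepalive 8;
}

server {
    listen 80;
    location / {
        # speak HTTP/1.1 to the upstream and clear the Connection header,
        # so nginx and the upstream share the same keepalive expectations
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app;
    }
}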
If you have a reproducible test case, where you make *this* request
and the problem manifests itself *that* way, then you have a chance of
making changes and testing them and seeing if the problem goes away. If
you do not, then it is mostly blind debugging.
If you can identify one request/response that is part of the problem,
then "tcpdump" or something to see what traffic passes during that
request may be useful.
But so far, no-one else can reproduce the problem, and I am not sure
what the problem actually is.
So there is not a recipe of "do exactly this, then exactly that",
I'm afraid.
f
--
Francis Daly francis at daoine.org
From brentgclarklist at gmail.com Wed Oct 5 08:39:36 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Wed, 5 Oct 2016 10:39:36 +0200
Subject: Nginx won't cache woff
Message-ID:
Good day Guys
I'm struggling to get nginx to cache woff and woff2 files.
It would appear the particular WordPress theme is set to not cache,
but I would like to override that.
Nothing I seem to do works.
If someone could please review my work it would be appreciated.
bclark at bclark:~$ curl -I
http://$REMOVEDDOMAIN/wp-content/themes/REMOVED-v5-2/fonts/adelle_bold-webfont.woff
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 05 Oct 2016 08:28:31 GMT
Content-Type: application/font-woff
Content-Length: 41160
Connection: keep-alive
Last-Modified: Sun, 01 Nov 2015 15:02:55 GMT
ETag: "a0c8-5237bf49e7739"
Expires: Wed, 05 Oct 2016 09:28:31 GMT
Vary: User-Agent
Pragma: public
X-Powered-By: W3 Total Cache/0.9.4.1
Cache-Control: max-age=3600
X-Cache-Status: MISS
Accept-Ranges: bytes
Here is my code: http://pastebin.com/RAVKYipU
Kind Regards
Brent Clark
From nginx-forum at forum.nginx.org Wed Oct 5 10:29:13 2016
From: nginx-forum at forum.nginx.org (sobuz)
Date: Wed, 05 Oct 2016 06:29:13 -0400
Subject: Why does nginx always send content encoding as gzip
Message-ID: <0f2e356162c3d54170259f2df1e485b9.NginxMailingListEnglish@forum.nginx.org>
I have set gzip to off
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request"
'
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
gzip off;
include /etc/nginx/conf.d/*.conf;
}
I checked that it is not set anywhere else:
cd /etc/nginx/
grep -R gzip .
./nginx.conf: gzip off;
service nginx restart
Yet the content is still getting sent as gzip?
Response Headers
Connection:keep-alive
Content-Encoding:gzip
Content-Length:51
Content-Type:application/json; charset=utf-8
Date:Wed, 05 Oct 2016 10:00:00 GMT
Server:nginx/1.6.2
Vary:Accept-Encoding
Any ideas on how to turn gzip off completely?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270070,270070#msg-270070
From jsharan15 at gmail.com Wed Oct 5 10:59:34 2016
From: jsharan15 at gmail.com (Sharan J)
Date: Wed, 5 Oct 2016 16:29:34 +0530
Subject: Nginx old worker process not exiting on reload
Message-ID:
Hi,
While reloading nginx, sometimes old worker processes are not exiting,
thereby entering an "uninterruptible sleep" state. Is there a way to kill
such abandoned worker processes? How can this be avoided?
We are using nginx-1.10.1
Thanks,
Santhakumari.V
From philip.walenta at gmail.com Wed Oct 5 11:05:13 2016
From: philip.walenta at gmail.com (Philip Walenta)
Date: Wed, 5 Oct 2016 06:05:13 -0500
Subject: Nginx old worker process not exiting on reload
In-Reply-To:
References:
Message-ID: <74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com>
The only thing I ever experienced that would hold an old worker process open after a restart (in my case config reload) were websocket connections.
Sent from my iPhone
> On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
>
> Hi,
>
> While reloading nginx, sometimes old worker processes are not exiting, thereby entering an "uninterruptible sleep" state. Is there a way to kill such abandoned worker processes? How can this be avoided?
> We are using nginx-1.10.1
>
> Thanks,
> Santhakumari.V
From eric.cox at kroger.com Wed Oct 5 11:07:04 2016
From: eric.cox at kroger.com (Cox, Eric S)
Date: Wed, 5 Oct 2016 11:07:04 +0000
Subject: Use individual upstream server name as host header
Message-ID: <74A4D440E25E6843BC8E324E67BB3E39454EEEDB@N060XBOXP38.kroger.com>
Is anyone aware of a way to pass the upstream server name as the host header per individual server instead of setting it at the location level for all the upstream members? Without using a lua script that is.
Thanks
From ru at nginx.com Wed Oct 5 11:50:39 2016
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 5 Oct 2016 14:50:39 +0300
Subject: Use individual upstream server name as host header
In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E39454EEEDB@N060XBOXP38.kroger.com>
References: <74A4D440E25E6843BC8E324E67BB3E39454EEEDB@N060XBOXP38.kroger.com>
Message-ID: <20161005115039.GB5760@lo0.su>
On Wed, Oct 05, 2016 at 11:07:04AM +0000, Cox, Eric S wrote:
> Is anyone aware of a way to pass the upstream server name as the host header
> per individual server instead of setting it at the location level for all the
> upstream members? Without using a lua script that is.
This is currently impossible.
From nginx-forum at forum.nginx.org Wed Oct 5 12:10:10 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 05 Oct 2016 08:10:10 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <20160927175737.GL73038@mdounin.ru>
References: <20160927175737.GL73038@mdounin.ru>
Message-ID: <33b1f21ad745b46345732332e961dd27.NginxMailingListEnglish@forum.nginx.org>
On some of the servers, Waiting is increasing in an uneven way. For example,
we have 3 sets of servers; on all of them Active Connections is around 6K,
but Writing on two of the servers is around 500-600 while on the third it is
3000. On this server the response time in delivering the content is
increasing.
This happens even if the content is served from the nginx cache.
Is any parameter in nginx causing this? On stopping nginx, the same
behaviour shifts to the other two.
This is the nginx conf which we are using.
Server is having 60 CPU Cores with 1.5 TB of RAM
Please find below the relevant part of the nginx.conf of the server with the issue:
worker_processes auto;
events {
worker_connections 4096;
use epoll;
multi_accept on;
}
worker_rlimit_nofile 100001;
http {
include mime.types;
default_type video/mp4;
proxy_buffering on;
proxy_buffer_size 4096k;
proxy_buffers 5 4096k;
sendfile on;
keepalive_timeout 30;
keepalive_requests 60000;
send_timeout 10;
tcp_nodelay on;
tcp_nopush on;
reset_timedout_connection on;
gzip off;
server_tokens off;
Regards,
Anish
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270077#msg-270077
From r at roze.lv Wed Oct 5 12:40:42 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 5 Oct 2016 15:40:42 +0300
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <33b1f21ad745b46345732332e961dd27.NginxMailingListEnglish@forum.nginx.org>
References: <20160927175737.GL73038@mdounin.ru>
<33b1f21ad745b46345732332e961dd27.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4C0AE5ACAD6349B994224759F6756832@MasterPC>
> On some of the servers, Waiting is increasing in an uneven way. For
> example, we have 3 sets of servers; on all of them Active Connections is
> around 6K, but Writing on two of the servers is around 500-600 while on
> the third it is 3000.
> On stopping nginx, the same behaviour shifts to the other two.
What do you use to distribute the load/requests between the set of servers?
Without knowing any other details/metrics this would just indicate that the
balancing mechanism/solution doesn't do the job the way you would expect.
Just for example, a simple DNS round-robin in the long term works fine (the
requests are distributed somewhat evenly), but the load on the first
entry/server is always (a bit) higher.
rr
From nginx-forum at forum.nginx.org Wed Oct 5 12:51:41 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 05 Oct 2016 08:51:41 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <4C0AE5ACAD6349B994224759F6756832@MasterPC>
References: <4C0AE5ACAD6349B994224759F6756832@MasterPC>
Message-ID: <2da49a7942cafac07cd2eb5550dd8985.NginxMailingListEnglish@forum.nginx.org>
We are using HAProxy to distribute the load on the servers.
Load is distributed on the basis of URI, with the parameter set in the
haproxy config as "balance uri".
This has been done to achieve the maximum cache hit rate from the server.
Does a high number of Writing lead to an increase in response time for
delivering the content?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270080#msg-270080
From anoopalias01 at gmail.com Wed Oct 5 13:04:06 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Wed, 5 Oct 2016 18:34:06 +0530
Subject: proxying to upstream port based on scheme
Message-ID:
I have an httpd upstream server that listens on both http and https at
different ports, and I want to send all http => http_upstream and https =>
https_upstream.
The following does the trick
#####################
if ( $scheme = https ) {
set $port 4430;
}
if ( $scheme = http ) {
set $port 9999;
}
location / {
proxy_pass $scheme://127.0.0.1:$port;
}
#####################
Just wanted to know if this is much less efficient (if being evil) than
hard-coding the port and having two different server{} blocks for http and
https.
Thanks in advance.
--
*Anoop P Alias*
From r at roze.lv Wed Oct 5 13:08:13 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 5 Oct 2016 16:08:13 +0300
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <2da49a7942cafac07cd2eb5550dd8985.NginxMailingListEnglish@forum.nginx.org>
References: <4C0AE5ACAD6349B994224759F6756832@MasterPC>
<2da49a7942cafac07cd2eb5550dd8985.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
> Load is distributed on the basis of URI, with the parameter set in the
> haproxy config as "balance uri".
> This has been done to achieve the maximum cache hit rate from the server.
While the cache might be more efficient this way, it can lead to one server
always serving some "hot" content while the others stay idle.
If you can afford to shrink the cache you could try haproxy's leastconn
mechanism.
It will initially increase the load on the backend (the cache needs to be
downloaded 3 times, if not configured to look up the neighbours first), but
the load on the frontends should always be more or less even.
> Does a high number of Writing lead to an increase in response time for
> delivering the content?
Of course, it means that more clients are getting the data at the same time
(more disk/network io etc).
rr
From francis at daoine.org Wed Oct 5 13:24:54 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 5 Oct 2016 14:24:54 +0100
Subject: Nginx won't cache woff
In-Reply-To:
References:
Message-ID: <20161005132454.GO11677@daoine.org>
On Wed, Oct 05, 2016 at 10:39:36AM +0200, Brent Clark wrote:
Hi there,
> I'm struggling to get nginx to cache woff and woff2 files.
>
> It would appear the particular WordPress theme is set to not cache,
> but I would like to override that.
> bclark at bclark:~$ curl -I
> http://$REMOVEDDOMAIN/wp-content/themes/REMOVED-v5-2/fonts/adelle_bold-webfont.woff
> Expires: Wed, 05 Oct 2016 09:28:31 GMT
> Vary: User-Agent
> Pragma: public
> Cache-Control: max-age=3600
> Here is my code: http://pastebin.com/RAVKYipU
For "woff", that has:
proxy_ignore_headers Cache-Control Vary Expires Set-Cookie X-Accel-Expires;
proxy_cache_valid 404 1m;
For "swf" it has:
proxy_ignore_headers Vary;
proxy_cache_valid 404 3m;
For how long do you want your "woff" content cached by nginx? If it
is a fixed amount, set proxy_cache_valid suitably. If it is "whatever
Cache-Control says", remove that from proxy_ignore_headers.
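(For example, to pin cached woff responses to a fixed lifetime regardless of
the upstream headers - a sketch; the upstream name, cache zone and the 7-day
lifetime are placeholders, and a matching proxy_cache_path ...
keys_zone=mycache:10m is assumed in http{}:)

location ~* \.woff2?$ {
    proxy_pass http://backend;
    proxy_cache mycache;
    # ignore the upstream's anti-caching headers and keep 200s for 7 days
    proxy_ignore_headers Cache-Control Expires Vary Set-Cookie X-Accel-Expires;
    proxy_cache_valid 200 7d;
    proxy_cache_valid 404 1m;
}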
(Aside: it is usually friendlier to include the config in the email, so
that someone next year will be able to see the complete question. It's
possible that the pastebin link will not have the same content then as it
does today.)
Cheers,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Wed Oct 5 13:30:00 2016
From: nginx-forum at forum.nginx.org (smaig)
Date: Wed, 05 Oct 2016 09:30:00 -0400
Subject: nginx worker process exited on signal 7
In-Reply-To: <20161003151313.GE73038@mdounin.ru>
References: <20161003151313.GE73038@mdounin.ru>
Message-ID:
Thanks Maxim, we'll try this.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270043,270084#msg-270084
From nginx-forum at forum.nginx.org Wed Oct 5 13:42:28 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 05 Oct 2016 09:42:28 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To:
References:
Message-ID: <881970201c39356c8b6dbfb42f8ce7c5.NginxMailingListEnglish@forum.nginx.org>
Actually, it's not the case that more clients are trying to get the
content from one of the servers, as server throughput shows equal load on
all interfaces of the server, which is around 4 Gbps.
So should I expect Writing to increase with a higher number of active
connections?
Is it that nginx is not able to handle the load of that many connections,
due to which requests go into Writing mode and nginx does not release
them?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270085#msg-270085
From nginx-forum at forum.nginx.org Wed Oct 5 14:40:05 2016
From: nginx-forum at forum.nginx.org (nixcoder)
Date: Wed, 05 Oct 2016 10:40:05 -0400
Subject: 400 bad request for http m-post method
Message-ID: <5edc178411db579b66361f64b751ee26.NginxMailingListEnglish@forum.nginx.org>
Hi,
I'm getting the below error in an nginx reverse proxy server. It seems the
proxy server does not recognize the http method "M-POST"? Is there a way I
can allow these incoming requests?
nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST
/cimom HTTP/1.1" 400 166 "-" "-"
nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST
/cimom HTTP/1.1" 400 166 "-" "-"
Thanks in advance.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270087,270087#msg-270087
From mdounin at mdounin.ru Wed Oct 5 15:25:45 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 5 Oct 2016 18:25:45 +0300
Subject: 400 bad request for http m-post method
In-Reply-To: <5edc178411db579b66361f64b751ee26.NginxMailingListEnglish@forum.nginx.org>
References: <5edc178411db579b66361f64b751ee26.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161005152545.GU73038@mdounin.ru>
Hello!
On Wed, Oct 05, 2016 at 10:40:05AM -0400, nixcoder wrote:
> Hi,
> I'm getting the below error in an nginx reverse proxy server. It seems the
> proxy server does not recognize the http method "M-POST"? Is there a way I
> can allow these incoming requests?
>
> nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST
> /cimom HTTP/1.1" 400 166 "-" "-"
> nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST
> /cimom HTTP/1.1" 400 166 "-" "-"
Only "A" .. "Z" and "_" are allowed in method names by nginx.
If you want to allow "M-POST", please try the following patch:
# HG changeset patch
# User Maxim Dounin
# Date 1475681003 -10800
# Wed Oct 05 18:23:23 2016 +0300
# Node ID fb39836bb3708b26629eaea06fe1221e39daa253
# Parent 9b9ae81cd4f01ed60e7bab323d49b470cec69d9e
Allowed '-' in method names.
It is used at least by SOAP (M-POST method, defined by RFC 2774) and
by WebDAV versioning (VERSION-CONTROL and BASELINE-CONTROL methods,
defined by RFC 3253).
diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c
--- a/src/http/ngx_http_parse.c
+++ b/src/http/ngx_http_parse.c
@@ -149,7 +149,7 @@ ngx_http_parse_request_line(ngx_http_req
break;
}
- if ((ch < 'A' || ch > 'Z') && ch != '_') {
+ if ((ch < 'A' || ch > 'Z') && ch != '_' && ch != '-') {
return NGX_HTTP_PARSE_INVALID_METHOD;
}
@@ -270,7 +270,7 @@ ngx_http_parse_request_line(ngx_http_req
break;
}
- if ((ch < 'A' || ch > 'Z') && ch != '_') {
+ if ((ch < 'A' || ch > 'Z') && ch != '_' && ch != '-') {
return NGX_HTTP_PARSE_INVALID_METHOD;
}
--
Maxim Dounin
http://nginx.org/
From vbart at nginx.com Wed Oct 5 15:32:58 2016
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 05 Oct 2016 18:32:58 +0300
Subject: proxying to upstream port based on scheme
In-Reply-To:
References:
Message-ID: <6504454.EPfo57qKWK@vbart-workstation>
On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote:
> I have an httpd upstream server that listens on both http and https at
> different ports, and I want to send all http => http_upstream and https =>
> https_upstream
>
> The following does the trick
>
> #####################
> if ( $scheme = https ) {
> set $port 4430;
> }
> if ( $scheme = http ) {
> set $port 9999;
> }
>
> location / {
>
> proxy_pass $scheme://127.0.0.1:$port;
> }
> #####################
>
> Just wanted to know if this is much less efficient (if being evil) than
> hard-coding the port and having two different server{} blocks for http and
> https.
>
[..]
Why not use a map?
map $scheme $port {
http 9999;
https 4430;
}
proxy_pass $scheme://127.0.0.1:$port;
wbr, Valentin V. Bartenev
From me at myconan.net Wed Oct 5 15:36:21 2016
From: me at myconan.net (Edho Arief)
Date: Thu, 06 Oct 2016 00:36:21 +0900
Subject: proxying to upstream port based on scheme
In-Reply-To: <6504454.EPfo57qKWK@vbart-workstation>
References:
<6504454.EPfo57qKWK@vbart-workstation>
Message-ID: <1475681781.1951403.746776561.6FD18030@webmail.messagingengine.com>
Hi,
On Thu, Oct 6, 2016, at 00:32, Valentin V. Bartenev wrote:
> On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote:
> > I have an httpd upstream server that listens on both http and https at
> > different ports, and I want to send all http => http_upstream and https =>
> > https_upstream
> >
>
> Why don't use map?
>
> map $scheme $port {
> http 9999;
> https 4430;
> }
>
>
> proxy_pass $scheme://127.0.0.1:$port;
>
>
or two separate server blocks...
From r at roze.lv Wed Oct 5 18:33:42 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 5 Oct 2016 21:33:42 +0300
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <881970201c39356c8b6dbfb42f8ce7c5.NginxMailingListEnglish@forum.nginx.org>
References:
<881970201c39356c8b6dbfb42f8ce7c5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <017a01d21f36$ff9d0020$fed70060$@roze.lv>
> Actually, it's not the case that more clients are trying to get the
> content from one of the servers, as server throughput shows equal load on
> all interfaces of the server, which is around 4 Gbps.
This contradicts a bit (or I understood it differently) what you wrote previously.
But what I got was: if you shut down (or disable) the "slow nginx", then the behavior shifts to the remaining ones. Which to me indicates that the balancing via URI doesn't result in an even load on the servers, or at least that with the current software (nginx)/hardware configuration it leads to the same result, which means that particular server instance isn't at fault.
I reread your initial post and some other things don't seem right to me:
- "We see that out of two server on which load is high i.e around 5" but later you write "Server is having 60 CPU Cores with 1.5 TB of RAM" - a load of 5 on a 60-core machine means the server has only ~8% load, which isn't very high - or is it a typo?
But you could look at the haproxy status page and see if there aren't big differences in the currently active connections to the backends, and/or whether the "slow" backend has way more (completed) requests. Just comparing network interface throughput doesn't always indicate the work the server needs to do: you could saturate a 4Gbps link by sending a single 100Gb file to one client at ~500MB/s, while at the same time another server could send 1Mb responses to 500 clients, or serve 50000 clients each downloading 10kb - the resulting load could be way different.
rr
From nginx-forum at forum.nginx.org Thu Oct 6 04:28:36 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Thu, 06 Oct 2016 00:28:36 -0400
Subject: Why does nginx always send content encoding as gzip
In-Reply-To: <0f2e356162c3d54170259f2df1e485b9.NginxMailingListEnglish@forum.nginx.org>
References: <0f2e356162c3d54170259f2df1e485b9.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
You should check your application - it sounds like that is what is
compressing its pages.
A simple test: create an empty html file, serve it from a
location, and check the headers.
location = /test.html {
root "path/to/html/file";
}
If the headers on that show no gzip compression, consistent with what is set
in your nginx config, then you know it's your web application doing the gzipping.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270070,270096#msg-270096
From nginx-forum at forum.nginx.org Thu Oct 6 07:59:38 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Thu, 06 Oct 2016 03:59:38 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <017a01d21f36$ff9d0020$fed70060$@roze.lv>
References: <017a01d21f36$ff9d0020$fed70060$@roze.lv>
Message-ID: <118ad0c4aa4fd3c27c50e408362c94cf.NginxMailingListEnglish@forum.nginx.org>
> I reread your initial post and some other things don't seem right to
> me:
> - "We see that out of two server on which load is high i.e around 5"
> but later you write "Server is having 60 CPU Cores with 1.5 TB of
> RAM" - a load of 5 on a 60-core machine means the server has only ~8%
> load, which isn't very high - or is it a typo?
Load was mentioned relative to the other servers. On the servers where
Writing is 500-600, load is around 0.5 - 0.8,
while on the problematic server where Writing is around 3000, load is 5.
So I totally agree with you that on such a high-configuration server this
load is minimal.
So it might not be the load, but the large number of Writing connections,
which is increasing the response time and, at the same time, the load.
> But you could look at the haproxy status page and see if there aren't
> big differences in the currently active connections to the backends,
> and/or whether the "slow" backend has way more (completed) requests.
I will look into the haproxy config and share the observations.
One more observation is that over the day this Writing remains balanced on
all the servers, though Active Connections remain at 10k to 11k.
But at night, on a different set of servers, if the connections reach 6k to
7k, the Writing gets varied.
Since at night there is a high load on the network, as more users are
trying to access the videos/songs, is there any possibility that the network
might be contributing to the high number of Writing on any one of the servers?
Can there be some issue on the network side?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270097#msg-270097
From nginx-forum at forum.nginx.org Thu Oct 6 20:43:07 2016
From: nginx-forum at forum.nginx.org (ezak)
Date: Thu, 06 Oct 2016 16:43:07 -0400
Subject: strange condition file not found nginx with php-fpm
Message-ID:
I was working with this config for 2 years without any problem.
Suddenly I get a "not found" error message from nginx,
and it comes only when the link has a "?". Sample:
http://firmware.gem-flash.com/index.php?a=browse&b=category&id=1
If I open a normal link, it works:
http://firmware.gem-flash.com/index.php
http://firmware.gem-flash.com/[any other php file].php
Site config (user info changed):
server {
listen *:80;
server_name firmware.gem-flash.com;
#error_log /var/log/nginx/firmware.gem-flash.com.log error;
rewrite_log on;
root /home/user/public_html/;
location / {
index index.php index.html index.htm ;
}
location ~*
^.+\.(jpg|jpeg|gif|css|html|png|js|ico|bmp|zip|rar|txt|pdf|doc)$ {
root /home/user/public_html/;
# expires max;
access_log off;
}
location ~ ^/.+\.php {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_intercept_errors on;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_script_name;
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270104,270104#msg-270104
From nginx-forum at forum.nginx.org Thu Oct 6 22:07:51 2016
From: nginx-forum at forum.nginx.org (mrast)
Date: Thu, 06 Oct 2016 18:07:51 -0400
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx
Message-ID:
I have an Ubuntu 16.04 server running LEMP with 4 websites on it:
website.com website1.com website2.com website3.com
I have installed phpmyadmin and configured it and it works fine; however, it
is served on all 4 websites.
website1 and website3 do not need access to phpmyadmin.
How do I tell nginx to only load phpmyadmin for certain websites, please?
Thank you
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270105#msg-270105
From philip.walenta at gmail.com Fri Oct 7 12:03:19 2016
From: philip.walenta at gmail.com (Philip Walenta)
Date: Fri, 7 Oct 2016 07:03:19 -0500
Subject: Practical size limit of config files
Message-ID:
Is there a practical maximum size limit for config files?
What is possible - 100k, 1MB, 10MB?
From mdounin at mdounin.ru Fri Oct 7 13:24:12 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 7 Oct 2016 16:24:12 +0300
Subject: Practical size limit of config files
In-Reply-To:
References:
Message-ID: <20161007132411.GI73038@mdounin.ru>
Hello!
On Fri, Oct 07, 2016 at 07:03:19AM -0500, Philip Walenta wrote:
> Is there a practical maximum size limit for config files?
>
> What is possible - 100k, 1MB, 10MB?
I think this mostly depends on your ability as an administrator to
manage such a configuration.
I've worked with configurations larger than 10MB, though most of
these megabytes were in geo{} and map{} bases managed separately.
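(As an illustration of that kind of split - a sketch, with made-up file
paths: the bulky data lives in files pulled into otherwise small geo{} and
map{} blocks inside http{}:)

geo $client_country {
    default ZZ;
    # thousands of "CIDR value;" lines, maintained separately
    include /etc/nginx/geo/networks.conf;
}

map $host $backend {
    default app_default;
    # one "hostname value;" line per site
    include /etc/nginx/maps/hosts.map;
}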
--
Maxim Dounin
http://nginx.org/
From francis at daoine.org Fri Oct 7 13:33:19 2016
From: francis at daoine.org (Francis Daly)
Date: Fri, 7 Oct 2016 14:33:19 +0100
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To:
References:
Message-ID: <20161007133319.GU11677@daoine.org>
On Thu, Oct 06, 2016 at 06:07:51PM -0400, mrast wrote:
Hi there,
> I have installed phpmyadmin and configured it and it works fine; however, it
> is served on all 4 websites.
>
> website1 and website3 do not need access to phpmyadmin.
>
> How do I tell nginx to only load phpmyadmin for certain websites, please?
Look at your config for the server{} block for website1.
Find the piece that relates to phpmyadmin.
Remove it.
The server{} block is identified by "server_name" including "website1".
You can "find the piece" by looking at the "location" blocks that are
defined, and learning one request that makes use of phpmyadmin, and seeing
which one "location" handles that request -- http://nginx.org/r/location
for details.
If it turns out that you only have one server{} block for all four
websites, then it is probably simplest to copy-and-change it to two
or four different blocks with different "server_name" values, and then
remove the phpmyadmin part from the website1 and website3 block or blocks.
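(The end state is something like the following sketch - the roots and auth
file are placeholders; only the server that should offer phpmyadmin carries
the location:)

server {
    listen 80;
    server_name website1.com;
    root /var/www/html/website1.com/public;
    # no /phpmyadmin location: such requests get only the normal handling
}

server {
    listen 80;
    server_name website2.com;
    root /var/www/html/website2.com/public;

    location /phpmyadmin {
        auth_basic "Admin Login";
        auth_basic_user_file /etc/nginx/allow_phpmyadmin;
    }
}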
Good luck with it,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Fri Oct 7 14:20:51 2016
From: nginx-forum at forum.nginx.org (mrast)
Date: Fri, 07 Oct 2016 10:20:51 -0400
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To: <20161007133319.GU11677@daoine.org>
References: <20161007133319.GU11677@daoine.org>
Message-ID:
Hello Francis,
Thank you for your reply.
I have separate config files for each website in /etc/nginx/sites-enabled
and have removed the default file.
I have this directive in /etc/nginx/nginx.conf:
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80 default_server;
server_name _;
return 444;
}
This is the start of my server block for website2, which I do want phpmyadmin
access for:
server {
server_name website.com www.website.com;
and the only phpmyadmin directive is for a pre-auth login box before the
phpmyadmin login page displays:
location /phpmyadmin {
auth_basic "Admin Login";
auth_basic_user_file /etc/nginx/allow_phpmyadmin;
}
If I remove the /phpmyadmin section from the website1 config file, it just
removes the pre-auth login box and goes straight to the main phpmyadmin screen.
I have a symlink for nginx to use phpmyadmin: /usr/share/phpmyadmin linked
into /usr/share/nginx/html.
I have symlinks in the website2 and website4 directories for /usr/share/phpmyadmin.
I don't have any symlinks in the website1 and website3 directories for
/usr/share/phpmyadmin - yet these websites still serve /phpmyadmin.
I'm not sure what I need to remove after reading your reply - shall I remove
the server_name line from the nginx.conf file?
Thank you
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270117#msg-270117
From nginx-forum at forum.nginx.org Fri Oct 7 19:47:47 2016
From: nginx-forum at forum.nginx.org (yurai)
Date: Fri, 07 Oct 2016 15:47:47 -0400
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <20161004194642.GK11677@daoine.org>
References: <20161004194642.GK11677@daoine.org>
Message-ID: <8260e5aeb7dcb440714376bc3d0f2d37.NginxMailingListEnglish@forum.nginx.org>
Hi Francis,
I added the return statement to my config as you suggested. Now the config
for backend s2 looks like:
server {
listen 8080;
server_name s2;
location / {
root /usr/share/nginx/html/foo/bar;
return 200 "Do something sensible with $http_x_file\n";
autoindex on;
}
}
Unfortunately it still doesn't work as I expect - the upload.txt file content
is not saved on the server side in /tmp/nginx-client-body.
My understanding is that the transfer should be performed in 2 phases (2 POST
requests via 2x curl). The first request should deliver the file name, and the
second should deliver the actual file content without interference from the
backend side. I analyzed the HTTP flow in wireshark and it looks fine to me
(details below).
1. curl --data-binary upload.txt http://localhost/upload
- s1, listening on 80, receives a POST request with body = "upload.txt". s1
buffers "upload.txt" in /tmp/0000001, generates a new POST request with field
X-FILE="/tmp/0000001" and passes this request to the backend (s2)
- s2, listening on 8080, receives the POST request with X-FILE = "/tmp/0000001"
- s2 generates HTTP response 200 with body = "Do something sensible with
/tmp/0000001\n" and passes it to s1
- s1 receives the above response and passes it to the client
- the client receives HTTP response 200 with body = "Do something sensible with
/tmp/0000001\n"
2. curl --data-binary '@upload.txt' http://localhost/upload
If I understand this mechanism correctly, the actual upload.txt transfer to
the server, without backend interference, should now be triggered.
So I should get response 200 and the upload.txt content should be saved by the
server under /tmp/nginx-client-body.
Anyway, when I type curl --data-binary '@upload.txt' http://localhost/upload,
the whole scenario from the previous point is performed again.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270122#msg-270122
From jsharan15 at gmail.com Sat Oct 8 05:34:52 2016
From: jsharan15 at gmail.com (Sharan J)
Date: Sat, 8 Oct 2016 11:04:52 +0530
Subject: Nginx old worker process not exiting on reload
In-Reply-To: <74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com>
References:
<74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com>
Message-ID:
Hi,
Is there a way to prevent this? Is there any other way to kill such a
process without the need to reboot the machine?
Thanks,
Santhakumari
On Wed, Oct 5, 2016 at 4:35 PM, Philip Walenta
wrote:
> The only thing I ever experienced that would hold an old worker process
> open after a restart (in my case config reload) were websocket connections.
>
> Sent from my iPhone
>
> > On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
> >
> > Hi,
> >
> > While reloading nginx, sometimes old worker processes are not exiting,
> thereby entering an "uninterruptible sleep" state. Is there a way to kill
> such abandoned worker processes? How can this be avoided?
> > We are using nginx-1.10.1
> >
> > Thanks,
> > Santhakumari.V
From nginx-forum at forum.nginx.org Sat Oct 8 15:05:38 2016
From: nginx-forum at forum.nginx.org (nixcoder)
Date: Sat, 08 Oct 2016 11:05:38 -0400
Subject: 400 bad request for http m-post method
In-Reply-To: <20161005152545.GU73038@mdounin.ru>
References: <20161005152545.GU73038@mdounin.ru>
Message-ID: <1edee87adc666ded971653ebdd08331e.NginxMailingListEnglish@forum.nginx.org>
Awesome!! Thanks a lot, Maxim. The patch fixed the issue.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270087,270129#msg-270129
From reallfqq-nginx at yahoo.fr Sat Oct 8 17:26:57 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 8 Oct 2016 19:26:57 +0200
Subject: Nginx old worker process not exiting on reload
In-Reply-To:
References:
<74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com>
Message-ID:
RTFM?
http://nginx.org/en/docs/control.html
---
*B. R.*
On Sat, Oct 8, 2016 at 7:34 AM, Sharan J wrote:
> Hi,
>
> Is there a way to prevent this? Is there any other way to kill such a
> process without the need to reboot the machine?
>
> Thanks,
> Santhakumari
>
> On Wed, Oct 5, 2016 at 4:35 PM, Philip Walenta
> wrote:
>
>> The only thing I ever experienced that would hold an old worker process
>> open after a restart (in my case config reload) were websocket connections.
>>
>> Sent from my iPhone
>>
>> > On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
>> >
>> > Hi,
>> >
>> > While reloading nginx, sometimes old worker processes are not exiting,
>> thereby entering an "uninterruptible sleep" state. Is there a way to kill
>> such abandoned worker processes? How can this be avoided?
>> > We are using nginx-1.10.1
>> >
>> > Thanks,
>> > Santhakumari.V
From francis at daoine.org Sun Oct 9 15:48:33 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 9 Oct 2016 16:48:33 +0100
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To:
References: <20161007133319.GU11677@daoine.org>
Message-ID: <20161009154833.GV11677@daoine.org>
On Fri, Oct 07, 2016 at 10:20:51AM -0400, mrast wrote:
Hi there,
There are many ways that you might have configured things. Only you
know the one way that you have configured things. Until you share that,
there is not much that others can do.
> If I remove the /phpmyadmin section from the website1 config file, it just
> removes the pre-auth login box and goes straight to the main phpmyadmin screen.
When you do that, what is the actual url that your browser has requested?
It will start with http:// or https://, then it will have website1.com --
I don't care about those bits.
Then it will have a /, and then something, and then maybe a ? or a #
and something else. The "/something" is the bit that matters here.
> I have a symlink for nginx to use phpmyadmin: /usr/share/phpmyadmin linked
> into /usr/share/nginx/html.
>
> I have symlinks in the website2 and website4 directories for /usr/share/phpmyadmin.
I am not sure what you mean by that; it may not matter at least until
the request and matching config is identified.
If you have a new-enough nginx, then
nginx -T | grep 'server\|location'
may hide enough of the config that you are willing to show the
output. Only the "website1" piece is interesting here. And this assumes
that there are no directives from the "rewrite" module that will take
effect first.
Given the "location"s that are defined, and given the request that you
make, what one "location" will nginx use to handle the request?
That is a place to consider making a change.
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Sun Oct 9 16:05:07 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 9 Oct 2016 17:05:07 +0100
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <8260e5aeb7dcb440714376bc3d0f2d37.NginxMailingListEnglish@forum.nginx.org>
References: <20161004194642.GK11677@daoine.org>
<8260e5aeb7dcb440714376bc3d0f2d37.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161009160507.GW11677@daoine.org>
On Fri, Oct 07, 2016 at 03:47:47PM -0400, yurai wrote:
Hi there,
> Unfortunately it still doesn't work as I expect - the upload.txt file content
> is not saved on the server side in /tmp/nginx-client-body.
Oh. Why do you expect that?
I would only expect that to happen if I send the upload.txt file content,
which I have not done yet.
> My understanding is that the transfer should be performed in 2 phases (2 POST
> requests via 2x curl).
Why?
Each request is completely independent of each other request. If you
want to tie two together, you must add the tying part yourself. (Or use
a framework which does it for you).
> The first request should deliver the file name, and the second
> should deliver the actual file content without interference from the backend
> side. I analyzed the HTTP flow in wireshark and it looks fine to me (details below).
>
> 1. curl --data-binary upload.txt http://localhost/upload
>
> - s1, listening on 80, receives a POST request with body = "upload.txt". s1
> buffers "upload.txt" in /tmp/0000001, generates a new POST request with field
> X-FILE="/tmp/0000001" and passes this request to the backend (s2)
> - s2, listening on 8080, receives the POST request with X-FILE = "/tmp/0000001"
> - s2 generates HTTP response 200 with body = "Do something sensible with
> /tmp/0000001\n" and passes it to s1
> - s1 receives the above response and passes it to the client
> - the client receives HTTP response 200 with body = "Do something sensible with
> /tmp/0000001\n"
Yes, that is exactly what should happen.
Except that you seem to have switched between /tmp and
/tmp/nginx-client-body somewhere.
> 2. curl --data-binary '@upload.txt' http://localhost/upload
This should do exactly the same as the first one, except that now
(because of what the curl client does) the POST data is not the string
"upload.txt", but is instead the content of the file upload.txt.
nginx has no idea that the content came from a file, or what that filename
might have been.
> If I understand this mechanism correctly, the actual upload.txt transfer to
> the server, without backend interference, should now be triggered.
> So I should get response 200 and the upload.txt content should be saved by the
> server under /tmp/nginx-client-body.
You should get something like response 200 with body = "Do something
sensible with /tmp/0000002\n", exactly the same format as the first
response (but with a different filename.)
nginx receives a POST with some body content. nginx writes that body
content to a new file in its client_body_temp_path.
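(For reference, the s1 side of this mechanism looks something like the
following. This is a reconstruction, not the poster's actual config, which is
not quoted in this message; the port and paths are taken from the thread:)

server {
    listen 80;
    location /upload {
        client_body_temp_path /tmp/nginx-client-body;
        # always store the request body in a file and keep it afterwards
        client_body_in_file_only on;
        # hand the temp file's path to the backend instead of the body itself
        proxy_set_header X-FILE $request_body_file;
        proxy_pass_request_body off;
        proxy_pass http://127.0.0.1:8080;
    }
}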
> Anyway, when I type curl --data-binary '@upload.txt' http://localhost/upload,
> the whole scenario from the previous point is performed again.
What is the filename that you get back in the response? What is the
content of that file, when you look on the server?
It looks to me like everything is working as intended. I have a file
/tmp/nginx-client-body/0000000005 which contains the contents of my
upload.txt, and I have a file /tmp/nginx-client-body/0000000004 which
contains the 10 characters "upload.txt".
If you do not have that, what specifically do you have instead?
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Sun Oct 9 16:50:50 2016
From: nginx-forum at forum.nginx.org (mrast)
Date: Sun, 09 Oct 2016 12:50:50 -0400
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To: <20161009154833.GV11677@daoine.org>
References: <20161009154833.GV11677@daoine.org>
Message-ID:
Hi Francis,
It's a brand new server setup.
I have no problem sharing the config files - I'll just sanitize the actual
websites. Everything else is 100% as-is.
Here is the full nginx.conf file from /etc/nginx
cat /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
worker_rlimit_nofile 100000;
pid /run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
}
http {
##
# EasyEngine Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
server_tokens off;
reset_timedout_connection on;
# add_header X-Powered-By "EasyEngine";
add_header rt-Fastcgi-Cache $upstream_cache_status;
# Limit Request
limit_req_status 403;
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
# Proxy Settings
# set_real_ip_from proxy-server-ip;
# real_ip_header X-Forwarded-For;
fastcgi_read_timeout 300;
client_max_body_size 100m;
##
# SSL Settings
##
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
ssl_ciphers
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
##
# Basic Settings
##
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Log format Settings
log_format rt_cache '$remote_addr $upstream_response_time
$upstream_cache_status [$time_local] '
'$http_host "$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 2;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/plain
text/x-component
text/xml
text/javascript;
##
# Cache Settings
##
add_header Fastcgi-Cache $upstream_cache_status;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80 default_server;
server_name _;
return 444;
}
}
Here is the full config for website.com - that does need access to
phpmyadmin and does have an extra login prompt before /phpmyadmin is shown
(which is what the location /phpmyadmin block dictates):
cat /etc/nginx/sites-available/website.com
fastcgi_cache_path /var/www/html/website.com/cache levels=1:2
keys_zone=website.com:100m inactive=60m;
server {
server_name website.com www.website.com;
access_log /var/www/html/website.com/logs/access.log;
error_log /var/www/html/website.com/logs/error.log;
root /var/www/html/website.com/public/;
index index.php index.html index.htm;
set $skip_cache 0;
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
if ($request_uri ~*
"/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml")
{
set $skip_cache 1;
}
if ($http_cookie ~*
"comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in")
{
set $skip_cache 1;
}
if ($http_cookie ~* "PHPSESSID"){
set $skip_cache 1;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location /phpmyadmin {
auth_basic "Admin Login";
auth_basic_user_file /etc/nginx/allow_phpmyadmin;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache magentafp.com;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge website.com
"$scheme$request_method$host$1";
}
}
Here is the full config for website1.com - that doesn't need access to
phpmyadmin, and thus doesn't have the location /phpmyadmin block in it:
cat /etc/nginx/sites-available/fulgent.co.uk
fastcgi_cache_path /var/www/html/website1.com/cache levels=1:2
keys_zone=website1.com:100m inactive=60m;
server {
server_name website1.com www.website1.com;
access_log /var/www/html/website1.com/logs/access.log;
error_log /var/www/html/website1.com/logs/error.log;
root /var/www/html/website1.com/public/;
index index.php index.html index.htm;
set $skip_cache 0;
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
if ($request_uri ~*
"/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml")
{
set $skip_cache 1;
}
if ($http_cookie ~*
"comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in")
{
set $skip_cache 1;
}
if ($http_cookie ~* ?PHPSESSID"){
set $skip_cache 1;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache magentafp.com;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge website1.com
"$scheme$request_method$host$1";
}
}
I have made no changes to any phpmyadmin config files.
If I go to website1.com/phpmyadmin - the phpmyadmin login page is served.
There are no changes to the url - it stays website1.com/phpmyadmin.
This is the article I followed to install and secure phpmyadmin - I did
everything on that page except change the /phpmyadmin location name. (This
is where the symlink came into it.)
So to me that symlink tells nginx to serve phpmyadmin php pages for the
whole web server - am I correct?
If I remove that symlink and then just create symlinks for the websites
themselves, I've found it doesn't make a difference.
E.g. a symlink for website.com exists pointing to /usr/share/phpmyadmin. So
I'm telling nginx to serve phpmyadmin php files for that website only, and
not for the whole server, which the /usr/share/phpmyadmin ->
/usr/share/nginx/html symlink does.
Here is the output of nginx -T | grep 'server\|location' as requested. (I've
cut out the website2 and website3 bits as they are not relevant - they are
just copies of .com and 1.com. .com and 2.com need access; 1.com and 3.com
don't.)
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
server_tokens off;
# set_real_ip_from proxy-server-ip;
ssl_prefer_server_ciphers on;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
server {
listen 80 default_server;
server_name _;
# server {
# server {
server {
server_name website.com www.website.com;
location / {
location /phpmyadmin {
location ~ \.php$ {
location ~ /purge(/.*) {
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
server {
server_name website1.com www.website1.com;
location / {
location ~ \.php$ {
location ~ /purge(/.*) {
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
Thanks for your assistance.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270134#msg-270134
From francis at daoine.org Sun Oct 9 19:41:34 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 9 Oct 2016 20:41:34 +0100
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To:
References: <20161009154833.GV11677@daoine.org>
Message-ID: <20161009194134.GX11677@daoine.org>
On Sun, Oct 09, 2016 at 12:50:50PM -0400, mrast wrote:
Hi there,
> I have no problem sharing the config files - ill just sanitize the actual
> websites. But everything else is 100% as is.
Thanks for this - it does give more information about what is happening.
A few notes, with the order switched...
> if ($http_cookie ~* ?PHPSESSID"){
If that is a copy-paste of the config file, then it probably won't match
some things that you would want it to.
> If I go to website1.com/phpmyadmin - the phpmyadmin login page is served.
> There are no changes to the url - it stays website1.com/phpmyadmin.
That piece surprises me. I would expect that it would have issued a
redirect to website1.com/phpmyadmin/
That is because, for website1, with the following:
> location / {
> location ~ \.php$ {
> location ~ /purge(/.*) {
a request for /phpmyadmin is handled in the first location, which has
> try_files $uri $uri/ /index.php?$args;
which, since you have
> root /var/www/html/website1.com/public/;
should check if /var/www/html/website1.com/public//phpmyadmin
is a file, and if so serve it; else check if
/var/www/html/website1.com/public//phpmyadmin is a directory, and if so
serve a redirect to /phpmyadmin/
Oh - unless /var/www/html/website1.com/public//phpmyadmin does not exist,
in which case it will be handled internally to nginx as a subrequest
to /index.php
That makes sense now -- I'm guessing that that path does not exist?
Your /index.php subrequest is handled in the second location, which does
> try_files $uri =404;
> fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
> include fastcgi_params;
where try_files checks that the file
/var/www/html/website1.com/public//index.php exists, and then contacts
your fastcgi server and asks it to process a file. That is probably
"SCRIPT_FILENAME" in your fastcgi_params file -- what is that set to?
Most likely it is $document_root$fastcgi_script_name, which corresponds
to the file /var/www/html/website1.com/public//index.php
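That is, a line like the following - nginx ships this in its fastcgi.conf,
and many setups copy it into fastcgi_params, so check which file you include:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;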
What happens next is outside of the control of nginx, and is entirely
down to your fastcgi server and whatever that php file contains.
> Here is the full config for website.com - that does need access to
> phpmyadmin and does have an extra login prompt before /phpmyadmin is shown
> (which is what the location /phpmyadmin block dictates).
Just as an aside - it is possible that some other configuration will
protect against this; but it looks to me as though if you access
http://website.com/phpmyadmin/index.php you may get access to things
without having attempted the nginx basic authentication "extra login" step.
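One way to close that - sketched here, untested, reusing the fastcgi_pass
socket from your config but with an illustrative auth_basic_user_file path -
is to nest the php location inside the protected one, so that .php requests
under /phpmyadmin also pass through the basic auth:

location /phpmyadmin {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # nested location: the basic auth above is inherited here
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}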
> This is the article I followed to install and secure phpmyadmin - I did
> everything on that page except change the /phpmyadmin location name. (This
> is where the symlink came into it.)
The link to the article seems to be missing.
I'm not sure what exactly this symlink is.
For each of the files/directories named above that "try_files" tests,
what does "ls -lLd" say that they are? File, directory, or not there?
> So to me that symlink tells nginx to serve phpmyadmin php pages for the
> web server - am I correct?
nginx does not "do" php. If php is involved, it is your fastcgi server
that handles it. nginx will tell your fastcgi server which file it should
attempt to process, though.
If the symlink you refer to is one of
/var/www/html/website1.com/public//phpmyadmin
/var/www/html/website1.com/public//index.php
then it will be relevant; if not then it should not be.
> e.g. - a symlink for website.com exists pointing to /usr/share/phpmyadmin. So
> I'm telling nginx to serve phpmyadmin php files for that website only and not
> the whole server which the /usr/share/phpmyadmin /usr/share/nginx/html
> symlink does.
In the config that you have shown, /usr/share/nginx/html is not relevant,
I think.
> Here is the output of nginx -T | grep 'server\|location' as requested (I've
> cut out the website2 and website3 bits as they are not relevant - they are
> just copies of .com and 1.com; .com and 2.com need access, 1.com and 3.com don't).
> server {
> server_name website.com www.website.com;
> location / {
> location /phpmyadmin {
> location ~ \.php$ {
> location ~ /purge(/.*) {
> server {
> server_name website1.com www.website1.com;
> location / {
> location ~ \.php$ {
> location ~ /purge(/.*) {
Those are the initially-important bits. For each request (or internal
subrequest), you can tell which one location nginx will use to handle
it. Only the configuration in, or inherited into, that location is
relevant for this request.
From the above, I think that the file
/var/www/html/website1.com/public//index.php
may be especially interesting. What is in it? Is it in any way related
to phpmyadmin?
Good luck with it,
f
--
Francis Daly francis at daoine.org
From me at myconan.net Mon Oct 10 03:56:57 2016
From: me at myconan.net (Edho Arief)
Date: Mon, 10 Oct 2016 12:56:57 +0900
Subject: Index fallback?
Message-ID: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
I somehow can't make this scenario work:
root structure:
/a/index.html
/b/
accessing:
1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
2. site.com/b -> redirect to site.com/b/ -> show @fallback
Using
try_files $uri $uri/index.html @fallback;
doesn't work quite well because #1 becomes this instead:
1. site.com/a -> show /a/index.html
and breaks relative path javascript/css files (because it's `/a` in
browser, not `/a/`).
And using
try_files $uri @fallback;
just always shows @fallback for both scenarios.
Whereas
try_files $uri $uri/ @fallback;
always returns 403 for #2 because the directory exists and there's no
index.
As a side note,
error_page 404 = @fallback;
Wouldn't work because as mentioned in the previous one, it returns 403
for #2 (directory exists, no index), not 404.
Is there any way to do it without specifying separate location for each
of them?
From me at myconan.net Mon Oct 10 06:08:27 2016
From: me at myconan.net (Edho Arief)
Date: Mon, 10 Oct 2016 15:08:27 +0900
Subject: Index fallback?
In-Reply-To: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
Message-ID: <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
Hi,
On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> I somehow can't make this scenario work:
>
> root structure:
> /a/index.html
> /b/
> accessing:
> 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> 2. site.com/b -> redirect to site.com/b/ -> show @fallback
>
>
After trying a bit more, this is the closest thing I can make that works:
location / {
    error_page 418 = @dirlist;

    set $redirect 0;

    if (-d $request_filename) {
        set $redirect A;
    }

    if (-f $request_filename/index.html) {
        set $redirect "${redirect}B";
    }

    if ($uri !~ /$) {
        set $redirect "${redirect}C";
    }

    if ($redirect = ABC) {
        return 302 $uri/$is_args$args;
    }

    if ($redirect = A) {
        return 418;
    }
}
Honestly speaking, it looks terrible. It would help if someone could point
me to a better solution.
From me at myconan.net Mon Oct 10 06:19:15 2016
From: me at myconan.net (Edho Arief)
Date: Mon, 10 Oct 2016 15:19:15 +0900
Subject: Index fallback?
In-Reply-To: <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
<1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
Message-ID: <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com>
Made a bit more compact but still using ifs.
location / {
    location ~ /$ {
        error_page 418 = @dirlist;

        if (-d $request_filename) {
            set $index_fallback A;
        }

        if (!-f $request_filename/index.html) {
            set $index_fallback "${index_fallback}B";
        }

        if ($index_fallback = AB) {
            return 418;
        }
    }
}
On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote:
> Hi,
>
> On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> > I somehow can't make this scenario work:
> >
> > root structure:
> > /a/index.html
> > /b/
> > accessing:
> > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> > 2. site.com/b -> redirect to site.com/b/ -> show @fallback
> >
> >
>
From francis at daoine.org Mon Oct 10 06:23:27 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 10 Oct 2016 07:23:27 +0100
Subject: Index fallback?
In-Reply-To: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
Message-ID: <20161010062327.GZ11677@daoine.org>
On Mon, Oct 10, 2016 at 12:56:57PM +0900, Edho Arief wrote:
Hi there,
untested, but...
> accessing:
> 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> 2. site.com/b -> redirect to site.com/b/ -> show @fallback
> As a side note,
>
> error_page 404 = @fallback;
>
> Wouldn't work because as mentioned in the previous one, it returns 403
> for #2 (directory exists, no index), not 404.
Would "error_page 403" Do The Right Thing?
(It may be that it matches more than you want it to, of course.)
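Roughly like this - untested, reusing your @fallback name:

location / {
    try_files $uri $uri/ @fallback;
    # a directory without an index gives 403; send that to the fallback too
    error_page 403 = @fallback;
}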
f
--
Francis Daly francis at daoine.org
From me at myconan.net Mon Oct 10 06:25:49 2016
From: me at myconan.net (Edho Arief)
Date: Mon, 10 Oct 2016 15:25:49 +0900
Subject: Index fallback?
In-Reply-To: <20161010062327.GZ11677@daoine.org>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
<20161010062327.GZ11677@daoine.org>
Message-ID: <1476080749.2472762.750851769.43375E18@webmail.messagingengine.com>
Hi,
On Mon, Oct 10, 2016, at 15:23, Francis Daly wrote:
> On Mon, Oct 10, 2016 at 12:56:57PM +0900, Edho Arief wrote:
>
> Hi there,
>
> untested, but...
>
> > accessing:
> > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> > 2. site.com/b -> redirect to site.com/b/ -> show @fallback
>
> > As a side note,
> >
> > error_page 404 = @fallback;
> >
> > Wouldn't work because as mentioned in the previous one, it returns 403
> > for #2 (directory exists, no index), not 404.
>
> Would "error_page 403" Do The Right Thing?
>
> (It may be that it matches more than you want it to, of course.)
>
Yeah, it matches a bit too much.
From nurahmadie at gmail.com Mon Oct 10 06:29:09 2016
From: nurahmadie at gmail.com (Nurahmadie Nurahmadie)
Date: Mon, 10 Oct 2016 13:29:09 +0700
Subject: Index fallback?
In-Reply-To: <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
<1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
<1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com>
Message-ID:
Hi
On Mon, Oct 10, 2016 at 1:19 PM, Edho Arief wrote:
> Made a bit more compact but still using ifs.
>
> location / {
> location ~ /$ {
> error_page 418 = @dirlist;
>
> if (-d $request_filename) {
> set $index_fallback A;
> }
>
> if (!-f $request_filename/index.html) {
> set $index_fallback "${index_fallback}B";
> }
>
> if ($index_fallback = AB) {
> return 418;
> }
> }
> }
>
> On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote:
> > Hi,
> >
> > On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> > > I somehow can't make this scenario work:
> > >
> > > root structure:
> > > /a/index.html
> > > /b/
> > > accessing:
> > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> > > 2. site.com/b -> redirect to site.com/b/ -> show @fallback
> > >
> > >
> >
>
Still need more locations, but independent of the directories you want to
access:
server {
    listen 7770;
    root /tmp;
    autoindex on;
    autoindex_format json;
}

server {
    listen 80;
    server_name localhost;
    index index.html;
    root /tmp;

    location ~ /.*?[^/]$ {
        try_files $uri @redir;
    }

    location @redir {
        return 301 $uri/;
    }

    location ~ /$ {
        try_files $uri"index.html" @reproxy;
    }

    location @reproxy {
        proxy_pass http://localhost:7770;
    }
}
--
regards,
Nurahmadie
From me at myconan.net Mon Oct 10 06:39:56 2016
From: me at myconan.net (Edho Arief)
Date: Mon, 10 Oct 2016 15:39:56 +0900
Subject: Index fallback?
In-Reply-To:
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
<1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
<1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com>
Message-ID: <1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com>
Hi,
On Mon, Oct 10, 2016, at 15:29, Nurahmadie Nurahmadie wrote:
> Hi
>
> > On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote:
> > > Hi,
> > >
> > > On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> > > > I somehow can't make this scenario work:
> > > >
> > > > root structure:
> > > > /a/index.html
> > > > /b/
> > > > accessing:
> > > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> > > > 2. site.com/b -> redirect to site.com/b/ -> show @fallback
> > > >
>
> Still need more locations, but independent to directories you want to
> access:
>
>
> server {
> listen 7770;
> root /tmp;
> autoindex on;
> autoindex_format json;
> }
>
> server {
> listen 80;
> server_name localhost;
> index index.html;
> root /tmp;
>
> location ~ /.*?[^/]$ {
> try_files $uri @redir;
> }
>
> location @redir {
> return 301 $uri/;
> }
>
> location ~ /$ {
> try_files $uri"index.html" @reproxy;
> }
>
> location @reproxy {
> proxy_pass http://localhost:7770;
> }
> }
>
Thanks, but that's even longer than my ifs. Also one `server { }` and
one regexp location too many.
From nurahmadie at gmail.com Mon Oct 10 06:50:15 2016
From: nurahmadie at gmail.com (Nurahmadie Nurahmadie)
Date: Mon, 10 Oct 2016 13:50:15 +0700
Subject: Index fallback?
In-Reply-To: <1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
<1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
<1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com>
<1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com>
Message-ID:
Hi,
On Mon, Oct 10, 2016 at 1:39 PM, Edho Arief wrote:
> Hi,
>
> On Mon, Oct 10, 2016, at 15:29, Nurahmadie Nurahmadie wrote:
> > Hi
> >
> > > On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote:
> > > > Hi,
> > > >
> > > > On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> > > > > I somehow can't make this scenario work:
> > > > >
> > > > > root structure:
> > > > > /a/index.html
> > > > > /b/
> > > > > accessing:
> > > > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> > > > > 2. site.com/b -> redirect to site.com/b/ -> show @fallback
> > > > >
> >
> > Still need more locations, but independent to directories you want to
> > access:
> >
> >
> > server {
> > listen 7770;
> > root /tmp;
> > autoindex on;
> > autoindex_format json;
> > }
> >
> > server {
> > listen 80;
> > server_name localhost;
> > index index.html;
> > root /tmp;
> >
> > location ~ /.*?[^/]$ {
> > try_files $uri @redir;
> > }
> >
> > location @redir {
> > return 301 $uri/;
> > }
> >
> > location ~ /$ {
> > try_files $uri"index.html" @reproxy;
> > }
> >
> > location @reproxy {
> > proxy_pass http://localhost:7770;
> > }
> > }
> >
>
> Thanks, but that's even longer than my ifs. Also one `server { }` and
> one regexp location too many.
>
Ah, sorry, I thought it was obvious that the other server just serves as a
dummy example; also, I have this tendency to avoid `if` as much as possible,
so yeah.
--
regards,
Nurahmadie
From me at myconan.net Mon Oct 10 06:53:58 2016
From: me at myconan.net (Edho Arief)
Date: Mon, 10 Oct 2016 15:53:58 +0900
Subject: Index fallback?
In-Reply-To:
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
<1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>
<1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com>
<1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com>
Message-ID: <1476082438.2478106.750867537.6C84B5C0@webmail.messagingengine.com>
Hi,
On Mon, Oct 10, 2016, at 15:50, Nurahmadie Nurahmadie wrote:
> > >
> > > Still need more locations, but independent to directories you want to
> > > access:
> > >
> > >
> > > server {
> > > listen 7770;
> > > root /tmp;
> > > autoindex on;
> > > autoindex_format json;
> > > }
> > >
> > > server {
> > > listen 80;
> > > server_name localhost;
> > > index index.html;
> > > root /tmp;
> > >
> > > location ~ /.*?[^/]$ {
> > > try_files $uri @redir;
> > > }
> > >
> > > location @redir {
> > > return 301 $uri/;
> > > }
> > >
> > > location ~ /$ {
> > > try_files $uri"index.html" @reproxy;
> > > }
> > >
> > > location @reproxy {
> > > proxy_pass http://localhost:7770;
> > > }
> > > }
> > >
> >
> > Thanks, but that's even longer than my ifs. Also one `server { }` and
> > one regexp location too many.
> >
>
> Ah, sorry I thought it's obvious that the other server is just served as
> dummy example,
> also I have this tendency to avoid `if` as much as possible, so yeah
>
Looking again, that's actually the solution:
location / {
    location ~ /$ {
        try_files $uri/index.html @dirlist;
    }
}
Thanks.
From nginx-forum at forum.nginx.org Mon Oct 10 07:41:13 2016
From: nginx-forum at forum.nginx.org (yurai)
Date: Mon, 10 Oct 2016 03:41:13 -0400
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <20161009160507.GW11677@daoine.org>
References: <20161009160507.GW11677@daoine.org>
Message-ID:
Hello Francis,
thank you for the response. I just want to transfer a big file to an nginx
server inside a POST request. I use the method from:
https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend
My whole analysis and expectations are based on this article.
Unfortunately this "clientbodyinfileonly" functionality is not well
documented, so I'm not sure what exactly a successful scenario should look
like from nginx's point of view. I just know that my file is not transferred
and not saved on the server side.
Regards,
Dawid
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270145#msg-270145
From nginx-forum at forum.nginx.org Mon Oct 10 08:29:27 2016
From: nginx-forum at forum.nginx.org (mrast)
Date: Mon, 10 Oct 2016 04:29:27 -0400
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To: <20161009194134.GX11677@daoine.org>
References: <20161009194134.GX11677@daoine.org>
Message-ID:
Hi Francis,
Wow, this gets stranger... you have cracked it for me... but I have no
idea how or why it got there!
There were symlinks in the website1.com and website3.com public root
directories for phpmyadmin - symlinked to /usr/share/phpmyadmin.
I never noticed them, though, as they didn't appear as a folder when browsing
the public folder structure with an FTP program.
Only doing an ls on /website1.com/public did show them.
Removed the symlink to /usr/share/phpmyadmin in the 2 root folders and now I
get a 404 page if navigating to /phpmyadmin..... Perfect! :-)
As you can probably tell, I'm primarily a Windows admin and my Linux
knowledge is limited; thanks to people like yourself, though, I'm learning!
PS - Index.php does exist in the public root folders - but the file belongs
to WordPress.
PPS - You say:
> if ($http_cookie ~* ?PHPSESSID"){
>If that is a copy-paste of the config file, then it probably won't match
>some things that you would want it to.
Could you elaborate on this, please, if you have time?
Thank you for your time and help, Francis; it's most appreciated.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270146#msg-270146
From nginx-forum at forum.nginx.org Mon Oct 10 09:29:45 2016
From: nginx-forum at forum.nginx.org (bobykus)
Date: Mon, 10 Oct 2016 05:29:45 -0400
Subject: mail-proxy starttls and ssl on
Message-ID: <6e552de247b17a8327e0d1452a1c7978.NginxMailingListEnglish@forum.nginx.org>
The manual
Setting up SSL/TLS for a Mail Proxy
https://www.nginx.com/resources/admin-guide/mail-proxy/
says
Enable SSL/TLS for mail proxy with the ssl directive. If the directive is
specified in the mail context, SSL/TLS will be enabled for all mail proxy
servers. You can also enable STLS and STARTTLS with the starttls directive:
mail {
    ...
    ssl on;
    starttls on;
    ...
}
However, if I add both, I get:
nginx: [warn] "ssl" directive conflicts with "starttls" in
/root/nginx.conf:79
nginx: configuration file /root/nginx.conf test failed
How come?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270147,270147#msg-270147
From chris.west at logicalglue.com Mon Oct 10 11:34:18 2016
From: chris.west at logicalglue.com (Chris West)
Date: Mon, 10 Oct 2016 12:34:18 +0100
Subject: 5s hangs with http2 and variable-based proxy_pass
Message-ID:
If you enable http2, our proxy setup develops 5s hangs, under load.
This happens from at least Chrome/linux, Firefox/linux and Edge/win10.
Any suggestions on how to further diagnose this problem, or work out
where this "5 second" number is coming from? Full reproduction config
and debug logs are attached, but I don't understand the debug logs.
This isn't always reproducible, but happens frequently. Changing
browser, restarting nginx, ... doesn't cause it to be immediately
reproducible.
The proxying is based on a variable:
resolver 8.8.4.4;
location ~/proxy/([a-z-]+\.example\.com)$ {
proxy_pass https://$1/foo;
...
This is easiest to see when a number of these urls are hit from a
single page, e.g.
... etc.
The observed effect is that exactly eight requests will be serviced,
then there will be a 5s wait, then another eight will be serviced,
then hang, etc. until all requests have been serviced.
Reproduced on Ubuntu 16.04's nginx packages (1.10 based), with default
config, and this sites-enabled/default full config:
server {
listen 443 default_server ssl http2;
ssl on;
ssl_certificate /etc/letsencrypt/live/.../fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem;
root /var/www/html;
index index.html index.htm;
server_name _;
location / {
try_files $uri $uri/ =404;
}
resolver 8.8.4.4;
location ~/proxy/([a-z-]+\....)$ {
proxy_pass https://$1/index.txt;
proxy_http_version 1.1;
proxy_connect_timeout 13s;
proxy_read_timeout 28s;
}
}
The 5s pause is evident in the debug log. However, the debug log
*also* shows that the upstream requests have been generated, which
means that all the requests have been received.
Pause:
2016/10/10 11:17:31 [debug] 4058#4058: *238 process http2 frame type:3
f:0 l:4 sid:17
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 RST_STREAM frame,
sid:17 status:8
2016/10/10 11:17:31 [debug] 4058#4058: *238 unknown http2 stream
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete
pos:00007F536315501D end:00007F536315501D
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 read handler
2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: 13
2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: -1
2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_get_error: 2
2016/10/10 11:17:31 [debug] 4058#4058: *238 process http2 frame type:3
f:0 l:4 sid:13
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 RST_STREAM frame,
sid:13 status:8
2016/10/10 11:17:31 [debug] 4058#4058: *238 unknown http2 stream
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete
pos:00007F536315501D end:00007F536315501D
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 read handler
2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: 13
2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: -1
2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_get_error: 2
2016/10/10 11:17:31 [debug] 4058#4058: *238 process http2 frame type:3
f:0 l:4 sid:5
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 RST_STREAM frame,
sid:5 status:8
2016/10/10 11:17:31 [debug] 4058#4058: *238 unknown http2 stream
2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete
pos:00007F536315501D end:00007F536315501D
2016/10/10 11:17:36 [debug] 4058#4058: *238 http upstream resolve:
"/proxy/nettesto....?"
2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 94.23.43.98
2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to
2001:41d0:2:2c62::
2016/10/10 11:17:36 [debug] 4058#4058: *238 posix_memalign:
000055B45897FDB0:4096 @16
2016/10/10 11:17:36 [debug] 4058#4058: *238 get rr peer, try: 2
2016/10/10 11:17:36 [debug] 4058#4058: *238 get rr peer, current:
000055B45897FE18 -1
2016/10/10 11:17:36 [debug] 4058#4058: *238 stream socket 12
Upstream requests:
2016/10/10 11:17:31 [debug] 4058#4058: *238 http proxy header:
"GET /index.txt HTTP/1.1^M
Host: nettestz.fau...^M
But they're not served by the backend until much later, at
[10/Oct/2016:11:17:46 +0000] in this case (according to the backend's
nginx access logs).
The host names mentioned in the debug log are public and are valid
until I pull them down, but I don't know if this is reproducible with
multiple people accessing it (and you can probably guess why I
stripped them from the email body).
-------------- next part --------------
A non-text attachment was scrubbed...
Name: debug.log.gz
Type: application/x-gzip
Size: 15622 bytes
Desc: not available
URL:
From vbart at nginx.com Mon Oct 10 11:58:36 2016
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 10 Oct 2016 14:58:36 +0300
Subject: 5s hangs with http2 and variable-based proxy_pass
In-Reply-To:
References:
Message-ID: <8585362.MJTgDEinSn@vbart-workstation>
On Monday 10 October 2016 12:34:18 Chris West wrote:
> If you enable http2, our proxy setup develops 5s hangs, under load.
> This happens from at least Chrome/linux, Firefox/linux and Edge/win10.
>
> Any suggestions on how to further diagnose this problem, or work out
> where this "5 second" number is coming from? Full reproduction config
> and debug logs are attached, but I don't understand the debug logs.
>
>
> This isn't always reproducible, but happens frequently. Changing
> browser, restarting nginx, ... doesn't cause it to be immediately
> reproducible.
>
[..]
> 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete
> pos:00007F536315501D end:00007F536315501D
> 2016/10/10 11:17:36 [debug] 4058#4058: *238 http upstream resolve:
> "/proxy/nettesto....?"
> 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 94.23.43.98
> 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to
> 2001:41d0:2:2c62::
[..]
Looks like the delay is created by your resolver (8.8.4.4 as set in your configuration).
Please, also check the documentation and don't use any public DNS in the resolver
directive: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
| To prevent DNS spoofing, it is recommended configuring DNS servers in a properly
| secured trusted local network.
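For example, a minimal sketch with a caching resolver on the same machine
(the address and cache time are illustrative):

resolver 127.0.0.1 valid=300s;

Note that nginx retransmits an unanswered resolver query after 5 seconds,
which is most likely where the pause you observe comes from.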
wbr, Valentin V. Bartenev
From chris.west at logicalglue.com Mon Oct 10 12:30:38 2016
From: chris.west at logicalglue.com (Chris West)
Date: Mon, 10 Oct 2016 13:30:38 +0100
Subject: 5s hangs with http2 and variable-based proxy_pass
In-Reply-To: <8585362.MJTgDEinSn@vbart-workstation>
References:
<8585362.MJTgDEinSn@vbart-workstation>
Message-ID:
You are correct, the DNS server (Google Public DNS) isn't responding
to the requests. I don't know if this is because the UDP packets are
getting lost due to the flood generated, or if it thinks it's an
attack.
Ramming dnsmasq in the middle fixes it, but I don't really understand
why, as the test only generates 26*2=52 requests, and dnsmasq is
supposed to have a default concurrency of 150. Both generate, as far
as I can see, identical dns packets. dnsmasq takes about 200ms to
transmit them, whereas nginx only takes about 30ms, maybe that's
sufficient.
At least this isn't something scarily wrong with the http2 support,
which was what was worrying me. Cheers!
On 10 October 2016 at 12:58, Valentin V. Bartenev wrote:
> On Monday 10 October 2016 12:34:18 Chris West wrote:
>> If you enable http2, our proxy setup develops 5s hangs, under load.
>> This happens from at least Chrome/linux, Firefox/linux and Edge/win10.
>>
>> Any suggestions on how to further diagnose this problem, or work out
>> where this "5 second" number is coming from? Full reproduction config
>> and debug logs are attached, but I don't understand the debug logs.
>>
>>
>> This isn't always reproducible, but happens frequently. Changing
>> browser, restarting nginx, ... doesn't cause it to be immediately
>> reproducible.
>>
> [..]
>> 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete
>> pos:00007F536315501D end:00007F536315501D
>> 2016/10/10 11:17:36 [debug] 4058#4058: *238 http upstream resolve:
>> "/proxy/nettesto....?"
>> 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 94.23.43.98
>> 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to
>> 2001:41d0:2:2c62::
> [..]
>
>
> Looks like the delay is created by your resolver (8.8.4.4 as set in your configuration).
> Please, also check the documentation and don't use any public DNS in the resolver
> directive: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
>
> | To prevent DNS spoofing, it is recommended configuring DNS servers in a properly
> | secured trusted local network.
>
> wbr, Valentin V. Bartenev
>
From francis at daoine.org Mon Oct 10 16:10:17 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 10 Oct 2016 17:10:17 +0100
Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and
Nginx
In-Reply-To:
References: <20161009194134.GX11677@daoine.org>
Message-ID: <20161010161017.GA11677@daoine.org>
On Mon, Oct 10, 2016 at 04:29:27AM -0400, mrast wrote:
Hi there,
> there were symlinks in website1.com and website3.com roots public
> directories for phpmyadmin - symlinked to /usr/share/phpmyadmin.
It's good that you found an answer that works for you.
> > if ($http_cookie ~* ?PHPSESSID"){
> >If that is a copy-paste of the config file, then it probably won't match
> >some things that you would want it to.
>
> Could you elaborate on this please if you have time?
? is not "
You probably want "PHPSESSID", like in the other server block.
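That is, something like:

if ($http_cookie ~* "PHPSESSID") {
    ...
}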
All the best,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Mon Oct 10 16:16:22 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 10 Oct 2016 17:16:22 +0100
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To:
References: <20161009160507.GW11677@daoine.org>
Message-ID: <20161010161622.GB11677@daoine.org>
On Mon, Oct 10, 2016 at 03:41:13AM -0400, yurai wrote:
Hi there,
> thank you for response. I just want to transfer big file on Nginx server
> inside POST request. I use method from:
> https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend
>
> Whole my analysis and expectations are based on this article.
I think that at least one of us is confused.
You have "client" - "nginx" - "backend".
That document is about getting a file from "client" to "nginx", and then
telling "backend" what filename is used on "nginx".
If "backend" wants to access the file, that is out of the scope of that
document. ("backend" gets the filename, and should presumably do a
separate "open" on the shared filesystem, or have a separate transfer
to be able to read the file.)
> Unfotunately this "clientbodyinfileonly" functionality is not well
> documented so I'm not sure how exactly ok scenario from Nginx POV should
> look like. I just know that my file is not transfered and not saved on
> server side.
The file should be transferred to the nginx server.
If that happens, the nginx side is doing what it was configured to do.
The functionality is documented at
http://nginx.org/r/client_body_in_file_only
Cheers,
f
--
Francis Daly francis at daoine.org
From dmiller at amfes.com Mon Oct 10 17:19:55 2016
From: dmiller at amfes.com (Daniel Miller)
Date: Mon, 10 Oct 2016 10:19:55 -0700
Subject: invalid url - my config or invalid request?
Message-ID:
My site is generally doing exactly what I want. Periodically I'll see
some errors in the log. I'm trying to determine if these indicate
problems in my config, or potential attacks, or simply a broken client.
The last few lines in my log:
2016/10/05 14:38:37 [error] 17912#0: *17824 invalid url, client:
195.154.181.113, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0"
2016/10/05 19:47:27 [error] 17912#0: *18315 invalid url, client:
169.56.71.56, server: amfes.com, request: "GET / HTTP/1.0"
2016/10/08 13:46:21 [error] 17910#0: *27413 invalid url, client:
212.83.162.138, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0"
2016/10/09 18:05:30 [error] 17912#0: *32588 invalid url, client:
211.1.156.90, server: amfes.com, request: "HEAD / HTTP/1.0"
Clients I control have no problem reaching the root or the robots.txt
file - so what is this telling me?
--
Daniel
From vbart at nginx.com Mon Oct 10 17:43:08 2016
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 10 Oct 2016 20:43:08 +0300
Subject: invalid url - my config or invalid request?
In-Reply-To:
References:
Message-ID: <2136505.JouzbVE9h5@vbart-workstation>
On Monday 10 October 2016 10:19:55 Daniel Miller wrote:
> My site is generally doing exactly what I want. Periodically I'll see
> some errors in the log. I'm trying to determine if these indicate
> problems in my config, or potential attacks, or simply a broken client.
>
> The last few lines in my log:
> 2016/10/05 14:38:37 [error] 17912#0: *17824 invalid url, client:
> 195.154.181.113, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0"
> 2016/10/05 19:47:27 [error] 17912#0: *18315 invalid url, client:
> 169.56.71.56, server: amfes.com, request: "GET / HTTP/1.0"
> 2016/10/08 13:46:21 [error] 17910#0: *27413 invalid url, client:
> 212.83.162.138, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0"
> 2016/10/09 18:05:30 [error] 17912#0: *32588 invalid url, client:
> 211.1.156.90, server: amfes.com, request: "HEAD / HTTP/1.0"
>
> Clients I control have no problem reaching the root or the robots.txt
> file - so what is this telling me?
>
The official nginx build cannot produce such messages. They likely come
from a 3rd-party module or patches you're using.
wbr, Valentin V. Bartenev
From nginx-forum at forum.nginx.org Mon Oct 10 19:14:33 2016
From: nginx-forum at forum.nginx.org (gg4u)
Date: Mon, 10 Oct 2016 15:14:33 -0400
Subject: cache all endpoints but one: nginx settings
In-Reply-To: <20161004162825.GM73038@mdounin.ru>
References: <20161004162825.GM73038@mdounin.ru>
Message-ID: <30c5aa0d4b5d2f7241d7f5b923c90f51.NginxMailingListEnglish@forum.nginx.org>
thank you Maxim,
I'll try your suggestions.
So basically, if I have a "production" server and a proxy server in front of
it, I just need caching on the proxy server
(http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid),
and should not cache responses on the production server?
I have this setting on production server:
# Set Cache for my Json api
location ~* \.(?:json)$ {
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
}
But if I am using a proxy server it is useless, correct?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270058,270167#msg-270167
From nginx-forum at forum.nginx.org Mon Oct 10 19:34:50 2016
From: nginx-forum at forum.nginx.org (gg4u)
Date: Mon, 10 Oct 2016 15:34:50 -0400
Subject: cache all endpoints but one: nginx settings
In-Reply-To: <30c5aa0d4b5d2f7241d7f5b923c90f51.NginxMailingListEnglish@forum.nginx.org>
References: <20161004162825.GM73038@mdounin.ru>
<30c5aa0d4b5d2f7241d7f5b923c90f51.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7abc6b22246e2e63b9408b56e7ad27bf.NginxMailingListEnglish@forum.nginx.org>
Update about Flask:
as you indicated in:
I am using the errorhandler decorator, but returning a template from the
handler function:

@application.errorhandler(404)
def error_404(e):
    application.logger.error('Page Not Found: %s', (request.path))
    #return render_template('404.html'), 404
    return render_template("404.html", error=str(e))

In this situation, it is not clear to me whether nginx will see a 200
response (since the template 404.html is actually found) or the 404 error
handled by the decorator.
Actually, with the suggestions from:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid
I can see:
/api/_invalidpage
returned header: X-Proxy-Cache: MISS and it is not cached now.
while
/_invalidpage/ (the page a user will see for that specific page)
returned X-Proxy-Cache: HIT
I would like to cache the html template but not the 404 api response.
I think it is correct now, but I would appreciate a clarification on how a
template is handled inside the error handler in Flask, to better understand
how things work.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270058,270168#msg-270168
From darbas.mindaugas at gmail.com Tue Oct 11 11:26:50 2016
From: darbas.mindaugas at gmail.com (Mindaugas Bernatavičius)
Date: Tue, 11 Oct 2016 11:26:50 +0000
Subject: Rate limiting zone size question
Message-ID:
Greetings group,
I have posted the same questions elsewhere; hope it's not against the policy.
One of the modules that is often employed, ngx_http_limit_req_module,
has the following precaution in the documentation:
"If the zone storage is exhausted, the server will return the 503
(Service Temporarily Unavailable) error to all further requests."
Questions:
----------------------------------------------------------------------
1. How exactly is the zone defined?
I know that the underlying data structure is a red-black tree,
but what comprises an entire zone record?
All the information needed for the rate limit?
2. I have multiple users on the website served by nginx,
and the zone size is 1m. How do I determine the lower
bound of the zone size for a given unique IP count?
3. After what time is the zone memory released?
If I have rate=1r/m, does that mean that all the records
have to be kept for 1 minute to do the accounting, then cleared
so that the memory in the zone can be reused?
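For reference, the kind of zone I mean is declared like this (the values
are illustrative, matching the numbers above):

limit_req_zone $binary_remote_addr zone=one:1m rate=1r/m;

server {
    location / {
        limit_req zone=one burst=5;
    }
}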
Some code considerations:
----------------------------------------------------------------------
Looking at ngx_http_limit_req_module.c, I saw only a configure-time error
being thrown when the zone size is specified incorrectly:

if (size < (ssize_t) (8 * ngx_pagesize)) {
    ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                       "zone \"%V\" is too small", &value[i]);
    return NGX_CONF_ERROR;
}

(8 * ngx_pagesize), if I'm not mistaken, is 8 * 4096 = 32768.
I confirmed experimentally that the smallest size is indeed 32768 bytes =
32KB.
----------------------------------------------------------------------
The function ngx_http_limit_req_lookup contains some interesting data:

static ngx_int_t
ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit,
    ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account)

    node = ngx_slab_alloc_locked(ctx->shpool, size);
    if (node == NULL) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0,
                      "could not allocate node%s", ctx->shpool->log_ctx);
        return NGX_ERROR;
    }

I suppose this is the error thrown when the zone size limit is reached?
Would really appreciate your help on this issue.
----------------------------------------------------------------------
Also, I calculated the size of each node of the rb tree and
it seems to comprise only 44 bytes:

struct ngx_rbtree_node_s {
    ngx_rbtree_key_t       key;      ===> 4 bytes
    ngx_rbtree_node_t     *left;     ===> 8 bytes (pointer size on 64bit)
    ngx_rbtree_node_t     *right;    ===> 8
    ngx_rbtree_node_t     *parent;   ===> 8
    u_char                 color;    ===> 8
    u_char                 data;     ===> 8
};
From mdounin at mdounin.ru Tue Oct 11 15:32:34 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 11 Oct 2016 18:32:34 +0300
Subject: nginx-1.11.5
Message-ID: <20161011153234.GC73038@mdounin.ru>
Changes with nginx 1.11.5 11 Oct 2016
*) Change: the --with-ipv6 configure option was removed, now IPv6
support is configured automatically.
*) Change: now if there are no available servers in an upstream, nginx
will not reset number of failures of all servers as it previously
did, but will wait for fail_timeout to expire.
*) Feature: the ngx_stream_ssl_preread_module.
*) Feature: the "server" directive in the "upstream" context supports
the "max_conns" parameter.
*) Feature: the --with-compat configure option.
*) Feature: "manager_files", "manager_threshold", and "manager_sleep"
parameters of the "proxy_cache_path", "fastcgi_cache_path",
"scgi_cache_path", and "uwsgi_cache_path" directives.
*) Bugfix: flags passed by the --with-ld-opt configure option were not
used while building perl module.
*) Bugfix: in the "add_after_body" directive when used with the
"sub_filter" directive.
*) Bugfix: in the $realip_remote_addr variable.
*) Bugfix: the "dav_access", "proxy_store_access",
"fastcgi_store_access", "scgi_store_access", and "uwsgi_store_access"
directives ignored permissions specified for user.
*) Bugfix: unix domain listen sockets might not be inherited during
binary upgrade on Linux.
*) Bugfix: nginx returned the 400 response on requests with the "-"
character in the HTTP method.
--
Maxim Dounin
http://nginx.org/
From kworthington at gmail.com Tue Oct 11 17:23:33 2016
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 11 Oct 2016 13:23:33 -0400
Subject: [nginx-announce] nginx-1.11.5
In-Reply-To: <20161011153240.GD73038@mdounin.ru>
References: <20161011153240.GD73038@mdounin.ru>
Message-ID:
Hello Nginx users,
Now available: Nginx 1.11.5 for Windows
https://kevinworthington.com/nginxwin1115 (32-bit and 64-bit versions)
These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/
Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/
On Tue, Oct 11, 2016 at 11:32 AM, Maxim Dounin wrote:
> Changes with nginx 1.11.5 11 Oct
> 2016
>
> *) Change: the --with-ipv6 configure option was removed, now IPv6
> support is configured automatically.
>
> *) Change: now if there are no available servers in an upstream, nginx
> will not reset number of failures of all servers as it previously
> did, but will wait for fail_timeout to expire.
>
> *) Feature: the ngx_stream_ssl_preread_module.
>
> *) Feature: the "server" directive in the "upstream" context supports
> the "max_conns" parameter.
>
> *) Feature: the --with-compat configure option.
>
> *) Feature: "manager_files", "manager_threshold", and "manager_sleep"
> parameters of the "proxy_cache_path", "fastcgi_cache_path",
> "scgi_cache_path", and "uwsgi_cache_path" directives.
>
> *) Bugfix: flags passed by the --with-ld-opt configure option were not
> used while building perl module.
>
> *) Bugfix: in the "add_after_body" directive when used with the
> "sub_filter" directive.
>
> *) Bugfix: in the $realip_remote_addr variable.
>
> *) Bugfix: the "dav_access", "proxy_store_access",
> "fastcgi_store_access", "scgi_store_access", and
> "uwsgi_store_access"
> directives ignored permissions specified for user.
>
> *) Bugfix: unix domain listen sockets might not be inherited during
> binary upgrade on Linux.
>
> *) Bugfix: nginx returned the 400 response on requests with the "-"
> character in the HTTP method.
>
>
> --
> Maxim Dounin
> http://nginx.org/
>
From alex at samad.com.au Wed Oct 12 01:43:12 2016
From: alex at samad.com.au (Alex Samad)
Date: Wed, 12 Oct 2016 12:43:12 +1100
Subject: newbie question
Message-ID:
Hi
I am trying to create a dynamic auth address
# grab ssoid
map $cookie_SSOID $ssoid_cookie {
    default "";
    ~SSOID=(?P<ssoid>.+) $ssoid;
}

location /imaadmin/ {
    proxy_cache off;
    proxy_pass http://IMAAdmin;

    auth_request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin;
What I am trying to do is fill the variable $ssoid_cookie with the cookie
value for SSOID in the request, or make it blank.
Then, when somebody tries to access /imaadmin, make the auth request
/sso/validate?SSOID=$ssoid_cookie&a=imaadmin;
but I get this:
GET /sso/validate%3FSSOID=$ssoid_cookie&a=imaadmin HTTP/1.0
Alex
From anoopalias01 at gmail.com Wed Oct 12 09:33:29 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Wed, 12 Oct 2016 15:03:29 +0530
Subject: [nginx-announce] nginx-1.11.5
In-Reply-To:
References: <20161011153240.GD73038@mdounin.ru>
Message-ID:
*) Feature: the --with-compat configure option.
What does this do actually?
On Tue, Oct 11, 2016 at 10:53 PM, Kevin Worthington
wrote:
> Hello Nginx users,
>
> Now available: Nginx 1.11.5 for Windows https://kevinworthington.com/
> nginxwin1115 (32-bit and 64-bit versions)
>
> These versions are to support legacy users who are already using Cygwin
> based builds of Nginx. Officially supported native Windows binaries are
> at nginx.org.
>
> Announcements are also available here:
> Twitter http://twitter.com/kworthington
> Google+ https://plus.google.com/+KevinWorthington/
>
> Thank you,
> Kevin
> --
> Kevin Worthington
> kworthington *@* (gmail] [dot} {com)
> http://kevinworthington.com/
> http://twitter.com/kworthington
> https://plus.google.com/+KevinWorthington/
>
> On Tue, Oct 11, 2016 at 11:32 AM, Maxim Dounin wrote:
>
>> Changes with nginx 1.11.5 11 Oct
>> 2016
>>
>> *) Change: the --with-ipv6 configure option was removed, now IPv6
>> support is configured automatically.
>>
>> *) Change: now if there are no available servers in an upstream, nginx
>> will not reset number of failures of all servers as it previously
>> did, but will wait for fail_timeout to expire.
>>
>> *) Feature: the ngx_stream_ssl_preread_module.
>>
>> *) Feature: the "server" directive in the "upstream" context supports
>> the "max_conns" parameter.
>>
>> *) Feature: the --with-compat configure option.
>>
>> *) Feature: "manager_files", "manager_threshold", and "manager_sleep"
>> parameters of the "proxy_cache_path", "fastcgi_cache_path",
>> "scgi_cache_path", and "uwsgi_cache_path" directives.
>>
>> *) Bugfix: flags passed by the --with-ld-opt configure option were not
>> used while building perl module.
>>
>> *) Bugfix: in the "add_after_body" directive when used with the
>> "sub_filter" directive.
>>
>> *) Bugfix: in the $realip_remote_addr variable.
>>
>> *) Bugfix: the "dav_access", "proxy_store_access",
>> "fastcgi_store_access", "scgi_store_access", and
>> "uwsgi_store_access"
>> directives ignored permissions specified for user.
>>
>> *) Bugfix: unix domain listen sockets might not be inherited during
>> binary upgrade on Linux.
>>
>> *) Bugfix: nginx returned the 400 response on requests with the "-"
>> character in the HTTP method.
>>
>>
>> --
>> Maxim Dounin
>> http://nginx.org/
>>
--
Anoop P Alias
From mat999 at gmail.com Wed Oct 12 10:01:24 2016
From: mat999 at gmail.com (Mathew Heard)
Date: Wed, 12 Oct 2016 21:01:24 +1100
Subject: CAP_NET_ADMIN
Message-ID:
Hi All,
I am stuck trying to get my nginx service which is launched via
SystemD to give CAP_NET_ADMIN to its workers (required for
IP_TRANSPARENT).
I have tried /etc/security/capability.conf & setcap. SystemD has the
permission whitelisted:
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
Any advice?
Regards,
Mathew
From mat999 at gmail.com Wed Oct 12 10:07:58 2016
From: mat999 at gmail.com (Mathew Heard)
Date: Wed, 12 Oct 2016 21:07:58 +1100
Subject: Fwd: CAP_NET_ADMIN
In-Reply-To:
References:
Message-ID:
I have also tried:
InheritableCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SETGID
CAP_SETUID CAP_SYS_RESOURCE
and various other options without avail.
---------- Forwarded message ----------
From: Mathew Heard
Date: Wed, Oct 12, 2016 at 9:01 PM
Subject: CAP_NET_ADMIN
To: nginx at nginx.org
Hi All,
I am stuck trying to get my nginx service which is launched via
SystemD to give CAP_NET_ADMIN to its workers (required for
IP_TRANSPARENT).
I have tried /etc/security/capability.conf & setcap. SystemD has the
permission whitelisted:
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
Any advice?
Regards,
Mathew
From nginx-forum at forum.nginx.org Wed Oct 12 10:28:47 2016
From: nginx-forum at forum.nginx.org (yurai)
Date: Wed, 12 Oct 2016 06:28:47 -0400
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <20161010161622.GB11677@daoine.org>
References: <20161010161622.GB11677@daoine.org>
Message-ID: <216fd4b10237c6458f868f35d5a7f2ac.NginxMailingListEnglish@forum.nginx.org>
Hello,
>"The file should be transferred to the nginx server."
This is the whole point.
With the current configuration, when I type curl --data-binary '@upload.txt'
http://localhost/upload, the file is NOT transferred from client to server
at all - "proxy_pass" is performed and I only get HTTP response 200.
When I change my configuration (by removing the whole backend configuration
(the s2 block) and all proxy_* directives from s1) and type the same command,
I get HTTP 405 Not Allowed or HTTP 301 Moved Permanently.
Let's ignore the size of the file in the body for a second. Maybe at this
moment the right question is: what should I do to make my above curl command
work?
Regards,
Dawid
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270196#msg-270196
From nginx-forum at forum.nginx.org Wed Oct 12 11:09:22 2016
From: nginx-forum at forum.nginx.org (netcana)
Date: Wed, 12 Oct 2016 07:09:22 -0400
Subject: Practical size limit of config files
In-Reply-To:
References:
Message-ID: <7bf878c21818ae77b8d586b187f34798.NginxMailingListEnglish@forum.nginx.org>
Same question here, but I think no one knows - that's why there is no reply.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270109,270197#msg-270197
From nginx-forum at forum.nginx.org Wed Oct 12 11:11:57 2016
From: nginx-forum at forum.nginx.org (netcana)
Date: Wed, 12 Oct 2016 07:11:57 -0400
Subject: URL is not pointing to https on iframe
In-Reply-To:
References:
Message-ID: <900484913fd476e79073886182b0898c.NginxMailingListEnglish@forum.nginx.org>
I wish you all the best for your project, but sorry, I have no clue about
your question.
Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270042,270198#msg-270198
From mdounin at mdounin.ru Wed Oct 12 13:52:14 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 12 Oct 2016 16:52:14 +0300
Subject: [nginx-announce] nginx-1.11.5
In-Reply-To:
References: <20161011153240.GD73038@mdounin.ru>
Message-ID: <20161012135214.GG73038@mdounin.ru>
Hello!
On Wed, Oct 12, 2016 at 03:03:29PM +0530, Anoop Alias wrote:
> *) Feature: the --with-compat configure option.
>
> What does this do actually?
This option enables dynamic modules compatibility, that is, it
ensures that appropriate fields in structures are present (or
appropriately-sized placeholders are added).
As a result, it is now possible to compile compatible dynamic
modules using a minimal set of configure arguments as long as main
nginx binary is compiled using --with-compat. Just
./configure --with-compat --add-dynamic-module=/path/to/module
should be enough to compile a binary compatible module.
Additionally, this option enables binary compatibility of dynamic
modules with our commercial product, NGINX Plus, and thus allows
one to compile and load custom modules into NGINX Plus.
Corresponding version of NGINX Plus is yet to be released though.
--
Maxim Dounin
http://nginx.org/
From mdounin at mdounin.ru Wed Oct 12 14:02:10 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 12 Oct 2016 17:02:10 +0300
Subject: newbie question
In-Reply-To:
References:
Message-ID: <20161012140210.GH73038@mdounin.ru>
Hello!
On Wed, Oct 12, 2016 at 12:43:12PM +1100, Alex Samad wrote:
> Hi
>
> I am trying to create a dynamic auth address
>
>
> # grab ssoid
> map $cookie_SSOID $ssoid_cookie {
> default "";
> ~SSOID=(?P<ssoid>.+) $ssoid;
> }
>
>
> location /imaadmin/ {
> proxy_cache off;
> proxy_pass http://IMAAdmin;
>
>
>
> auth_request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin;
>
>
> what I am trying to do is fill the variable ssoid_cookie with the
> cookie value for SSOID in the request or make it blank
>
> then when somebody tries to access /imaadmin make the auth request
> /sso/validate?SSOID=$ssoid_cookie&a=imaadmin;
>
> but i get this
> GET /sso/validate%3FSSOID=$ssoid_cookie&a=imaadmin HTTP/1.0
This is because the "auth_request" directive doesn't support
variables, and also doesn't support request arguments.
Try this instead:
location /imaadmin/ {
    auth_request /sso/validate;
    ... proxy_pass ...
}

location = /sso/validate {
    set $args SSOID=$ssoid_cookie&a=imaadmin;
    ... proxy_pass ...
}
--
Maxim Dounin
http://nginx.org/
From akshayaamohan05 at gmail.com Wed Oct 12 15:22:15 2016
From: akshayaamohan05 at gmail.com (AKSHAYAA MOHAN)
Date: Wed, 12 Oct 2016 20:52:15 +0530
Subject: Log response location headers when Nginx is used as reverse proxy
Message-ID:
Hi,
I have a usecase where I want to log the response location headers returned
by the upstream servers.
Is there a way I can do this without installing any third party tools?
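What I imagine is something along these lines (an untested sketch; I am
assuming the built-in $upstream_http_location variable and a hypothetical
"backend" upstream):

http {
    log_format upstream_loc '$remote_addr "$request" $status '
                            'location="$upstream_http_location"';

    server {
        listen 80;
        access_log /var/log/nginx/access.log upstream_loc;

        location / {
            proxy_pass http://backend;
        }
    }
}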
Regards
From francis at daoine.org Wed Oct 12 16:03:02 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 12 Oct 2016 17:03:02 +0100
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <216fd4b10237c6458f868f35d5a7f2ac.NginxMailingListEnglish@forum.nginx.org>
References: <20161010161622.GB11677@daoine.org>
<216fd4b10237c6458f868f35d5a7f2ac.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161012160302.GD11677@daoine.org>
On Wed, Oct 12, 2016 at 06:28:47AM -0400, yurai wrote:
Hi there,
> >"The file should be transferred to the nginx server."
>
> This is the whole point.
> With current configuration when I type curl --data-binary '@upload.txt'
> http://localhost/upload file is NOT transffered from client to server at all
> - "proxy_pass" is performed and I only get HTTP response 200.
There is the client.
There is the nginx server, that the client talks to.
There is the upstream back-end server, that nginx talks to.
Are you reporting that the content of the client-file upload.txt is not
saved on the nginx server that is localhost, in a numbered file below
your client_body_temp_path?
Or are you reporting that the content of the client-file upload.txt is
not transferred to the upstream back-end server?
There is more than one server involved. Please be very clear which one
you are referring to, when you refer to any.
> When I change my configuration (by removing whole backend configuration (s2
> block) and all proxy_* directives from s1) and type same command I get HTTP
> 405 Not Allowed or HTTP 301 Moved Permanently.
>
> Let's ignore for the second size of my file in body. Maybe in this moment
> the right question is: what should I do to make my above curl command work?
It works for me.
Perhaps I have a different idea of what "works" means.
f
--
Francis Daly francis at daoine.org
From thomas at glanzmann.de Wed Oct 12 17:50:06 2016
From: thomas at glanzmann.de (Thomas Glanzmann)
Date: Wed, 12 Oct 2016 19:50:06 +0200
Subject: Use ngx_stream_ssl_preread_module but also log client ip in
access.log for https requests
Message-ID: <20161012175006.GD5983@glanzmann.de>
Hello,
I would like to use ngx_stream_ssl_preread_module to multiplex a web
server, openvpn, and squid to one ip address and port. However I would
also like to keep the real client ip address in my http logs, is that
possible, if so how?
Cheers,
Thomas
From arut at nginx.com Wed Oct 12 18:06:58 2016
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 12 Oct 2016 21:06:58 +0300
Subject: Use ngx_stream_ssl_preread_module but also log client ip in
access.log for https requests
In-Reply-To: <20161012175006.GD5983@glanzmann.de>
References: <20161012175006.GD5983@glanzmann.de>
Message-ID: <20161012180658.GD52217@Romans-MacBook-Air.local>
Hi Thomas,
On Wed, Oct 12, 2016 at 07:50:06PM +0200, Thomas Glanzmann wrote:
> Hello,
> I would like to use ngx_stream_ssl_preread_module to multiplex a web
> server, openvpn, and squid to one ip address and port. However I would
> also like to keep the real client ip address in my http logs, is that
> possible, if so how?
You can enable the PROXY protocol for upstream connections.
But your backends must support it.
http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol
--
Roman Arutyunyan
From thomas at glanzmann.de Wed Oct 12 18:33:29 2016
From: thomas at glanzmann.de (Thomas Glanzmann)
Date: Wed, 12 Oct 2016 20:33:29 +0200
Subject: Use ngx_stream_ssl_preread_module but also log client ip in
access.log for https requests
In-Reply-To: <20161012180658.GD52217@Romans-MacBook-Air.local>
References: <20161012175006.GD5983@glanzmann.de>
<20161012180658.GD52217@Romans-MacBook-Air.local>
Message-ID: <20161012183329.GB12201@glanzmann.de>
Hello Roman,
* Roman Arutyunyan [2016-10-12 20:07]:
> On Wed, Oct 12, 2016 at 07:50:06PM +0200, Thomas Glanzmann wrote:
> > I would like to use ngx_stream_ssl_preread_module to multiplex a web
> > server, openvpn, and squid to one ip address and port. However I would
> > also like to keep the real client ip address in my http logs, is that
> > possible, if so how?
> You can enable the PROXY protocol for upstream connections.
> But your backends must support it.
> http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol
thanks a lot for the hint. It works like a charm. For others who want to do
the same, I did the following:
- configured nginx with --with-stream --with-stream_ssl_preread_module
- For https listened on stream:
stream {
    proxy_protocol on;

    upstream webserver {
        server 127.0.0.1:443;
    }

    map $ssl_preread_server_name $name {
        default webserver;
    }

    server {
        listen :443;
        proxy_pass $name;
        ssl_preread on;
    }
}
- In my http context, I added:
set_real_ip_from 127.0.0.1;
real_ip_header proxy_protocol;
- And in my https listen directives I put:
listen 127.0.0.1:443 ssl http2 proxy_protocol;
I didn't even have to modify the access_log log format because apparently
'real_ip_header proxy_protocol' takes care of that.
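(If one did want the address in a custom log format instead, there is also a
dedicated variable for the address carried in the PROXY protocol header; a
minimal sketch, with a made-up format name:

log_format proxied '$proxy_protocol_addr - [$time_local] "$request" $status';

$proxy_protocol_addr works independently of the realip module.)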
Cheers,
Thomas
From nginx-forum at forum.nginx.org Wed Oct 12 19:34:45 2016
From: nginx-forum at forum.nginx.org (yurai)
Date: Wed, 12 Oct 2016 15:34:45 -0400
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <20161012160302.GD11677@daoine.org>
References: <20161012160302.GD11677@daoine.org>
Message-ID: <4483dc9306924ed387ae3c4183e1cc23.NginxMailingListEnglish@forum.nginx.org>
Hi,
>Are you reporting that the content of the client-file upload.txt is not
>saved on the nginx server that is localhost, in a numbered file below
>your client_body_temp_path?
Yes. Exactly this. My /tmp/nginx-client-body directory is empty.
>There is more than one server involved. Please be very clear which one
>you are referring to, when you refer to any.
Please notice that in many places I try to be as precise as possible
by referring to s1 and s2. Sorry for the confusion.
By writing "server" I mean s1. By writing "backend" I mean s2.
Both server names come from the configuration file I placed at the
beginning of the discussion.
Regards,
Dawid
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270222#msg-270222
From nginx-forum at forum.nginx.org Wed Oct 12 19:44:39 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Wed, 12 Oct 2016 15:44:39 -0400
Subject: URL is not pointing to https on iframe
In-Reply-To:
References:
Message-ID: <52c40d5e12a504e8e0fb9a570061fb8f.NginxMailingListEnglish@forum.nginx.org>
geopcgeo Wrote:
-------------------------------------------------------
> fine on https. But please let me know whats the issue? Is it on Iframe
> or
> on Nginx. Can anyone please help us?
This needs to be fixed in the iframe (or whatever you use to generate this
iframe).
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270042,270224#msg-270224
From francis at daoine.org Wed Oct 12 21:44:43 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 12 Oct 2016 22:44:43 +0100
Subject: Clientbodyinfileonly - POST request is discarded
In-Reply-To: <4483dc9306924ed387ae3c4183e1cc23.NginxMailingListEnglish@forum.nginx.org>
References: <20161012160302.GD11677@daoine.org>
<4483dc9306924ed387ae3c4183e1cc23.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161012214443.GE11677@daoine.org>
On Wed, Oct 12, 2016 at 03:34:45PM -0400, yurai wrote:
Hi there,
> >Are you reporting that the content of the client-file upload.txt is not
> >saved on the nginx server that is localhost, in a numbered file below
> >your client_body_temp_path?
>
> Yes. Exactly this. My /tmp/nginx-client-body directory is empty.
Ok, that is unexpected to me.
I've read back over the mail thread, and there seem to be a few things
where it is not clear to me what exactly is happening.
Can you make a test nginx.conf that is very simple, in an attempt to
isolate where things are going wrong?
I use this:
==
events {}
http {
server {
listen 8008;
location = /upload {
client_body_temp_path /tmp/clientb;
client_body_in_file_only on;
proxy_set_header X-FILE $request_body_file;
proxy_pass http://127.0.0.1:8008/upstream;
}
location = /upstream {
return 200 "Look in $http_x_file\n";
}
}
}
==
and when I do
curl -v --data-binary words http://127.0.0.1:8008/upload
I see the POST with Content-Length: 5; I get a response of "Look in " and a
filename, and when I "ls -l" that filename I see that it is 5 bytes long.
I use /tmp/clientb as the client directory above; that directory did
not exist before I reloaded nginx, so nginx will create it with suitable
permissions.
When I then do
curl -v --data-binary @upload.txt http://127.0.0.1:8008/upload
I see the POST with Content-Length: 16; I get a response of "Look in "
and a different filename, and when I "cat" that filename I see the same
16-byte content as was in my original local upload.txt file.
When you do exactly that, do you see anything different?
Note that this is *not* exactly the same as your original case, because
it leaves out many of the config directives. In particular, this *does*
send the initial POST content to the upstream. That's ok; the point of
this is to find out why and where the initial set-up is broken. Other
bits can be added afterwards.
> >There is more than one server involved. Please be very clear which one
> >you are referring to, when you refer to any.
>
> Please notice that in many places I try to be as precise as possible
> by referring to s1 and s2. Sorry for the confusion.
Actually, I'm wrong there, sorry about that. I had got confused with a
separate mail; your mails were clear about the two server blocks on the
one nginx on localhost.
Thanks,
f
--
Francis Daly francis at daoine.org
From alex at samad.com.au Thu Oct 13 04:53:37 2016
From: alex at samad.com.au (Alex Samad)
Date: Thu, 13 Oct 2016 15:53:37 +1100
Subject: newbie question
In-Reply-To: <20161012140210.GH73038@mdounin.ru>
References:
<20161012140210.GH73038@mdounin.ru>
Message-ID:
Hi
Thanks
I ended up with this, but still with issues:
map $cookie_SSOID $ssoid_cookie {
default "";
~SSOID=(?P<ssoid>.+) $ssoid;
}
location /imaadmin/ {
proxy_cache off;
proxy_pass http://IMAAdmin;
auth_request /sso/validate;
# must use %20 for url encoding
set $sso_group "Staff-sso";
proxy_pass
error_page 401 = @error401;
location @error401 {
# return 302 https://$server_name/sso/login;
rewrite ^ https://$server_name/sso/login;
}
location /sso/validate {
proxy_cache off;
rewrite (.*) $1?SSOID=$cookie_ssoid&a=$sso_group? break;
proxy_set_header X-Original-URI $request_uri;
proxy_pass
location /sso/ {
proxy_cache off;
rewrite (.*) $1 break;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Original-URI "imaadmin"; # have to hard code
proxy_pass
So
http://abc.com.au/imaadmin
does a http://abc.com.au/sso/validate?SSOID=&a=
200 = okay
401 redirect to http://abc.com.au/sso/login
sso redirects to http://abc.com.au/sso/login/form/[X-Original-URI]
<<< it's failing here; I am hard-coding this
Thanks
On 13 October 2016 at 01:02, Maxim Dounin wrote:
> Hello!
>
> On Wed, Oct 12, 2016 at 12:43:12PM +1100, Alex Samad wrote:
>
>> Hi
>>
>> I am trying to create a dynamic auth address
>>
>>
>> # grab ssoid
>> map $cookie_SSOID $ssoid_cookie {
>> default "";
>> ~SSOID=(?P<ssoid>.+) $ssoid;
>> }
>>
>>
>> location /imaadmin/ {
>> proxy_cache off;
>> proxy_pass http://IMAAdmin;
>>
>>
>>
>> auth_request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin;
>>
>>
>> what I am trying to do is fill the variable ssoid_cookie with the
>> cookie value for SSOID in the request or make it blank
>>
>> then when somebody tries to access /imaadmin make the auth request
>> /sso/validate?SSOID=$ssoid_cookie&a=imaadmin;
>>
>> but i get this
>> GET /sso/validate%3FSSOID=$ssoid_cookie&a=imaadmin HTTP/1.0
>
> This is because the "auth_request" directive doesn't support
> variables, and also doesn't support request arguments.
>
> Try this instead:
>
> location /imaadmin/ {
> auth_request /sso/validate;
> ... proxy_pass ...
> }
>
> location = /sso/validate {
> set $args SSOID=$ssoid_cookie&a=imaadmin;
> ... proxy_pass ...
> }
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
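For completeness, a filled-in version of that sketch (the second proxy_pass
target is a hypothetical stand-in for the elided one; $cookie_SSOID is used
directly, since it already holds just the cookie's value; the last two
directives are the usual additions so the auth subrequest does not forward
the request body):

location /imaadmin/ {
    auth_request /sso/validate;
    proxy_pass http://IMAAdmin;
}

location = /sso/validate {
    set $args SSOID=$cookie_SSOID&a=imaadmin;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header Content-Length "";
    proxy_pass_request_body off;
    proxy_pass http://sso-backend;    # hypothetical SSO validator
}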
From zeal at freecharge.com Thu Oct 13 09:37:25 2016
From: zeal at freecharge.com (Zeal Vora)
Date: Thu, 13 Oct 2016 15:07:25 +0530
Subject: NGINX not checking OCSP for revoked certificates
Message-ID:
Hi
We've implemented basic Certificate Based Authentication for Nginx.
However, whenever the certificate is revoked, Nginx still allows the client
(with the revoked certificate) to access the website.
I verified manually with openssl against the OCSP URI, and OCSP seems to be
working properly. Nginx doesn't seem to be forwarding the request to OCSP
before allowing the client.
I tried to specify ssl_crl, but as soon as I put it in, all the clients
start to receive 400 Bad Request.
Here is my sample relevant Nginx Config :-
### SSL cert files ###
ssl_client_certificate /test/ca.crt;
ssl_verify_client optional;
ssl_crl /prod-adcs/latest.pem;
ssl_verify_depth 2;
Is there something that I'm missing here ?
Any help will be appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Thu Oct 13 09:40:21 2016
From: nginx-forum at forum.nginx.org (lancee83)
Date: Thu, 13 Oct 2016 05:40:21 -0400
Subject: Multiple proxy_cache_path location
Message-ID:
Hi All
I'm using nginx with Unified Streaming - I would like to have different
cache settings per channel. Is it possible to state different
proxy_cache_path parameters?
Thanks in advance
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270240,270240#msg-270240
From rainer at ultra-secure.de Thu Oct 13 10:25:44 2016
From: rainer at ultra-secure.de (rainer at ultra-secure.de)
Date: Thu, 13 Oct 2016 12:25:44 +0200
Subject: ocsp-stapling through http proxy?
Message-ID: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
Hi,
we have been informed by our CA that they will be moving their
OCSP-servers to "the cloud" - it was a fixed set of IPs before.
These fixed sets could relatively easily be entered as firewall rules
(and hosts-file entries, should DNS-resolution be unavailable).
Of course, they could just as easily be picked by script kiddies and
wannabe hackers as targets for a DDoS.
As such, I would need to allow outbound http-connections to the whole
internet, which is kind of exactly the opposite of what I want to do.
And that's ignoring for a moment the necessity to allow outbound DNS...
It would be cool if nginx would be able to do the stapling through a
http-proxy.
Rainer
From r at roze.lv Thu Oct 13 11:16:42 2016
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 13 Oct 2016 14:16:42 +0300
Subject: ocsp-stapling through http proxy?
In-Reply-To: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
Message-ID: <005c01d22543$46a3eb70$d3ebc250$@roze.lv>
> It would be cool if nginx would be able to do the stapling through a http-
> proxy.
Technically you could just "override" (via /etc/hosts, or if you have your own DNS service) your SSL provider's OCSP IP to point to your own proxy, which will then forward the requests to the original server.
p.s. in this case, though, a simple http proxy probably won't do, but tcp should work
rr
From rainer at ultra-secure.de Thu Oct 13 12:22:58 2016
From: rainer at ultra-secure.de (rainer at ultra-secure.de)
Date: Thu, 13 Oct 2016 14:22:58 +0200
Subject: ocsp-stapling through http proxy?
In-Reply-To: <005c01d22543$46a3eb70$d3ebc250$@roze.lv>
References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
<005c01d22543$46a3eb70$d3ebc250$@roze.lv>
Message-ID: <2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de>
On 2016-10-13 13:16, Reinis Rozitis wrote:
>> It would be cool if nginx would be able to do the stapling through a
>> http-
>> proxy.
>
> Technically you could just "override" (via /etc/hosts or if you have
> your own dns service) your ssl's provider ocsp ip to your own proxy
> which will forward then the requests to the original server.
You mean a transparent proxy?
In our case, this is not possible.
From mdounin at mdounin.ru Thu Oct 13 12:57:32 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 13 Oct 2016 15:57:32 +0300
Subject: NGINX not checking OCSP for revoked certificates
In-Reply-To:
References:
Message-ID: <20161013125732.GR73038@mdounin.ru>
Hello!
On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote:
> Hi
>
> We've implemented basic Certificate Based Authentication for Nginx.
>
> However whenever the certificate is revoked, Nginx still allows the client
> ( with revoked certificate ) to access the website.
>
> I verified manually with openssl with OCSP URI and OCSP seems to be working
> properly. Nginx doesn't seem to be forwarding request to OCSP before
> allowing client.
That's because nginx doesn't support OCSP validation of client
certificates. Use CRLs instead.
> I tried to specify the ssl_crl but as soon as I put it, all the clients
> starts to receive 400 Bad Request.
>
> Here is my sample relevant Nginx Config :-
>
>
> ### SSL cert files ###
>
> ssl_client_certificate /test/ca.crt;
> ssl_verify_client optional;
>
> ssl_crl /prod-adcs/latest.pem;
> ssl_verify_depth 2;
>
>
> Is there something that I'm missing here ?
Your error log should have details. Given you are using verify
depth set to 2, most likely there is no CRL for the root
certificate itself, and that's why nginx is complaining.
--
Maxim Dounin
http://nginx.org/
From mdounin at mdounin.ru Thu Oct 13 13:34:14 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 13 Oct 2016 16:34:14 +0300
Subject: ocsp-stapling through http proxy?
In-Reply-To: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
Message-ID: <20161013133414.GT73038@mdounin.ru>
Hello!
On Thu, Oct 13, 2016 at 12:25:44PM +0200, rainer at ultra-secure.de wrote:
> Hi,
>
> we have been informed by our CA that they will be moving their OCSP-servers
> to "the cloud" - it was a fixed set of IPs before.
> These fixed sets could relatively easily be entered as firewall rules (and
> hosts-file entries, should DNS-resolution be unavailable).
> Of course, they could as easily be targeted by Script-Kiddies and
> Wannabe-Hackers as targets for a DDoS.
>
> As such, I would need to allow outbound http-connections to the whole
> internet, which is kind of exactly the opposite of what I want to do.
> And that's ignoring for a moment the necessity to allow outbound DNS...
>
> It would be cool if nginx would be able to do the stapling through a
> http-proxy.
OCSP stapling allows you to:
- provide your own file to staple using the ssl_stapling_file
directive. It doesn't matter to nginx how the file was
obtained. You can even update it by hand. It might be
relatively straightforward to configure an automatic updating
process, though. See http://nginx.org/r/ssl_stapling_file for details.
- use an explicitly configured OCSP responder with the
ssl_stapling_responder directive. It allows you to configure your
own OCSP responder at a fixed address, and then proxy requests to
the real responder. See http://nginx.org/r/ssl_stapling_responder
for details.
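For illustration, a minimal sketch of both options (file paths and URLs are
made-up placeholders):

ssl_stapling on;

# Option 1: staple a pre-fetched response; such a file can be produced
# with something like
#   openssl ocsp -issuer chain.pem -cert site.pem \
#       -url http://ocsp.example-ca.com -respout /etc/nginx/ocsp/staple.der
ssl_stapling_file /etc/nginx/ocsp/staple.der;

# Option 2 (instead of the file): query a fixed responder of your own
# that forwards to the real one
#ssl_stapling_responder http://ocsp-proxy.internal/;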
--
Maxim Dounin
http://nginx.org/
From r at roze.lv Thu Oct 13 14:13:20 2016
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 13 Oct 2016 17:13:20 +0300
Subject: ocsp-stapling through http proxy?
In-Reply-To: <2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de>
References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
<005c01d22543$46a3eb70$d3ebc250$@roze.lv>
<2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de>
Message-ID:
> You mean a transparent proxy?
> In our case, this is not possible.
It's not really transparent.
As far as I understand, you have a problem with opening outgoing traffic to
_random_ destinations but you are fine if such traffic is pushed through some
proxy server (which in general means that the proxy server will anyway have
outgoing access to "everywhere").
So while there is no http proxy support for such things in nginx (in
Apache, as a workaround, you can override the responder's URL:
https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslstaplingforceurl),
what you could do is just force the OCSP responder's host to resolve to your
proxy (no other traffic has to be altered), which then forwards the request
to the original responder.
The proxy could as well be another nginx instance (the problem is just that
nginx (besides the commercial nginx+) doesn't resolve backend hostnames on
the fly, without some workarounds, but only on startup).
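A rough sketch of such a forwarding instance (hostname and responder address
are made up; a literal IP in proxy_pass sidesteps the startup-only name
resolution just mentioned):

server {
    listen 80;
    # the OCSP hostname that you force to resolve to this box
    server_name ocsp.example-ca.com;

    location / {
        proxy_pass http://203.0.113.10;    # the responder's real address
        proxy_set_header Host ocsp.example-ca.com;
    }
}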
But in the end, do you really need it?
Even in the "cloud" the IPs shouldn't change too often (if they do, maybe
it's worth looking for another SSL provider?). Also, there is no hard
failure if the stapling suddenly doesn't happen server-side; just monitor
it, and when the resolution changes (or nginx starts to complain), alter
your firewall rules.
p.s. I haven't done the "proxy part", but at one time there were problems
with GoDaddy's European OCSP responders, so I did the DNS thingy and forced
ocsp.godaddy.com to resolve to US IPs and it worked fine.
rr
From r at roze.lv Thu Oct 13 14:15:36 2016
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 13 Oct 2016 17:15:36 +0300
Subject: ocsp-stapling through http proxy?
In-Reply-To: <20161013133414.GT73038@mdounin.ru>
References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
<20161013133414.GT73038@mdounin.ru>
Message-ID:
>- use an explicitly configured OCSP responder with the
> ssl_stapling_responder directive. It allows you to configure your
> own OCSP responder at a fixed address, and then proxy requests to
> the real responder. See http://nginx.org/r/ssl_stapling_responder
> for details.
Ohh, I totally overlooked this setting... and it has been there since 1.3.7.
Apparently I need to reread the documentation more often.
rr
From rainer at ultra-secure.de Thu Oct 13 14:45:01 2016
From: rainer at ultra-secure.de (rainer at ultra-secure.de)
Date: Thu, 13 Oct 2016 16:45:01 +0200
Subject: ocsp-stapling through http proxy?
In-Reply-To:
References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de>
<005c01d22543$46a3eb70$d3ebc250$@roze.lv>
<2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de>
Message-ID: <9c1c11949268d7f2b2d0a44b1bc6381f@ultra-secure.de>
On 2016-10-13 16:13, Reinis Rozitis wrote:
>> You mean a transparent proxy?
>> In our case, this is not possible.
>
> It's not really transparent.
>
> As far as I understand, you have a problem with opening outgoing
> traffic to _random_ destinations but you are fine if such traffic is
> pushed through some proxy server (which in general means that the
> proxy server will anyway have outgoing access to "everywhere")
Yes, but the OCSP URL is known and doesn't change.
And the proxy has a very limited set of URLs it can access.
As such, this is much better than opening up "*".
> So while there is no http proxy support for such things in nginx (in
> Apache, as a workaround, you can override the responder's URL:
> https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslstaplingforceurl),
> what you could do is just force the OCSP responder's host to resolve
> to your proxy (no other traffic has to be altered), which then forwards
> the request to the original responder.
I will have to try this.
> The proxy could as well be another nginx instance (the problem is just
> that nginx (besides the commercial nginx+) doesn't resolve backend
> hostnames on the fly, without some workarounds, but only on startup).
>
>
>
> But in the end, do you really need it?
>
> Even in the "cloud" the IPs shouldn't change too often (if they do,
> maybe it's worth looking for another SSL provider?). Also, there is no
> hard failure if the stapling suddenly doesn't happen server-side; just
> monitor it, and when the resolution changes (or nginx starts to
> complain), alter your firewall rules.
I have a lot of these proxies.
Also TTLs on these records are notoriously short and I have no idea what
scheme our CA has chosen for running these boxes.
As I know a bit about the CA software they use, my guess would also be
that these servers are going to be relatively stable.
Changing to a different CA is not an option, either - and not my call
anyway...
> p.s. I haven't done the "proxy part", but at one time there were
> problems with GoDaddy's European OCSP responders, so I did the DNS
> thingy and forced ocsp.godaddy.com to resolve to US IPs and it
> worked fine.
I generally try to avoid hosts-file entries. They are a source of hassle
and confusion.
The only exception is when you need to point a server to itself and the
public IP the name resolves to is different (because: NAT) than the IP
the server is running on. Then I do create 127.0.0.1 entries in the
hosts-file.
Thanks for your input.
Rainer
From shahzaib.cb at gmail.com Thu Oct 13 17:39:18 2016
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Thu, 13 Oct 2016 22:39:18 +0500
Subject: Slow uploading speed !!
Message-ID:
Hi,
We're facing quite slow uploading speed on FreeBSD-10.X over HTTP (NGINX).
Hardware is quite strong with 4x1Gbps LACP / 65G RAM / 12x3TB SATA .
There's not much load on the HDDs, so I suspect that maybe the TCP tuning
has some problem. Here is my sysctl.conf:
http://pastebin.com/MqNbD3VR
Here is /boot/loader.conf :
http://pastebin.com/WrW3ceVF
I'd also like to mention that TSO (-tso) is disabled on all interfaces.
Regarding the upload mechanism:
- client uploads the video with HTTP POST request on a file name
uploader.php
- uploading starts
Are there some NGINX variables we can tweak for POST requests? Currently the
relevant one looks to be fastcgi_buffers. Here is nginx.conf:
http://pastebin.com/ek7TCJha
Thanks in advance !!
Regards.
Shahzaib
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From r at roze.lv Thu Oct 13 19:44:18 2016
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 13 Oct 2016 22:44:18 +0300
Subject: Slow uploading speed !!
In-Reply-To:
References:
Message-ID: <00fd01d2258a$30114b40$9033e1c0$@roze.lv>
> We're facing quite slow uploading speed on FreeBSD-10.X over HTTP (NGINX).
How slow is "slow"?
As in you didn't provide any metrics.
> There's not much load on HDDs so i suspect that maybe tcp tuning has some problem.
Well you could simply transfer a file via scp (-c arcfour) or netcat to see if the bottleneck is network/tcp.
> Is there some NGINX variables which we can tweak for POST request ? Currently the relevant one looks to be fastcgi_buffers . Here is nginx.conf :
> http://pastebin.com/ek7TCJha
Fastcgi_buffers don't affect upload bandwidth.
But client_body_buffer_size 4096M; seems a bit extreme to me, since even with 65G of RAM you would be able to have only ~16 simultaneous uploads (if all upload ~4G at the same time) - quite a possible DoS factor.
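A less RAM-hungry sketch (paths and limits are illustrative): keep the buffer
modest and let anything larger spill to temp files on disk, which nginx does
automatically once the buffer is exceeded:

client_body_buffer_size 1m;
client_body_temp_path /var/spool/nginx/client_body 1 2;
client_max_body_size 4g;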
rr
From zeal at freecharge.com Fri Oct 14 05:49:27 2016
From: zeal at freecharge.com (Zeal Vora)
Date: Fri, 14 Oct 2016 11:19:27 +0530
Subject: NGINX not checking OCSP for revoked certificates
In-Reply-To: <20161013125732.GR73038@mdounin.ru>
References:
<20161013125732.GR73038@mdounin.ru>
Message-ID:
Thanks Maxim.
I tried changing the ssl_verify_depth from 2 to 1; however, I still get
400 Bad Request for all the certificates (valid and revoked).
I checked the error_log file; there are no entries in that file. It all
works when I remove the ssl_crl option (however, then revoked certificates
are allowed).
Just for a bit more info: I downloaded the CRL from ADCS in the form of
test.crl, which I converted to .pem format with openssl.
On Thu, Oct 13, 2016 at 6:27 PM, Maxim Dounin wrote:
> Hello!
>
> On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote:
>
> > Hi
> >
> > We've implemented basic Certificate Based Authentication for Nginx.
> >
> > However whenever the certificate is revoked, Nginx still allows the
> client
> > ( with revoked certificate ) to access the website.
> >
> > I verified manually with openssl with OCSP URI and OCSP seems to be
> working
> > properly. Nginx doesn't seem to be forwarding request to OCSP before
> > allowing client.
>
> That's because nginx doesn't support OCSP validation of client
> certificates. Use CRLs instead.
>
> > I tried to specify the ssl_crl but as soon as I put it, all the clients
> > starts to receive 400 Bad Request.
> >
> > Here is my sample relevant Nginx Config :-
> >
> >
> > ### SSL cert files ###
> >
> > ssl_client_certificate /test/ca.crt;
> > ssl_verify_client optional;
> >
> > ssl_crl /prod-adcs/latest.pem;
> > ssl_verify_depth 2;
> >
> >
> > Is there something that I'm missing here ?
>
> Your error log should have details. Given you are using verify
> depth set to 2, most likely there is no CRL for the root
> certificate itself, and that's why nginx is complaining.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From alex at samad.com.au Fri Oct 14 08:50:35 2016
From: alex at samad.com.au (Alex Samad)
Date: Fri, 14 Oct 2016 19:50:35 +1100
Subject: NGINX not checking OCSP for revoked certificates
In-Reply-To:
References:
<20161013125732.GR73038@mdounin.ru>
Message-ID:
What I had to do was set the depth to the number of CAs or greater, and
I had to get the CRLs for each CA and concatenate them into one CRL
file.
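A sketch of that preparation (file names are placeholders):

# convert each CA's CRL from DER to PEM, then concatenate them
openssl crl -inform DER -in root.crl -out root-crl.pem
openssl crl -inform DER -in issuing.crl -out issuing-crl.pem
cat root-crl.pem issuing-crl.pem > /etc/nginx/crls.pem

and on the nginx side:

ssl_crl /etc/nginx/crls.pem;
ssl_verify_depth 2;    # at least the number of CAs in the chain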
On 14 October 2016 at 16:49, Zeal Vora wrote:
> Thanks Maxim.
>
> I tried changing the ssl_verify_depth to 1 from value of 2 however still I
> get 400 Bad Request for all the certificates ( Valid and Revoked ).
>
> I checked the error_log file, there are no entries in that file. It all
> works when I remove the ssl_crl option ( however then revoked certificates
> are allowed ).
>
> Just for bit more info, I downloaded the CRL from ADCS which is in form of
> test.crl which I convert it to .pem format with openssl.
>
>
>
>
> On Thu, Oct 13, 2016 at 6:27 PM, Maxim Dounin wrote:
>>
>> Hello!
>>
>> On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote:
>>
>> > Hi
>> >
>> > We've implemented basic Certificate Based Authentication for Nginx.
>> >
>> > However whenever the certificate is revoked, Nginx still allows the
>> > client
>> > ( with revoked certificate ) to access the website.
>> >
>> > I verified manually with openssl with OCSP URI and OCSP seems to be
>> > working
>> > properly. Nginx doesn't seem to be forwarding request to OCSP before
>> > allowing client.
>>
>> That's because nginx doesn't support OCSP validation of client
>> certificates. Use CRLs instead.
>>
>> > I tried to specify the ssl_crl but as soon as I put it, all the clients
>> > starts to receive 400 Bad Request.
>> >
>> > Here is my sample relevant Nginx Config :-
>> >
>> >
>> > ### SSL cert files ###
>> >
>> > ssl_client_certificate /test/ca.crt;
>> > ssl_verify_client optional;
>> >
>> > ssl_crl /prod-adcs/latest.pem;
>> > ssl_verify_depth 2;
>> >
>> >
>> > Is there something that I'm missing here ?
>>
>> Your error log should have details. Given you are using verify
>> depth set to 2, most likely there is no CRL for the root
>> certificate itself, and that's why nginx is complaining.
>>
>> --
>> Maxim Dounin
>> http://nginx.org/
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From zeal at freecharge.com Fri Oct 14 10:02:05 2016
From: zeal at freecharge.com (Zeal Vora)
Date: Fri, 14 Oct 2016 15:32:05 +0530
Subject: NGINX not checking OCSP for revoked certificates
In-Reply-To:
References:
<20161013125732.GR73038@mdounin.ru>
Message-ID:
Oh. We have just one root CA, and I downloaded the CRL file for that CA and
used it in nginx. The depth is also 1. As soon as I put the CRL config in
nginx, all requests lead to HTTP 400 Bad Request.
On Fri, Oct 14, 2016 at 2:20 PM, Alex Samad wrote:
> What I had to do was set the depth to the number of CAs or greater, and
> I had to get the CRLs for each CA and concatenate them into one CRL
> file.
>
>
>
> On 14 October 2016 at 16:49, Zeal Vora wrote:
> > Thanks Maxim.
> >
> > I tried changing the ssl_verify_depth to 1 from value of 2 however still
> I
> > get 400 Bad Request for all the certificates ( Valid and Revoked ).
> >
> > I checked the error_log file, there are no entries in that file. It all
> > works when I remove the ssl_crl option ( however then revoked
> certificates
> > are allowed ).
> >
> > Just for bit more info, I downloaded the CRL from ADCS which is in form
> of
> > test.crl which I convert it to .pem format with openssl.
> >
> >
> >
> >
> > On Thu, Oct 13, 2016 at 6:27 PM, Maxim Dounin
> wrote:
> >>
> >> Hello!
> >>
> >> On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote:
> >>
> >> > Hi
> >> >
> >> > We've implemented basic Certificate Based Authentication for Nginx.
> >> >
> >> > However whenever the certificate is revoked, Nginx still allows the
> >> > client
> >> > ( with revoked certificate ) to access the website.
> >> >
> >> > I verified manually with openssl with OCSP URI and OCSP seems to be
> >> > working
> >> > properly. Nginx doesn't seem to be forwarding request to OCSP before
> >> > allowing client.
> >>
> >> That's because nginx doesn't support OCSP validation of client
> >> certificates. Use CRLs instead.
> >>
> >> > I tried to specify the ssl_crl but as soon as I put it, all the
> clients
> >> > starts to receive 400 Bad Request.
> >> >
> >> > Here is my sample relevant Nginx Config :-
> >> >
> >> >
> >> > ### SSL cert files ###
> >> >
> >> > ssl_client_certificate /test/ca.crt;
> >> > ssl_verify_client optional;
> >> >
> >> > ssl_crl /prod-adcs/latest.pem;
> >> > ssl_verify_depth 2;
> >> >
> >> >
> >> > Is there something that I'm missing here ?
> >>
> >> Your error log should have details. Given you are using verify
> >> depth set to 2, most likely there is no CRL for the root
> >> certificate itself, and that's why nginx is complaining.
> >>
> >> --
> >> Maxim Dounin
> >> http://nginx.org/
> >>
> >> _______________________________________________
> >> nginx mailing list
> >> nginx at nginx.org
> >> http://mailman.nginx.org/mailman/listinfo/nginx
> >
> >
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Fri Oct 14 10:53:48 2016
From: nginx-forum at forum.nginx.org (avk)
Date: Fri, 14 Oct 2016 06:53:48 -0400
Subject: proxy_pass for subfolders
Message-ID: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
Hi! Can you help? How do I use proxy_pass (or other methods) to proxy
subfolder requests? Example: site1.ltd/folder1 -> proxy to site2.ltd/app
Thx!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270269#msg-270269
From black.fledermaus at arcor.de Fri Oct 14 11:51:09 2016
From: black.fledermaus at arcor.de (basti)
Date: Fri, 14 Oct 2016 13:51:09 +0200
Subject: proxy_pass for subfolders
In-Reply-To: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
References: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>
Hello,
try something like
location /folder1/ {
rewrite /folder1/(.*)$ /app/$1 break;
proxy_pass http://site2.ltd;
proxy_redirect off;
# this is only for logging on site2, to see the IP of the user and not
# the IP of the proxy server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
}
Best Regards,
Basti
p.s. I use this to proxy a wordpress site
On 14.10.2016 12:53, avk wrote:
> Hi! Can you help? How do I use proxy_pass (or other methods) to proxy
> subfolder requests? Example: site1.ltd/folder1 -> proxy to site2.ltd/app
> Thx!
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270269#msg-270269
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From nginx-forum at forum.nginx.org Fri Oct 14 14:28:41 2016
From: nginx-forum at forum.nginx.org (avk)
Date: Fri, 14 Oct 2016 10:28:41 -0400
Subject: proxy_pass for subfolders
In-Reply-To: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>
References: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>
Message-ID: <4e29340607eab0e846575690eedf889d.NginxMailingListEnglish@forum.nginx.org>
Thank you for the reply, but it's not working.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270278#msg-270278
From francis at daoine.org Fri Oct 14 14:56:55 2016
From: francis at daoine.org (Francis Daly)
Date: Fri, 14 Oct 2016 15:56:55 +0100
Subject: proxy_pass for subfolders
In-Reply-To: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
References: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161014145655.GG11677@daoine.org>
On Fri, Oct 14, 2016 at 06:53:48AM -0400, avk wrote:
Hi there,
> Hi! Can you help? How do I use proxy_pass (or other methods) to proxy
> subfolder requests? Example: site1.ltd/folder1 -> proxy to site2.ltd/app
http://nginx.org/r/proxy_pass
Possibly the first bullet point after "A request URI is passed to the
server as follows:" is relevant?
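In other words, an untested sketch for the example given:

location /folder1/ {
    # with a URI part in proxy_pass, the matched /folder1/ prefix is
    # replaced, so /folder1/x is requested from site2.ltd as /app/x
    proxy_pass http://site2.ltd/app/;
}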
Cheers,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Fri Oct 14 16:11:47 2016
From: nginx-forum at forum.nginx.org (mrast)
Date: Fri, 14 Oct 2016 12:11:47 -0400
Subject: Fastcgi_cache only caching 1 website
Message-ID: <2ca03652d7d5fa430e1dd04c981255a5.NginxMailingListEnglish@forum.nginx.org>
Hi,
I'm relatively new to the Linux world but am learning bloody quick (you have
to, it's unforgiving! :) )
I am setting up a new web server and I'm nearly ready to go live, but I
can't iron out one last issue - and that's that I have multiple wordpress
websites set up. Each wordpress website has its own install and installation
directory and a separate database.
I have configured nginx with the fastcgi_cache module and it works - but
only for the very first website I set up on the server. Every subsequent
website gets nothing cached.
Running nginx/php7 on Ubuntu Server 16.04
Here is my nginx/nginx.conf file
user www-data;
worker_processes 1;
worker_rlimit_nofile 100000;
pid /run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
server_tokens off;
reset_timedout_connection on;
# add_header X-Powered-By "EasyEngine";
add_header rt-Fastcgi-Cache $upstream_cache_status;
# Limit Request
limit_req_status 403;
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
# Proxy Settings
# set_real_ip_from proxy-server-ip;
# real_ip_header X-Forwarded-For;
fastcgi_read_timeout 300;
client_max_body_size 100m;
##
# SSL Settings
##
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
ssl_ciphers
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
##
# Basic Settings
##
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Log format Settings
log_format rt_cache '$remote_addr $upstream_response_time $upstream_cache_status [$time_local] '
'$http_host "$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 2;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/plain
text/x-component
text/xml
text/javascript;
##
# Cache Settings
##
add_header Fastcgi-Cache $upstream_cache_status;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80 default_server;
server_name _;
return 444;
}
}
Here is the cache working websites config
fastcgi_cache_path /var/www/html/1stwebsite.com/cache levels=1:2
keys_zone=1stwebsite.com:100m inactive=60m;
server {
server_name 1stwebsite.com www.1stwebsite.com;
access_log /var/www/html/1stwebsite.com/logs/access.log;
error_log /var/www/html/1stwebsite.com/logs/error.log;
root /var/www/html/1stwebsite.com/public/;
index index.php index.html index.htm;
set $skip_cache 0;
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
if ($request_uri ~*
"/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml")
{
set $skip_cache 1;
}
if ($http_cookie ~*
"comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in")
{
set $skip_cache 1;
}
if ($http_cookie ~* "PHPSESSID"){
set $skip_cache 1;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location /phpmyadmin {
auth_basic "Admin Login";
auth_basic_user_file /etc/nginx/allow_phpmyadmin;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache 1stwebsite.com;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge 1stwebsite.com "$scheme$request_method$host$1";
}
}
Here is 1 of the non working cache websites config
fastcgi_cache_path /var/www/html/2ndwebiste.co.uk/cache levels=1:2
keys_zone=2ndwebiste.co.uk:100m inactive=60m;
server {
server_name 2ndwebiste.co.uk www.2ndwebiste.co.uk;
access_log /var/www/html/2ndwebiste.co.uk/logs/access.log;
error_log /var/www/html/2ndwebiste.co.uk/logs/error.log;
root /var/www/html/2ndwebiste.co.uk/public/;
index index.php index.html index.htm;
set $skip_cache 0;
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
if ($request_uri ~*
"/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml")
{
set $skip_cache 1;
}
if ($http_cookie ~*
"comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in")
{
set $skip_cache 1;
}
if ($http_cookie ~* "PHPSESSID"){
set $skip_cache 1;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location /phpmyadmin {
auth_basic "Admin Login";
auth_basic_user_file /etc/nginx/allow_phpmyadmin;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache 2ndwebiste.co.uk;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge 2ndwebiste.co.uk "$scheme$request_method$host$1";
}
}
I think it's to do with the very top line of both config files?
fastcgi_cache_path /var/www/html/2ndwebiste.co.uk/cache levels=1:2
keys_zone=2ndwebiste.co.uk:100m inactive=60m;
Does this need to be in the main nginx.conf file and not in each individual
website config?
If so - am I not meant to have a cache folder for each individual website;
should there just be one central cache folder for all websites?
I thought the "keys_zone" parameter needs to be individual for each website,
and thus created a separate cache location for each website hosted.
Thanks to anybody who can walk with me over the finishing line
Regards
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270284,270284#msg-270284
From black.fledermaus at arcor.de Fri Oct 14 17:31:24 2016
From: black.fledermaus at arcor.de (basti)
Date: Fri, 14 Oct 2016 19:31:24 +0200
Subject: proxy_pass for subfolders
In-Reply-To: <4e29340607eab0e846575690eedf889d.NginxMailingListEnglish@forum.nginx.org>
References: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>
<4e29340607eab0e846575690eedf889d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <518ea390-ef5f-3995-a0b3-55120d702b2b@arcor.de>
Sorry, "not working" is not an error message, so nobody can help you.
Perhaps you *should* edit this example?
What have you tried?
What is the error? What's in the access/error logs (srv1 and srv2)?
On 14.10.2016 16:28, avk wrote:
> Thank you for the reply, but it's not working.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270278#msg-270278
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From francis at daoine.org Sat Oct 15 08:22:04 2016
From: francis at daoine.org (Francis Daly)
Date: Sat, 15 Oct 2016 09:22:04 +0100
Subject: Multiple proxy_cache_path location
In-Reply-To:
References:
Message-ID: <20161015082204.GH11677@daoine.org>
On Thu, Oct 13, 2016 at 05:40:21AM -0400, lancee83 wrote:
Hi there,
> I'm using nginx with Unified Streaming - I would like to have different
> cache settings per channel. Is it possible to state different
> proxy_cache_path parameters?
I think that you can have multiple proxy_cache_path directives with
different parameters, each with their own path and zone.
And then you can use a proxy_cache with a different zone, in different
locations.
So for different parameters per channel, you want different location{}s
per channel.
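For example, a sketch (paths, sizes and the upstream name are placeholders;
the proxy_cache_path lines belong in the http{} context):

proxy_cache_path /var/cache/nginx/ch1 keys_zone=ch1:10m inactive=10m;
proxy_cache_path /var/cache/nginx/ch2 keys_zone=ch2:10m inactive=12h;

server {
    location /channel1/ {
        proxy_cache ch1;
        proxy_pass http://origin;
    }
    location /channel2/ {
        proxy_cache ch2;
        proxy_pass http://origin;
    }
}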
f
--
Francis Daly francis at daoine.org
From JEDC at ramboll.com Sat Oct 15 12:18:11 2016
From: JEDC at ramboll.com (Jens Dueholm Christensen)
Date: Sat, 15 Oct 2016 12:18:11 +0000
Subject: Static or dynamic content
In-Reply-To: <20160930105450.GG11677@daoine.org>
References:
<20160929220228.GF11677@daoine.org>
<20160930105450.GG11677@daoine.org>
Message-ID:
On Friday, September 30, 2016 12:55 AM Francis Daly wrote,
>> No, I have an "error_page 503" and a similar one for 404 that points to two named locations, but that's it.
> That might matter.
> I can now get a 503, 404, or 405 result from nginx, when upstream sends a 503.
[...]
> Now make /tmp/x exist, and /tmp/y not exist.
>
> A GET request for /x is proxied, gets a 503, and returns the content of /tmp/x with a 503 status.
>
> A GET request for /y is proxied, gets a 503, and returns a 404 status.
>
> A POST request for /x is proxied, gets a 503, and returns a 405 status.
>
> A POST request for /y is proxied, gets a 503, and returns a 404 status.
>
> Since you also have an error_page for 404, perhaps that does something that leads to the output that you see.
>
> I suspect that when you show your error_page config and the relevant
> locations, it may become clearer what you want to end up with.
My local test config looks like this (log specifications and other stuff left out):
server {
listen 80;
server_name localhost;
location / {
root html;
try_files /offline.html @xact;
add_header Cache-Control "no-cache, max-age=0, no-store, must-revalidate";
}
location @xact {
proxy_pass http://127.0.0.1:4431;
proxy_redirect default;
proxy_read_timeout 2s;
proxy_send_timeout 2s;
proxy_connect_timeout 2s;
proxy_intercept_errors on;
}
error_page 404 @error_404;
error_page 503 @error_503;
location @error_404 {
root error;
rewrite (logo.png)$ /$1 break;
rewrite ^(.*)$ /error404.html break;
}
location @error_503 {
root error;
rewrite (logo.png)$ /$1 break;
rewrite ^(.*)$ /error503.html break;
}
> A test system which talks to a local HAProxy which has no "up" backends
> would probably be quicker to build.
Yes, that's what I had listening on 127.0.0.1:4431, and it did give me the same behaviour as I'm seeing in our production environment.
I got the following captures via pcap and wireshark:
Conditions are: HAProxy has a backend with no available servers, so every request results in a 503 to upstream client (nginx).
A POST request to some resource from a browser:
POST /2 HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en
Accept-Encoding: gzip, deflate
DNT: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
Cookie: new-feature=1; Language_In_Use=
Connection: keep-alive
This makes nginx send this request to HAProxy:
POST /2 HTTP/1.0
Host: 127.0.0.1:4431
Connection: close
Content-Length: 0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en
Accept-Encoding: gzip, deflate
DNT: 1
Content-Type: application/x-www-form-urlencoded
Cookie: new-feature=1; Language_In_Use=
HAProxy returns this:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HAProxy also logs this (raw syslog packet):
<134>Oct 15 13:17:33 jedc-local haproxy[10104]: 127.0.0.1:64746 [15/Oct/2016:13:17:33.800] xact_in-DK xact_admin/ 0/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 "POST /2 HTTP/1.0"
This makes nginx return this back to the browser:
HTTP/1.1 405 Not Allowed
Server: nginx/1.8.0
Date: Sat, 15 Oct 2016 11:17:33 GMT
Content-Type: text/html
Content-Length: 172
Connection: keep-alive
nginx also logs this:
localhost 127.0.0.1 "-" [15/Oct/2016:13:17:33 +0200] "POST /2 HTTP/1.1" 405 172 503 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" http "-" "-" "-" "-" -/-
There is no mention of the error_page 503 location or any of the resources they specify (logo.png or error503.html) in any of nginx' logs, so I assume that they are not really connected to the problems I see.
Any ideas?
Regards,
Jens Dueholm Christensen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From emailgrant at gmail.com Sat Oct 15 12:20:42 2016
From: emailgrant at gmail.com (Grant)
Date: Sat, 15 Oct 2016 05:20:42 -0700
Subject: keepalive upstream
In-Reply-To:
References:
Message-ID:
> I've been struggling with a very difficult to diagnose problem when
> using apache2 and Odoo in a reverse proxy configuration with nginx.
> Enabling keepalive for upstream in nginx seems to have fixed it. Why
> is it not enabled upstream by default as it is downstream?
Does anyone know why this isn't a default?
- Grant
From me at myconan.net Sun Oct 16 08:01:36 2016
From: me at myconan.net (Edho Arief)
Date: Sun, 16 Oct 2016 17:01:36 +0900
Subject: Index fallback?
In-Reply-To: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>
Message-ID: <1476604896.1747068.757323793.19F5A248@webmail.messagingengine.com>
Hi,
Just updating myself, I realized I don't even need any weird setup, just
change the fallback location from
location @something { }
into
location = /.something { }
and set index parameter to
index index.html /.something;
It works because the last element of the list can be an absolute path, as
mentioned in the documentation.
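Put together, a minimal sketch (root path and the fallback behaviour are
illustrative):

server {
    root /srv/www;
    # the last index element may be an absolute path; a directory
    # without its own index.html falls through to /.something
    index index.html /.something;

    location = /.something {
        return 200 "fallback\n";
    }
}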
On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> I somehow can't make this scenario work:
>
> root structure:
> /a/index.html
> /b/
> accessing:
> 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> 2. site.com/b -> redirect to site.com/b/ -> show @fallback
>
>
> Using
>
> try_files $uri $uri/index.html @fallback;
>
> doesn't work quite well because #1 becomes this instead:
>
> 1. site.com/a -> show /a/index.html
>
> and breaks relative path javascript/css files (because it's `/a` in
> browser, not `/a/`).
>
> And using
>
> try_files $uri @fallback;
>
> Just always show @fallback for both scenarios.
>
> Whereas
>
> try_files $uri $uri/ @fallback;
>
> Always return 403 for #2 because the directory exists and there's no
> index.
>
> As a side note,
>
> error_page 404 = @fallback;
>
> Wouldn't work because as mentioned in the previous one, it returns 403
> for #2 (directory exists, no index), not 404.
>
> Is there any way to do it without specifying separate location for each
> of them?
From francis at daoine.org Sun Oct 16 08:33:43 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 16 Oct 2016 09:33:43 +0100
Subject: keepalive upstream
In-Reply-To:
References:
Message-ID: <20161016083343.GI11677@daoine.org>
On Sat, Oct 15, 2016 at 05:20:42AM -0700, Grant wrote:
Hi there,
> > I've been struggling with a very difficult to diagnose problem when
> > using apache2 and Odoo in a reverse proxy configuration with nginx.
> > Enabling keepalive for upstream in nginx seems to have fixed it. Why
> > is it not enabled upstream by default as it is downstream?
>
> Does anyone know why this isn't a default?
My guess?
Historical reasons and consistency.
proxy_pass from nginx to upstream was HTTP/1.0, which by default assumes
"Connection: close" unless the client explicitly says otherwise. And
(without checking) I guess that nginx was explicit about "close".
Then the option of proxy_http_version came about, which would allow you to
take advantage of some extra features if you know that your upstream
uses them.
Arguably, "proxy_http_version 1.1" should imply "Connection: keep-alive"
if not explicitly overridden, but (again, my guess) it was cleaner and
simpler to make the minimum changes when adding support for the new
client http version, and the upstream server must be able to handle
"Connection: close".
If you want close, keep-alive, upgrade, or something else, you can add
it yourself. nginx has to pick one to use if none is specified, and it
looks like it picked the one that was already its default. Principle of
Least Surprise, for current users.
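For reference, enabling it explicitly looks roughly like this (upstream name
and address are placeholders):

upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;                        # idle connections kept per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # drop the default "close"
    }
}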
On the "downstream" side, nginx is a http/1.1 (or 2) server. If the
client connects using http/1.0, nginx will respond with "Connection:
close" unless the client said otherwise. If the client connects using
http/1.1, nginx will respond with "Connection: keep-alive" unless the
client said otherwise. That's the http server-side rules.
Also, I guess that nginx generally assumes that the things that it talks
to are correct. I'm not sure, but it sounds like you are reporting that
some combination of things that is your upstream was receiving whatever
http request nginx was making, and was responding in a way that nginx
was not expecting. If some part of that sequence was breaking the http
request/response rules, it would be good to know so that it could be fixed
in the right place.
Naively, it sounds like your upstream was taking the nginx "Connection:
close" part of the request and ignoring it. If that is the case, your
upstream is wrong and should be fixed, independent of whatever nginx does.
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Mon Oct 17 13:50:15 2016
From: nginx-forum at forum.nginx.org (CarstenK.)
Date: Mon, 17 Oct 2016 09:50:15 -0400
Subject: Problem with cache key
Message-ID: <243ee282884a2112a03b2d278a3f396f.NginxMailingListEnglish@forum.nginx.org>
Hello,
I have a problem.
If I send a request to URL A with Chrome and another request with curl or
Firefox, I get a cache miss.
If I send a request to the same URL with curl on two different machines,
the answer is a cache hit; that's fine.
But I don't know why I get a cache miss if I test with Chrome/Firefox,
Chrome/curl or Firefox/curl on the second request.
curl -I http://meinedomain/test.html
I think it is a problem with the cache key but I can't find the reason.
As a first step I only consider the URL with arguments, no cookie or
anything else. In a further step I want to consider a special cookie.
Version: nginx/1.11.3 (nginx-plus-r10) (30-day trial)
Here is my configuration:
### proxy.conf
proxy_cache_path /srv/nginx/cache/test levels=1:2 keys_zone=test_cache:128m
inactive=120d max_size=25G;
map $request_method $purge_method {
PURGE 1;
default 0;
}
server {
listen 80;
server_name ;
access_log /var/log/nginx/fliesenrabatte.access.log shop;
error_log /var/log/nginx/fliesenrabatte.error.log;
proxy_cache fliesenrabatte_cache;
rewrite_log on;
proxy_set_header Host ;
proxy_cache_key $request_uri;
# Disable caching
# NoCache URLs
if ($request_uri ~* "(/admin.*|/brand.*|/user.*|/login.*)") {
set $no_cache 1;
}
proxy_no_cache $no_cache;
# Home page
location ~ /$ {
proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";
proxy_pass http://meinupstream;
proxy_cache_purge $purge_method;
}
# Cache
location ~* \.(html|gif|jpg|png|js|css|pdf|woff|woff2|otf|ttf|eot|svg)$ {
proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";
proxy_pass http://meinupstream;
proxy_cache_purge $purge_method;
}
# do not cache (shopping cart etc.)
location ~* \.(cfc|cfm|htm)$ {
proxy_cache off;
proxy_pass http://meinupstream;
}
# Needed for wildcard purging, since the string ends in a wildcard
location / {
allow 1.1.1.1;
deny all;
proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";
proxy_pass http://meinupstream;
proxy_cache_purge $purge_method;
}
}
### site-conf
server_tokens off;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_cache_valid 200 120d;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
add_header X-Cache-Status $upstream_cache_status;
upstream meinupstream {
server meinedomain.de:80;
}
I hope someone can help me.
Sorry for my bad English :(
Best,
Carsten
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270324,270324#msg-270324
From nginx-forum at forum.nginx.org Mon Oct 17 14:24:45 2016
From: nginx-forum at forum.nginx.org (avk)
Date: Mon, 17 Oct 2016 10:24:45 -0400
Subject: proxy_pass for subfolders
In-Reply-To: <518ea390-ef5f-3995-a0b3-55120d702b2b@arcor.de>
References: <518ea390-ef5f-3995-a0b3-55120d702b2b@arcor.de>
Message-ID:
From the proxy server:
2016/10/17 14:19:38 [error] 6735#6735: *236 open()
"/var/lib/nginx/html/000001" failed (2: No such file or directory),
client:ip, server: localhost, request: "GET /000001 HTTP/1.1"
From srv1 & srv2:
- [17/Oct/2016:14:16:47 +0000] "GET /123456 HTTP/1.1" 301 185 "-"
"Mozilla/5.0 (X11; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0"
nginx (proxy) conf:
location /123456/ {
rewrite /123456/(.*)$ /$1 break;
proxy_pass http://SRV1/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270330#msg-270330
From matthewceroni at gmail.com Mon Oct 17 21:44:00 2016
From: matthewceroni at gmail.com (Matthew Ceroni)
Date: Mon, 17 Oct 2016 14:44:00 -0700
Subject: NGINX Open Source TCP Load Balancing - service discovery
Message-ID:
https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
Testing out the options provided in the above link. Specifically the
"Setting the Domain Name in a Variable". The example given is L7 load
balancing.
I have a need for L4 using upstream, yet I am not able to get this method
to work (if it even does). The Note seems to indicate that it does by
stating that this method is available in 1.11.3 of the Open source version.
The issue is where to set the variable. I have tried setting it in
the upstream block, but that errors out saying "set" is not valid in this
context. I tried setting it in the stream context; same error.
I have also tried this on both Plus and open source and get the same errors
on both.
Any insight would be helpful. Thanks
config:
# TCP/UDP proxy and load balancing block
#
stream {
proxy_protocol on;
resolver 10.6.0.10 valid=10s;
# Example configuration for TCP load balancing
upstream stream_backend {
zone tcp_servers 64k;
server backend_servers:25 max_fails=3;
}
server {
listen 25;
status_zone tcp_server;
proxy_pass stream_backend;
}
}
So basically I need to replace server backend_servers with server
$backend_servers but need to set that variable somewhere.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From zhanght1 at lenovo.com Tue Oct 18 02:50:57 2016
From: zhanght1 at lenovo.com (Felix HT1 Zhang)
Date: Tue, 18 Oct 2016 02:50:57 +0000
Subject: Error when download the big file of 2MB
Message-ID: <3B8195E42ECF3D4DA1072EF35B4F39F80138255BCA@CNMAILEX01.lenovo.com>
Dears,
We can download the 20MB file from the web in our internal app, but it
fails when nginx is used.
Here is the error info: the server responded with a status of 504
(Gateway Time-out).
How can I fix this problem?
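We have not tuned any proxy timeouts yet. Would something like this (the
values are guesses) be the right place to start?
# default proxy_read_timeout is 60s; a slow backend sending a large
# file can exceed it and produce the 504
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;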
BR
Felix zhang
From francis at daoine.org Tue Oct 18 06:28:27 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 18 Oct 2016 07:28:27 +0100
Subject: Static or dynamic content
In-Reply-To:
References:
<20160929220228.GF11677@daoine.org>
<20160930105450.GG11677@daoine.org>
Message-ID: <20161018062827.GJ11677@daoine.org>
On Sat, Oct 15, 2016 at 12:18:11PM +0000, Jens Dueholm Christensen wrote:
> On Friday, September 30, 2016 12:55 AM Francis Daly wrote,
Hi there,
> > I suspect that when you show your error_page config and the relevant
> > locations, it may become clearer what you want to end up with.
>
> My local test config looks like this (log specifications and other stuff left out):
> location / {
> root html;
> try_files /offline.html @xact;
> }
> location @xact {
> proxy_pass http://127.0.0.1:4431;
> proxy_intercept_errors on;
> }
> error_page 503 @error_503;
> location @error_503 {
> root error;
> rewrite (logo.png)$ /$1 break;
> rewrite ^(.*)$ /error503.html break;
> }
So: a POST for /x will be handled in @xact, which will return 503,
which will be handled in @error_503, which will be rewritten to a POST
for /error503.html which will be sent to the file error/error503.html,
which will return a 405.
Is that what you see?
Two sets of questions remain:
what output do you get when you use the test config in the earlier mail?
what output do you want?
That last one is probably "http 503 with the content of *this* file";
and is probably completely obvious to you; but I don't think it has been
explicitly stated here, and it is better to know than to try guessing.
> HAProxy returns this:
>
> HTTP/1.0 503 Service Unavailable
> Cache-Control: no-cache
> Connection: close
> Content-Type: text/html
>
>
> <html><body><h1>503 Service Unavailable</h1>
> No server is available to handle this request.
> </body></html>
Ok, that's a normal 503.
> HAProxy also logs this (raw syslog packet):
>
> <134>Oct 15 13:17:33 jedc-local haproxy[10104]: 127.0.0.1:64746 [15/Oct/2016:13:17:33.800] xact_in-DK xact_admin/ 0/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 "POST /2 HTTP/1.0"
>
> This makes nginx return this back to the browser:
>
> HTTP/1.1 405 Not Allowed
> Server: nginx/1.8.0
> Date: Sat, 15 Oct 2016 11:17:33 GMT
> Content-Type: text/html
> Content-Length: 172
> Connection: keep-alive
And that's the 405 because your config sends the 503 to a static file.
> nginx also logs this:
>
> localhost 127.0.0.1 "-" [15/Oct/2016:13:17:33 +0200] "POST /2 HTTP/1.1" 405 172 503 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" http "-" "-" "-" "-" -/-
> There is no mention of the error_page 503 location or any of the resources they specify (logo.png or error503.html) in any of nginx' logs, so I assume that they are not really connected to the problems I see.
>
Unless you are looking at the nginx debug log, you are not seeing anything
about nginx's internal subrequests.
If you remove the error_page 503 part or the proxy_intercept_errors part,
does the expected http status code get to your client?
> Any ideas?
I think that the nginx handling of subrequests from a POST for error
handling is a bit awkward here. But until someone who cares comes up with
an elegant and consistent alternative, I expect that it will remain as-is.
Possibly in your case you could convert the POST to a GET by using
proxy_method and proxy_pass within your error_page location.
That also feels inelegant, but may give the output that you want.
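Untested, but as a sketch - the helper port here is made up, and you
would want to confirm that the 503 status survives the extra hop:
location @error_503 {
# proxy_pass in a named location cannot carry a URI part,
# so change the URI with rewrite ... break instead
rewrite ^ /error503.html break;
proxy_method GET;
proxy_pass http://127.0.0.1:8081;
}
# helper server that does nothing but serve the static error page
server {
listen 127.0.0.1:8081;
root error;
}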
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Tue Oct 18 07:14:38 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 18 Oct 2016 08:14:38 +0100
Subject: NGINX Open Source TCP Load Balancing - service discovery
In-Reply-To:
References:
Message-ID: <20161018071438.GK11677@daoine.org>
On Mon, Oct 17, 2016 at 02:44:00PM -0700, Matthew Ceroni wrote:
Hi there,
Untested, but...
> https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
> The issue is around where to set the variable. I have tried setting it in
> the upstream block but that errors saying set is not valid in this context.
> Tried setting it in the stream context, same error.
The documentation suggests that "set" is not available within the
"stream" system.
So you need a different way of setting a variable.
Perhaps "map" will do what you want?
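Something like this inside the stream{} block, perhaps - keep your
resolver line, and note that the stream map module needs 1.11.2 or
newer (and a variable in stream proxy_pass needs 1.11.3, which matches
the note in the blog post):
# a map with a constant source string acts like "set"
map "" $backend_servers {
default backend_servers:25;
}
server {
listen 25;
# a variable in proxy_pass makes nginx resolve the name at run
# time, using the configured resolver
proxy_pass $backend_servers;
}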
f
--
Francis Daly francis at daoine.org
From mdounin at mdounin.ru Tue Oct 18 15:34:03 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 18 Oct 2016 18:34:03 +0300
Subject: nginx-1.10.2
Message-ID: <20161018153403.GJ73038@mdounin.ru>
Changes with nginx 1.10.2 18 Oct 2016
*) Change: the "421 Misdirected Request" response now used when
rejecting requests to a virtual server different from one negotiated
during an SSL handshake; this improves interoperability with some
HTTP/2 clients when using client certificates.
*) Change: HTTP/2 clients can now start sending request body
immediately; the "http2_body_preread_size" directive controls size of
the buffer used before nginx will start reading client request body.
*) Bugfix: a segmentation fault might occur in a worker process when
using HTTP/2 and the "proxy_request_buffering" directive.
*) Bugfix: the "Content-Length" request header line was always added to
requests passed to backends, including requests without body, when
using HTTP/2.
*) Bugfix: "http request count is zero" alerts might appear in logs when
using HTTP/2.
*) Bugfix: unnecessary buffering might occur when using the "sub_filter"
directive; the issue had appeared in 1.9.4.
*) Bugfix: socket leak when using HTTP/2.
*) Bugfix: an incorrect response might be returned when using the "aio
threads" and "sendfile" directives; the bug had appeared in 1.9.13.
*) Workaround: OpenSSL 1.1.0 compatibility.
--
Maxim Dounin
http://nginx.org/
From francis at daoine.org Tue Oct 18 19:12:40 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 18 Oct 2016 20:12:40 +0100
Subject: Problem with cache key
In-Reply-To: <243ee282884a2112a03b2d278a3f396f.NginxMailingListEnglish@forum.nginx.org>
References: <243ee282884a2112a03b2d278a3f396f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161018191240.GL11677@daoine.org>
On Mon, Oct 17, 2016 at 09:50:15AM -0400, CarstenK. wrote:
Hi there,
> If I send a request to URL A with Chrome and another request with curl
> or Firefox, I get a cache miss.
> If I send a request to the same URL with curl on two different machines,
> the answer is a cache hit; that's fine.
If you look at the request from nginx to upstream, and the response from
upstream to nginx, can you see the headers?
Particularly the Vary: header of the response -- often it will include
"User-Agent", which would explain what you see.
If that is the issue, and you know that upstream sends the same content
to all user-agents, then you can configure nginx so that that piece is
not used in nginx's decision to cache.
According to http://nginx.org/r/proxy_ignore_headers, and given your
current setting

> proxy_ignore_headers X-Accel-Expires Expires Cache-Control;

"Vary" is the most likely of the fields that you could ignore but
currently do not.
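Untested, but if that is what is happening, extending your existing
line may be all that is needed ("Vary" became ignorable in 1.7.7):
proxy_ignore_headers X-Accel-Expires Expires Cache-Control Vary;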
Cheers,
f
--
Francis Daly francis at daoine.org
From kworthington at gmail.com Wed Oct 19 14:09:37 2016
From: kworthington at gmail.com (Kevin Worthington)
Date: Wed, 19 Oct 2016 10:09:37 -0400
Subject: [nginx-announce] nginx-1.10.2
In-Reply-To: <20161018153408.GK73038@mdounin.ru>
References: <20161018153408.GK73038@mdounin.ru>
Message-ID:
Hello Nginx users,
Now available: Nginx 1.10.2 for Windows
https://kevinworthington.com/nginxwin1102 (32-bit and 64-bit versions)
These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/
Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/
From marques+nginx at linode.com Wed Oct 19 20:39:41 2016
From: marques+nginx at linode.com (Marques Johansson)
Date: Wed, 19 Oct 2016 16:39:41 -0400
Subject: proxy_next_upstream exemption for 429 "too many requests"
Message-ID:
"proxy_next_upstream error" has exemptions for 402 and 403. Should it not
have exemptions for 429 "Too many requests" as well?
I want proxied servers' 503 and 429 responses with "Retry-After" to be
delivered to the client as the server responded. The 429s in this case
contain json bodies.
I assume I should use proxy_pass_header to get Retry-After preserved in the
responses, but what should I do to get 429 responses returned without
modification (short of a feature request that proxy_next_upstream be
modified)?
From piotrsikora at google.com Thu Oct 20 00:21:27 2016
From: piotrsikora at google.com (Piotr Sikora)
Date: Wed, 19 Oct 2016 17:21:27 -0700
Subject: proxy_next_upstream exemption for 429 "too many requests"
In-Reply-To:
References:
Message-ID:
Hey Marques,
coincidentally, I sent patches for 429 yesterday:
http://mailman.nginx.org/pipermail/nginx-devel/2016-October/009003.html
http://mailman.nginx.org/pipermail/nginx-devel/2016-October/009004.html
Best regards,
Piotr Sikora
From piotrsikora at google.com Thu Oct 20 00:37:16 2016
From: piotrsikora at google.com (Piotr Sikora)
Date: Wed, 19 Oct 2016 17:37:16 -0700
Subject: proxy_next_upstream exemption for 429 "too many requests"
In-Reply-To:
References:
Message-ID:
Hey Marques,
> "proxy_next_upstream error" has exemptions for 402 and 403. Should it not
> have exemptions for 429 "Too many requests" as well?
>
> I want proxied servers' 503 and 429 responses with "Retry-After" to be
> delivered to the client as the server responded. The 429s in this case
> contain json bodies.
Actually, after re-reading your email, I'm confused... 429 responses
aren't matched by "proxy_next_upstream error" (with or without my
patches), and are passed as-is to the client.
Maybe you're using "proxy_intercept_errors" with custom error pages?
Best regards,
Piotr Sikora
From JEDC at ramboll.com Thu Oct 20 09:19:29 2016
From: JEDC at ramboll.com (Jens Dueholm Christensen)
Date: Thu, 20 Oct 2016 09:19:29 +0000
Subject: Static or dynamic content
In-Reply-To: <20161018062827.GJ11677@daoine.org>
References:
<20160929220228.GF11677@daoine.org>
<20160930105450.GG11677@daoine.org>
<20161018062827.GJ11677@daoine.org>
Message-ID:
On Tuesday, October 18, 2016 08:28 AM Francis Daly wrote,
> So: a POST for /x will be handled in @xact, which will return 503,
> which will be handled in @error_503, which will be rewritten to a POST
> for /error503.html which will be sent to the file error/error503.html,
> which will return a 405.
>
> Is that what you see?
Yes - per your comments later in your reply about internal redirects and
the debug log, I enabled the debug log, which confirms it (several lines
have been removed from the following snippet, but it's pretty clear):
---
2016/10/20 10:23:45 [debug] 8408#2492: *1 http upstream request: "/2?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http proxy status 503 "503 Service Unavailable"
2016/10/20 10:23:45 [debug] 8408#2492: *1 finalize http upstream request: 503
2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 503, "/2?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 test location: "@error_503"
2016/10/20 10:23:45 [debug] 8408#2492: *1 using location: @error_503 "/2?"
2016/10/20 10:23:45 [notice] 8408#2492: *1 "^(.*)$" matches "/2" while sending to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: "localhost"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http script copy: "/error503.html"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http script regex end
2016/10/20 10:23:45 [notice] 8408#2492: *1 rewritten data: "/error503.html", args: "" while sending to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: "localhost"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http finalize request: 405, "/error503.html?" a:1, c:2
2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 405, "/error503.html?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 HTTP/1.1 405 Not Allowed
Server: nginx/1.8.0
Date: Thu, 20 Oct 2016 08:23:45 GMT
Content-Type: text/html
Content-Length: 172
Connection: keep-alive
---
> Two sets of questions remain:
> what output do you get when you use the test config in the earlier mail?
Alas, I have not tried that config yet, but I would assume my tests would
show exactly the same results as yours - should I try, or is it purely
academic?
> what output do you want?
> That last one is probably "http 503 with the content of *this* file";
> and is probably completely obvious to you; but I don't think it has been
> explicitly stated here, and it is better to know than to try guessing.
100% correct.
If upstream returns 503 or 404 I would like to have the contents of the error_page for 404 or 503 returned to the client regardless of the HTTP request method used.
> If you remove the error_page 503 part or the proxy_intercept_errors part,
> does the expected http status code get to your client?
Yes!
> I think that the nginx handling of subrequests from a POST for error
> handling is a bit awkward here. But until someone who cares comes up with
> an elegant and consistent alternative, I expect that it will remain as-is.
Alas..
> Possibly in your case you could convert the POST to a GET by using
> proxy_method and proxy_pass within your error_page location.
> That also feels inelegant, but may give the output that you want.
Yes, similar "solutions" like this one
(http://leandroardissone.com/post/19690882654/nginx-405-not-allowed) and
others are IMO really ugly, and they make the config file harder to
understand and maintain over time.
The "best" (but still ugly!) version I could find is one where I catch the
405 error inside the @error_503 location (as described in the answer to
http://stackoverflow.com/questions/16180947/return-503-for-post-request-in-nginx),
but I dislike the use of if and $request_filename in that solution - and
it still doesn't make for easy understanding.
How would you suggest I use proxy_method and proxy_pass within the
@error_503 location? As long as the GET only goes to a local helper (as in
your sketch) and never to the real upstream, that might be acceptable - my
worry is resending the POST as a GET to upstream: a new request that could
potentially succeed, since a HAProxy backend server could become available
between the time the POST failed and the moment the request is retried as
a GET.
Regards,
Jens Dueholm Christensen
From marques+nginx at linode.com Thu Oct 20 12:30:21 2016
From: marques+nginx at linode.com (Marques Johansson)
Date: Thu, 20 Oct 2016 08:30:21 -0400
Subject: proxy_next_upstream exemption for 429 "too many requests"
Message-ID:
I was mistaken. I wasn't triggering 429s reliably. They are being passed
through as expected.
I will use proxy_pass_header Retry-After to get the behavior I wanted for
503s.
Some of my servers' 503s may be application/json while others are
text/html. I would like to pass the JSON responses through, while having
nginx return its own 503 page instead of the servers' HTML 503 responses.
That doesn't seem to be possible with the existing proxy options.
From yushang at outlook.com Thu Oct 20 15:06:02 2016
From: yushang at outlook.com (shang yu)
Date: Thu, 20 Oct 2016 15:06:02 +0000
Subject: content-type does not match mime type
Message-ID:
Hi dear all,
When I GET an xlsx file from the nginx server, the response header
Content-Type is application/octet-stream, not the expected
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet. Why?
Many thanks!
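Could it be that my mime.types file predates the xlsx entry? Would adding
this line inside the types { } block of conf/mime.types be the right fix?
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx;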
From nginx-forum at forum.nginx.org Wed Oct 19 12:52:06 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Wed, 19 Oct 2016 08:52:06 -0400
Subject: Using Nginx as proxy content are "striped"
In-Reply-To: <941175a1ca8ff11c7cf152258f97a27e.NginxMailingListEnglish@forum.nginx.org>
References: <941175a1ca8ff11c7cf152258f97a27e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: