From nginx-forum at forum.nginx.org Thu Sep 1 01:17:32 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Wed, 31 Aug 2016 21:17:32 -0400
Subject: Nginx multiple upstream map conditions
In-Reply-To: <280fd0b1018a1e7f7c2fa77767e84c52.NginxMailingListEnglish@forum.nginx.org>
References: <20160831215722.GZ12280@daoine.org>
<280fd0b1018a1e7f7c2fa77767e84c52.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <902e81330ae127972f7ea733eb282bdf.NginxMailingListEnglish@forum.nginx.org>
c0nw0nk Wrote:
-------------------------------------------------------
> Francis Daly Wrote:
> -------------------------------------------------------
> > On Wed, Aug 31, 2016 at 01:30:30PM -0400, c0nw0nk wrote:
> >
> > Hi there,
> >
> > > Thanks works a treat is it possible or allowed to do the
> following
> > in a
> > > nginx upstream map ? and if so how i can't figure it out.
> >
> > I think it is logically impossible.
> >
> > > I cache with the following key.
> > > fastcgi_cache_key
> > > "$session_id_value$scheme$host$request_uri$request_method";
> >
> > fastcgi_cache_key is the thing that nginx calculates from the
> request,
> > before it decides whether to send the response from cache, or
> whether
> > to pass the request to upstream.
> >
> > > if the upstream_cookie_logged_in value is not equal to 1 how can
> I
> > set
> > > $session_id_value ''; make empty
> >
> > $upstream_cookie_something is part of the response from upstream,
> > so is not available to nginx at the time that it is calculating
> > fastcgi_cache_key for the "read from cache or not" decision.
> >
> > Am I missing something?
> >
> > f
> > --
> > Francis Daly francis at daoine.org
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> Thanks :) so changes to that value will have no effect.
>
> What about the following scenario.
>
> I remove all Set-Cookie headers.
> fastcgi_hide_header Set-Cookie;
>
>
> Then add them back in with :
> add_header Set-Cookie "$upstream_http_set_cookie";
>
> Will requests that get a cache hit ever contain a Set-Cookie header,
> or is it only the ones that reach the origin PHP server?
>
> From my tests it appears to be working: no Set-Cookie headers are
> present on "X-Cache-Status: HIT" responses.
With:
fastcgi_hide_header Set-Cookie;
I think I should be able to add Set-Cookie headers back for fresh origin
requests like this:
map $upstream_cache_status $upstream_value_status {
    ~MISS $upstream_http_set_cookie;
    ~BYPASS $upstream_http_set_cookie;
    ~EXPIRED $upstream_http_set_cookie;
}
add_header Set-Cookie $upstream_value_status;
I have not tested this yet, though.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269296,269330#msg-269330
From gwenoleg at alinket.com Thu Sep 1 06:58:50 2016
From: gwenoleg at alinket.com (Gwenole Gendrot)
Date: Thu, 1 Sep 2016 14:58:50 +0800
Subject: UDP load balancing - 1 new socket for each incoming packet?
Message-ID: <53938119-d29a-4dec-9b94-816be78a59d8@alinket.com>
Hi,
I've been using nginx 1.11.3 to test the UDP load balancing feature,
using a "basic" configuration.
The functionality works out of the box, but the proxy creates a new
socket for each packet sent from a client (even within the same
connection). This leads to resource exhaustion under heavy load (even
with only 1 client / 1 server).
My question: is it the intended behaviour to open a new socket for each
incoming packet?
- if no => is this a bug? Some misconfiguration on my part (either in
nginx or Linux)? Has anyone observed this behaviour?
- if yes => is reusing the socket for the same connection a missing
feature / future improvement?
Tx!
Gwn
P.S.: my current workaround is to set the proxy timeout to a very low
value and increase the maximum number of concurrent connections & open
files/sockets.
P.P.S.: Logs were empty of warnings & errors. My configuration (nothing
fancy; pretty much all of the system & SW is from a fresh install) is
attached.
BR,
Gwenole Gendrot
156 1835 3270
-------------- next part --------------
$ uname -a
Linux AiDMS 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$
$ nginx -v
nginx version: nginx/1.11.3
$
$ cat /etc/nginx/streams-available/udp_balancing_test.conf
stream {
    upstream udp_cluster {
        # hash $remote_addr consistent;
        hash $remote_addr;
        server 127.0.0.1:17000;
        server 127.0.0.1:17001;
        server 127.0.0.1:17002;
        server 127.0.0.1:17003;
    }
    server {
        # listen 0.0.0.0:16583 udp reuseport;
        listen 0.0.0.0:16583 udp;
        # UDP traffic will be proxied to the "udp_cluster" upstream group
        proxy_pass udp_cluster;
        # proxy_buffer_size 1024k;
        proxy_timeout 5s;
    }
}
$
$ cat /etc/nginx/nginx.conf
user nginx;
# worker_processes 1;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
worker_connections 8192;
# use epoll;
# multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# remove default http configuration
#include /etc/nginx/conf.d/*.conf;
}
include streams-enabled/*.conf;
From arut at nginx.com Thu Sep 1 08:02:09 2016
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 1 Sep 2016 11:02:09 +0300
Subject: UDP load balancing - 1 new socket for each incoming packet?
In-Reply-To: <53938119-d29a-4dec-9b94-816be78a59d8@alinket.com>
References: <53938119-d29a-4dec-9b94-816be78a59d8@alinket.com>
Message-ID: <20160901080209.GZ55147@Romans-MacBook-Air.local>
Hello,
On Thu, Sep 01, 2016 at 02:58:50PM +0800, Gwenole Gendrot wrote:
> Hi,
>
>
> I've been using nginx 1.11.3 to test the UDP load balancing feature, using a
> "basic" configuration.
> The functionality is working out of the box, but a new socket will be
> created by the proxy for each packet sent from a client (to the same
> connection). This leads to resource exhaustion under heavy load (even with
> only 1 client / 1 server).
>
> My question: is it the intended behaviour to open a new socket for each
> incoming packet?
Yes, a new socket is created for each incoming UDP datagram, to proxy it
to the upstream server and to proxy the response datagram(s) back to the client.
> - if no => is this a bug? some misconfiguration from my part (either in
> nginx or Linux)? has anyone observed this behaviour?
> - if yes => is reusing the socket for the same connection a missing feature
> / future improvement?
Datagrams sent from the same client are not considered part of a single
connection. In fact, they can even be received by different nginx workers.
And yes, this is a subject for future improvement.
>
>
> Tx!
> Gwn
>
>
> P.S.: my current workaround is to set the proxy timeout to a very low value
> and increase the maximum number of concurrent connections & opened
> files/sockets.
If you know in advance how many datagrams you are expecting in response to a
single client datagram, you can use the proxy_responses directive to set it.
In this case nginx will close the session (and release the socket) once the
required number of datagrams has been sent back to the client.
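As a sketch of this advice, using the addresses from the attached
configuration and assuming a request/response protocol with exactly one
reply datagram per client datagram (the assumption is mine, not stated
in the thread):

```nginx
stream {
    upstream udp_cluster {
        hash $remote_addr;
        server 127.0.0.1:17000;
        server 127.0.0.1:17001;
    }
    server {
        listen 16583 udp;
        proxy_pass udp_cluster;
        # Assuming one reply datagram per client datagram: the session
        # (and its socket) is released as soon as that reply has been
        # proxied back to the client.
        proxy_responses 1;
        proxy_timeout 5s;  # fallback if no reply ever arrives
    }
}
```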
> P.P.S: Logs were empty of warnings & errors. My coonfiguration (nothing
> fancy, pretty much all the system & SW are from a fresh install) as
> attachment.
>
> BR,
>
> Gwenole Gendrot
> 156 1835 3270
[..]
--
Roman Arutyunyan
From brentgclarklist at gmail.com Thu Sep 1 08:43:55 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Thu, 1 Sep 2016 10:43:55 +0200
Subject: Help understanding rate limiting log entry
Message-ID:
Good day Guys
I just implemented rate limiting.
Could someone please explain what
*42450 is / means,
what 109154#109154 is / means,
and also what
10.195 means? (I take it 10 is the size of my bucket, but it's the 195 I
don't understand.)
Here is the log entry:
2016/09/01 10:06:29 [error] 109154#109154: *42450 limiting requests,
excess: 10.195 by zone "req_limit_per_ip", client: 54.237.120.210,
server: default, request: "GET
Many thanks
Brent
From lukasz at tasz.eu Thu Sep 1 11:34:39 2016
From: lukasz at tasz.eu (Łukasz Tasz)
Date: Thu, 1 Sep 2016 13:34:39 +0200
Subject: nginx with caching
Message-ID:
Hi all,
for some time I have been using nginx as a caching reverse proxy for
serving image files.
It looks pretty good, since a proxy is located at each location.
But I noticed problematic behaviour: when the cache is empty and a lot
of requests for the same object pop up at the same time, nginx does not
understand that all the requests are the same. Instead of fetching from
upstream only once and serving that to the rest, all the requests are
handed over to upstream.
Side effects?
- the upstream server rate-limits us, since there are too many
connections from one client,
- in some cases there are issues with temp space - not enough space to
finish all the requests.
Any ideas?
Is it a known problem?
I know the problem can be solved by warming up the caches, but since
there are a lot of locations, I would like to keep it transparent.
regards
Łukasz Tasz
RTKW
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mdounin at mdounin.ru Thu Sep 1 13:25:19 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Sep 2016 16:25:19 +0300
Subject: Help understanding rate limiting log entry
In-Reply-To:
References:
Message-ID: <20160901132519.GU1855@mdounin.ru>
Hello!
On Thu, Sep 01, 2016 at 10:43:55AM +0200, Brent Clark wrote:
> I just implemented rate limiting.
>
> Could someone please explain what
>
> *42450 is / means
This is a connection number, also available as $connection.
> 109154#109154 is / means
This is the nginx worker PID (also available as $pid) and the thread identifier.
> and what also
>
> 10.195 (I take it 10 is the size my bucket, but its the 195 I dont
> understand)
This is the number of requests accumulated in the bucket. If this
number is more than the defined burst (10 in your case), further
requests will be rejected.

The number of requests in the bucket is reduced according to the
defined rate and the current time, and may not be an integer. The ".195"
means that an additional request will be allowed in about 195
milliseconds, assuming a rate of 1r/s.
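For context, a configuration that produces a log line like this is
sketched below; the zone name comes from the log, while the rate, zone
size, and burst values are illustrative assumptions:

```nginx
http {
    # One state entry per client IP; the bucket leaks at 1 request/second.
    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=1r/s;

    server {
        location / {
            # Up to 10 excess requests are queued; beyond that, nginx logs
            # "limiting requests, excess: ..." and rejects the request
            # (503 by default).
            limit_req zone=req_limit_per_ip burst=10;
        }
    }
}
```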
> Here is the log entry:
>
> 2016/09/01 10:06:29 [error] 109154#109154: *42450 limiting requests, excess:
> 10.195 by zone "req_limit_per_ip", client: 54.237.120.210, server: default,
> request: "GET
--
Maxim Dounin
http://nginx.org/
From mdounin at mdounin.ru Thu Sep 1 13:31:41 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Sep 2016 16:31:41 +0300
Subject: nginx with caching
In-Reply-To:
References:
Message-ID: <20160901133141.GV1855@mdounin.ru>
Hello!
On Thu, Sep 01, 2016 at 01:34:39PM +0200, Łukasz Tasz wrote:
> Hi all,
> since some time I'm using nginx as reverse proxy with caching for serving
> images files.
> looks pretty good since proxy is located per each location.
>
> but I noticed problematic behaviour, when cache is empty, and there will
> pop-up a lot of requests at the same time, nginx don't understand that all
> request are same, and will fetch from upstream only onece and serve it to
> the rest, but all requests are handovered to upstream.
> side effects?
> - upstream server limit rate since there is to much connections to one
> client,
> - in some cases there are issues with temp - not enough space to finish all
> requests
>
> any ideas?
> is it known problem?
>
> I know that problem can be solved with warming up caches, but since there
> is a lot of locations, I would like to keep it transparent.
There is the proxy_cache_lock directive to address such use cases,
see http://nginx.org/r/proxy_cache_lock.
Additionally, for updating cache items there is
"proxy_cache_use_stale updating", see
http://nginx.org/r/proxy_cache_use_stale.
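A minimal sketch of how these directives fit into a caching proxy
location; the upstream and cache-zone names are hypothetical, and a
proxy_cache_path defining img_cache is assumed elsewhere:

```nginx
location /images/ {
    proxy_pass http://image_backend;
    proxy_cache img_cache;
    # Only one request per cache element is passed to upstream at a time;
    # the others wait (up to proxy_cache_lock_timeout) for the entry
    # to appear in the cache.
    proxy_cache_lock on;
    proxy_cache_lock_timeout 5s;
    # While an expired entry is being refreshed, serve the stale copy
    # instead of sending more requests upstream.
    proxy_cache_use_stale updating;
}
```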
--
Maxim Dounin
http://nginx.org/
From brentgclarklist at gmail.com Thu Sep 1 14:01:53 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Thu, 1 Sep 2016 16:01:53 +0200
Subject: Help understanding rate limiting log entry
In-Reply-To: <20160901132519.GU1855@mdounin.ru>
References:
<20160901132519.GU1855@mdounin.ru>
Message-ID: <755557b8-4573-4eb1-9ec8-e81d881ae598@gmail.com>
Good day Maxim
Thank you for taking the time in your explanation.
Regards
Brent Clark
On 01/09/2016 15:25, Maxim Dounin wrote:
> Hello!
>
> On Thu, Sep 01, 2016 at 10:43:55AM +0200, Brent Clark wrote:
>
>> I just implemented rate limiting.
>>
>> Could someone please explain what
>>
>> *42450 is / means
> This is a connection number, also available as $connection.
>
>> 109154#109154 is / means
> This is nginx worker PID (also available as $pid) and thread identifier.
>
>> and what also
>>
>> 10.195 (I take it 10 is the size my bucket, but its the 195 I dont
>> understand)
> This is number of requests acumulated in the bucket. If this
> number is more than burst defined (10 in your case), further
> request will be rejected.
>
> Number of requests in the bucket is reduced according to the rate
> defined and current time, and may not be integer. The ".195"
> means that an additional request will be allowed in about 195
> milliseconds assuming rate 1r/s.
>
>> Here is the log entry:
>>
>> 2016/09/01 10:06:29 [error] 109154#109154: *42450 limiting requests, excess:
>> 10.195 by zone "req_limit_per_ip", client: 54.237.120.210, server: default,
>> request: "GET
From lukasz at tasz.eu Thu Sep 1 14:06:07 2016
From: lukasz at tasz.eu (Łukasz Tasz)
Date: Thu, 1 Sep 2016 16:06:07 +0200
Subject: nginx with caching
In-Reply-To: <20160901133141.GV1855@mdounin.ru>
References:
<20160901133141.GV1855@mdounin.ru>
Message-ID:
Looks like just what I'm looking for!
Thanks a lot, starting my tests.
br
L.
Łukasz Tasz
RTKW
2016-09-01 15:31 GMT+02:00 Maxim Dounin :
> Hello!
>
> On Thu, Sep 01, 2016 at 01:34:39PM +0200, Łukasz Tasz wrote:
>
> > Hi all,
> > since some time I'm using nginx as reverse proxy with caching for serving
> > images files.
> > looks pretty good since proxy is located per each location.
> >
> > but I noticed problematic behaviour, when cache is empty, and there will
> > pop-up a lot of requests at the same time, nginx don't understand that
> all
> > request are same, and will fetch from upstream only onece and serve it to
> > the rest, but all requests are handovered to upstream.
> > side effects?
> > - upstream server limit rate since there is to much connections to one
> > client,
> > - in some cases there are issues with temp - not enough space to finish
> all
> > requests
> >
> > any ideas?
> > is it known problem?
> >
> > I know that problem can be solved with warming up caches, but since there
> > is a lot of locations, I would like to keep it transparent.
>
> There is the proxy_cache_lock directive to address such use cases,
> see http://nginx.org/r/proxy_cache_lock.
>
> Additionally, for updating cache items there is
> "proxy_cache_use_stale updating", see
> http://nginx.org/r/proxy_cache_use_stale.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Fri Sep 2 02:47:54 2016
From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad)
Date: Thu, 01 Sep 2016 22:47:54 -0400
Subject: how to completely disable request body buffering
In-Reply-To:
References:
Message-ID:
Hi B.R
Please find below the nginx configuration that we are using; any help
would be greatly appreciated.
nginx -V
=================
nginx version: nginx/1.8.0
built with OpenSSL 1.0.2h-fips 3 May 2016
TLS SNI support enabled
configure arguments: --crossbuild=Linux::arm
--with-cc=arm-linux-gnueabihf-gcc --with-cpp=arm-linux-gnueabihf-gcc
--with-cc-opt='-pipe -Os -gdwarf-4 -mfpu=neon
--sysroot=/work/autobuild/project_hub_release/nginx/service/001.1635A/sol_aux_build/sbq_sysroot'
--with-ld-opt=--sysroot=/work/autobuild/project_hub_release/nginx/service/001.1635A/sol_aux_build/sbq_sysroot
--prefix=/usr --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx
--pid-path=/var/run/nginx.pid --lock-path=/var/run/lock/nginx.lock
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log
--http-client-body-temp-path=/var/tmp/nginx/client-body
--http-proxy-temp-path=/var/tmp/nginx/proxy
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi
--http-scgi-temp-path=/var/tmp/nginx/scgi
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --user=www-data --group=www-data
--with-ipv6 --with-http_ssl_module --with-http_gzip_static_module
--with-debug
nginx.conf
=================
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
listen [::]:80;
listen 8080;
listen [::]:8080;
listen 127.0.0.1:14200; #usb port
listen 443 ssl;
listen [::]:443 ssl;
listen 127.0.0.1:14199; # internal LEDM requests bypass the authentication check
listen 127.0.0.1:6015; # internal websocket port to talk to nginx
server_name localhost;
include /project/ram/secutils/*.conf;
include /project/rom/httpmgr_nginx/*.conf;
fastcgi_param PATH_INFO $fastcgi_path_info;
include fastcgi_params;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
proj_server.conf:
=================
server {
listen [::]:5678 ssl ipv6only=off;
ssl_certificate /project/rw/cert_svc/dev_cert.pem;
ssl_certificate_key /mnt/encfs/cert_svc/dev_key.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
gzip on;
gzip_types *;
gzip_min_length 0;
# If the incoming request body is greater than client_max_body_size,
# NGINX will return 413 Request Entity Too Large.
# Setting it to 0 disables this size check.
client_max_body_size 0;
# By default, NGINX will try to buffer the entire request body before
# sending it to the backend server.
# Turning this off stops that behavior and passes the request on immediately.
fastcgi_request_buffering off;
# By default, NGINX will try to buffer the entire response before
# sending it to the client.
# Turning this off stops that behavior and passes the response on immediately.
fastcgi_buffering off;
# The default timeout is 60s and there is no way to disable the read timeout.
# If a read has not been performed in the specified interval,
# a 504 response is sent from NGINX to the client.
# This can happen if there is a flow stoppage in the upstream.
fastcgi_read_timeout 7d;
# The default timeout is 60s and there is no way to disable the send timeout.
# If NGINX has not sent data to the FastCGI server in the specified interval,
# a 504 response is sent from NGINX to the client.
# This can happen if there is a flow stoppage in the upstream.
fastcgi_send_timeout 7d;
# This server's listen directive says to use SSL on port 5678.
# When HTTP requests come to an SSL port, NGINX throws 497 "HTTP Request
# Sent to HTTPS Port".
# Since our requests will be HTTP on port 5678, NGINX will throw error
# code 497.
# To fix this, when NGINX throws 497 we tell it to use the status code
# from the upstream server.
error_page 497 = $request_uri;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HOST $host;
include fastcgi_params;
location = /path/to/resource1 {
fastcgi_pass 127.0.0.1:14052;
}
location = /path/to/resource2 {
fastcgi_pass 127.0.0.1:14052;
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269196,269353#msg-269353
From brentgclarklist at gmail.com Fri Sep 2 11:43:00 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Fri, 2 Sep 2016 13:43:00 +0200
Subject: Nginx to real time minifying
Message-ID: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
Good day Guys
I heard companies like Cloudflare have an option for minifying on their
proxies.
I would like to ask: is there such a feature for nginx?
Is there a third-party module?
Many thanks
Brent
From pablo.platt at gmail.com Fri Sep 2 11:51:01 2016
From: pablo.platt at gmail.com (pablo platt)
Date: Fri, 2 Sep 2016 14:51:01 +0300
Subject: Nginx to real time minifying
In-Reply-To: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
References: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
Message-ID:
The gzip module compresses in realtime (uses the CPU):
http://nginx.org/en/docs/http/ngx_http_gzip_module.html
The gzip_static module use existing compressed files:
http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
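Neither module does minification, but for completeness, a sketch of how
the two are commonly combined (the MIME types and path are illustrative):

```nginx
# Compress dynamic responses on the fly (costs CPU per request).
gzip on;
gzip_types text/css application/javascript application/json;

location /static/ {
    # If a pre-compressed /static/app.js.gz exists, it is served for
    # requests to /static/app.js instead of compressing each time.
    gzip_static on;
}
```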
On Fri, Sep 2, 2016 at 2:43 PM, Brent Clark
wrote:
> Good day Guys
>
> I heard companies like cloudflare have an option for minifying on their
> proxies.
>
> I would like to ask, is there such a feature for nginx.
>
> Is there a third party module?
>
> Many thanks
>
> Brent
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From miguelmclara at gmail.com Fri Sep 2 12:14:55 2016
From: miguelmclara at gmail.com (Miguel C)
Date: Fri, 2 Sep 2016 13:14:55 +0100
Subject: Nginx to real time minifying
In-Reply-To:
References: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
Message-ID:
Maybe this: https://github.com/mrclay/minify/blob/2.x/README.md
Note that I have never used it in production; since I run mostly WP
sites, plug-ins have worked best so far.
One awesome alternative is ngx_pagespeed. It's a pity it's not supported
on FreeBSD, but on a Linux server PageSpeed will handle that and much
more. With the correct configuration for your site (you might need to
play with it for a while) it delivers the best results, and includes
nice things like automatic resizing for different viewports, making
things much faster on mobile :)
--
Miguel Clara,
Sent from Gmail Mobile
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From pablo.platt at gmail.com Fri Sep 2 12:19:49 2016
From: pablo.platt at gmail.com (pablo platt)
Date: Fri, 2 Sep 2016 15:19:49 +0300
Subject: Nginx to real time minifying
In-Reply-To:
References: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
Message-ID:
There is also Google PageSpeed (I haven't used it):
https://developers.google.com/speed/pagespeed/module/
https://github.com/pagespeed/ngx_pagespeed
On Fri, Sep 2, 2016 at 3:14 PM, Miguel C wrote:
> Maybe this: https://github.com/mrclay/minify/blob/2.x/README.md
>
> Note that I never used in in production, since I run mostly WP sites,
> plug-ins worked best so far.
>
> One awesome alternative is ngx-pagespeed it's a pity it's not supported on
> FreeBSD though but on Linux server pagespeed will handle that and much more
> and with the corrected configuration for your site (u might need to play
> with it for a while) it delivers the best results, and includes nice things
> like auto resizing for different view ports making things much faster on
> mobile :)
>
>
>
>
> --
> Miguel Clara,
> Sent from Gmail Mobile
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Fri Sep 2 12:49:13 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Fri, 02 Sep 2016 08:49:13 -0400
Subject: pcre.org down?
Message-ID: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
Anyone have any idea what happened to www.pcre.org?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269359#msg-269359
From anoopalias01 at gmail.com Fri Sep 2 15:04:16 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Fri, 2 Sep 2016 20:34:16 +0530
Subject: pcre.org down?
In-Reply-To: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
; Received 682 bytes from 192.228.79.201#53(b.root-servers.net) in 405 ms
pcre.org. 86400 IN NS ns.figure1.net.
pcre.org. 86400 IN NS monid01.nebcorp.com.
pcre.org. 86400 IN NS meow.raye.com.
pcre.org. 86400 IN NS koffing.ivysaur.com.
h9p7u7tr2u91d0v0ljs9l1gidnp90u3h.org. 86400 IN NSEC3 1 1 1 D399EAAB
H9PARR669T6U8O1GSG9E1LMITK4DEM0T NS SOA RRSIG DNSKEY NSEC3PARAM
h9p7u7tr2u91d0v0ljs9l1gidnp90u3h.org. 86400 IN RRSIG NSEC3 7 2 86400
20160923150233 20160902140233 48497 org.
EBTmSR2rCyGj0HzJr5zL5uMIWD6K7inbPUctZ4iWRKfpQjOy02jW+ETu
psvQCa3dtWGGWUfTM820sMbsG7Uue3BX+/2Utrq0lB0XAcL/Z/p9Fwra
h2W8fKHOMyy+6TimoR45A7PnLwqLdLLhY03ISp9pcd7WTGJQ/V/0M5nO Ss8=
jnqfik42o561r7a65jpdqln7gouvgjbs.org. 86400 IN NSEC3 1 1 1 D399EAAB
JNRF2EBH2M0FOJG163S5KVHSBO31O5RF NS DS RRSIG
jnqfik42o561r7a65jpdqln7gouvgjbs.org. 86400 IN RRSIG NSEC3 7 2 86400
20160923095353 20160902085353 48497 org.
Zt8KcXmYsykQQV1hnF3X012jXqorxh8Hj4X12HzQftD/U/CmH03x925I
rvRSY4wYXzlNaHyJ5vDTeYzAG9TIdxG66RDHeOwn3HRGqht2u14oc+sE
pNbYm/cE2ozbf4ohQ0VBT3ma5UInu6ATU9pkJ1nOldYW+LtmPY4/MYFJ DVs=
couldn't get address for 'monid01.nebcorp.com': failure
;; Received 645 bytes from 199.19.57.1#53(d0.org.afilias-nst.org) in 435 ms
;; Received 37 bytes from 66.93.34.236#53(ns.figure1.net) in 329 ms
On Fri, Sep 2, 2016 at 6:19 PM, itpp2012 wrote:
> Anyone any idea what happened to www.pcre.org ?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269359#msg-269359
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
--
Anoop P Alias
From nginx-forum at forum.nginx.org Fri Sep 2 18:30:27 2016
From: nginx-forum at forum.nginx.org (erankor)
Date: Fri, 02 Sep 2016 14:30:27 -0400
Subject: Cancelling aio operations on Linux
Message-ID: <6a5409e671edb87db45a5e19da0e183f.NginxMailingListEnglish@forum.nginx.org>
Hi,
Recently, while reloading/restarting nginx, I've been getting errors such as:
2016/09/02 11:13:44 [alert] 16480#16480: *1234 open socket #123 left in
connection 123
After setting `debug_points abort` and checking the core dump, I found that
all requests were blocked on file AIO (they had both r->blocked and r->aio
set to 1).
I then looked at the nginx source and saw this comment:
/*
* FreeBSD file AIO features and quirks:
....
* aio_cancel() cannot cancel file AIO: it returns AIO_NOTCANCELED
always.
*/
My question is: from your knowledge, does aio_cancel() work correctly on
Linux?
If so, can you provide some high-level guidance for implementing it?
Btw, it is clear that there is some problem with the storage that makes
AIO read operations hang forever, and cancelling them isn't the ideal
solution, but it will at least prevent them from having a cumulative
negative effect on the server.
Thank you !
Eran
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269366,269366#msg-269366
From nginx-forum at forum.nginx.org Sat Sep 3 13:09:19 2016
From: nginx-forum at forum.nginx.org (mastercan)
Date: Sat, 03 Sep 2016 09:09:19 -0400
Subject: Multi Certificate Support with OCSP not working right
Message-ID: <390a6995094152dee5bbabb945893b3f.NginxMailingListEnglish@forum.nginx.org>
Hello,
When using 2 certificates, 1 RSA (from AlphaSSL) and 1 ECDSA (from Let's
Encrypt), and I try to connect via the RSA SSL connection, nginx throws
this error:
"OCSP response not successful (6: unauthorized) while requesting certificate
status, responder: ocsp.int-x3.letsencrypt.org"
So it is using the wrong responder.
Following build (custom compiled):
Nginx 1.11.3
Openssl 1.1.0
AFAIK OpenSSL 1.1.0 should support multiple certificate chains. I don't
quite understand why OCSP stapling is then not working right.
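For readers following the thread, the dual-certificate setup being
described looks roughly like this (paths and names are hypothetical).
Note this sketch reproduces the reported setup, not a workaround; OCSP
stapling additionally needs a resolver and, for ssl_stapling_verify, the
issuers' certificates:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # RSA chain (AlphaSSL in this case)
    ssl_certificate     /etc/nginx/ssl/rsa-chain.pem;
    ssl_certificate_key /etc/nginx/ssl/rsa.key;

    # ECDSA chain (Let's Encrypt in this case)
    ssl_certificate     /etc/nginx/ssl/ecdsa-chain.pem;
    ssl_certificate_key /etc/nginx/ssl/ecdsa.key;

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/issuers.pem;
    resolver 127.0.0.1;
}
```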
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269371,269371#msg-269371
From wplatnick at gmail.com Sun Sep 4 00:06:09 2016
From: wplatnick at gmail.com (Will Platnick)
Date: Sun, 04 Sep 2016 00:06:09 +0000
Subject: Server very delayed in sending SYN/ACK
Message-ID:
Hello,
I have run into a very interesting issue. I am replacing a set of nginx
reverse proxy servers with a new set running an updated OS/nginx. These
nginx servers front a popular API that's mostly used by mobile apps, but
also a website that's hosted on a nearby subnet. I put the new servers into
service last night, and this morning as traffic picked up (only a couple
thousand requests per second), I got alerts from my DNS provider that
requests to the new server were starting to timeout in the Connect phase.
I hopped into New Relic, and I could see tons of requests from my website
to the nginx reverse proxy timing out after it hit our limit of 10s. I did
some curl requests with timing information, and I could see long times only
in the time_connect level, confirming the issue was only in the connection
phase. I hopped on the new nginx server and started a packet capture
filtered to a machine on a nearby subnet, did the curl from there, got it
taking a 9+ seconds in the connect phase, stopped the packet capture, and
moved the traffic over to my old setup. No issues over there.
Here's everything I know/think is relevant:
* In the packet capture from the server, I see the SYN packet come in, then
3 more retransmits of that same SYN come in before the server sent back the
SYN/ACK. To me this indicates the issue is on the kernel or nginx side.
* There's absolutely no slowdown in the backends as measured from the same
nginx server.
* There's nothing in the nginx error log
* There's nothing from the kernel in dmesg when this is happening
* NIC duplex is fine, no dropped queues from ethtool -S (but, again, it
doesn't seem like a networking issue, we got the SYNs just fine, we just
didn't send the syn/ack)
* I tried to artificially load test afterwards using ab and loader.io,
doing 3x as many requests, but couldn't replicate the issue. I'm not sure
if it's some weird issue due to misbehaving mobile clients and SSL filling
up some sort of queue, but whatever it is, I can't replicate the issue on
demand.
* Load on the box was fine (<4) and no crazy I/O.
* Keepalives were turned on
* Some relevant sysctl values:
cat /proc/sys/net/core/somaxconn (backlog is set to the same in the nginx
config)
16384
cat /proc/sys/net/core/netdev_max_backlog
15000
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
262144
NGINX: 1.11.3
OS: Ubuntu 16.04.1 x64
Kernel: 4.4.0-36-generic
It seems to me the issue is at the kernel/app level, but I can't think of
where to go from here.
If anybody has any ideas for me try, or if I've forgotten to mention
something relevant, please let me know.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From krishna.ku at flipkart.com Sun Sep 4 02:49:51 2016
From: krishna.ku at flipkart.com (Krishna Kumar (Engineering))
Date: Sun, 4 Sep 2016 08:19:51 +0530
Subject: Server very delayed in sending SYN/ACK
In-Reply-To:
References:
Message-ID:
Hi Will,
> * In the packet capture from the server, I see the SYN packet come in,
then 3 more retransmits of that same syn come in before the server sent
back the SYN/ACK. To me this indicates the issue in kernel or nginx side.
Clients often send multiple SYNs. You can run Wireshark to check the
timestamps: if the packets are very close together (milliseconds apart), that
is normal; otherwise you have a problem on the server.
nginx does not come into the picture during the TCP handshake; its part is
done once it has marked the socket as ready to accept connections via the
listen() system call. Once the final ACK arrives, the connection is ready,
and a subsequent accept() will succeed (i.e. it does not block). However, the
client sees connect() succeed when the TCP handshake finishes, not when the
application gets around to calling accept().
Maybe attaching a tcpdump capture would be useful for someone to take a look
at what is wrong. Are the initial packets being dropped by the kernel due to
bad checksums? Do you have any iptables rules that might drop SYNs or
rate-limit them? Do you see retransmissions (netstat -s)? Maybe you can run
netstat -s before and after and see which counters increase to derive some
clues from that?
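The before/after netstat -s comparison suggested above can be sketched like this (the helper name and snapshot file names are mine, not from the thread):

```shell
# Hypothetical helper: print counters whose value changed between two
# `netstat -s` snapshots, e.g. taken before and after a slow connect.
diff_counters() {
    awk '
        # First file: remember each counter, keyed by its description.
        NR == FNR { if (NF >= 2) before[substr($0, index($0, $2))] = $1; next }
        # Second file: print counters whose value differs.
        NF >= 2 {
            d = substr($0, index($0, $2))
            if (d in before && before[d] + 0 != $1 + 0)
                print before[d] " -> " $1 "  " d
        }' "$1" "$2"
}
# Assumed workflow:
#   netstat -s > before.txt
#   (reproduce the slow connect with curl)
#   netstat -s > after.txt
#   diff_counters before.txt after.txt
```

Counters such as "SYNs to LISTEN sockets dropped" or "times the listen queue of a socket overflowed" increasing between the snapshots would point at the accept/SYN backlog.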
On Sun, Sep 4, 2016 at 5:36 AM, Will Platnick wrote:
> Hello,
> I have run into a very interesting issue. I am replacing a set of nginx
> reverse proxy servers with a new set running an updated OS/nginx. These
> nginx servers front a popular API that's mostly used by mobile apps, but
> also a website that's hosted on a nearby subnet. I put the new servers into
> service last night, and this morning as traffic picked up (only a couple
> thousand requests per second), I got alerts from my DNS provider that
> requests to the new server were starting to timeout in the Connect phase.
> I hopped into New Relic, and I could see tons of requests from my website
> to the nginx reverse proxy timing out after it hit our limit of 10s. I did
> some curl requests with timing information, and I could see long times only
> in the time_connect level, confirming the issue was only in the connection
> phase. I hopped on the new nginx server and started a packet capture
> filtered to a machine on a nearby subnet, did the curl from there, got it
> taking a 9+ seconds in the connect phase, stopped the packet capture, and
> moved the traffic over to my old setup. No issues over there.
>
> Here's everything I know/think is relevant:
>
> * In the packet capture from the server, I see the SYN packet come in,
> then 3 more retransmits of that same syn come in before the server sent
> back the SYN/ACK. To me this indicates the issue in kernel or nginx side.
>
> * There's absolutely no slowdown in the backends as measured from the same
> nginx server.
>
> * There's nothing in the nginx error log
>
> * There's nothing from the kernel in dmesg when this is happening
>
> * NIC duplex is fine, no dropped queues from ethtool -S (but, again, it
> doesn't seem like a networking issue, we got the SYNs just fine, we just
> didn't send the syn/ack)
>
> * I tried to artificially load test afterwords using ab and loader.io,
> doing 3x as many requests, but couldn't replicate the issue. I'm not sure
> if it's some weird issue due to misbehaving mobile clients and SSL filling
> up some sort of queue, but whatever it is, I can't replicate the issue on
> demand.
>
> * Load on the box was fine (<4) and no crazy I/O.
>
> * Keepalives were turned on
>
> * Some relevant sysctl values:
>
> cat /proc/sys/net/core/somaxconn (backlog is set to the same in the nginx
> config)
> 16384
>
> cat /proc/sys/net/core/netdev_max_backlog
> 15000
>
> cat /proc/sys/net/ipv4/tcp_max_syn_backlog
> 262144
>
> NGINX: 1.11.3
> OS: Ubuntu 16.04.1 x64
> Kernel: 4.4.0-36-generic
>
> It seems to me the issue is at the kernel/app level, but I can't think of
> where to go from here.
>
> If anybody has any ideas for me try, or if I've forgotten to mention
> something relevant, please let me know.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Sun Sep 4 09:10:11 2016
From: nginx-forum at forum.nginx.org (squonk)
Date: Sun, 04 Sep 2016 05:10:11 -0400
Subject: reverse proxy with TLS termination and DNS lookup
Message-ID: <01759b35e5be8efcf63b74b9c4ef3f8d.NginxMailingListEnglish@forum.nginx.org>
hi all..
I am trying to configure a reverse proxy which redirects a URL of the form:
https://mydomain.com/myapp/abcd/...
to:
http://myapp:5100/abcd/...
with DNS resolution of "myapp" to an IP address at runtime.
My current configuration file is:
server {
    listen 80 default_server;
    server_name mydomain.com;
    return 301 https://www.mydomain.com$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name mydomain.com;

    resolver 123.4.5.6 valid=60s;  # DNS name server; 'nslookup myapp' does work

    set $app_upstream http://myapp:5100;

    location /myapp/ {
        rewrite ^/myapp/(.*) /$1 break;
        proxy_pass $app_upstream;
    }
}
When I try:
https://mydomain.com/myapp/
it resolves to:
http://myapp/
but the log shows that the port isn't appended. I would prefer it if the
caller didn't have to know the port. I could iterate, but I don't have enough
experience to say whether the overall approach is consistent with nginx best
practice, and I need to proxy servers other than myapp, so any feedback would
be appreciated.
thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269375,269375#msg-269375
From nginx-forum at forum.nginx.org Sun Sep 4 10:49:35 2016
From: nginx-forum at forum.nginx.org (George)
Date: Sun, 04 Sep 2016 06:49:35 -0400
Subject: pcre.org down?
In-Reply-To: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
yeah ran into the same problem and still seems to be down right now
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269379#msg-269379
From nginx-forum at forum.nginx.org Sun Sep 4 10:50:30 2016
From: nginx-forum at forum.nginx.org (NuLL3rr0r)
Date: Sun, 04 Sep 2016 06:50:30 -0400
Subject: Nginx SNI and Letsencrypt on FreeBSD; Wrong certificate?
In-Reply-To: <20160829104910.GD1855@mdounin.ru>
References: <20160829104910.GD1855@mdounin.ru>
Message-ID: <69501fc7dcd85ebd69554d5234a78b89.NginxMailingListEnglish@forum.nginx.org>
Thank you Maxim for the answer, and sorry for my tardy response. I'm sure
that's not the case since I have a server block with redirect to www. Here
is the actual config:
server {
server_tokens off;
listen 80;
listen [::]:80;
server_name learnmyway.net;
location / {
return 301 https://www.$server_name$request_uri; # enforce https /
www
}
# Error Pages
include /path/to/snippets/error;
# Anti-DDoS
include /path/to/snippets/anti-ddos;
# letsencrypt acme challenges
include /path/to/snippets/letsencrypt-acme-challenge;
}
server {
server_tokens off;
listen 80;
listen [::]:80;
server_name *.learnmyway.net;
location / {
return 301 https://$host$request_uri; # enforce https
}
# Error Pages
include /path/to/snippets/error;
# Anti-DDoS
include /path/to/snippets/anti-ddos;
# letsencrypt acme challenges
include /path/to/snippets/letsencrypt-acme-challenge;
}
server {
server_tokens off;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name www.learnmyway.net;
# Hardened SSL
include /path/to/snippets/hardened-ssl;
ssl_certificate /path/to/certs/learnmyway.net.pem;
ssl_certificate_key /path/to/keys/learnmyway.net.pem;
ssl_trusted_certificate /path/to/certs/learnmyway.net.pem;
#error_log /path/to/learnmyway.net/log/www_error_log;
#access_log /path/to/learnmyway.net/log/www_access_log;
root /path/to/learnmyway.net/www/;
index index.html;
# Error Pages
include /path/to/snippets/error;
# Anti-DDoS
include /path/to/snippets/anti-ddos;
# letsencrypt acme challenges
include /path/to/snippets/letsencrypt-acme-challenge;
# Compression
include /path/to/snippets/compression;
# Static Resource Caching
include /path/to/snippets/static-resource-caching;
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269263,269380#msg-269380
From nginx-forum at forum.nginx.org Sun Sep 4 11:07:28 2016
From: nginx-forum at forum.nginx.org (NuLL3rr0r)
Date: Sun, 04 Sep 2016 07:07:28 -0400
Subject: Nginx SNI and Letsencrypt on FreeBSD; Wrong certificate?
In-Reply-To: <69501fc7dcd85ebd69554d5234a78b89.NginxMailingListEnglish@forum.nginx.org>
References: <20160829104910.GD1855@mdounin.ru>
<69501fc7dcd85ebd69554d5234a78b89.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <172e70cce4fe587e53ca39fcf9a86547.NginxMailingListEnglish@forum.nginx.org>
Oops! Thank you so much, Maxim. You are right! Reading your response again, I
just figured it out. Adding the following block solved the issue:
server {
server_tokens off;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name learnmyway.net;
# Hardened SSL
include /path/to/snippets/hardened-ssl;
ssl_certificate /path/to/certs/learnmyway.net.pem;
ssl_certificate_key /path/to/keys/learnmyway.net.pem;
ssl_trusted_certificate /path/to/certs/learnmyway.net.pem;
return 301 https://www.$server_name$request_uri; # enforce www
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269263,269381#msg-269381
From brentgclarklist at gmail.com Tue Sep 6 06:46:25 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Tue, 6 Sep 2016 08:46:25 +0200
Subject: curl -I says X-Cache-Status: MISS
Message-ID: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com>
Good day Guys
I'm trying to get to grips with and understand caching.
I can see nginx caching wonderfully (all in all, everything is working), but
for my own understanding I'm peeking under the hood.
I decided to look at what's inside one of the cache files, and one of the
things that got my attention was:
cat /storage/imgcache/2/05/f8484e99c2d4e7659020a0fa96a22052
KEY:
httpGETdomain.com/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3
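As an aside for anyone else peeking under the hood: the cache file name is the MD5 hex digest of that KEY string, and the levels= subdirectories come from the end of the digest. A sketch with a made-up key (the real key above has its domain redacted, so it can't be reproduced here):

```shell
# nginx names a cache file with md5(<cache key>); illustrative key only.
key='httpGETexample.com/wp-content/uploads/pic.jpg?v=1'
digest=$(printf '%s' "$key" | md5sum | awk '{print $1}')
echo "$digest"
# With levels=1:2 (as in /2/05/f8484e...052 above), the first directory is
# the last hex character of the digest and the second directory is the two
# characters before it.
```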
So here now is my question, if I do :
bclark at bclark:~$ curl -I
http://domain/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 06 Sep 2016 06:34:23 GMT
Content-Type: image/jpeg
Content-Length: 9052
Connection: keep-alive
Last-Modified: Mon, 05 Sep 2016 15:22:03 GMT
ETag: "235c-53bc43e026b87"
Cache-Control: max-age=31536000, public
Expires: Wed, 06 Sep 2017 06:34:23 GMT
Vary: User-Agent
Pragma: public
X-Powered-By: W3 Total Cache/0.9.4.1
X-Cache-Status: MISS
Accept-Ranges: bytes
See the X-Cache-Status.
Does anyone know why it says MISS?
If I run the same curl command again, it says HIT.
Many thanks
Brent
From medvedev.yp at gmail.com Tue Sep 6 06:58:39 2016
From: medvedev.yp at gmail.com (Yuriy Medvedev)
Date: Tue, 06 Sep 2016 09:58:39 +0300
Subject: curl -I says X-Cache-Status: MISS
References: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com>
Message-ID:
A MISS status means the file was not yet in the cache; it gets cached while serving that request, so the next request should be a HIT. A MISS on the first request for a file is normal.
Sent from my ASUS
-------- Original message --------
From: Brent Clark
Date: Tue, 06 Sep 2016 09:46:25 +0300
To: nginx at nginx.org
Subject: curl -I says X-Cache-Status: MISS
>Good day Guys
>
>Im trying to get to grips and understand caching.
>
>So I can see nginx caching wonderfully (all in all everything is
>working), but for my own understanding and peeking under the hood.
>
>I just decided to see what inside one of the cache files and one of the
>things that got my attention was
>
>cat /storage/imgcache/2/05/f8484e99c2d4e7659020a0fa96a22052
>
>KEY:
>httpGETdomain.com/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3
>
>
>So here now is my question, if I do :
>
>bclark at bclark:~$ curl -I
>http://domain/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3
>HTTP/1.1 200 OK
>Server: nginx
>Date: Tue, 06 Sep 2016 06:34:23 GMT
>Content-Type: image/jpeg
>Content-Length: 9052
>Connection: keep-alive
>Last-Modified: Mon, 05 Sep 2016 15:22:03 GMT
>ETag: "235c-53bc43e026b87"
>Cache-Control: max-age=31536000, public
>Expires: Wed, 06 Sep 2017 06:34:23 GMT
>Vary: User-Agent
>Pragma: public
>X-Powered-By: W3 Total Cache/0.9.4.1
>X-Cache-Status: MISS
>Accept-Ranges: bytes
>
>See the X-Cache-Status.
>
>Does anyone know why it says MISS?
>
>If I run the same curl command again, it says HIT.
>
>Many thanks
>
>Brent
>
>_______________________________________________
>nginx mailing list
>nginx at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Tue Sep 6 07:46:41 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Tue, 06 Sep 2016 03:46:41 -0400
Subject: curl -I says X-Cache-Status: MISS
In-Reply-To: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com>
References: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com>
Message-ID: <357da3d32d70eb4286fc9d7fd14038e3.NginxMailingListEnglish@forum.nginx.org>
Brent Clark Wrote:
-------------------------------------------------------
> Vary: User-Agent
See https://forum.nginx.org/read.php?2,262943,262943#msg-262943
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269389,269391#msg-269391
From brentgclarklist at gmail.com Tue Sep 6 08:07:42 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Tue, 6 Sep 2016 10:07:42 +0200
Subject: curl -I says X-Cache-Status: MISS
In-Reply-To: <357da3d32d70eb4286fc9d7fd14038e3.NginxMailingListEnglish@forum.nginx.org>
References: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com>
<357da3d32d70eb4286fc9d7fd14038e3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <6cb80dbf-0889-eddd-ac53-3899bce92e76@gmail.com>
Thank you so much.
Regards
Brent
On 06/09/2016 09:46, itpp2012 wrote:
> Brent Clark Wrote:
> -------------------------------------------------------
>> Vary: User-Agent
> See https://forum.nginx.org/read.php?2,262943,262943#msg-262943
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269389,269391#msg-269391
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From atomyuk at gmail.com Tue Sep 6 08:25:58 2016
From: atomyuk at gmail.com (Артём Томюк)
Date: Tue, 6 Sep 2016 11:25:58 +0300
Subject: libluajit-5.1.so.2()(64bit) is needed by nginx-1.11.3-1.el6.ngx.x86_64
Message-ID:
Hi all.
I am trying to install nginx built (via rpmbuild) with Lua and LuaJIT
support, and I'm getting the error:
libluajit-5.1.so.2()(64bit) is needed by nginx-1.11.3-1.el6.ngx.x86_64.
libluajit-5.1.so.2 is present in /usr/lib and /usr/lib64, but during install
it somehow isn't visible.
Maybe during the build of the nginx package I can point the installer to
where to find this lib?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From emiel.mols at gmail.com Tue Sep 6 14:08:22 2016
From: emiel.mols at gmail.com (Emiel Mols)
Date: Tue, 06 Sep 2016 14:08:22 +0000
Subject: keep-alive to backend + non-idempotent requests = race condition?
In-Reply-To:
References:
Message-ID:
Anyone?
On Thu, Aug 25, 2016 at 3:44 PM Emiel Mols wrote:
> Hey,
>
> I've been haunted by this for quite some time, seen it in different
> deployments, and think might make for some good ol' mailing list discussion.
>
> When
>
> - using keep-alive connections to a backend service (eg php, rails, python)
> - this backend needs to be updatable (it is not okay to have lingering
> workers for hours or days)
> - requests are often not idempotent (can't repeat them)
>
> current deployments need to close the kept-alive connection from the
> backend-side, always opening up a race condition where nginx has just sent
> a request and the connection gets closed. This leaves nginx in limbo not
> knowing if the request has been executed and can be repeated.
>
> When using keep-alive connections, the only reliable way of closing them is
> from the client side (in this case: nginx). I would therefore expect either
>
> - a feature to signal nginx to close all connections to the backend after
> having deployed new backend code.
>
> - an upstream keepAliveIdleTimeout config value that guarantees that
> kept-alive connections are not left lingering indefinitely long. If nginx
> guarantees it closes idle connections after 5 seconds, we can be sure that
> 5s+max_request_time after a new backend is deployed all old workers are
> gone.
>
> - (variant on the previous) support for a http header from the backend to
> indicate such a timeout value. It's funny that this header kind-of already
> exists in the spec <
> https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#keep-alive
> >, but in practice is implemented by no-one.
>
> The 2nd and/or 3rd options seem most elegant to me. I wouldn't mind
> implementing myself if someone versed in the architecture would give some
> pointers.
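For what it's worth, the idle timeout described in the second option later landed in nginx itself: since 1.15.3 an upstream block accepts keepalive_timeout and keepalive_requests. A minimal sketch, with a hypothetical backend address:

```nginx
upstream backend {
    server 127.0.0.1:9000;
    keepalive 16;              # idle kept-alive connections per worker
    keepalive_timeout 5s;      # nginx closes idle upstream connections after 5s
    keepalive_requests 1000;   # recycle a connection after this many requests
}
```

With that 5s guarantee, old backend workers can be considered gone 5s plus the maximum request time after a deploy, exactly as described above.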
>
> Best regards,
>
> - Emiel
> BTW: a similar issue should exist between browsers and web servers. Since
> latency is a lot higher on these links, I can only assume it to happen a
> lot.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Wed Sep 7 09:27:27 2016
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Wed, 07 Sep 2016 05:27:27 -0400
Subject: Problem with SSL
Message-ID:
Hi,
I have a problem with non-SSL domains.
I got this setup.
domain1.com
domain2.com SSL
The certificate itself has no issue; all is fine there. The problem is that
when someone goes to https://domain1.com, the domain2.com content is shown.
How can I solve this? I have multiple domains using the same IP, and when a
domain without SSL is requested over https, it always shows domain2.com's
content.
What I want to achieve: if domain1.com has no SSL set up, it should not load,
or should show an SSL certificate error for that same domain, rather than
serving the configured SSL domain.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269401#msg-269401
From medvedev.yp at gmail.com Wed Sep 7 09:29:42 2016
From: medvedev.yp at gmail.com (Yuriy Medvedev)
Date: Wed, 7 Sep 2016 12:29:42 +0300
Subject: Problem with SSL
In-Reply-To:
References:
Message-ID:
Hi, you must use vhost configuration for domains.
2016-09-07 12:27 GMT+03:00 Kurogane :
> Hi,
>
> I've a problem with non ssl.
>
> I got this setup.
>
> domain1.com
> domain2.com SSL
>
> The certificate i not have issue all is fine here. The problem is when
> someone go to this https://domain1.com is show domain2.com content.
>
> How i can solve this issue? i have multi domain using same IP and all
> domains go to "SSL" if not the right SSL domain always show domain2.com
> content.
>
> What i want to archive if domain1.com not have setup SSL not load or load
> SSL cert error but in the same domain not the current SSL domain
> configurate.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269401,269401#msg-269401
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Wed Sep 7 14:30:18 2016
From: nginx-forum at forum.nginx.org (shiz)
Date: Wed, 07 Sep 2016 10:30:18 -0400
Subject: emergency msg after changing cache path
Message-ID: <98acbb5b73564c76c64bf03065d48fe4.NginxMailingListEnglish@forum.nginx.org>
Got this message after changing the cache path. Could not find a solution
after googling it. Any help?
[emerg] 15154#15154: cache "my_zone" uses the "/dev/shm/nginx" cache path
while previously it used the "/tmp/nginx" cache path
nginx -V
nginx version: nginx/1.11.3
built with OpenSSL 1.0.2h 3 May 2016
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong
-Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-fPIE
-pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx
--conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
--with-ipv6 --with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module --with-http_auth_request_module
--with-http_addition_module --with-http_dav_module --with-http_geoip_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_image_filter_module --with-http_v2_module --with-http_sub_module
--with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail
--with-mail_ssl_module --with-threads
--add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-auth-pam
--add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-cache-purge
--add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-dav-ext-module
--add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-echo
--add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-upstream-fair
--add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/ngx_http_substitutions_filter_module
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269405,269405#msg-269405
From mdounin at mdounin.ru Wed Sep 7 14:59:15 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 7 Sep 2016 17:59:15 +0300
Subject: emergency msg after changing cache path
In-Reply-To: <98acbb5b73564c76c64bf03065d48fe4.NginxMailingListEnglish@forum.nginx.org>
References: <98acbb5b73564c76c64bf03065d48fe4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160907145915.GF86582@mdounin.ru>
Hello!
On Wed, Sep 07, 2016 at 10:30:18AM -0400, shiz wrote:
> Got this message after changing the cache path? Could not find a solution
> after googling it. Any help?
>
> [emerg] 15154#15154: cache "my_zone" uses the "/dev/shm/nginx" cache path
> while previously it used the "/tmp/nginx" cache path
You are trying to reload the configuration into an incompatible one,
with a shared memory zone now used for a different cache. It's not
something nginx is prepared to handle, so it refuses to reload the
configuration. Available options are:
- change the configuration to a compatible one (e.g., rename the
cache zone so nginx will create a new one);
- do a binary upgrade to start a new instance of nginx with only
listening sockets inherited (see http://nginx.org/en/docs/control.html#upgrade,
usually can be simplified to "service nginx upgrade");
- just restart nginx.
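The first option amounts to a one-line change. A sketch, assuming the zone was declared with proxy_cache_path (the directive, levels and sizes here are illustrative):

```nginx
# before:
# proxy_cache_path /tmp/nginx     levels=1:2 keys_zone=my_zone:10m;
# after: new path under a NEW zone name, so the reload succeeds
proxy_cache_path   /dev/shm/nginx levels=1:2 keys_zone=my_zone_v2:10m;
```

Remember to update the corresponding proxy_cache (or fastcgi_cache) references from my_zone to my_zone_v2 as well.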
--
Maxim Dounin
http://nginx.org/
From nginx-forum at forum.nginx.org Wed Sep 7 16:33:45 2016
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Wed, 07 Sep 2016 12:33:45 -0400
Subject: Problem with SSL
In-Reply-To:
References:
Message-ID:
Isn't nginx supposed to work with server blocks/vhosts? That is not the issue here.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269413#msg-269413
From medvedev.yp at gmail.com Wed Sep 7 19:34:59 2016
From: medvedev.yp at gmail.com (Yuriy Medvedev)
Date: Wed, 7 Sep 2016 22:34:59 +0300
Subject: Problem with SSL
In-Reply-To:
References:
Message-ID:
Can you show your configuration?
On 7 Sep 2016 at 19:33, "Kurogane" wrote:
> Nginx is not suppose work in block/vhost? that is not the issue here.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269401,269413#msg-269413
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Wed Sep 7 20:52:59 2016
From: nginx-forum at forum.nginx.org (shiz)
Date: Wed, 07 Sep 2016 16:52:59 -0400
Subject: emergency msg after changing cache path
In-Reply-To: <20160907145915.GF86582@mdounin.ru>
References: <20160907145915.GF86582@mdounin.ru>
Message-ID: <9d1b6c88564dabf58cd7f302ff3ff2a0.NginxMailingListEnglish@forum.nginx.org>
Interesting! Thank you so much!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269405,269415#msg-269415
From nginx-forum at forum.nginx.org Thu Sep 8 03:58:57 2016
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Wed, 07 Sep 2016 23:58:57 -0400
Subject: Problem with SSL
In-Reply-To:
References:
Message-ID: <384fc950add55eb859214d683f3ce5cb.NginxMailingListEnglish@forum.nginx.org>
Domain 1
server {
listen 80;
server_name domain1.com;
return 301 $scheme://www.$host$request_uri;
}
server {
listen 80;
server_name www.domain1.com;
root /home/domain1/public_html;
...
}
Domain 2 (SSL)
server {
listen 80;
server_name domain2.com;
return 301 $scheme://www.$host$request_uri;
}
server {
listen 80;
server_name www.domain2.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name www.domain2.com;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /home/nginx/ssl/domain2.com/domain2.com.crt;
ssl_certificate_key /home/nginx/ssl/domain2.com/domain2.com.key;
root /home/domain2/public_html;
...
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269417#msg-269417
From nhadie at gmail.com Thu Sep 8 04:01:27 2016
From: nhadie at gmail.com (ron ramos)
Date: Thu, 8 Sep 2016 12:01:27 +0800
Subject: Problem with SSL
In-Reply-To: <384fc950add55eb859214d683f3ce5cb.NginxMailingListEnglish@forum.nginx.org>
References:
<384fc950add55eb859214d683f3ce5cb.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Just add another server block for domain1 that listens on 443, and redirect
it to http if you want, or just return an error.
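That catch-all block could look like this (certificate paths copied from the posted config; since nginx needs some certificate to terminate TLS here, reusing domain2's cert means clients first see a name-mismatch warning):

```nginx
server {
    listen 443 ssl;
    server_name domain1.com www.domain1.com;
    # Reusing domain2's certificate: browsers will warn about the
    # name mismatch before the return below is ever sent.
    ssl_certificate     /home/nginx/ssl/domain2.com/domain2.com.crt;
    ssl_certificate_key /home/nginx/ssl/domain2.com/domain2.com.key;
    return 301 http://$host$request_uri;   # or: return 444; to just drop it
}
```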
On 8 Sep 2016 11:59 a.m., "Kurogane" wrote:
> Domain 1
>
> server {
> listen 80;
> server_name domain1.com;
> return 301 $scheme://www.$host$request_uri;
> }
>
> server {
> listen 80;
> server_name www.domain1.com;
> root /home/domain1/public_html;
> ...
> }
>
> Domain 2 (SSL)
>
> server {
> listen 80;
> server_name domain2.com;
> return 301 $scheme://www.$host$request_uri;
> }
>
> server {
> listen 80;
> server_name www.domain2.com;
> return 301 https://$host$request_uri;
> }
>
> server {
> listen 443 ssl http2;
> server_name www.domain2.com;
>
> ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
> ssl_certificate /home/nginx/ssl/domain2.com/domain2.com.crt;
> ssl_certificate_key /home/nginx/ssl/domain2.com/domain2.com.key;
> root /home/domain2/public_html;
> ...
> }
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269401,269417#msg-269417
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jiangmuhui at gmail.com Thu Sep 8 05:11:00 2016
From: jiangmuhui at gmail.com (Muhui Jiang)
Date: Thu, 8 Sep 2016 13:11:00 +0800
Subject: nginx input
Message-ID:
Hi
I am using program analysis to locate the bottleneck of nginx. I know the
file nginx under the nginx/sbin directory is the binary. My question is:
what is the input of the binary, i.e. what format does it take? A plain URL
doesn't seem to be the right input. Many thanks.
Regards
Muhui
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Thu Sep 8 05:19:19 2016
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Thu, 08 Sep 2016 01:19:19 -0400
Subject: Problem with SSL
In-Reply-To:
References:
Message-ID: <0bd92558c0b4829390305e9c991967ee.NginxMailingListEnglish@forum.nginx.org>
I never thought about that; very ingenious indeed.
If there is another way to accomplish it, let me know.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269420#msg-269420
From medvedev.yp at gmail.com Thu Sep 8 05:36:14 2016
From: medvedev.yp at gmail.com (Yuriy Medvedev)
Date: Thu, 8 Sep 2016 08:36:14 +0300
Subject: Problem with SSL
In-Reply-To: <0bd92558c0b4829390305e9c991967ee.NginxMailingListEnglish@forum.nginx.org>
References:
<0bd92558c0b4829390305e9c991967ee.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
For each domain, use its own IP+port.
On 8 Sep 2016 at 8:19, "Kurogane" wrote:
> I never thought about it very ingenious indeed.
>
> If there is another way to accomplish it let me know.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269401,269420#msg-269420
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From oscaretu at gmail.com Thu Sep 8 06:04:27 2016
From: oscaretu at gmail.com (oscaretu .)
Date: Thu, 8 Sep 2016 08:04:27 +0200
Subject: nginx input
In-Reply-To:
References:
Message-ID:
Hello
Do you know that NGINX needs a configuration file?
Kind regards,
Oscar
On Thu, Sep 8, 2016 at 7:11 AM, Muhui Jiang wrote:
> Hi
>
> I am using program analysis to locate the bottleneck of nginx. I know the
> file nginx under the directory of nginx/sbin is the binary file. My
> question is that what is the input of the binary. I mean the format. Since
> a general URL doesn't seem to be a right input. Many Thanks
>
> Regards
> Muhui
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
--
Oscar Fernandez Sierra
oscaretu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Thu Sep 8 08:53:47 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Thu, 08 Sep 2016 04:53:47 -0400
Subject: pcre.org down?
In-Reply-To:
References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Still down and so are many mirrors (broken or empty updates)
Found one still working at https://fourdots.com/mirror/exim/exim-ftp/pcre/
Or get a copy here
http://nginx-win.ecsds.eu/download/pcre-8.40-r1664-10-8-2016-svn-src.zip
Both cross-platform sources.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269423#msg-269423
From kurt at x64architecture.com Thu Sep 8 18:47:26 2016
From: kurt at x64architecture.com (Kurt Cancemi)
Date: Thu, 8 Sep 2016 14:47:26 -0400
Subject: pcre.org down?
In-Reply-To:
References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
It appears to have just come back online.
--
Kurt Cancemi
https://www.x64architecture.com
From mdounin at mdounin.ru Thu Sep 8 21:11:37 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 Sep 2016 00:11:37 +0300
Subject: Multi Certificate Support with OCSP not working right
In-Reply-To: <390a6995094152dee5bbabb945893b3f.NginxMailingListEnglish@forum.nginx.org>
References: <390a6995094152dee5bbabb945893b3f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160908211137.GK86582@mdounin.ru>
Hello!
On Sat, Sep 03, 2016 at 09:09:19AM -0400, mastercan wrote:
> When using 2 certificates, 1 RSA (using AlphaSSL) and 1 ECDSA (using Lets
> Encrypt), and I try to connect via RSA SSL connection, nginx throws this
> error:
>
> "OCSP response not successful (6: unauthorized) while requesting certificate
> status, responder: ocsp.int-x3.letsencrypt.org"
>
> So it is using the wrong responder.
>
> Following build (custom compiled):
> Nginx 1.11.3
> Openssl 1.1.0
>
> AFAIK OpenSSL 1.1.0 should support multiple certificate chains. I don't
> quite understand why OCSP then is not working right?
It looks like there is a bug which prevents nginx from using
different OCSP reponders when using OCSP stapling with multiple
certificates. It uses the responder from the last certificate in
the server{} block for all OCSP requests.
Please try the following patch:
# HG changeset patch
# User Maxim Dounin
# Date 1473367064 -10800
# Thu Sep 08 23:37:44 2016 +0300
# Node ID 2037cc64cdceb5b8cb36103cdd9d00e05b8e7ec3
# Parent 4a16fceea03bde6653e05d337e87907f085535b3
OCSP stapling: fixed using wrong responder with multiple certs.
diff --git a/src/event/ngx_event_openssl_stapling.c b/src/event/ngx_event_openssl_stapling.c
--- a/src/event/ngx_event_openssl_stapling.c
+++ b/src/event/ngx_event_openssl_stapling.c
@@ -376,6 +376,7 @@ ngx_ssl_stapling_responder(ngx_conf_t *c
{
ngx_url_t u;
char *s;
+ ngx_str_t rsp;
STACK_OF(OPENSSL_STRING) *aia;
if (responder->len == 0) {
@@ -403,6 +404,8 @@ ngx_ssl_stapling_responder(ngx_conf_t *c
return NGX_DECLINED;
}
+ responder = &rsp;
+
responder->len = ngx_strlen(s);
responder->data = ngx_palloc(cf->pool, responder->len);
if (responder->data == NULL) {
--
Maxim Dounin
http://nginx.org/
From nginx-forum at forum.nginx.org Thu Sep 8 22:26:39 2016
From: nginx-forum at forum.nginx.org (mastercan)
Date: Thu, 08 Sep 2016 18:26:39 -0400
Subject: Multi Certificate Support with OCSP not working right
In-Reply-To: <20160908211137.GK86582@mdounin.ru>
References: <20160908211137.GK86582@mdounin.ru>
Message-ID:
Hello Maxim,
Thank you! Good news: The patch seems to work.
br,
Can
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269371,269433#msg-269433
From emailgrant at gmail.com Thu Sep 8 22:31:58 2016
From: emailgrant at gmail.com (Grant)
Date: Thu, 8 Sep 2016 15:31:58 -0700
Subject: limit-req: better message for users?
Message-ID:
Has anyone experimented with displaying a more informative message
than "503 Service Temporarily Unavailable" when someone exceeds the
limit-req?
- Grant
From emailgrant at gmail.com Fri Sep 9 01:23:43 2016
From: emailgrant at gmail.com (Grant)
Date: Thu, 8 Sep 2016 18:23:43 -0700
Subject: limit-req and greedy UAs
Message-ID:
Has anyone considered the problem of legitimate UAs which request a
series of files which don't necessarily exist when they access your
site? Requests for files like robots.txt, sitemap.xml,
crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA
to exceed the limit-req burst value. What is the right way to deal
with this?
- Grant
From lists at lazygranch.com Fri Sep 9 01:39:40 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Thu, 08 Sep 2016 18:39:40 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
Message-ID: <20160909013940.5501012.10243.10085@lazygranch.com>
Since this limit is per IP, is the scenario you stated really a problem? Only that IP is affected. Or, as is often the case, did I miss something?
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
Original Message
From: Grant
Sent: Thursday, September 8, 2016 6:24 PM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: limit-req and greedy UAs
Has anyone considered the problem of legitimate UAs which request a
series of files which don't necessarily exist when they access your
site? Requests for files like robots.txt, sitemap.xml,
crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA
to exceed the limit-req burst value. What is the right way to deal
with this?
- Grant
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at forum.nginx.org Fri Sep 9 07:01:39 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 09 Sep 2016 03:01:39 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
Message-ID:
So I read that IE8 and older browsers do not support "Max-Age" inside of
Set-Cookie headers (but all modern browsers support Expires).
add_header Set-Cookie
"value=1;Domain=.networkflare.com;Path=/;Max-Age=2592000"; #+1 month (30 days)
Apparently they do support "expires", though, so I changed the above to the
following, but now the cookie says it will expire at the end of every session.
add_header Set-Cookie
"value=1;Domain=.networkflare.com;Path=/;expires=2592000"; #+1 month (30 days)
How can I tell this to expire 1 month into the future? All the examples I
find mean I have to set a date manually, which would mean restarting and
editing my config constantly. (Automated would be nice.)
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269438#msg-269438
From nginx-forum at forum.nginx.org Fri Sep 9 07:41:12 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Fri, 09 Sep 2016 03:41:12 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
Message-ID: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
In Lua it's as easy as:
https://github.com/openresty/lua-nginx-module/issues/19#issuecomment-19966018
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269439#msg-269439
From sca at andreasschulze.de Fri Sep 9 08:56:59 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Fri, 09 Sep 2016 10:56:59 +0200
Subject: limit-req: better message for users?
In-Reply-To:
Message-ID: <20160909105659.Horde.kUSojwGhHFfiC3RJxg_fpOR@andreasschulze.de>
Grant:
> Has anyone experimented with displaying a more informative message
> than "503 Service Temporarily Unavailable" when someone exceeds the
> limit-req?
maybe https://tools.ietf.org/html/rfc6585#section-4 ?
Andreas
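For reference, RFC 6585's 429 ("Too Many Requests") can be returned by nginx directly. A minimal sketch of an http{}-level excerpt (zone name, rate, and error-page path are placeholders; `limit_req_status` requires nginx 1.3.15 or later):

```nginx
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

server {
    limit_req zone=perip burst=10;
    limit_req_status 429;            # return 429 instead of the default 503

    # Serve a friendlier explanation page for rate-limited clients.
    error_page 429 /429.html;
    location = /429.html {
        internal;
        root /var/www/errors;        # assumed path containing 429.html
    }
}
```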
From nginx-forum at forum.nginx.org Fri Sep 9 08:57:25 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 09 Sep 2016 04:57:25 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
if ($host ~* www(.*)) {
set $host_without_www $1;
}
header_filter_by_lua '
ngx.header["Set-Cookie"] = "value=1; path=/; domain=$host_without_www;
Expires=" .. ngx.cookie_time(ngx.time()+2592000) -- +1 month 30 days
';
So i added this to my config but does not work for me :(
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269441#msg-269441
From nginx-forum at forum.nginx.org Fri Sep 9 09:38:21 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 09 Sep 2016 05:38:21 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
Solved it now; I forgot that in Lua, nginx variables are referenced differently.
header_filter_by_lua '
ngx.header["Set-Cookie"] = "value=1; path=/; domain=" ..
ngx.var.host_without_www .. "; Expires=" ..
ngx.cookie_time(ngx.time()+2592000) -- +1 month 30 days
';
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269443#msg-269443
From nginx-forum at forum.nginx.org Fri Sep 9 11:33:24 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Fri, 09 Sep 2016 07:33:24 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To: <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
<9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Good, keep in mind that "ngx.time()" can be expensive, it would be advisable
to use a global var to store time and update this var once every hour.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269444#msg-269444
From nginx-forum at forum.nginx.org Fri Sep 9 12:03:43 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 09 Sep 2016 08:03:43 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
<9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Can you provide an example? Also, I seem to have a new issue with my code
above: it is overwriting all my other Set-Cookie headers. How can I have it
set that cookie without overwriting / removing the others? It seems to be an
unwanted / unexpected side effect.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269445#msg-269445
From r1ch+nginx at teamliquid.net Fri Sep 9 13:00:36 2016
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Fri, 9 Sep 2016 15:00:36 +0200
Subject: limit-req and greedy UAs
In-Reply-To: <20160909013940.5501012.10243.10085@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
Message-ID:
You can put limit_req in a location, for example do not limit static files
and only limit expensive backend hits, or use two different thresholds.
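A hedged sketch of that per-location approach (zone names, rates, paths, and the PHP socket are placeholders, not a recommendation):

```nginx
# Two zones: generous for static assets, strict for backend hits.
limit_req_zone $binary_remote_addr zone=static:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=dynamic:10m rate=2r/s;

server {
    location /assets/ {
        limit_req zone=static burst=50 nodelay;   # static files rarely need delaying
    }

    location ~ \.php$ {
        limit_req zone=dynamic burst=5;           # expensive backend requests
        fastcgi_pass unix:/run/php-fpm.sock;      # assumed upstream
    }
}
```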
On Fri, Sep 9, 2016 at 3:39 AM, wrote:
> Since this limit is per IP, is the scenario you stated really a problem?
> Only that IP is affected. Or, as is often the case, did I miss something?
>
> http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
>
> Original Message
> From: Grant
> Sent: Thursday, September 8, 2016 6:24 PM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: limit-req and greedy UAs
>
> Has anyone considered the problem of legitimate UAs which request a
> series of files which don't necessarily exist when they access your
> site? Requests for files like robots.txt, sitemap.xml,
> crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA
> to exceed the limit-req burst value. What is the right way to deal
> with this?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From rpaprocki at fearnothingproductions.net Fri Sep 9 16:07:51 2016
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Fri, 9 Sep 2016 11:07:51 -0500
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
<9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <032C3F6D-8272-40C0-975B-0BCB441268EF@fearnothingproductions.net>
Actually no, ngx.time() is not expensive, it uses the cached value stored in the request so it doesn't need to make a syscall.
> On Sep 9, 2016, at 06:33, itpp2012 wrote:
>
> Good, keep in mind that "ngx.time()" can be expensive, it would be advisable
> to use a global var to store time and update this var once every hour.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269444#msg-269444
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From lists at lazygranch.com Fri Sep 9 16:30:36 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Fri, 09 Sep 2016 09:30:36 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
Message-ID: <20160909163036.5501012.8924.10125@lazygranch.com>
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Sat Sep 10 12:46:51 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Sat, 10 Sep 2016 08:46:51 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
<9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Just fixed my problem completely now :)
For anyone who also uses Lua and wants to overcome this cross browser
compatibility issue with expires and max-age cookie vars.
if ($host ~* www(.*)) {
set $host_without_www $1;
}
set_by_lua $expires_time 'return ngx.cookie_time(ngx.time()+2592000)';
add_header Set-Cookie
"value=1;domain=$host_without_www;path=/;expires=$expires_time;Max-Age=2592000";
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269452#msg-269452
From reallfqq-nginx at yahoo.fr Sat Sep 10 13:54:50 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 10 Sep 2016 15:54:50 +0200
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
<15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org>
<9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
I just hope that code won't be used by the owner of wwwooowww.wtf for
example.
---
*B. R.*
On Sat, Sep 10, 2016 at 2:46 PM, c0nw0nk
wrote:
> Just fixed my problem completely now :)
>
> For anyone who also uses Lua and wants to overcome this cross browser
> compatibility issue with expires and max-age cookie vars.
>
> if ($host ~* www(.*)) {
> set $host_without_www $1;
> }
> set_by_lua $expires_time 'return ngx.cookie_time(ngx.time()+2592000)';
> add_header Set-Cookie
> "value=1;domain=$host_without_www;path=/;expires=$expires_
> time;Max-Age=2592000";
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269438,269452#msg-269452
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From nginx-forum at forum.nginx.org Sat Sep 10 14:39:44 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Sat, 10 Sep 2016 10:39:44 -0400
Subject: add_header Set-Cookie The difference between Max-Age and Expires
In-Reply-To:
References:
Message-ID: <014b5549498a46b63d81b880eb1054f6.NginxMailingListEnglish@forum.nginx.org>
I am sure (well would hope) they would have the common sense to edit it to
their own needs.
B.R. Wrote:
-------------------------------------------------------
> I just hope that code won't be used by the owner of wwwooowww.wtf for
> example.
> ---
> *B. R.*
>
> On Sat, Sep 10, 2016 at 2:46 PM, c0nw0nk
> wrote:
>
> > Just fixed my problem completely now :)
> >
> > For anyone who also uses Lua and wants to overcome this cross
> browser
> > compatibility issue with expires and max-age cookie vars.
> >
> > if ($host ~* www(.*)) {
> > set $host_without_www $1;
> > }
> > set_by_lua $expires_time 'return
> ngx.cookie_time(ngx.time()+2592000)';
> > add_header Set-Cookie
> > "value=1;domain=$host_without_www;path=/;expires=$expires_
> > time;Max-Age=2592000";
> >
> > Posted at Nginx Forum: https://forum.nginx.org/read.
> > php?2,269438,269452#msg-269452
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269456#msg-269456
From nginx-forum at forum.nginx.org Sun Sep 11 10:56:17 2016
From: nginx-forum at forum.nginx.org (jchannon)
Date: Sun, 11 Sep 2016 06:56:17 -0400
Subject: nginx not returning updated headers from origin server on conditional
GET
Message-ID:
I have nginx and its cache working as expected apart from one minor issue.
When a request is made for the first time it hits the origin server, returns
a 200 and nginx caches that response. If I make another request I can see
from the X-Cache-Status header that the cache has been hit. When I wait a
while knowing the cache will have expired I can see nginx hit my origin
server doing a conditional GET because I have proxy_cache_revalidate on;
defined.
When I check if the resource has changed in my app on the origin server I
see it hasn't and return a 304 with a new Expires header. Some may argue why
are you returning a new Expires header if the origin server says nothing has
changed and you are returning 304. The answer is, the HTTP RFC says that
this can be done https://tools.ietf.org/html/rfc7234#section-4.3.4
One thing I have noticed: no matter what headers I add or modify, when
the origin server returns 304, nginx serves the response with the first set
of response headers it saw for that resource.
Also, if I change the Cache-Control: max-age header value from the first
request when I return the 304 response, nginx appears to obey the new value,
as my resource is cached for that time; however, the response header value is
the one given on the first request, not the value that I modified on
the 304 response. This applies to all subsequent requests if the origin
server issues a 304.
I am running nginx version: nginx/1.10.1
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269457,269457#msg-269457
From nginx-forum at forum.nginx.org Sun Sep 11 12:12:00 2016
From: nginx-forum at forum.nginx.org (khav)
Date: Sun, 11 Sep 2016 08:12:00 -0400
Subject: Rewrite rules not working
Message-ID:
I am trying to make pretty URLs using rewrite rules, but they are not
working.
1.
https://example.com/s1/video.mp4 should be rewritten to
https://example.com/file/server/video.mp4
location = /s1/(.*)$ {
rewrite ^/s1/(.*) /file/server/$1 permanent;
}
2.
https://example.com/view/video5353 should be rewritten to
https://example.com/view.php?id=video5353
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269458,269458#msg-269458
From emailgrant at gmail.com Sun Sep 11 12:29:56 2016
From: emailgrant at gmail.com (Grant)
Date: Sun, 11 Sep 2016 05:29:56 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160909163036.5501012.8924.10125@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
Message-ID:
> What looks to me to be a real resource hog that quite frankly you cant do much about are download managers. They open up multiple connections, but the rate limits apply to each individual connection. (this is why you want to limit the number of connections.)
Does this mean an attacker (for example) could get around rate limits
by opening a new connection for each request? How are the number of
connections limited?
- Grant
From emailgrant at gmail.com Sun Sep 11 12:36:24 2016
From: emailgrant at gmail.com (Grant)
Date: Sun, 11 Sep 2016 05:36:24 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160909013940.5501012.10243.10085@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
Message-ID:
> Since this limit is per IP, is the scenario you stated really a problem? Only that IP is affected. Or, as is often the case, did I miss something?
The idea (which I used bad examples to illustrate) is that some
mainstream browsers make a series of requests for files which don't
necessarily exist. Too many of those requests triggers limiting even
though the user didn't do anything wrong.
- Grant
> Has anyone considered the problem of legitimate UAs which request a
> series of files which don't necessarily exist when they access your
> site? Requests for files like robots.txt, sitemap.xml,
> crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA
> to exceed the limit-req burst value. What is the right way to deal
> with this?
>
> - Grant
From francis at daoine.org Sun Sep 11 13:44:36 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 11 Sep 2016 14:44:36 +0100
Subject: Rewrite rules not working
In-Reply-To:
References:
Message-ID: <20160911134436.GD11677@daoine.org>
On Sun, Sep 11, 2016 at 08:12:00AM -0400, khav wrote:
Hi there,
> I am trying to make pretty urls using rewrite rules but they are not
> working
"Pretty urls" usually means that the browser *only* sees the original
url, and the internal mangling remains hidden.
A rewrite that leads to a HTTP redirect gets the browser to change the
url that it shows.
Sometimes that is wanted; you can judge that for yourself.
> https://example.com/s1/video.mp4 should be rewrite to
> https://example.com/file/server/video.mp4
>
> location = /s1/(.*)$ {
http://nginx.org/r/location. You have used "=", but your pattern resembles
a regex. This location as-is will probably not be matched by any request.
> rewrite ^/s1/(.*) /file/server/$1 permanent;
http://nginx.org/r/rewrite. "permanent" there means "issue
a HTTP redirect", so the browser will make a new request for
/file/server/video.mp4.
I suggest changing it to
location ^~ /s1/ {
rewrite ^/s1/(.*) /file/server/$1 permanent;
}
You can remove the "permanent" if you do not want the external redirect
to be issued; either way, you will also need a location{} which handles
the request for /file/server/video.mp4 and does the right thing.
> https://example.com/view/video5353 should be rewrite to
> https://example.com/view.php?id=video5353
With a few caveats about edge cases, something like
location ^~ /view/ {
rewrite ^/view/(.*) /view.php?id=$1 permanent;
}
should probably do what you want.
Similarly, you will need a location{} to handle the /view.php request
and do the right thing; and removing "permanent" may be useful. If you do
remove "permanent", then you probably could avoid the rewrite altogether
and just "fastcgi_pass" directly, with a hardcoded SCRIPT_FILENAME and
a manually-defined QUERY_STRING.
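A hedged sketch of that last suggestion (the named capture, document root, and PHP-FPM socket are assumptions to adapt):

```nginx
# Pass /view/<id> straight to PHP, skipping the rewrite entirely:
# SCRIPT_FILENAME is hardcoded and QUERY_STRING is built manually.
location ~ ^/view/(?<video_id>.+)$ {
    include        fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME  /var/www/html/view.php;
    fastcgi_param  QUERY_STRING     id=$video_id;
    fastcgi_pass   unix:/run/php-fpm.sock;
}
```

With this, the browser keeps seeing /view/video5353, which is usually what "pretty URLs" means.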
Good luck with it,
f
--
Francis Daly francis at daoine.org
From lists at lazygranch.com Sun Sep 11 14:30:38 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Sun, 11 Sep 2016 07:30:38 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
Message-ID: <20160911143038.5484628.23383.10220@lazygranch.com>
I suspect you are referring to the countless variations on the favicon, with Apple being the worst offender since they have many "touch" files. Android has them too. Just make the files. They don't have to be works of art.
http://iconifier.net/
One of many generators.
Clearly Apple has no respect for the webmaster. But Microsoft has gone one step beyond that, requiring some sort of XML file.
https://msdn.microsoft.com/en-us/library/dn320426(v=vs.85).aspx
The good news is you don't get many requests for that XML.
There are many schemes to keep these files out of your logs.
https://github.com/h5bp/server-configs/issues/132
I look at my logs with scripts, so I haven't bothered to do this, but it is probably good advice.
Are there other files browsers request?
Original Message
From: Grant
Sent: Sunday, September 11, 2016 5:36 AM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs
> Since this limit is per IP, is the scenario you stated really a problem? Only that IP is affected. Or, as is often the case, did I miss something?
The idea (which I used bad examples to illustrate) is that some
mainstream browsers make a series of requests for files which don't
necessarily exist. Too many of those requests triggers limiting even
though the user didn't do anything wrong.
- Grant
> Has anyone considered the problem of legitimate UAs which request a
> series of files which don't necessarily exist when they access your
> site? Requests for files like robots.txt, sitemap.xml,
> crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA
> to exceed the limit-req burst value. What is the right way to deal
> with this?
>
> - Grant
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From lists at lazygranch.com Sun Sep 11 15:21:41 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Sun, 11 Sep 2016 08:21:41 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
Message-ID: <20160911152141.5484628.98176.10223@lazygranch.com>
This page has all the secret sauce, including how to limit the number of connections.
https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
I set up the firewall with a higher number as a "just in case." Also note, if you do streaming outside nginx, then you have to limit connections for that service in the program providing it.
Mind you, while I think this page has good advice, what is listed here won't stop a real DDoS attack. The first D is for distributed, meaning the attack comes from many IP addresses. You probably have to pay for one of those reverse proxy services to avoid a real DDoS, but I personally find them a bit creepy since I have seen hacking attempts come from behind them.
The tips on this nginx page will limit the teenage boy in his parents' basement, which is a more real-life attack scenario. But note that every photo you load is a request, so I wouldn't make the limit any lower than 5 to 10 per second. You can play with the limits and watch the results on your own system. Just remember to:
service nginx reload
service nginx restart
If you do fancy caching, you may have to clear your browser cache.
In theory, Google page ranking takes speed into account. There are many websites that will evaluate your nginx setup.
https://www.webpagetest.org/
One thing to remember is nginx limits are in bytes per second, not bits per second. So the 512k limit in this example is really quite generous.
http://www.webhostingtalk.com/showthread.php?t=1433413
There are programs you can run on your server to flog nginx.
https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
I did this with httperf, but sysbench is supposed to be better. Nginx is very efficient. Your limiting factor will probably be your server network connection. If you sftp files from your server, it will be at the maximum rate you can deliver, and this depends on time of day since you are sharing the pipe. I'm using a VPS that does 40 mbps on a good day. Figure 10 users at a time, and the 512 kbytes per second puts me at the limit.
If you use the nginx map module, you can block download managers if they are honest with their user agents.
http://nginx.org/en/docs/http/ngx_http_map_module.html
http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html
Beware of creating false positives with such rules. When developing code, I return a 444 then search the access.log for what it found, just to ensure I wrote the rule correctly.
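A minimal sketch of that map-plus-444 approach (the listed agents are examples only, and the variable name is arbitrary):

```nginx
# http{}-level map: flag user agents to refuse. Patterns are
# case-insensitive regexes; default is "allowed".
map $http_user_agent $blocked_agent {
    default     0;
    ~*wget      1;
    ~*curl      1;
    ~*python    1;
}

server {
    # 444 closes the connection without sending any response,
    # which is cheap and leaves a clear signature in access.log.
    if ($blocked_agent) {
        return 444;
    }
}
```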
Original Message
From: Grant
Sent: Sunday, September 11, 2016 5:30 AM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs
> What looks to me to be a real resource hog that quite frankly you cant do much about are download managers. They open up multiple connections, but the rate limits apply to each individual connection. (this is why you want to limit the number of connections.)
Does this mean an attacker (for example) could get around rate limits
by opening a new connection for each request? How are the number of
connections limited?
- Grant
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From emailgrant at gmail.com Sun Sep 11 17:22:24 2016
From: emailgrant at gmail.com (Grant)
Date: Sun, 11 Sep 2016 10:22:24 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160911143038.5484628.23383.10220@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160911143038.5484628.23383.10220@lazygranch.com>
Message-ID:
> I suspect you are referring to the countless variations on the favicon, with Apple being the worst offender since they have many "touch" files. Android has them too. Just make the files.
I disagree but maybe because of my webmastering style. I don't know
what more of these files will show up in the future and I want to be
as hands-off as possible to save time.
> Clearly Apple has no respect for the webmaster. But Microsoft has gone one step beyond that, requiring some sort of XML file.
>
> There are many schemes to keep these files out of your logs.
> https://github.com/h5bp/server-configs/issues/132
> I look at my logs with scripts, so I haven't bothered to do this, but it is probably good advice.
I don't think not logging those requests is a good idea unless you
need the disk space.
> Are there other files browsers request?
Today: I don't know. Tomorrow: nobody knows.
- Grant
From emailgrant at gmail.com Sun Sep 11 17:28:19 2016
From: emailgrant at gmail.com (Grant)
Date: Sun, 11 Sep 2016 10:28:19 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160911152141.5484628.98176.10223@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
Message-ID:
> This page has all the secret sauce, including how to limit the number of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
>
> I set up the firewall with a higher number as a "just in case."
Should I basically duplicate my limit_req and limit_req_zone
directives into limit_conn and limit_conn_zone? In what sort of
situation would someone not do that?
- Grant
From emailgrant at gmail.com Sun Sep 11 17:34:00 2016
From: emailgrant at gmail.com (Grant)
Date: Sun, 11 Sep 2016 10:34:00 -0700
Subject: limit-req: better message for users?
In-Reply-To: <20160909105659.Horde.kUSojwGhHFfiC3RJxg_fpOR@andreasschulze.de>
References:
<20160909105659.Horde.kUSojwGhHFfiC3RJxg_fpOR@andreasschulze.de>
Message-ID:
>> Has anyone experimented with displaying a more informative message
>> than "503 Service Temporarily Unavailable" when someone exceeds the
>> limit-req?
>
>
> maybe https://tools.ietf.org/html/rfc6585#section-4 ?
That's awesome. Any idea why it isn't the default? Do you remember
the directive that will set this and roughly where it should go?
- Grant
From emailgrant at gmail.com Sun Sep 11 18:03:47 2016
From: emailgrant at gmail.com (Grant)
Date: Sun, 11 Sep 2016 11:03:47 -0700
Subject: Back button causes limiting?
Message-ID:
I just saw some strange stuff in my logs and it only makes sense if
pressing the back button creates a new request on an iPad. So if an
iPad user presses the back button 5 times quickly, they will have
generated 5 requests in a very short period of time which could turn
on rate limiting if so configured. Has anyone else noticed this?
- Grant
From lists at lazygranch.com Sun Sep 11 19:16:06 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Sun, 11 Sep 2016 12:16:06 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
Message-ID: <20160911191606.5484628.46851.10233@lazygranch.com>
https://www.nginx.com/blog/tuning-nginx/
I have far more faith in this write-up regarding tuning than the anti-DDoS one, though both have similarities.
My interpretation is that user bandwidth is connections times rate. But you can't limit the connections to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than limiting the number of connections. The 512k rate limit is fine. I wouldn't go any higher.
I don't believe there is one answer here because it depends on how the user interacts with the website. I only serve static content. In fact, I only allow the verbs "HEAD" and "GET" to limit the attack surface. A page of text and photos itself can be many things. Think of a photo gallery versus a forum page. The forum has mostly text sprinkled with avatar photos, while the gallery can be mostly images with just a line of text each.
Basically you need to experiment. Even then, your setup may be better or worse than the typical user's. That said, if you limited the rate to 512k bytes per second, most users could achieve that rate.
I just don't see evidence of download managers. I see plenty of wget, curl, and python. Those people get my 444 treatment. I use the map module as indicated in my other post to do this.
What I haven't mentioned is filtering out machines. If you are really concerned about your system being overloaded, think about the search engines you want to support. Assuming you want Google, you need to set up your website in a manner so that Google knows you own it; then you can throttle it back. Google is maybe 20% of my referrals.
If you have a lot of photos, you can set up nginx to block hotlinking. This is significant because Google Images will hotlink everything you have. What you want is for Google itself to see your images, which it will present in reduced resolution, but block the Google hotlink. If someone really wants to see your image, Google supplies the referral page.
http://nginx.org/en/docs/http/ngx_http_referer_module.html
I make my own domain valid, but maybe that is assumed. If you want to place a link to an image on your website in a forum, you need to make that forum valid.
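A hedged sketch of that referer-based hotlink protection (domain names and the forum host are placeholders):

```nginx
# Refuse image requests whose Referer is neither empty ("none"),
# mangled ("blocked"), our own site, nor a whitelisted forum.
location ~* \.(jpg|jpeg|png|gif)$ {
    valid_referers none blocked server_names
                   example.com *.example.com
                   forum.example.org;      # forum where you post image links
    if ($invalid_referer) {
        return 403;
    }
}
```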
Facebook will steal your images.
http://badbots.vps.tips/info/facebookexternalhit-bot
I would use the nginx map module since you will probably be blocking many bots.?
Finally, you may want to block "the cloud" using your firewall. Only block the browser ports, since mail servers will be on the cloud. I block all of AWS, for example. My nginx.conf also flags certain requests, such as logging into WordPress, since I'm not using WordPress! Clearly that IP is a hacker. I have plenty more signatures in the map. I have a script that pulls the IP addresses out of the access.log. I get maybe 20 addresses a day. I feed them to ip2location. Any address that belongs to a cloud, VPS, colo, or hosting company gets added to the firewall blocking list. I don't just block the IP; I use the Hurricane Electric BGP tool to get the entire IP space to block. As a rule, I don't block schools, libraries, or ISPs. The idea here is to allow eyeballs but not machines.
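The WordPress-probe flagging can be sketched with the same map technique (the URI patterns are illustrative; a site that actually runs WordPress must not use this):

```nginx
# In the http{} context: on a site that does not run WordPress, any
# request for WordPress endpoints is a scanner. Give it the 444
# treatment so the log line is easy to grep for the firewall script.
map $request_uri $probe {
    default              0;
    ~*^/wp-login\.php    1;
    ~*^/xmlrpc\.php      1;
}

server {
    if ($probe) {
        return 444;
    }
}
```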
You can also use commercial blocking services if you trust them. (I don't.)
Original Message
From: Grant
Sent: Sunday, September 11, 2016 10:28 AM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs
> This page has all the secret sauce, including how to limit the number of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
>
> I set up the firewall with a higher number as a "just in case."
Should I basically duplicate my limit_req and limit_req_zone
directives into limit_conn and limit_conn_zone? In what sort of
situation would someone not do that?
- Grant
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From reallfqq-nginx at yahoo.fr Mon Sep 12 08:07:31 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 12 Sep 2016 10:07:31 +0200
Subject: limit-req and greedy UAs
In-Reply-To: <20160911191606.5484628.46851.10233@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
<20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID:
You could also generate 304 responses for content you won't provide (cf.
return).
nginx is good at dealing with loads of requests, no problem on that side.
And since return generates an in-memory answer by default, you won't be
hammering your resources. If you are CPU or RAM-limited because of those
requests, then I would suggest you evaluate the sizing of your server(s).
You might wish to separate logging for these requests from the standard
flow to improve their readability, or deactivate them altogether if you
consider they add little-to-no value.
My 2¢,
---
*B. R.*
On Sun, Sep 11, 2016 at 9:16 PM, wrote:
> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write-up regarding tuning than the
> anti-DDoS one, though both have similarities.
>
> My interpretation is that user bandwidth is connections times rate. But you
> can't limit connections to one because (again, my interpretation) there
> can be multiple users behind one IP. Think of a university reading your
> website. Thus I am more comfortable limiting bandwidth than limiting
> the number of connections. The 512k rate limit is fine. I wouldn't go any
> higher.
>
> I don't believe there is one answer here because it depends on how the
> user interacts with the website. I only serve static content. In fact, I
> only allow the verbs "head" and "get" to limit the attack surface. A page
> of text and photos itself can be many things. Think of a photo gallery
> versus a forum page. The forum has mostly text sprinkled with avatar
> photos, while the gallery can be mostly images with just a line of text
> each.
>
> Basically you need to experiment. Even then, your setup may be better or
> worse than the typical user. That said, if you limited the rate to 512k
> bytes per second, most users could achieve that rate.
>
> I just don't see evidence of download managers. I see plenty of wget,
> curl, and python. Those people get my 444 treatment. I use the map module
> as indicated in my other post to do this.
>
> What I haven't mentioned is filtering out machines. If you are really
> concerned about your system being overloaded, think about the search
> engines you want to support. Assuming you want Google, you need to set up
> your website in a manner so that Google knows you own it, then you can
> throttle it back. Google is maybe 20% of my referrals.
>
> If you have a lot of photos, you can set up nginx to block hot-linking.
> This is significant because Google images will hot link everything you
> have. What you want is for Google itself to see your images, which it will
> present in reduced resolution, but block the Google hot link. If someone
> really wants to see your image, Google supplies the referral page.
>
> http://nginx.org/en/docs/http/ngx_http_referer_module.html
>
> I make my own domain a valid referrer, but maybe that is assumed. If you want to
> place a link to an image on your website in a forum, you need to make that
> forum valid.
>
> Facebook will steal your images.
> http://badbots.vps.tips/info/facebookexternalhit-bot
>
> I would use the nginx map module since you will probably be blocking many
> bots.
>
> Finally, you may want to block "the cloud" using your firewall. Only
> block the browser ports since mail servers will be on the cloud. I block
> all of AWS for example. My nginx.conf also flags certain requests such as
> logging into WordPress since I'm not using WordPress! Clearly that IP is a
> hacker. I have plenty more signatures in the map. I have a script that
> pulls the IP addresses out of the access.log. I get maybe 20 addresses a
> day. I feed them to ip2location. Any address that goes to a cloud, VPS,
> colo, hosting company gets added to the firewall blocking list. I don't
> just block the IP, but I use the Hurricane Electric BGP tool to get the
> entire IP space to block. As a rule, I don't block schools, libraries, or
> ISPs. The idea here is to allow eyeballs but not machines.
>
> You can also use commercial blocking services if you trust them. (I don't.)
>
>
> Original Message
> From: Grant
> Sent: Sunday, September 11, 2016 10:28 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Re: limit-req and greedy UAs
>
> > This page has all the secret sauce, including how to limit the number
> of connections.
> >
> > https://www.nginx.com/blog/mitigating-ddos-attacks-with-
> nginx-and-nginx-plus/
> >
> > I set up the firewall with a higher number as a "just in case."
>
>
> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reallfqq-nginx at yahoo.fr Mon Sep 12 08:16:37 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 12 Sep 2016 10:16:37 +0200
Subject: nginx not returning updated headers from origin server on
conditional GET
In-Reply-To:
References:
Message-ID:
From what I understand, 304 answers should not try to modify headers, as
the cache having made the conditional request to check the correctness of
its entry will not necessarily update it:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5.
The last sentence sums it up: '*If* a cache uses a received 304 response
to update a cache entry, [...]'
---
*B. R.*
On Sun, Sep 11, 2016 at 12:56 PM, jchannon
wrote:
> I have nginx and its cache working as expected apart from one minor issue.
> When a request is made for the first time it hits the origin server,
> returns
> a 200 and nginx caches that response. If I make another request I can see
> from the X-Cache-Status header that the cache has been hit. When I wait a
> while knowing the cache will have expired I can see nginx hit my origin
> server doing a conditional GET because I have proxy_cache_revalidate on;
> defined.
>
> When I check if the resource has changed in my app on the origin server I
> see it hasn't and return a 304 with a new Expires header. Some may argue
> why
> are you returning a new Expires header if the origin server says nothing
> has
> changed and you are returning 304. The answer is, the HTTP RFC says that
> this can be done https://tools.ietf.org/html/rfc7234#section-4.3.4
>
> One thing I have noticed: no matter what headers I add or modify, when
> the origin server returns 304 nginx will give a response with the first set
> of response headers it saw for that resource.
>
> Also if I change the Cache-Control:max-age header value from the first
> request when I return the 304 response it appears nginx obeys the new value
> as my resource is cached for that time however the response header value is
> that of what was given on the first request not the value that I modified
> on
> the 304 response. This applies to all subsequent requests if the origin
> server issues a 304.
>
> I am running nginx version: nginx/1.10.1
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269457,269457#msg-269457
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From lists at lazygranch.com Mon Sep 12 09:26:29 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Mon, 12 Sep 2016 02:26:29 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
<20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID: <20160912092629.5484629.32659.10252@lazygranch.com>
I picked 444 based on the following, though I see your point in that it is a non-standard code. I guess from a multiplier standpoint, returning nothing is as minimal as it gets, but the hacker often sends the message twice due to the lack of response. A 304 return to an attempt to log into WordPress would seem a bit weird. All I really need is a unique code to find in the log file.
444 CONNECTION CLOSED WITHOUT RESPONSE
A non-standard status code used to instruct nginx to close the connection without sending a response to the client, most commonly used to deny malicious or malformed requests.
This status code is not seen by the client; it only appears in nginx log files.
Original Message
From: B.R.
Sent: Monday, September 12, 2016 1:08 AM
To: nginx ML
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs
You could also generate 304 responses for content you won't provide (cf. return).
nginx is good at dealing with loads of requests, no problem on that side. And since return generates an in-memory answer by default, you won't be hammering your resources. If you are CPU or RAM-limited because of those requests, then I would suggest you evaluate the sizing of your server(s).
You might wish to separate logging for these requests from the standard flow to improve their readability, or deactivate them altogether if you consider they add little-to-no value.
My 2¢,
---
B. R.
On Sun, Sep 11, 2016 at 9:16 PM, wrote:
https://www.nginx.com/blog/tuning-nginx/
I have far more faith in this write-up regarding tuning than the anti-DDoS one, though both have similarities.
My interpretation is that user bandwidth is connections times rate. But you can't limit connections to one because (again, my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than limiting the number of connections. The 512k rate limit is fine. I wouldn't go any higher.
I don't believe there is one answer here because it depends on how the user interacts with the website. I only serve static content. In fact, I only allow the verbs "HEAD" and "GET" to limit the attack surface. A page of text and photos itself can be many things. Think of a photo gallery versus a forum page. The forum has mostly text sprinkled with avatar photos, while the gallery can be mostly images with just a line of text each.
Basically you need to experiment. Even then, your setup may be better or worse than the typical user's. That said, if you limited the rate to 512k bytes per second, most users could achieve that rate.
I just don't see evidence of download managers. I see plenty of wget, curl, and python. Those people get my 444 treatment. I use the map module as indicated in my other post to do this.
What I haven't mentioned is filtering out machines. If you are really concerned about your system being overloaded, think about the search engines you want to support. Assuming you want Google, you need to set up your website in a manner so that Google knows you own it; then you can throttle it back. Google is maybe 20% of my referrals.
If you have a lot of photos, you can set up nginx to block hot-linking. This is significant because Google Images will hot-link everything you have. What you want is for Google itself to see your images, which it will present in reduced resolution, but block the Google hot link. If someone really wants to see your image, Google supplies the referral page.
http://nginx.org/en/docs/http/ngx_http_referer_module.html
I make my own domain a valid referrer, but maybe that is assumed. If you want to place a link to an image on your website in a forum, you need to make that forum a valid referrer as well.
Facebook will steal your images.
http://badbots.vps.tips/info/facebookexternalhit-bot
I would use the nginx map module since you will probably be blocking many bots.
Finally, you may want to block "the cloud" using your firewall. Only block the browser ports, since mail servers will be on the cloud. I block all of AWS, for example. My nginx.conf also flags certain requests, such as logging into WordPress, since I'm not using WordPress! Clearly that IP is a hacker. I have plenty more signatures in the map. I have a script that pulls the IP addresses out of the access.log. I get maybe 20 addresses a day. I feed them to ip2location. Any address that belongs to a cloud, VPS, colo, or hosting company gets added to the firewall blocking list. I don't just block the IP; I use the Hurricane Electric BGP tool to get the entire IP space to block. As a rule, I don't block schools, libraries, or ISPs. The idea here is to allow eyeballs but not machines.
You can also use commercial blocking services if you trust them. (I don't.)
Original Message
From: Grant
Sent: Sunday, September 11, 2016 10:28 AM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs
> This page has all the secret sauce, including how to limit the number of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
>
> I set up the firewall with a higher number as a "just in case."
Should I basically duplicate my limit_req and limit_req_zone
directives into limit_conn and limit_conn_zone? In what sort of
situation would someone not do that?
- Grant
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at forum.nginx.org Mon Sep 12 12:51:54 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Mon, 12 Sep 2016 08:51:54 -0400
Subject: limit-req and greedy UAs
In-Reply-To: <20160911152141.5484628.98176.10223@lazygranch.com>
References: <20160911152141.5484628.98176.10223@lazygranch.com>
Message-ID: <4159f4fa5336483595c0193bbf5d3b95.NginxMailingListEnglish@forum.nginx.org>
gariac Wrote:
-------------------------------------------------------
> This page has all the secret sauce, including how to limit the number
> of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-ngin
> x-plus/
>
> I set up the firewall with a higher number as a "just in case." Also
> note if you do streaming outside nginx, then you have to limit
> connections for that service in the program providing it.
>
> Mind you, while I think this page has good advice, what is listed here
> won't stop a real DDoS attack. The first D is for distributed, meaning
> the attack comes from many IP addresses. You probably have to pay for
> one of those reverse proxy services to avoid a real DDoS, but I
> personally find them a bit creepy since I have seen hacking
> attempts come from behind them.
>
> The tips on this nginx page will limit the teenage boy in his parents
> basement, which is a more real life scenario to be attacked. But note
> that every photo you load is a request, so I wouldn't make the limit
> any lower than 5 to 10 per second. You can play with the limits and
> watch the results on your own system. Just remember to:
> service nginx reload
> service nginx restart
>
> If you do fancy caching, you may have to clear your browser cache.
>
> In theory, Google page ranking takes speed into account. There are
> many websites that will evaluate your nginx setup.
> https://www.webpagetest.org/
>
> One thing to remember is nginx limits are in bytes per second, not
> bits per second. So the 512k limit in this example is really quite
> generous.
> http://www.webhostingtalk.com/showthread.php?t=1433413
>
> There are programs you can run on your server to flog nginx.
> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-my
> sql-with-sysbench
>
> I did this with httperf, but sysbench is supposed to be better. Nginx
> is very efficient. Your limiting factor will probably be your server
> network connection. If you sftp files from your server, it will be at
> the maximum rate you can deliver, and this depends on time of day
> since you are sharing the pipe. I'm using a VPS that does 40mbps on a
> good day. Figure 10 users at a time and the 512 kbytes per second puts me
> at the limit.
>
> If you use the nginx map module, you can block download managers if
> they are honest with their user agents.
>
> http://nginx.org/en/docs/http/ngx_http_map_module.html
> http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.htm
> l
>
> Beware of creating false positives with such rules. When developing
> code, I return a 444 then search the access.log for what it found,
> just to ensure I wrote the rule correctly.
>
>
>
>
>
>
> Original Message
> From: Grant
> Sent: Sunday, September 11, 2016 5:30 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Re: limit-req and greedy UAs
>
> > What looks to me to be a real resource hog that quite frankly you
> can't do much about are download managers. They open up multiple
> connections, but the rate limits apply to each individual connection.
> (this is why you want to limit the number of connections.)
>
>
> Does this mean an attacker (for example) could get around rate limits
> by opening a new connection for each request? How are the number of
> connections limited?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
The following is also a good resource if you are having issues with slow
DoS attacks, where the attacker tries to keep connections open for long
periods of time.
OWASP : https://www.owasp.org/index.php/SCG_WS_nginx
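The slow-DoS advice on pages like that one largely comes down to nginx's timeout directives; a sketch with illustrative values (not taken from the OWASP page itself):

```nginx
server {
    # Drop clients that send the request headers or body at a
    # trickle (slowloris-style attacks).
    client_header_timeout 10s;
    client_body_timeout   10s;
    # Also bound how long nginx waits between writes to the client,
    # and how long idle keepalive connections are held open.
    send_timeout          10s;
    keepalive_timeout     15s;
}
```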
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269435,269473#msg-269473
From mdounin at mdounin.ru Mon Sep 12 14:57:43 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 12 Sep 2016 17:57:43 +0300
Subject: nginx not returning updated headers from origin server on
conditional GET
In-Reply-To:
References:
Message-ID: <20160912145743.GA1527@mdounin.ru>
Hello!
On Sun, Sep 11, 2016 at 06:56:17AM -0400, jchannon wrote:
> I have nginx and its cache working as expected apart from one minor issue.
> When a request is made for the first time it hits the origin server, returns
> a 200 and nginx caches that response. If I make another request I can see
> from the X-Cache-Status header that the cache has been hit. When I wait a
> while knowing the cache will have expired I can see nginx hit my origin
> server doing a conditional GET because I have proxy_cache_revalidate on;
> defined.
>
> When I check if the resource has changed in my app on the origin server I
> see it hasn't and return a 304 with a new Expires header. Some may argue why
> are you returning a new Expires header if the origin server says nothing has
> changed and you are returning 304. The answer is, the HTTP RFC says that
> this can be done https://tools.ietf.org/html/rfc7234#section-4.3.4
>
> One thing I have noticed: no matter what headers I add or modify, when
> the origin server returns 304 nginx will give a response with the first set
> of response headers it saw for that resource.
Conditional revalidation as available with
"proxy_cache_revalidate on" doesn't try to merge any new/updated
headers to the response stored. This is by design - merging and
updating headers will be just too costly.
This is normally not an issue as you can (and should) use
"Cache-Control: max-age=..." instead of Expires, and with max-age
you don't need to update anything in the response.
If you can't afford this behaviour for some reason, the only
solution is to avoid using proxy_cache_revalidate.
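A sketch of the arrangement Maxim recommends: revalidation on the proxy, with the origin emitting Cache-Control: max-age instead of Expires, so a 304 never needs to rewrite stored headers (the zone name and backend upstream are placeholders):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    location / {
        proxy_pass             http://backend;   # placeholder upstream
        proxy_cache            app_cache;
        # Revalidate expired entries with conditional GETs; on a 304,
        # nginx reuses the stored response and its original headers.
        proxy_cache_revalidate on;
        # Freshness should come from the origin's
        # "Cache-Control: max-age=..." header, not from Expires.
    }
}
```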
--
Maxim Dounin
http://nginx.org/
From emailgrant at gmail.com Mon Sep 12 17:17:06 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 10:17:06 -0700
Subject: Don't process requests containing folders
Message-ID:
My site doesn't have any folders in its URL structure so I'd like to
have nginx process any request which includes a folder (cheap 404)
instead of sending the request to my backend (expensive 404).
Currently I'm using a series of location blocks to check for a valid
request. Here's the last one before nginx internal takes over:
location ~ (^/|.html)$ {
}
Can I expand that to only match requests with a single / or ending in
.html like this:
location ~ (^[^/]+/?[^/]+$|.html$) {
}
Should that work as expected?
- Grant
From jschaeffer0922 at gmail.com Mon Sep 12 19:04:28 2016
From: jschaeffer0922 at gmail.com (Joshua Schaeffer)
Date: Mon, 12 Sep 2016 13:04:28 -0600
Subject: Connecting Nginx to LDAP/Kerberos
Message-ID:
Greetings Nginx list,
I've setup git-http-backend on a sandbox nginx server to host my git
projects inside my network. I'm trying to get everything setup so that I
can require auth to that server block using SSO, which I have setup and
working with LDAP and Kerberos.
I have all my accounts in Kerberos which is stored in OpenLDAP and
authentication works via GSSAPI. How do I get my git server block to use my
central authentication? Does anybody have any experience in setting this up?
I've found a couple git projects which enhances Nginx to support LDAP
authentication:
- https://github.com/kvspb/nginx-auth-ldap
- https://github.com/nginxinc/nginx-ldap-auth
I've gone through the reference implementation (nginx-ldap-auth), but found
that this won't work for me as I use GSSAPI for my authentication.
Looking to see if anybody has done something like this and what their
experience was. Let me know if you'd like to see any of my nginx
configuration files.
Thanks,
Joshua Schaeffer
From sca at andreasschulze.de Mon Sep 12 19:22:03 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Mon, 12 Sep 2016 21:22:03 +0200
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To:
References:
Message-ID: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Am 12.09.2016 um 21:04 schrieb Joshua Schaeffer:
> - https://github.com/kvspb/nginx-auth-ldap
I'm using that one to authenticate my users.
auth_ldap_cache_enabled on;

ldap_server my_ldap_server {
    url ldaps://ldap.example.org/dc=users,dc=mybase?uid?sub;
    binddn cn=nginx,dc=mybase;
    binddn_passwd ...;
    require valid_user;
}

server {
    ...
    location / {
        auth_ldap "foobar";
        auth_ldap_servers "my_ldap_server";
        root /srv/www/...;
    }
}
This is as documented on https://github.com/kvspb/nginx-auth-ldap, except my auth_ldap statements are inside the location,
while the docs suggest them outside.
Q: does that matter?
I found it useful to explicitly set "auth_ldap_cache_enabled on" but cannot remember the detailed reasons.
Finally: it's working as expected for me (basic auth, no Kerberos).
BUT: I fail to compile this module with openssl-1.1.0
I sent a message to https://github.com/kvspb some days ago but got no response till now.
The problem (nginx-1.11.3 + openssl-1.1.0 + nginx-auth-ldap):
cc -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall -I src/core -I src/event -I src/event/modules -I src/os/unix -I /opt/local/include -I objs -I src/http -I src/http/modules -I src/http/v2 \
-o objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o \
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c: In function 'ngx_http_auth_ldap_ssl_handshake':
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c:1325:79: error: dereferencing pointer to incomplete type
int setcode = SSL_CTX_load_verify_locations(transport->ssl->connection->ctx,
^
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c:1335:80: error: dereferencing pointer to incomplete type
int setcode = SSL_CTX_set_default_verify_paths(transport->ssl->connection->ctx);
^
make[2]: *** [objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o] Error 1
objs/Makefile:1343: recipe for target 'objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o' failed
Maybe the list have a suggestion...
From jschaeffer0922 at gmail.com Mon Sep 12 19:33:04 2016
From: jschaeffer0922 at gmail.com (Joshua Schaeffer)
Date: Mon, 12 Sep 2016 13:33:04 -0600
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
References:
<3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Message-ID:
>
>
>> I'm using that one to authenticate my users.
>
> auth_ldap_cache_enabled on;
> ldap_server my_ldap_server {
> url ldaps://ldap.example.org/dc=users,dc=mybase?uid?sub;
> binddn cn=nginx,dc=mybase;
> binddn_passwd ...;
> require valid_user;
> }
>
> server {
> ...
> location / {
> auth_ldap "foobar";
> auth_ldap_servers "my_ldap_server";
>
> root /srv/www/...;
> }
> }
>
Thanks, having a config to compare against is always helpful for me.
>
> this is like documented on https://github.com/kvspb/nginx-auth-ldap exept
> my auth_ldap statements are inside the location.
> while docs suggest them outside.
> Q: does that matter?
>
From my understanding of Nginx, no: since location is lower in the
hierarchy, it will just override any auth_ldap directives outside of it.
>
> I found it useful to explicit set "auth_ldap_cache_enabled on" but cannot
> remember the detailed reasons.
> Finally: it's working as expected for me (basic auth, no Kerberos)
>
Any chance anybody has played around with Kerberos auth? Currently my SSO
environment uses GSSAPI for most authentication.
Thanks,
Joshua Schaeffer
From sca at andreasschulze.de Mon Sep 12 19:37:51 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Mon, 12 Sep 2016 21:37:51 +0200
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To:
References:
<3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Message-ID:
Am 12.09.2016 um 21:33 schrieb Joshua Schaeffer:
> Any chance anybody has played around with Kerberos auth? Currently my SSO
> environment uses GSSAPI for most authentication.
I also compile the module https://github.com/stnoonan/spnego-http-auth-nginx-module
but I've no time to configure / learn how to configure it
... unfortunately ...
Andreas
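For anyone picking this up, that module's README describes configuration along these lines; treat every directive name, realm, and path here as an assumption to verify against https://github.com/stnoonan/spnego-http-auth-nginx-module rather than a tested setup:

```nginx
server {
    location /git/ {
        # SPNEGO/GSSAPI negotiation (directive names per the module's
        # docs; realm, keytab path, and location are placeholders).
        auth_gss              on;
        auth_gss_realm        EXAMPLE.ORG;
        auth_gss_keytab       /etc/krb5.keytab;
        auth_gss_service_name HTTP;
    }
}
```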
From jschaeffer0922 at gmail.com Mon Sep 12 19:52:16 2016
From: jschaeffer0922 at gmail.com (Joshua Schaeffer)
Date: Mon, 12 Sep 2016 13:52:16 -0600
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To:
References:
<3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Message-ID:
On Mon, Sep 12, 2016 at 1:37 PM, A. Schulze wrote:
>
>
> Am 12.09.2016 um 21:33 schrieb Joshua Schaeffer:
>
>> Any chance anybody has played around with Kerberos auth? Currently my SSO
>> environment uses GSSAPI for most authentication.
>>
>
> I compile also the module https://github.com/stnoonan/spnego-http-auth-nginx-module
> but I've no time to configure / learn how to configure it
> ... unfortunately ...
I did actually see this module as well, but didn't look into it too much.
Perhaps it would be best for me to take a closer look and then report back
on what I find.
Thanks,
Joshua Schaeffer
From emailgrant at gmail.com Mon Sep 12 20:23:08 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 13:23:08 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160911191606.5484628.46851.10233@lazygranch.com>
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
<20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID:
> ?https://www.nginx.com/blog/tuning-nginx/
>
> ?I have far more faith in this write up regarding tuning than the anti-ddos, though both have similarities.
>
> My interpretation is the user bandwidth is connections times rate. But you can't limit the connection to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than I am limiting the number of connections. ?The 512k rate limit is fine. I wouldn't go any higher.
If I understand correctly, limit_req only works if the same connection
is used for each request. My goal with limit_conn and limit_conn_zone
would be to prevent someone from circumventing limit_req by opening a
new connection for each request. Given that, why would my
limit_conn/limit_conn_zone config be any different from my
limit_req/limit_req_zone config?
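For reference, the two mechanisms side by side (zone names and limits are illustrative; note that limit_req is keyed by whatever variable the zone uses, here the client address, regardless of connection reuse):

```nginx
# Both zones key on the client IP. limit_conn catches clients that
# spread requests over many short-lived connections; limit_req bounds
# the request rate no matter how many connections carry it.
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    location / {
        limit_req  zone=req_per_ip burst=20;
        limit_conn conn_per_ip 10;
    }
}
```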
- Grant
> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant
From francis at daoine.org Mon Sep 12 20:27:14 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 12 Sep 2016 21:27:14 +0100
Subject: Don't process requests containing folders
In-Reply-To:
References:
Message-ID: <20160912202714.GE11677@daoine.org>
On Mon, Sep 12, 2016 at 10:17:06AM -0700, Grant wrote:
Hi there,
> My site doesn't have any folders in its URL structure so I'd like to
> have nginx process any request which includes a folder (cheap 404)
> instead of sending the request to my backend (expensive 404).
The location-matching rules are at http://nginx.org/r/location
At the point of location-matching, nginx does not know anything about
folders; it only knows about the incoming request and the defined
"location" patterns.
That probably sounds like it is being pedantic; but once you know what the
rules are, it may be clearer how to configure nginx to do what you want.
"doesn't have any folders" might mean "no valid url has a second
slash". (Unless you are using something like a fastcgi service which
makes use of PATH_INFO.)
> Currently I'm using a series of location blocks to check for a valid
> request. Here's the last one before nginx internal takes over:
>
> location ~ (^/|.html)$ {
> }
I think that says "is exactly /, or ends in html".
It might be simpler to understand if you write it as two locations:
location = / {}
location ~ html$ {}
partly because if that is *not* what you want, that should be obvious
from the simpler expression.
I'm actually not sure whether this is intended to be the "good"
request, or the "bad" request. If it is the "bad" one, then "return
404;" can easily be copied in to each. If it is the "good" one, with a
complicated config, then you may need to have many duplicate lines in
the two locations; or just "include" a file with the "good" configuration.
> Can I expand that to only match requests with a single / or ending in
> .html like this:
>
> location ~ (^[^/]+/?[^/]+$|.html$) {
Since every real request starts with a /, I think that that pattern
effectively says "ends in html", which matches fewer requests than the
earlier one.
> Should that work as expected?
Only if you expect it to be the same as "location ~ html$ {}". So:
probably "no".
If you want to match "requests with a second slash", do just that:
location ~ ^/.*/ {}
(the "^" is not necessary there, but I guess-without-testing that
it helps.)
If you want to match "requests without a second slash", you could do
location ~ ^/[^/]*$ {}
but I suspect you'll be better off with the positive match, plus a
"location /" for "all the rest".
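As an untested sketch, that positive-match layout might look like this
("backend" is a placeholder upstream name, not from your config):
    # Hypothetical sketch: positive match for "no second slash",
    # with "location /" catching everything else.
    location ~ ^/[^/]*$ {
        proxy_pass http://backend;   # placeholder upstream
    }
    location / {
        return 404;                  # anything with a second slash
    }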
Good luck with it,
f
--
Francis Daly francis at daoine.org
From emailgrant at gmail.com Mon Sep 12 20:55:35 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 13:55:35 -0700
Subject: Don't process requests containing folders
In-Reply-To: <20160912202714.GE11677@daoine.org>
References:
<20160912202714.GE11677@daoine.org>
Message-ID:
>> My site doesn't have any folders in its URL structure so I'd like to
>> have nginx process any request which includes a folder (cheap 404)
>> instead of sending the request to my backend (expensive 404).
>
>> Currently I'm using a series of location blocks to check for a valid
>> request. Here's the last one before nginx internal takes over:
>>
>> location ~ (^/|.html)$ {
>> }
>
> I think that says "is exactly /, or ends in html".
Yes that is my intention.
> I'm actually not sure whether this is intended to be the "good"
> request, or the "bad" request. If it is the "bad" one, then "return
> 404;" can easily be copied in to each. If it is the "good" one, with a
> complicated config, then you may need to have many duplicate lines in
> the two locations; or just "include" a file with the good" configuration.
That's the good request. I do need it in multiple locations but an
include is working well for that.
>> Can I expand that to only match requests with a single / or ending in
>> .html like this:
>>
>> location ~ (^[^/]+/?[^/]+$|.html$) {
>
> Since every real request starts with a /, I think that that pattern
> effectively says "ends in html", which matches fewer requests than the
> earlier one.
That is not what I intended.
> If you want to match "requests with a second slash", do just that:
>
> location ~ ^/.*/ {}
>
> (the "^" is not necessary there, but I guess-without-testing that
> it helps.)
When you say it helps, you mean for performance?
> If you want to match "requests without a second slash", you could do
>
> location ~ ^/[^/]*$ {}
>
> but I suspect you'll be better off with the positive match, plus a
> "location /" for "all the rest".
I want to keep my location blocks to a minimum so I think I should use
the following as my last location block which will send all remaining
good requests to my backend:
location ~ (^/[^/]*|.html)$ {}
And let everything else match the following, most of which will 404 (cheaply):
location / { internal; }
- Grant
From r1ch+nginx at teamliquid.net Mon Sep 12 21:39:38 2016
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Mon, 12 Sep 2016 23:39:38 +0200
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
<20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID:
limit_req works with multiple connections, it is usually configured per IP
using $binary_remote_addr. See
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
- you can use variables to set the key to whatever you like.
limit_req generally helps protect eg your backend against request floods
from a single IP and any amount of connections. limit_conn protects against
excessive connections tying up resources on the webserver itself.
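To illustrate (an untested sketch; the zone names, limits, and "backend"
upstream are placeholders, not anything from your config):
    # Per-IP zones for both request rate and concurrent connections.
    limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

    server {
        location / {
            limit_req  zone=req_per_ip burst=20;   # request flood protection
            limit_conn conn_per_ip 10;             # cap connections per IP
            proxy_pass http://backend;             # placeholder upstream
        }
    }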
On Mon, Sep 12, 2016 at 10:23 PM, Grant wrote:
> > https://www.nginx.com/blog/tuning-nginx/
> >
> > I have far more faith in this write up regarding tuning than the
> anti-ddos, though both have similarities.
> >
> > My interpretation is the user bandwidth is connections times rate. But
> you can't limit the connection to one because (again my interpretation)
> there can be multiple users behind one IP. Think of a university reading
> your website. Thus I am more comfortable limiting bandwidth than I am
> limiting the number of connections. The 512k rate limit is fine. I
> wouldn't go any higher.
>
>
> If I understand correctly, limit_req only works if the same connection
> is used for each request. My goal with limit_conn and limit_conn_zone
> would be to prevent someone from circumventing limit_req by opening a
> new connection for each request. Given that, why would my
> limit_conn/limit_conn_zone config be any different from my
> limit_req/limit_req_zone config?
>
> - Grant
>
>
> > Should I basically duplicate my limit_req and limit_req_zone
> > directives into limit_conn and limit_conn_zone? In what sort of
> > situation would someone not do that?
> >
> > - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From francis at daoine.org Mon Sep 12 21:49:30 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 12 Sep 2016 22:49:30 +0100
Subject: Don't process requests containing folders
In-Reply-To:
References:
<20160912202714.GE11677@daoine.org>
Message-ID: <20160912214930.GF11677@daoine.org>
On Mon, Sep 12, 2016 at 01:55:35PM -0700, Grant wrote:
Hi there,
> > If you want to match "requests with a second slash", do just that:
> >
> > location ~ ^/.*/ {}
> >
> > (the "^" is not necessary there, but I guess-without-testing that
> > it helps.)
>
> When you say it helps, you mean for performance?
Yes - I guess that anchoring this regex at a point where it will always
match anyway, will do no harm.
> > If you want to match "requests without a second slash", you could do
> >
> > location ~ ^/[^/]*$ {}
> >
> > but I suspect you'll be better off with the positive match, plus a
> > "location /" for "all the rest".
>
>
> I want to keep my location blocks to a minimum so I think I should use
> the following as my last location block which will send all remaining
> good requests to my backend:
>
> location ~ (^/[^/]*|.html)$ {}
Yes, that should do what you describe.
Note that the . is a metacharacter for "any one"; if you really want
the five-character string ".html" at the end of the request, you should
escape the . to \.
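For example, the escaped version of your location (untested) would be:
    # Dot escaped so ".html" matches literally, not "any char + html"
    location ~ (^/[^/]*|\.html)$ { }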
> And let everything else match the following, most of which will 404 (cheaply):
>
> location / { internal; }
Testing and measuring might show that "return 404;" is even cheaper than
"internal;" in the cases where they have the same output. But if there
are cases where the difference in output matters, or if the difference
is not measurable, then leaving it as-is is fine.
Cheers,
f
--
Francis Daly francis at daoine.org
From lists at lazygranch.com Mon Sep 12 22:30:01 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Mon, 12 Sep 2016 15:30:01 -0700
Subject: limit-req and greedy UAs
In-Reply-To:
References:
<20160909013940.5501012.10243.10085@lazygranch.com>
<20160909163036.5501012.8924.10125@lazygranch.com>
<20160911152141.5484628.98176.10223@lazygranch.com>
<20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID: <20160912223001.5484629.85886.10299@lazygranch.com>
Most of the chatter on the interwebs believes that the rate limit is per connection, so if some IP opens up multiple connections, they get more bandwidth.
It shouldn't be that hard to just test this by installing a download manager and seeing what happens. I will give this a try tonight, but hopefully someone will beat me to it.
Relevant post follows:
-----------
On 17 February 2014 10:02, Bozhidara Marinchovska
wrote:
> My question is what may be the reason when downloading the example file with
> download manager not to match limit_rate directive
"Download managers" open multiple connections and grab different byte
ranges of the same file across those connections. Nginx's limit_rate
function limits the data transfer rate of a single connection.
http://mailman.nginx.org/pipermail/nginx/2014-February/042337.html
-------
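Combining the two ideas from that post, a hedged sketch (untested;
zone name and numbers are placeholders) of capping total per-IP
bandwidth at roughly rate times connections:
    # limit_rate caps each connection; limit_conn caps connections
    # per IP, so per-IP bandwidth <= 512k * 2.
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        location /downloads/ {
            limit_conn per_ip 2;   # at most 2 connections per IP
            limit_rate 512k;       # each connection capped at 512 kB/s
        }
    }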
Original Message
From: Richard Stanway
Sent: Monday, September 12, 2016 2:39 PM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs
limit_req works with multiple connections, it is usually configured per IP using $binary_remote_addr. See http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone - you can use variables to set the key to whatever you like.
limit_req generally helps protect eg your backend against request floods from a single IP and any amount of connections. limit_conn protects against excessive connections tying up resources on the webserver itself.
On Mon, Sep 12, 2016 at 10:23 PM, Grant wrote:
> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write up regarding tuning than the anti-ddos, though both have similarities.
>
> My interpretation is the user bandwidth is connections times rate. But you can't limit the connection to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than I am limiting the number of connections. The 512k rate limit is fine. I wouldn't go any higher.
If I understand correctly, limit_req only works if the same connection
is used for each request. My goal with limit_conn and limit_conn_zone
would be to prevent someone from circumventing limit_req by opening a
new connection for each request. Given that, why would my
limit_conn/limit_conn_zone config be any different from my
limit_req/limit_req_zone config?
- Grant
> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From emailgrant at gmail.com Mon Sep 12 23:32:28 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 16:32:28 -0700
Subject: Don't process requests containing folders
In-Reply-To: <20160912214930.GF11677@daoine.org>
References:
<20160912202714.GE11677@daoine.org>
<20160912214930.GF11677@daoine.org>
Message-ID:
>> location ~ (^/[^/]*|.html)$ {}
>
> Yes, that should do what you describe.
I realize now that I didn't define the requirement properly. I said:
"match requests with a single / or ending in .html" but what I need
is: "match requests with a single / *and* ending in .html, also match
/". Will this do it:
location ~ ^(/[^/]*\.html|/)$ {}
> Note that the . is a metacharacter for "any one"; if you really want
> the five-character string ".html" at the end of the request, you should
> escape the . to \.
Fixed. Do I ever need to escape / in location blocks?
>> And let everything else match the following, most of which will 404 (cheaply):
>>
>> location / { internal; }
>
> Testing and measuring might show that "return 404;" is even cheaper than
> "internal;" in the cases where they have the same output. But if there
> are cases where the difference in output matters, or if the difference
> is not measurable, then leaving it as-is is fine.
I'm sure you're right. I'll switch to:
location / { return 404; }
- Grant
From cainjonm at gmail.com Tue Sep 13 04:29:21 2016
From: cainjonm at gmail.com (Cain)
Date: Tue, 13 Sep 2016 16:29:21 +1200
Subject: Websockets - recommended settings question
In-Reply-To:
References:
Message-ID:
Hi,
In the nginx documentation (https://www.nginx.com/blog/websocket-nginx), it
is recommended to set the 'Connection' header to 'close' (if there is no
Upgrade header) - from my understanding, this disables keep-alive from
nginx to the upstream - is there a reason for this?
Additionally, is keep-alive the default behaviour when connecting to
upstreams?
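For reference, the configuration that post recommends is roughly the
following (paraphrased, untested; "backend" and the location are
placeholders):
    # Map the Upgrade request header to a Connection header value,
    # falling back to "close" when no Upgrade header is present.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        location /ws/ {
            proxy_pass http://backend;        # placeholder upstream
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }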
Thanks
From nginx-forum at forum.nginx.org Tue Sep 13 06:13:59 2016
From: nginx-forum at forum.nginx.org (maltris)
Date: Tue, 13 Sep 2016 02:13:59 -0400
Subject: "502 Bad Gateway" on first request in a setup with Apache
2.4-servers as upstreams
In-Reply-To: