From jombik at platon.org Tue May 1 06:47:15 2018
From: jombik at platon.org (Ondrej Jombik)
Date: Tue, 1 May 2018 08:47:15 +0200 (CEST)
Subject: Knowing the server port inside Perl code
Message-ID:
When using the mail module for SMTP and doing auth using Perl code, it might
be handy to know the entry port number, for example 25/TCP, 465/TCP or
587/TCP; those are the most used ones.
I thought this would be somewhere among provided headers:
$request->header_in('Auth-Method');
$request->header_in('Auth-Protocol');
$request->header_in('Auth-User');
$request->header_in('Auth-Pass');
$request->header_in('Auth-Salt');
$request->header_in('Client-IP');
$request->header_in('Client-Host');
[... ...]
However there is nothing like 'Auth-Port', 'Client-Port', 'Server-Port' or
any other port header.
'Auth-Protocol' is no help, because we have the same protocol running on
multiple ports; typically 25/TCP is the same as 587/TCP when sending
e-mails with auth.
So I tried to help myself:
proxy on;
auth_http_header Auth-Port $server_port;
auth_http 127.0.0.1:80/auth;
proxy_pass_error_message on;
- or -
auth_http_header Auth-Port $proxy_port;
But none of those worked.
How can I know the entry port number inside Perl code?
--
Ondrej JOMBIK
Platon Technologies s.r.o., Hlavna 3, Sala SK-92701
+421222111321 - info at platon.net - http://platon.net
Read our latest blog:
https://blog.platon.sk/icann-sknic-tld-problemy/
My current location: Phoenix, Arizona
My current timezone: -0700 GMT (MST)
(updated automatically)
From nginx-forum at forum.nginx.org Tue May 1 09:28:44 2018
From: nginx-forum at forum.nginx.org (Winfried)
Date: Tue, 01 May 2018 05:28:44 -0400
Subject: Simple steps to harden Nginx for home use?
Message-ID: <70532bb69e43078c576f3540d0b03c20.NginxMailingListEnglish@forum.nginx.org>
Hello,
I use Nginx on a home Debian appliance to run a couple of personal web
sites.
It's the only port reachable from the Net through the ADSL modem with the
NAT firewall enabled.
Recently, the server was no longer responding and I couldn't log on:
[code]
(initramfs) root
/bin/sh: root: not found
[/code]
Since I was in a rush, I simply wiped the USB keydrive clean, reinstalled
Debian and the htdocs.
Assuming it was a hack and not some internal issue (keydrive?), are there
simple steps I can take to harden Nginx?
Thank you.
PS: I use apt to install applications. FWIW, here's what "nginx -V" says
after installing it from the repository:
nginx version: nginx/1.10.3
built with OpenSSL 1.1.0f 25 May 2017
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2
-fdebug-prefix-map=/build/nginx-re6b6X/nginx-1.10.3=.
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now'
--prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf
--http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules
--http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
--with-ipv6 --with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module --with-http_auth_request_module
--with-http_v2_module --with-http_dav_module --with-http_slice_module
--with-threads --with-http_addition_module --with-http_flv_module
--with-http_geoip_module=dynamic --with-http_gunzip_module
--with-http_gzip_static_module --with-http_image_filter_module=dynamic
--with-http_mp4_module --with-http_perl_module=dynamic
--with-http_random_index_module --with-http_secure_link_module
--with-http_sub_module --with-http_xslt_module=dynamic --with-mail=dynamic
--with-mail_ssl_module --with-stream=dynamic --with-stream_ssl_module
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/headers-more-nginx-module
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-auth-pam
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-cache-purge
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-dav-ext-module
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-development-kit
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-echo
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/ngx-fancyindex
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nchan
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-lua
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-upload-progress
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/nginx-upstream-fair
--add-dynamic-module=/build/nginx-re6b6X/nginx-1.10.3/debian/modules/ngx_http_substitutions_filter_module
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279655,279655#msg-279655
From cult at free.fr Tue May 1 11:52:39 2018
From: cult at free.fr (Vincent)
Date: Tue, 1 May 2018 13:52:39 +0200
Subject: Configure Nginx Fast CGI cache ON error_page 404
Message-ID: <9b6ddf52-7105-b669-e76a-299dcb25ea2d@free.fr>
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Wed May 2 09:34:04 2018
From: nginx-forum at forum.nginx.org (bmrf)
Date: Wed, 02 May 2018 05:34:04 -0400
Subject: Regex in proxy_hide_header
Message-ID: <4acc1d7f0297d0cf9d30ac0b9716eee0.NginxMailingListEnglish@forum.nginx.org>
Hi list,
I was trying to unset/delete a header using proxy_hide_header. The problem
is that the header name is always unknown, but it always has the same
pattern: it starts with several whitespace characters followed by random
characters, something like \s+\w+.
If regex is not supported by proxy_hide_header, as seems to be the case, is
there any other way to accomplish this?
Thanks a lot!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279657,279657#msg-279657
From oleg at mamontov.net Wed May 2 10:30:08 2018
From: oleg at mamontov.net (Oleg A. Mamontov)
Date: Wed, 2 May 2018 13:30:08 +0300
Subject: Regex in proxy_hide_header
In-Reply-To: <4acc1d7f0297d0cf9d30ac0b9716eee0.NginxMailingListEnglish@forum.nginx.org>
References: <4acc1d7f0297d0cf9d30ac0b9716eee0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net>
On Wed, May 02, 2018 at 09:34:04AM +0000, bmrf wrote:
>Hi list,
>
>I was trying to unset/delete a header using proxy_hide_header. The problem
>is that the header name is always unknown, but it has always the same
>pattern, it starts with several whitespaces followed by random characters,
>something like \s+\w+
>
>If regex is not supported at proxy_hide_header, as it seems it is, is there
>any other way to accomplish this?
Probably it makes sense to take a look:
https://github.com/openresty/headers-more-nginx-module#more_clear_headers
"The wildcard character, *, can also be used at the end of the header
name to specify a pattern."
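For illustration, a minimal sketch of that wildcard form (the header prefix below is an assumption, not from the original post):

```nginx
# clears every response header whose name begins with "X-Hidden-"
# (requires the headers-more-nginx-module to be loaded)
more_clear_headers 'X-Hidden-*';
```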
>
>Thanks a lot!
--
Cheers,
Oleg A. Mamontov
mailto: oleg at mamontov.net
skype: lonerr11
cell: +7 (903) 798-1352
From mdounin at mdounin.ru Wed May 2 11:08:43 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 May 2018 14:08:43 +0300
Subject: Knowing the server port inside Perl code
In-Reply-To:
References:
Message-ID: <20180502110843.GE32137@mdounin.ru>
Hello!
On Tue, May 01, 2018 at 08:47:15AM +0200, Ondrej Jombik wrote:
> When using mail module for SMTP and doing auth using Perl code, it might
> be handy to know entry port number. For example 25/TCP, 465/TCP or
> 587/TCP; those are the most used ones.
>
> I thought this would be somewhere among provided headers:
>
> $request->header_in('Auth-Method');
> $request->header_in('Auth-Protocol');
> $request->header_in('Auth-User');
> $request->header_in('Auth-Pass');
> $request->header_in('Auth-Salt');
> $request->header_in('Client-IP');
> $request->header_in('Client-Host');
> [... ...]
>
> However there is nothing like 'Auth-Port', or 'Client-Port' or
> 'Server-Port' or any port.
>
> 'Auth-Protocol' is no help, because we have same protocol running on
> multiple ports; typically 25/TCP is the same as 587/TCP when sending
> e-mails with auth.
>
> So I tried to help myself:
>
> proxy on;
> auth_http_header Auth-Port $server_port;
> auth_http 127.0.0.1:80/auth;
> proxy_pass_error_message on;
>
> - or -
>
> auth_http_header Auth-Port $proxy_port;
>
> But none of those worked.
>
> How I can know entry port number inside Perl code?
If you really want to know the server port, you can get it by
configuring a different auth_http_header in the server{} blocks
listening on different ports.
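A minimal sketch of that approach, assuming the mail module with SMTP (the auth URL mirrors the original post; the rest is illustrative):

```nginx
mail {
    auth_http 127.0.0.1:80/auth;

    # one server{} per listen port, each passing its own fixed
    # Auth-Port value to the auth_http script
    server {
        listen   25;
        protocol smtp;
        auth_http_header Auth-Port 25;
    }

    server {
        listen   587;
        protocol smtp;
        auth_http_header Auth-Port 587;
    }
}
```

The Perl auth code could then read the port via $request->header_in('Auth-Port').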
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Wed May 2 13:14:58 2018
From: nginx-forum at forum.nginx.org (bmrf)
Date: Wed, 02 May 2018 09:14:58 -0400
Subject: Regex in proxy_hide_header
In-Reply-To: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net>
References: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net>
Message-ID:
Oleg A. Mamontov Wrote:
-------------------------------------------------------
> On Wed, May 02, 2018 at 09:34:04AM +0000, bmrf wrote:
> >Hi list,
> >
> >I was trying to unset/delete a header using proxy_hide_header. The
> problem
> >is that the header name is always unknown, but it has always the same
> >pattern, it starts with several whitespaces followed by random
> characters,
> >something like \s+\w+
> >
> >If regex is not supported at proxy_hide_header, as it seems it is,
> is there
> >any other way to accomplish this?
>
> Probably it makes sense to take a look:
> https://github.com/openresty/headers-more-nginx-module#more_clear_head
> ers
>
> "The wildcard character, *, can also be used at the end of the header
> name to specify a pattern."
The header I need to delete is always different; each time a request is made
it is different, and always with this weird pattern \s+\w+ (4 whitespaces
followed by 8 random characters).
Some real examples (cut here to 1 whitespace character, but there are 4):
" XkIOPalY"
" peYhKOlx"
" KpyTKolq"
So using the headers-more-nginx-module wildcard character, *, at the end of
the header name does not help here. Anyway, thank you, and any other
suggestion is more than welcome.
> >
> >Thanks a lot!
>
> --
> Cheers,
> Oleg A. Mamontov
>
> mailto: oleg at mamontov.net
>
> skype: lonerr11
> cell: +7 (903) 798-1352
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279657,279660#msg-279660
From mephystoonhell at gmail.com Thu May 3 08:30:20 2018
From: mephystoonhell at gmail.com (Mephysto On Hell)
Date: Thu, 3 May 2018 10:30:20 +0200
Subject: Proxy pass and SSL certificates
Message-ID:
Hello everyone,
I have been using Nginx in a production environment for some years, but I am
almost a newbie with SSL certificates and connections. At the moment I have
a configuration with two levels:
1. A first-level Nginx that operates as a load balancer
2. Two second-level Nginx servers: the first hosts a web site and does not
need an SSL connection; the second hosts an Owncloud instance and needs an
SSL connection.
I am using Certbot and Let's Encrypt to generate signed certificates. At the
moment I have certificates installed on both levels, and until last month
this configuration was working. After the certificate renewal (every three
months) I am getting an ERR_CERT_DATE_INVALID and I cannot access
Owncloud. Only the second-level certificate has been renewed.
But if I connect directly to the second-level Nginx, I do not get any
error and I can access Owncloud.
This is first level Nginx config:
upstream cloud {
    server 10.39.0.52;
}

upstream cloud_ssl {
    server 10.39.0.52:443;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name cloud.diakont.it cloud.diakont.srl;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl on;
    server_name cloud.diakont.it cloud.diakont.srl;
    include snippets/cloud.diakont.it.conf;
    include snippets/ssl-params.conf;

    error_log /var/log/nginx/cloudssl.diakont.it.error.log info;
    access_log /var/log/nginx/cloudssl.diakont.it.access.log;

    location / {
        proxy_pass https://cloud_ssl/;
        proxy_redirect default;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
I would like to configure the first-level Nginx to establish an SSL
connection with Owncloud without having to renew the certificates on both
levels. Is that possible? How do I have to change my config?
Thanks in advance.
Meph
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Thu May 3 11:42:01 2018
From: nginx-forum at forum.nginx.org (Joncheski)
Date: Thu, 03 May 2018 07:42:01 -0400
Subject: Reverse proxy from NGINX to Keycloak with 2FA
In-Reply-To: <20180430223548.GC19311@daoine.org>
References: <20180430223548.GC19311@daoine.org>
Message-ID: <3b35f35a31e995482a6c710f8d87ae94.NginxMailingListEnglish@forum.nginx.org>
Hi Francis,
Thanks for your reply.
I have tried a TCP port forwarder ("stream"), but then the host is changed
to the client's URL, which sends the client directly to Keycloak. I do not
want direct access to Keycloak, so I use a proxy.
Keycloak has been configured to verify a client certificate whose CN must be
identical to the username you enter; a keystore and truststore are
installed to check by whom it was issued and signed (which is associated
with a Key Management System to check whether it is invalid or revoked).
I have done this, and NGINX can check the client certificate (I added these
things: ssl_client_certificate path-of-root-ca, and ssl_verify_client on)
to verify that it was issued and signed by my PKI Key Management System. But
the problem is that a user can submit the certificate of one user and
announce himself to Keycloak as another. I want to stop this, so that I have
full 2FA; Keycloak is the only one to check it.
I want to ask you: can the client certificate that is presented to NGINX
through the ssl_verify_client option be forwarded to Keycloak?
Best regards,
Goce Joncheski
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279549,279663#msg-279663
From oleg at mamontov.net Thu May 3 12:02:06 2018
From: oleg at mamontov.net (Oleg A. Mamontov)
Date: Thu, 3 May 2018 15:02:06 +0300
Subject: Regex in proxy_hide_header
In-Reply-To:
References: <20180502103008.6f7z2ablvv3zqg4v@xenon.mamontov.net>
Message-ID: <20180503120206.fz5hzbx7ytwu6mfl@xenon.mamontov.net>
On Wed, May 02, 2018 at 01:14:58PM +0000, bmrf wrote:
>Oleg A. Mamontov Wrote:
>-------------------------------------------------------
>> On Wed, May 02, 2018 at 09:34:04AM +0000, bmrf wrote:
>> >Hi list,
>> >
>> >I was trying to unset/delete a header using proxy_hide_header. The
>> problem
>> >is that the header name is always unknown, but it has always the same
>> >pattern, it starts with several whitespaces followed by random
>> characters,
>> >something like \s+\w+
>> >
>> >If regex is not supported at proxy_hide_header, as it seems it is,
>> is there
>> >any other way to accomplish this?
>>
>> Probably it makes sense to take a look:
>> https://github.com/openresty/headers-more-nginx-module#more_clear_head
>> ers
>>
>> "The wildcard character, *, can also be used at the end of the header
>> name to specify a pattern."
>
>The header I need to delete is always different, each time a request is done
>it is different and alway with this weird patter \s+\w+. (4 whitespaces
>followed by 8 random characters)
>
>Some real examples, it's cut to 1 whitespace character, but there're 4:
>
>" XkIOPalY"
>" peYhKOlx"
>" KpyTKolq"
>
>So using headers-more-nginx-module wildcard character, *, at the end of the
>header name does not help here. Anyway, thank you and if you have any other
>suggestion it's more than welcome.
Okay, so it seems that https://github.com/openresty/lua-nginx-module#header_filter_by_lua_block
using iteration over https://github.com/openresty/lua-nginx-module#ngxrespget_headers
is what you're looking for.
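A sketch of that combination, assuming lua-nginx-module is available; the Lua pattern below encodes the "four spaces followed by word characters" shape described earlier in the thread, so it is an assumption about the exact header names:

```nginx
header_filter_by_lua_block {
    -- fetch headers with raw=true so names keep their original form,
    -- then unset any whose name is four spaces followed by word chars
    for name, _ in pairs(ngx.resp.get_headers(100, true)) do
        if string.match(name, "^    %w+$") then
            ngx.header[name] = nil
        end
    end
}
```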
>> >
>> >Thanks a lot!
--
Cheers,
Oleg A. Mamontov
mailto: oleg at mamontov.net
skype: lonerr11
cell: +7 (903) 798-1352
From nginx-forum at forum.nginx.org Fri May 4 11:34:40 2018
From: nginx-forum at forum.nginx.org (Joncheski)
Date: Fri, 04 May 2018 07:34:40 -0400
Subject: Proxy pass and SSL certificates
In-Reply-To:
References:
Message-ID: <8fea04109fd128f4d2d21fe7cefd1575.NginxMailingListEnglish@forum.nginx.org>
Hello Meph,
Can you send the other configuration files (ssl-params.conf and
cloud.diakont.it.conf) that you include in this configuration?
Also, in "location /" you do not need the "proxy_redirect default;" line,
because that is the default.
Best regards,
Goce Joncheski
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279665,279674#msg-279674
From mephystoonhell at gmail.com Fri May 4 12:32:20 2018
From: mephystoonhell at gmail.com (Mephysto On Hell)
Date: Fri, 4 May 2018 14:32:20 +0200
Subject: Proxy pass and SSL certificates
In-Reply-To: <8fea04109fd128f4d2d21fe7cefd1575.NginxMailingListEnglish@forum.nginx.org>
References:
<8fea04109fd128f4d2d21fe7cefd1575.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Hello Goce,
thank you very much for your answer. I have attached the files you requested
to this email.
On 4 May 2018 at 13:34, Joncheski wrote:
> Hello Meph,
>
> Can you send the other configuration file ( ssl-params.conf and
> cloud.diakont.it.conf ) which you call in this configuration.
> And in "location /" , you need to enter this "proxy_redirect default;"
> because this is default argument.
>
> Best regards,
> Goce Joncheski
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,279665,279674#msg-279674
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ssl-params.conf
Type: application/octet-stream
Size: 747 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cloud.diakont.it.conf
Type: application/octet-stream
Size: 143 bytes
Desc: not available
URL:
From francis at daoine.org Fri May 4 13:22:08 2018
From: francis at daoine.org (Francis Daly)
Date: Fri, 4 May 2018 14:22:08 +0100
Subject: Reverse proxy from NGINX to Keycloak with 2FA
In-Reply-To: <3b35f35a31e995482a6c710f8d87ae94.NginxMailingListEnglish@forum.nginx.org>
References: <20180430223548.GC19311@daoine.org>
<3b35f35a31e995482a6c710f8d87ae94.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180504132208.GD19311@daoine.org>
On Thu, May 03, 2018 at 07:42:01AM -0400, Joncheski wrote:
Hi there,
> I have tried with tcp port forwarder ("stream") but my host is changed to
> the client's url, which directly sends me to Keycloak, which I do not want
> to have direct access to Keycloak, so I use proxy.
The end-client must not talk to Keycloak. Ok.
Keycloak wants to get the client certificate, and some indication that
the connecting client has the private key that is associated with the
certificate.
(Effectively, the certificate is "the username", and the private key is
"the password".)
Normally, Keycloak would be able to verify that the client has the
matching private key, because the ssl connection between Keycloak and
the client would demonstrate that.
You do not want that to happen.
So you must configure Keycloak (if it is possible) to believe nginx when
it says that this client has the private key that matches the included
certificate (because nginx used the ssl connection between nginx and
the client to demonstrate that).
> Keycloak has been configured to verify a client certificate that needs its
> CN to be identically with the username you enter, normally have keystore and
> truststore installed to check from whom it was issued and signed (which is
> associated with Key Management System for whether it is invalid or revoke).
Nginx can give the client certificate to Keycloak, and Keycloak can
confirm that the certificate was issued by the correct Certificate
Authority, and can check whatever it wants about the CN. But Keycloak
cannot directly confirm that the client has the matching private key --
it must be told to believe nginx when nginx says that the client has
the matching private key.
> I have done it and can NGINX check the client certificate (I add these
> things: ssl_client_certificate path-of-root-ca, and ssl_verify_client on),
Yes, nginx could check that (but it probably does not need to, if Keycloak
will be checking it anyway).
> whether it has been issued and signed by my PKI Key Management System, but
> the problem is that the user can submit a certificate from one user, and in
> Keycloak to announce with another. I want to stop this thing, so I have a
> full 2FA. Keycloak is the only one to check it.
I don't understand what you mean there.
That's ok; I don't have to understand. So long as you are happy that it
makes sense to you, that's good enough.
> I want to ask you, can the client certificate that is attached to NGINX
> through the ssl_verify_client option be forwarded to Keycloak?
Yes.
http://nginx.org/r/ssl_verify_client
The contents of the certificate are accessible through the $ssl_client_cert
variable.
You can tell nginx to include that variable in an HTTP header, for
example, which you then tell Keycloak to read, believing that the client
has the matching private key.
The whole thing cannot be done without configuration within Keycloak.
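A hedged sketch of the nginx side (the header name X-Client-Cert, the CA path and the upstream address are illustrative; Keycloak must be configured to read and trust the header):

```nginx
server {
    listen 443 ssl;
    ssl_client_certificate /etc/nginx/client-ca.pem;  # assumed CA bundle path
    ssl_verify_client on;

    location / {
        # forward the verified client certificate to Keycloak
        proxy_set_header X-Client-Cert $ssl_client_cert;
        proxy_pass https://keycloak.internal;  # hypothetical backend
    }
}
```

Note that $ssl_client_cert contains a multi-line PEM block; newer nginx versions also provide $ssl_client_escaped_cert, which url-encodes the certificate into a single header-safe line.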
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Fri May 4 14:04:02 2018
From: nginx-forum at forum.nginx.org (rsckp)
Date: Fri, 04 May 2018 10:04:02 -0400
Subject: using return (http_rewrite) with etag
Message-ID: <6ebb0b4fb938361f5b68189bb39d7d9b.NginxMailingListEnglish@forum.nginx.org>
Hi guys,
In my configuration I'm using the return directive from the http_rewrite
module. I'd also like to enable etag to speed things up. Sadly, so far I
haven't managed to get it to work. Is such a configuration even possible?
If I comment out "return ...", etag works like a charm.
server {
    listen 80 default_server;
    root /var/www/html;
    index index.nginx-debian.html;

    default_type application/json;
    etag on;
    return 200 'xxx';
}
Debian 9.4, nginx-light 1.10.3-1+deb9u1.
Thanks in advance for any thoughts.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279680,279680#msg-279680
From mdounin at mdounin.ru Fri May 4 14:18:59 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 May 2018 17:18:59 +0300
Subject: using return (http_rewrite) with etag
In-Reply-To: <6ebb0b4fb938361f5b68189bb39d7d9b.NginxMailingListEnglish@forum.nginx.org>
References: <6ebb0b4fb938361f5b68189bb39d7d9b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180504141859.GI32137@mdounin.ru>
Hello!
On Fri, May 04, 2018 at 10:04:02AM -0400, rsckp wrote:
> Hi guys,
>
> In my configuration I'm using return directive from http_rewrite module. I'd
> also like to enable etag to speed things up. Sadly, so far didn't manage to
> get it to work. Is such configuration even possible?
>
> If I hash out "return...", etag works like a charm.
>
> server {
> listen 80 default_server;
> root /var/www/html;
> index index.nginx-debian.html;
>
> default_type application/json;
> etag on;
> return 200 'xxx';
> }
>
> Debian 9.4, nginx-light 1.10.3-1+deb9u1.
>
> Thanks in advance for any thoughts.
The "etag" directive controls whether entity tags will be
generated for static files. Entity tags (as well as Last-Modified
headers) are never generated for responses produced with the "return"
directive.
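One possible workaround, sketched here as an assumption rather than a documented recipe: serve the fixed body from a small static file instead of "return", so nginx can derive the validators from the file's metadata (the location and file name are hypothetical):

```nginx
location = /fixed {
    default_type application/json;
    # /var/www/html/fixed.json is a hypothetical file holding the body
    alias /var/www/html/fixed.json;
}
```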
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Fri May 4 16:52:01 2018
From: nginx-forum at forum.nginx.org (bmrf)
Date: Fri, 04 May 2018 12:52:01 -0400
Subject: Regex in proxy_hide_header
In-Reply-To: <20180503120206.fz5hzbx7ytwu6mfl@xenon.mamontov.net>
References: <20180503120206.fz5hzbx7ytwu6mfl@xenon.mamontov.net>
Message-ID: <45b95039ca2a419a489a1a94a6b3ce98.NginxMailingListEnglish@forum.nginx.org>
Thanks Oleg!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279657,279683#msg-279683
From b631093f-779b-4d67-9ffe-5f6d5b1d3f8a at protonmail.ch Sat May 5 11:21:21 2018
From: b631093f-779b-4d67-9ffe-5f6d5b1d3f8a at protonmail.ch (Bob Smith)
Date: Sat, 05 May 2018 07:21:21 -0400
Subject: NGINX mangling rewrites when encoded URLs present
Message-ID:
nginx version: nginx/1.13.12
This is my rewrite:
location / {
rewrite ^/(.*)$ https://example.net/$1 permanent;
}
I am getting some really odd behavior.
For example:
mysubdomain.example.com/CL0/https:%2F%2Fapple.com
Gets re-written to
example.net/CLO/https:/apple.com
Only one forward slash before apple.com, not two, even though the original declaration was %2F%2F?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From r at roze.lv Sat May 5 22:17:17 2018
From: r at roze.lv (Reinis Rozitis)
Date: Sun, 6 May 2018 01:17:17 +0300
Subject: NGINX mangling rewrites when encoded URLs present
In-Reply-To:
References:
Message-ID: <000001d3e4be$d43d1b50$7cb751f0$@roze.lv>
> rewrite ^/(.*)$ https://example.net/$1 permanent;
>
...
>
> Gets re-written to
>
> example.net/CLO/https:/apple.com
>
> Only one forward-slash, not two before apple.com. The original declaration was %2F%2F ?
It's probably because $1 gets url-decoded along the way and merge_slashes kicks in ( http://nginx.org/en/docs/http/ngx_http_core_module.html#merge_slashes ).
Try something like:
location / {
return 301 https://example.net$request_uri;
}
rr
From nginx-forum at forum.nginx.org Sun May 6 07:44:11 2018
From: nginx-forum at forum.nginx.org (Ortal)
Date: Sun, 06 May 2018 03:44:11 -0400
Subject: ngx http upstream request body
Message-ID:
Hello,
I am building an nginx module using ngx_http_upstream.
I am using the ngx_http_request_t struct, and I would like to know if my
assumption is correct that request_body->bufs will not be reused (freed)
until the connection is finalized.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279690,279690#msg-279690
From Ajay_Sonawane at symantec.com Mon May 7 05:15:34 2018
From: Ajay_Sonawane at symantec.com (Ajay Sonawane)
Date: Mon, 7 May 2018 05:15:34 +0000
Subject: Connect to NGINX reverse proxy through proxy
Message-ID:
I am using NGINX as an HTTPS reverse proxy and load balancer. My clients are able to connect to the reverse proxy using SSL, and the reverse proxy is able to terminate the SSL connection and establish a new connection with the backend server; data exchange is also happening.
Now I am trying to set up a proxy between a client and NGINX. I am using a SQUID proxy in between. I have enabled the proxy protocol on nginx using
listen 443 ssl proxy_protocol;
proxy_protocol on;
Still my client is not able to connect to NGINX through the proxy. Is there anything else I need to do?
Ajay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Mon May 7 06:36:26 2018
From: nginx-forum at forum.nginx.org (rsckp)
Date: Mon, 07 May 2018 02:36:26 -0400
Subject: using return (http_rewrite) with etag
In-Reply-To: <20180504141859.GI32137@mdounin.ru>
References: <20180504141859.GI32137@mdounin.ru>
Message-ID:
That would explain it. Thank you for the information!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279680,279692#msg-279692
From arut at nginx.com Mon May 7 10:25:59 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 7 May 2018 13:25:59 +0300
Subject: Connect to NGINX reverse proxy through proxy
In-Reply-To:
References:
Message-ID: <20180507102559.GA1824@Romans-MacBook-Air.local>
Hello,
On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote:
> I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening.
>
>
> Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using
>
>
> listen 443 ssl proxy_protocol;
This line instructs nginx to expect PROXY protocol header from SQUID.
Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY
protocol header up until recently.
> proxy_protocol on;
>
>
>
>
> Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do.
For details it's better to look into error.log.
--
Roman Arutyunyan
From Ajay_Sonawane at symantec.com Mon May 7 10:37:08 2018
From: Ajay_Sonawane at symantec.com (Ajay Sonawane)
Date: Mon, 7 May 2018 10:37:08 +0000
Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy
In-Reply-To: <20180507102559.GA1824@Romans-MacBook-Air.local>
References: ,
<20180507102559.GA1824@Romans-MacBook-Air.local>
Message-ID:
>>For details it's better to look into error.log.
Error log says "broken header [some garbage chars] while reading PROXY protocol, client: IPADDRESS, server: 0.0.0.8443"
________________________________
From: nginx on behalf of Roman Arutyunyan
Sent: Monday, May 7, 2018 3:55:59 PM
To: nginx at nginx.org
Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy
Hello,
On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote:
> I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening.
>
>
> Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using
>
>
> listen 443 ssl proxy_protocol;
This line instructs nginx to expect PROXY protocol header from SQUID.
Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY
protocol header up until recently.
> proxy_protocol on;
>
>
>
>
> Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do.
For details it's better to look into error.log.
--
Roman Arutyunyan
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From arut at nginx.com Mon May 7 10:54:51 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 7 May 2018 13:54:51 +0300
Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy
In-Reply-To:
References:
<20180507102559.GA1824@Romans-MacBook-Air.local>
Message-ID: <20180507105451.GB1824@Romans-MacBook-Air.local>
On Mon, May 07, 2018 at 10:37:08AM +0000, Ajay Sonawane wrote:
> >>For details it's better to look into error.log.
>
> Error log says "Broker header [some garbage chars] while reading PROXY protocol, client: IPADDRESS, server:0.0.0.8443
This means the client (SQUID in your case) does not send the PROXY protocol
header. Remove the "proxy_protocol" parameter from "listen" to fix this.
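For clarity, a sketch of the corrected listen line; as far as I can tell, receiving the PROXY protocol header is controlled only by the listen parameter, while a separate "proxy_protocol on;" is a stream-module directive for sending the header upstream, not for receiving it (server_name below is illustrative):

```nginx
server {
    listen 443 ssl;           # no "proxy_protocol" parameter: expect plain TLS
    server_name example.com;  # illustrative
}
```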
> ________________________________
> From: nginx on behalf of Roman Arutyunyan
> Sent: Monday, May 7, 2018 3:55:59 PM
> To: nginx at nginx.org
> Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy
>
> Hello,
>
> On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote:
> > I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening.
> >
> >
> > Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using
> >
> >
> > listen 443 ssl proxy_protocol;
>
> This line instructs nginx to expect PROXY protocol header from SQUID.
> Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY
> protocol header up until recently.
>
> > proxy_protocol on;
> >
> >
> >
> >
> > Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do.
>
> For details it's better to look into error.log.
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
--
Roman Arutyunyan
From cult at free.fr Mon May 7 12:24:05 2018
From: cult at free.fr (Vincent)
Date: Mon, 7 May 2018 14:24:05 +0200
Subject: Nginx OR for 2 differents location
Message-ID:
An HTML attachment was scrubbed...
URL:
From iippolitov at nginx.com Mon May 7 13:03:28 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Mon, 7 May 2018 16:03:28 +0300
Subject: Nginx OR for 2 differents location
In-Reply-To:
References:
Message-ID:
Hello,
You can try:

location ~ (^/url_rewriting\.php$|render_img\.php) { }

which should effectively do the same.
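Spelled out as a fuller sketch (dots escaped so they are not regex wildcards; the body is whatever the two original blocks shared):

```nginx
# One regex location replacing the two originals: it matches exactly
# /url_rewriting.php, or any URI containing render_img.php.
location ~ (^/url_rewriting\.php$|render_img\.php) {
    # ... shared content of the two original location blocks ...
}
```

One caveat: regex locations are evaluated in order after exact and prefix matches, so the merged block behaves the same only if no earlier location also matches these URIs.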
On 07.05.2018 15:24, Vincent wrote:
>
> Hello,
>
> I have 2 location blocks like that:
>
>
> location = /url_rewriting.php {
>
> and
>
>
> location ~ render_img.php {
>
>
> with exactly the same content.
>
>
> Is it possible to use an OR to have only one location block?
>
> Thanks in advance,
>
> Vincent.
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Mon May 7 14:04:09 2018
From: nginx-forum at forum.nginx.org (joovunir)
Date: Mon, 07 May 2018 10:04:09 -0400
Subject: Does NGINX support URI (http.ldap) based CRL (revokation lists)
checks? or how to handle CRL valid for 7 days
Message-ID: <3aacb455b2b0cb49362fa78a6d6309e1.NginxMailingListEnglish@forum.nginx.org>
Hi,
I know NGINX supports CRLs in file format (PEM), but as the CRLs from my
certificate provider are only valid for 7 days, and downloading the files,
converting them to PEM and so on is time-consuming, I wonder if NGINX supports
URI-based CRLs.
I haven't found anything in the documentation... so in case it doesn't
support it, how do you handle that? Scripts to download/convert/move the
files from your certificate provider?
thanks in advance!
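In case it helps, one common workaround is a small cron script that refreshes the CRL periodically; the sketch below assumes a DER-format CRL at a placeholder URL and placeholder file paths:

```shell
#!/bin/sh
# Sketch: refresh a short-lived CRL for nginx's ssl_crl directive.
# CRL_URL, file paths and the DER input format are assumptions; adjust
# them for your CA.
set -e

CRL_URL="http://crl.example-ca.com/ca.crl"   # placeholder URL
CRL_PEM="/etc/nginx/ssl/ca-crl.pem"          # path used by ssl_crl
TMP_DER=$(mktemp)

curl -fsSL -o "$TMP_DER" "$CRL_URL"

# Convert DER -> PEM (skip if your CA already serves PEM).
openssl crl -inform DER -in "$TMP_DER" -outform PEM -out "$CRL_PEM.new"

mv "$CRL_PEM.new" "$CRL_PEM"
rm -f "$TMP_DER"

# Pick up the new CRL without downtime.
nginx -s reload
```

Run it from cron well inside the 7-day validity window, e.g. daily.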
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279702,279702#msg-279702
From kohenkatz at gmail.com Mon May 7 16:12:50 2018
From: kohenkatz at gmail.com (Moshe Katz)
Date: Mon, 07 May 2018 16:12:50 +0000
Subject: Packages for Ubuntu 18.04 "Bionic"?
Message-ID:
Hello,
I see that the new Ubuntu 18.04 release has Nginx 1.14.0
as its install version.
However, as new development progresses, I will want to be on the `mainline`
version on my servers.
Right now, there is no official Nginx package support for 18.04, as the
newest version in http://nginx.org/packages/mainline/ubuntu/ is `artful`.
When can we expect packages for `bionic` to be officially available?
Thanks,
Moshe
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From defan at nginx.com Mon May 7 16:15:51 2018
From: defan at nginx.com (Andrei Belov)
Date: Mon, 7 May 2018 19:15:51 +0300
Subject: Packages for Ubuntu 18.04 "Bionic"?
In-Reply-To:
References:
Message-ID: <9C308314-9B6C-4D10-B695-25B7EB342749@nginx.com>
Hi Moshe,
> On 07 May 2018, at 19:12, Moshe Katz wrote:
>
> Hello,
>
> I see that the new Ubuntu 18.04 release has Nginx 1.14.0 as its install version.
> However, as new development progresses, I will want to be on the `mainline` version on my servers.
> Right now, there is no official Nginx package support for 18.04, as the newest version in http://nginx.org/packages/mainline/ubuntu/ is `artful`.
>
> When can we expect packages for `bionic` to be officially available?
Those should be available later this week.
Thanks for your interest.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cult at free.fr Mon May 7 19:55:01 2018
From: cult at free.fr (Vincent)
Date: Mon, 7 May 2018 21:55:01 +0200
Subject: Nginx OR for 2 differents location
In-Reply-To:
References:
Message-ID: <4d56bfc7-81bb-390a-6016-07f72a31344a@free.fr>
An HTML attachment was scrubbed...
URL:
From jfjm2002 at gmail.com Tue May 8 02:59:10 2018
From: jfjm2002 at gmail.com (Joe Doe)
Date: Mon, 7 May 2018 19:59:10 -0700
Subject: Logging of mirror requests
Message-ID:
Hi,
I have used ngx_http_mirror_module to create mirrors, and I would like to
log the mirrored requests as well. In the /mirror location I added an
access_log directive; the log file was created, but no log entries were
written.
Is logging currently limited to the original request only?
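One possibly relevant detail (an assumption about this setup, not a verified diagnosis): mirror requests are subrequests, and the access_log module does not log subrequests unless log_subrequest is enabled. A minimal sketch with placeholder names and paths:

```nginx
location / {
    mirror /mirror;
    proxy_pass http://backend;                   # assumed upstream
}

location = /mirror {
    internal;
    log_subrequest on;                           # log subrequests (mirrors)
    access_log /var/log/nginx/mirror.log;        # placeholder path
    proxy_pass http://test_backend$request_uri;  # assumed mirror target
}
```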
Best,
Jay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Ajay_Sonawane at symantec.com Tue May 8 05:17:48 2018
From: Ajay_Sonawane at symantec.com (Ajay Sonawane)
Date: Tue, 8 May 2018 05:17:48 +0000
Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy
In-Reply-To: <20180507105451.GB1824@Romans-MacBook-Air.local>
References:
<20180507102559.GA1824@Romans-MacBook-Air.local>
,
<20180507105451.GB1824@Romans-MacBook-Air.local>
Message-ID:
Removing 'proxy_protocol' parameter fixed the problem. Thanks a lot.
________________________________
From: nginx on behalf of Roman Arutyunyan
Sent: Monday, May 7, 2018 4:24:51 PM
To: nginx at nginx.org
Subject: Re: [EXT] Re: Connect to NGINX reverse proxy through proxy
On Mon, May 07, 2018 at 10:37:08AM +0000, Ajay Sonawane wrote:
> >>For details it's better to look into error.log.
>
> Error log says "Broken header [some garbage chars] while reading PROXY protocol, client: IPADDRESS, server: 0.0.0.0:8443"
This means the client (SQUID in your case) does not send the PROXY protocol
header. Remove the "proxy_protocol" parameter from "listen" to fix this.
> ________________________________
> From: nginx on behalf of Roman Arutyunyan
> Sent: Monday, May 7, 2018 3:55:59 PM
> To: nginx at nginx.org
> Subject: [EXT] Re: Connect to NGINX reverse proxy through proxy
>
> Hello,
>
> On Mon, May 07, 2018 at 05:15:34AM +0000, Ajay Sonawane wrote:
> > I am using NGINX as a HTTPS reverse proxy and load balancer. My clients are able to connect to reverse proxy using SSL and reverse proxy is able to terminate SSL connection and establish a new connection with backend server, data exchange is also happening.
> >
> >
> > Now I am trying to setup a proxy between a client and NGINX. I am using SQUID proxy in between. I have enabled proxy protocol on nginx using
> >
> >
> > listen 443 ssl proxy_protocol;
>
> This line instructs nginx to expect PROXY protocol header from SQUID.
> Are you sure SQUID sends it? It looks like SQUID didn't support sending PROXY
> protocol header up until recently.
>
> > proxy_protocol on;
> >
> >
> >
> >
> > Still my client is not able to connect to NGINX through proxy. Is there anything else I need to do.
>
> For details it's better to look into error.log.
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
--
Roman Arutyunyan
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Tue May 8 05:46:07 2018
From: nginx-forum at forum.nginx.org (auto)
Date: Tue, 08 May 2018 01:46:07 -0400
Subject: Problem with to multiple virtual hosts
Message-ID: <6febf3fdfc5a52e635cff33c75b4c92b.NginxMailingListEnglish@forum.nginx.org>
We use nginx for hosting multiple virtual hosts. We have a mix: some sites
are only available over http:// and others also over https://.
We create a new config file for every virtual host (domain) whenever there is
a new customer with a new homepage. That has always worked correctly.
Today we created 2 new config files, copied them to sites-enabled and did an
nginx reload.
Now none of the sites work. But there was no error after the nginx reload.
In the browser we get the error that the site is not available, and we get
this error on all sites.
In the nginx error.log we get the message: *2948... no "ssl_certificate" is
defined in server listening on SSL port while SSL handshaking, client:
178...., server: 0.0.0.0:443
There are many of these messages in the log files, I think ~20 lines.
The virtual host config file we create looks like:

server {
    listen 80;
    server_name example.de;
    return 301 http://www.$http_host$request_uri;
}

server {
    listen 80;
    server_name *.example.de;

    location / {
        access_log off;
        proxy_pass http://example.test.de;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
We get the error only when we create a new virtual host file in
sites-enabled. If we copy the same code into an existing virtual host file,
it works correctly and all the other sites work again.
Any ideas why it doesn't work when we create a new file? We deleted the new
file and created it again, but we always get the same effect with the error
message in the error log.
I don't know if it's important, but we have 196 files in the sites-enabled
directory. If we create a new one, the error comes back; if we delete the
file and write (copy&paste) the same code into an existing file, it works
correctly.
We don't think this is an SSL error; we suspect the number of files is the
problem.
We want to always create a new virtual host config file for each customer
rather than adding the config to an existing file.
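For what it's worth, that error message usually means some server block on port 443 (often the implicit default server) has no ssl_certificate. A hedged sketch of an explicit catch-all server, with placeholder certificate paths, that at least keeps stray SNI names away from cert-less blocks:

```nginx
# Assumption: one of the new files introduced a cert-less server that
# became the default for 0.0.0.0:443.
server {
    listen 443 ssl default_server;
    server_name _;

    ssl_certificate     /etc/nginx/ssl/fallback.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/fallback.key;

    return 444;   # close the connection for unknown names
}
```

Running `nginx -T | grep -n 'listen.*443'` can also help locate which of the 196 files defines servers on port 443.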
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279708,279708#msg-279708
From nginx-forum at forum.nginx.org Tue May 8 07:34:04 2018
From: nginx-forum at forum.nginx.org (Joncheski)
Date: Tue, 08 May 2018 03:34:04 -0400
Subject: Proxy pass and SSL certificates
In-Reply-To:
References:
Message-ID: <8cda7fa6d5fff1e1d28f9a91d746fc81.NginxMailingListEnglish@forum.nginx.org>
Hello Meph,
In the configuration file "cloud.diakont.it.conf":
- For "ssl_certificate" please set the path of only the public certificate of
the server (cloud.diakont.it), and for "ssl_certificate_key" the path of only
the private key of the server (cloud.diakont.it).
In the configuration file "ssl-params.conf":
- The certificates that you use for the server and for the client: by whom
are they issued and signed? If they are issued and signed by your own CA,
these parameters should be removed: ssl_ecdh_curve, ssl_stapling, add_header
X-Frame-Options DENY; add_header X-Content-Type-Options nosniff;
Change the parameter: resolver_timeout 10s.
In the nginx config:
- Add these directives:
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
proxy_ssl_trusted_certificate ;
- And in location / like this:
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_pass https://cloud_ssl/;
}
And check the configuration file (nginx -t).
After this, please send me the relevant access and error logs.
Best regards,
Goce Joncheski
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279665,279710#msg-279710
From ruz at sports.ru Tue May 8 11:43:54 2018
From: ruz at sports.ru (Ruslan Zakirov)
Date: Tue, 8 May 2018 14:43:54 +0300
Subject: big difference between request time and upstreams time
Message-ID:
Hello,
Some selected log records:
14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002]
14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000]
14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000]
14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000]
14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000]
14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000]
14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000]
14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000]
14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000]
14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002]
14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002]
14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000]
14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002]
14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000]
14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000]
14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000]
14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000]
14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000]
14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002]
columns: wallclock time, request time, upstream_request_time,
upstream_connect_time, upstream.
Please help me diagnose this problem further, as I am stuck. This is a subset
where request_time is 50x bigger than upstream_request_time (just to make the
subset less noisy). I see request times of up to 60 seconds. I cannot tie it
to any periodicity. It happens so often that I don't see anything helpful in
strace... I am stuck... Any ideas?
This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7.
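For reference, a log_format along these lines would produce the columns described (format name and layout are illustrative; note that $upstream_connect_time is not available in every nginx version):

```nginx
log_format timing '$time_local $request_time '
                  '[$upstream_response_time] [$upstream_connect_time] '
                  '[$upstream_addr]';

access_log /var/log/nginx/timing.log timing;  # placeholder path
```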
--
Ruslan Zakirov
+7(916) 597-92-69, ruz @
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From iippolitov at nginx.com Tue May 8 11:50:58 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Tue, 8 May 2018 14:50:58 +0300
Subject: big difference between request time and upstreams time
In-Reply-To:
References:
Message-ID:
[The reply was originally in Russian and is garbled in the archive; the
author restates it in English in the next message.]
http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html
On 08.05.2018 14:43, Ruslan Zakirov wrote:
> Hello,
>
> Some selected log records:
> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002]
> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000]
> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000]
> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000]
> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000]
> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000]
> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000]
> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000]
> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000]
> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002]
> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002]
> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000]
> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002]
> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000]
> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000]
> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000]
> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000]
> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000]
> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002]
>
> columns: wallclock time, request time, upstream_request_time,
> upstream_connect_time, upstream.
>
> Please, help me diagnose this problem further as I stuck. This is
> subset where request_time 50x bigger than upstream_request_time (just
> to make subset less noisy). I see request times up to 60 seconds. Can
> not tie it to some periodicity. It happens so often that don't see
> anything helpful in strace... I stuck... Any ideas?
>
> This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7.
>
> --
> Ruslan Zakirov
> +7(916) 597-92-69, ruz @
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From iippolitov at nginx.com Tue May 8 12:11:39 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Tue, 8 May 2018 15:11:39 +0300
Subject: big difference between request time and upstreams time
In-Reply-To:
References:
Message-ID: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com>
Sorry, didn't realize this is an English mailing list.
To sum it up: the problem is most likely with the clients, not the server.
A discrepancy between request time and upstream time usually means that a
client is slow or is on a bad connection.
Basically, this is fine, unless only one server out of many shows this
problem; that in turn may mean the problem is with that server's network
connection.
Regards.
On 08.05.2018 14:50, Igor A. Ippolitov wrote:
> [Russian reply garbled in the archive; restated in English above.]
> http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html
>
> On 08.05.2018 14:43, Ruslan Zakirov wrote:
>> Hello,
>>
>> Some selected log records:
>> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002]
>> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000]
>> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000]
>> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000]
>> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000]
>> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000]
>> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000]
>> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000]
>> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000]
>> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002]
>> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002]
>> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000]
>> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002]
>> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000]
>> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000]
>> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000]
>> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000]
>> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000]
>> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002]
>>
>> columns: wallclock time, request time, upstream_request_time,
>> upstream_connect_time, upstream.
>>
>> Please, help me diagnose this problem further as I stuck. This is
>> subset where request_time 50x bigger than upstream_request_time (just
>> to make subset less noisy). I see request times up to 60 seconds. Can
>> not tie it to some periodicity. It happens so often that don't see
>> anything helpful in strace... I stuck... Any ideas?
>>
>> This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7.
>>
>> --
>> Ruslan Zakirov
>> +7(916) 597-92-69, ruz @
>>
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From thresh at nginx.com Tue May 8 14:28:37 2018
From: thresh at nginx.com (Konstantin Pavlov)
Date: Tue, 8 May 2018 17:28:37 +0300
Subject: Packages for Ubuntu 18.04 "Bionic"?
In-Reply-To:
References:
Message-ID: <73aa4c3a-b037-bf0e-c419-e896e722d7e4@nginx.com>
Hello,
07.05.2018 19:12, Moshe Katz wrote:
> Hello,
>
> I see that the new Ubuntu 18.04 release has Nginx 1.14.0
> as its install version.
> However, as new development progresses, I will want to be on the
> `mainline` version on my servers.
> Right now, there is no official Nginx package support for 18.04, as the
> newest version in http://nginx.org/packages/mainline/ubuntu/ is `artful`.
>
> When can we expect packages for `bionic` to be officially available?
>
> Thanks,
> Moshe
The packages for both stable and mainline branches are now available to
download.
Have a good one,
--
Konstantin Pavlov
https://www.nginx.com/
From ruz at sports.ru Tue May 8 15:51:20 2018
From: ruz at sports.ru (Ruslan Zakirov)
Date: Tue, 8 May 2018 18:51:20 +0300
Subject: big difference between request time and upstreams time
In-Reply-To: <6677871e-a058-41b9-1f7a-aee231183612@nginx.com>
References:
<6677871e-a058-41b9-1f7a-aee231183612@nginx.com>
Message-ID:
On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov
wrote:
> Sorry, didn't realize this is an English mailing list.
>
> To sum it up: the problem is most likely about clients and not the server.
> Discrepancy between request time and upstream time usually means that a
> client is slow or uses a bad connection.
> Basically, this is OK unless you have the only server out of many with
> this problem.
> This in turn may mean that the problem is with that server's network
> connection.
>
The issue affects all of our primary nginx servers.
However, they receive requests from 4 "routing" nginx servers and from all
backends via haproxy. The problem affects only requests coming from the
routing nginxes, not from the backends. I would expect the routing servers
to pull data from their upstreams ASAP, so slow clients, to my mind, should
only affect the routing servers standing in front.
Am I wrong?
> Regards.
>
>
> On 08.05.2018 14:50, Igor A. Ippolitov wrote:
>
> [Russian reply garbled in the archive; restated in English above.]
> http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html
>
> On 08.05.2018 14:43, Ruslan Zakirov wrote:
>
> Hello,
>
> Some selected log records:
> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002]
> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000]
> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000]
> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000]
> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000]
> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000]
> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000]
> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000]
> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000]
> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002]
> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002]
> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000]
> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002]
> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000]
> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000]
> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000]
> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000]
> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000]
> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002]
>
> columns: wallclock time, request time, upstream_request_time,
> upstream_connect_time, upstream.
>
> Please, help me diagnose this problem further as I stuck. This is subset
> where request_time 50x bigger than upstream_request_time (just to make
> subset less noisy). I see request times up to 60 seconds. Can not tie it to
> some periodicity. It happens so often that don't see anything helpful in
> strace... I stuck... Any ideas?
>
> This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7.
>
> --
> Ruslan Zakirov
> +7(916) 597-92-69, ruz @
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
--
Ruslan Zakirov
+7(916) 597-92-69, ruz @
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From iippolitov at nginx.com Tue May 8 16:22:26 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Tue, 8 May 2018 19:22:26 +0300
Subject: big difference between request time and upstreams time
In-Reply-To:
References:
<6677871e-a058-41b9-1f7a-aee231183612@nginx.com>
Message-ID: <235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com>
Ruslan,
This depends on your routing nginx configuration.
If it doesn't have enough buffers to hold a response completely and
temporary files are turned off, then you will run into a situation where the
delay is propagated from the client-facing nginx to the middle-layer nginx.
The fact that only client-facing requests are affected supports this idea.
On 08.05.2018 18:51, Ruslan Zakirov wrote:
>
>
> On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov
> > wrote:
>
> Sorry, didn't realize this is an English mailing list.
>
> To sum it up: the problem is most likely about clients and not the
> server.
> Discrepancy between request time and upstream time usually means
> that a client is slow or uses a bad connection.
> Basically, this is OK unless you have the only server out of many
> with this problem.
> This in turn may mean that the problem is with that server's
> network connection.
>
>
>
> The issue affects all of our primary nginx servers.
>
> However, they receive requests from 4 "routing" nginx servers and all
> backends via haproxy. The problem affects only
> requests from the routing nginxs, not backends. I would expect routing
> servers pull data from upstream ASAP. So slow
> clients in my mind should only affect those routing servers standing
> in front.
>
> Am I wrong?
>
>
> Regards.
>
>
> On 08.05.2018 14:50, Igor A. Ippolitov wrote:
>> [Russian reply garbled in the archive; restated in English earlier in
>> the thread.]
>> http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html
>>
>> On 08.05.2018 14:43, Ruslan Zakirov wrote:
>>> Hello,
>>>
>>> Some selected log records:
>>> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002]
>>> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000]
>>> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000]
>>> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000]
>>> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000]
>>> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000]
>>> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000]
>>> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000]
>>> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000]
>>> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002]
>>> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002]
>>> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000]
>>> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002]
>>> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000]
>>> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000]
>>> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000]
>>> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000]
>>> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000]
>>> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002]
>>>
>>> columns: wallclock time, request time, upstream_request_time,
>>> upstream_connect_time, upstream.
>>>
>>> Please, help me diagnose this problem further as I stuck. This
>>> is subset where request_time 50x bigger than
>>> upstream_request_time (just to make subset less noisy). I see
>>> request times up to 60 seconds. Can not tie it to some
>>> periodicity. It happens so often that don't see anything helpful
>>> in strace... I stuck... Any ideas?
>>>
>>> This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7.
>>>
>>> --
>>> Ruslan Zakirov
>>> +7(916) 597-92-69, ruz @
>>>
>>>
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
>>
>>
>>
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
>
>
> --
> Ruslan Zakirov
> +7(916) 597-92-69, ruz @
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From kohenkatz at gmail.com Tue May 8 17:21:50 2018
From: kohenkatz at gmail.com (Moshe Katz)
Date: Tue, 08 May 2018 17:21:50 +0000
Subject: Packages for Ubuntu 18.04 "Bionic"?
In-Reply-To: <73aa4c3a-b037-bf0e-c419-e896e722d7e4@nginx.com>
References:
<73aa4c3a-b037-bf0e-c419-e896e722d7e4@nginx.com>
Message-ID:
Great, thanks!
On Tue, May 8, 2018 at 10:28 AM Konstantin Pavlov wrote:
> Hello,
>
> 07.05.2018 19:12, Moshe Katz wrote:
> > Hello,
> >
> > I see that the new Ubuntu 18.04 release has Nginx 1.14.0
> > as its install version.
> > However, as new development progresses, I will want to be on the
> > `mainline` version on my servers.
> > Right now, there is no official Nginx package support for 18.04, as the
> > newest version in http://nginx.org/packages/mainline/ubuntu/ is
> `artful`.
> >
> > When can we expect packages for `bionic` to be officially available?
> >
> > Thanks,
> > Moshe
>
> The packages for both stable and mainline branches are now available to
> download.
>
> Have a good one,
>
> --
> Konstantin Pavlov
> https://www.nginx.com/
>
From ruz at sports.ru Tue May 8 18:04:46 2018
From: ruz at sports.ru (Ruslan Zakirov)
Date: Tue, 8 May 2018 21:04:46 +0300
Subject: big difference between request time and upstreams time
In-Reply-To: <235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com>
References:
<6677871e-a058-41b9-1f7a-aee231183612@nginx.com>
<235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com>
Message-ID:
On Tue, May 8, 2018 at 7:22 PM, Igor A. Ippolitov
wrote:
> Ruslan,
>
> This depends on your routing nginx configuration.
> If it doesn't have enough buffers to contain a response completely and
> temporary files are turned off, then you will run into a situation where
> the delay is propagated from the client-facing nginx to a middle-layer nginx.
>
> The fact that only client facing requests are affected proves this idea.
>
That sounds very much like my case. Any pointers to a good article on this
subject? My goal is probably to free the "primary" nginx servers as soon as
possible and leave the last-mile delivery job to the "routing" nginx servers
in front. If there are no articles you know of on this matter, just point me
at the nginx options I should start from.
> On 08.05.2018 18:51, Ruslan Zakirov wrote:
>
>
>
> On Tue, May 8, 2018 at 3:11 PM, Igor A. Ippolitov
> wrote:
>
>> Sorry, didn't realize this is an English mailing list.
>>
>> To sum it up: the problem is most likely about clients and not the server.
>> Discrepancy between request time and upstream time usually means that a
>> client is slow or uses a bad connection.
>> Basically, this is OK unless only one server out of many has
>> this problem.
>> This in turn may mean that the problem is with that server's network
>> connection.
>>
>
>
> The issue affects all of our primary nginx servers.
>
> However, they receive requests from 4 "routing" nginx servers and all
> backends via haproxy. The problem affects only
> requests from the routing nginx servers, not the backends. I would expect
> the routing servers to pull data from upstream ASAP, so slow
> clients, in my mind, should only affect those routing servers standing in
> front.
>
> Am I wrong?
>
>
>> Regards.
>>
>>
>> On 08.05.2018 14:50, Igor A. Ippolitov wrote:
>>
>> http://mailman.nginx.org/pipermail/nginx/2008-October/008025.html
>>
>> On 08.05.2018 14:43, Ruslan Zakirov wrote:
>>
>> Hello,
>>
>> Some selected log records:
>> 14:27:46 1.609 [0.013] [0.002] [192.168.1.44:5002]
>> 14:27:50 1.017 [0.017] [0.001] [192.168.1.24:9000]
>> 14:27:51 1.522 [0.021] [0.000] [192.168.1.92:9000]
>> 14:27:50 1.019 [0.019] [0.000] [192.168.1.41:9000]
>> 14:27:52 1.019 [0.018] [0.000] [192.168.1.49:9000]
>> 14:27:52 1.019 [0.018] [0.001] [192.168.1.59:9000]
>> 14:27:55 1.515 [0.014] [0.000] [192.168.1.92:9000]
>> 14:27:57 0.510 [0.010] [0.001] [192.168.1.21:9000]
>> 14:28:03 1.521 [0.021] [0.001] [192.168.1.48:9000]
>> 14:28:04 0.660 [0.007] [0.002] [192.168.1.24:5002]
>> 14:28:05 2.216 [0.018] [0.002] [192.168.1.44:5002]
>> 14:28:11 0.510 [0.010] [0.000] [192.168.1.49:9000]
>> 14:28:26 0.937 [0.008] [0.002] [192.168.1.92:5002]
>> 14:28:28 1.019 [0.019] [0.000] [192.168.1.49:9000]
>> 14:28:28 0.508 [0.007] [0.000] [192.168.1.42:9000]
>> 14:28:31 1.021 [0.019] [0.000] [192.168.1.44:9000]
>> 14:28:32 0.509 [0.008] [0.000] [192.168.1.48:9000]
>> 14:28:36 1.015 [0.015] [0.000] [192.168.1.43:9000]
>> 14:28:39 0.358 [0.007] [0.001] [192.168.1.92:5002]
>>
>> columns: wallclock time, request time, upstream_request_time,
>> upstream_connect_time, upstream.
>>
>> Please help me diagnose this problem further, as I'm stuck. This is a
>> subset where request_time is 50x bigger than upstream_request_time (just to
>> make the subset less noisy). I see request times up to 60 seconds. I cannot
>> tie it to any periodicity. It happens so often that I don't see anything
>> helpful in strace... I'm stuck... Any ideas?
>>
>> This is nginx/1.10.2 on FreeBSD 10.3-RELEASE-p7.
>>
>> --
>> Ruslan Zakirov
>> +7(916) 597-92-69, ruz @
--
Ruslan Zakirov
+7(916) 597-92-69, ruz @
From iippolitov at nginx.com Tue May 8 18:17:07 2018
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Tue, 8 May 2018 21:17:07 +0300
Subject: big difference between request time and upstreams time
In-Reply-To:
References:
<6677871e-a058-41b9-1f7a-aee231183612@nginx.com>
<235690f2-8c96-c707-2594-7daf85fd18c9@nginx.com>
Message-ID: <6ff14652-ab46-b5a1-c038-9791aab15de5@nginx.com>
Ruslan,
Not sure if I know a good article on the topic.
Just ensure proxy_buffering is 'on', proxy_buffer_size covers the maximum
possible reply headers' size, and proxy_buffers is large enough for 90% of
your replies (or whatever margin you think is appropriate).
Most of the time these recommendations ensure optimal performance for nginx
as a proxy.
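As a concrete sketch of those knobs (the sizes and the upstream name are
illustrative assumptions, not recommendations):

```nginx
# Middle-layer nginx: let the client-facing layer handle slow delivery
location / {
    proxy_pass http://backend;      # "backend" is a hypothetical upstream

    proxy_buffering on;             # buffer the upstream reply
    proxy_buffer_size 8k;           # must cover the largest reply headers
    proxy_buffers 16 16k;           # should hold most reply bodies in memory
    proxy_max_temp_file_size 1024m; # allow spilling large replies to disk
}
```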
But the more interesting question is whether you really need to tune anything.
If your front-edge servers are already well loaded, do you really want to load
them even more?
Maybe someone else can suggest a proper text to read.
From mdounin at mdounin.ru Tue May 8 19:15:22 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 May 2018 22:15:22 +0300
Subject: Logging of mirror requests
In-Reply-To:
References:
Message-ID: <20180508191522.GS32137@mdounin.ru>
Hello!
On Mon, May 07, 2018 at 07:59:10PM -0700, Joe Doe wrote:
> I have used ngx_http_mirror_module to create mirrors. I would like to log
> these requests as well. So in the /mirror location I added an access_log
> directive; the log file was created, but no logs were produced.
>
> Is logging currently limited to only the original request?
By default, subrequests are not logged. If you want them to be
logged, consider the "log_subrequest" directive
(http://nginx.org/r/log_subrequest).
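A minimal sketch of this, assuming hypothetical upstream names and the
/mirror location from the original question:

```nginx
server {
    log_subrequest on;  # log subrequests (including mirror requests) too

    location / {
        mirror /mirror;
        proxy_pass http://backend;            # hypothetical main upstream
    }

    location = /mirror {
        internal;
        access_log /var/log/nginx/mirror.log; # mirror traffic now appears here
        proxy_pass http://test_backend$request_uri;
    }
}
```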
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Tue May 8 20:28:54 2018
From: nginx-forum at forum.nginx.org (pkris)
Date: Tue, 08 May 2018 16:28:54 -0400
Subject: Restricting access by public IP blocking remote content
Message-ID: <71eae06a168b0a2f829bcd05f5976158.NginxMailingListEnglish@forum.nginx.org>
As the subject states, when I restrict access to a subdirectory by IP,
remote content like Google fonts and favicons is blocked.
This of course makes sense, but without adding those hostnames to the
admin-ips file I use to allow IPs (shown below), can remote content
like this be allowed while the actual web traffic is still restricted to
my VPN IP?
/etc/nginx/sites-enabled/default:
location /billingadmin {
include includes/admin-ips;
deny all;
}
/etc/nginx/includes/admin-ips:
#LAN
allow XXX.XXX.XXX.XXX;
#VPN
allow XXX.XXX.XXX.XXX;
allow XXX.XXX.XXX.XXX;
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279725,279725#msg-279725
From jfjm2002 at gmail.com Wed May 9 05:06:58 2018
From: jfjm2002 at gmail.com (Joe Doe)
Date: Tue, 8 May 2018 22:06:58 -0700
Subject: Logging of mirror requests
In-Reply-To: <20180508191522.GS32137@mdounin.ru>
References:
<20180508191522.GS32137@mdounin.ru>
Message-ID:
Thank you very much! That did the trick.
On Tue, May 8, 2018 at 12:15 PM, Maxim Dounin wrote:
> Hello!
>
> On Mon, May 07, 2018 at 07:59:10PM -0700, Joe Doe wrote:
>
> > I have used ngx_http_mirror_module to create mirrors. I would like to log
> > these requests as well? So in the /mirror location, I added access_log
> > directive, but the log file was created, but no logs were produced.
> >
> > Is logging currently limited to only the original request?
>
> By default, subrequests are not logged. If you want them to be
> logged, consider the "log_subrequest" directive
> (http://nginx.org/r/log_subrequest).
>
> --
> Maxim Dounin
> http://mdounin.ru/
From nginx-forum at forum.nginx.org Wed May 9 06:10:04 2018
From: nginx-forum at forum.nginx.org (_gg_)
Date: Wed, 09 May 2018 02:10:04 -0400
Subject: No shared cipher
Message-ID: <92a86c1b805c7a584f20056a7ee8fef2.NginxMailingListEnglish@forum.nginx.org>
Not sure if this is more of an OpenSSL/TLS issue/question...
For some time I've been observing
SSL_do_handshake() failed (SSL: error:1408A0C1:SSL
routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking
in error.log while having
ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!aNULL;
in configuration.
Examining Client Hello packet reveals client supported ciphers:
Cipher Suites (9 suites)
Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8)
Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
I'm running
nginx version: nginx/1.12.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
According to 'openssl ciphers', the third cipher on the list is supported,
and yet the server responds with:
TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Handshake Failure)
Content Type: Alert (21)
Version: TLS 1.2 (0x0303)
Length: 2
Alert Message
Level: Fatal (2)
Description: Handshake Failure (40)
Either I've messed up my investigation or I'm completely misunderstanding
something here.
Why does the server refuse to handshake the connection despite having a
cipher in common with the client?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279727,279727#msg-279727
From mephystoonhell at gmail.com Wed May 9 09:50:21 2018
From: mephystoonhell at gmail.com (Mephysto On Hell)
Date: Wed, 9 May 2018 11:50:21 +0200
Subject: Proxy pass and SSL certificates
In-Reply-To: <8cda7fa6d5fff1e1d28f9a91d746fc81.NginxMailingListEnglish@forum.nginx.org>
References:
<8cda7fa6d5fff1e1d28f9a91d746fc81.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Hello Goce,
but with this configuration, can I disable SSL on the target nginx?
Thanks in advance.
Meph
On 8 May 2018 at 09:34, Joncheski wrote:
> Hello Meph,
>
> In configuration file "cloud.diakont.it.conf":
> - "ssl_certificate" please set path of only public certificate of server
> (cloud.diakont.it), and in "ssl_certificate_key" please set path of only
> private key of server (cloud.diakont.it).
>
> In the configuration file "ssl-params.conf":
> - The certificates that you use for the server and for the client: by whom
> are they issued and signed? If they are issued and signed by your own CA,
> these parameters should be removed: ssl_ecdh_curve, ssl_stapling, add_header
> X-Frame-Options DENY; add_header X-Content-Type-Options nosniff;
>
> Change parameter: resolver_timeout 10s.
>
> In nginx config:
> - Add this argument:
> proxy_ssl_verify on;
> proxy_ssl_verify_depth 2;
> proxy_ssl_session_reuse on;
> proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
> proxy_ssl_trusted_certificate ;
> - And in location / like this:
> location / {
> proxy_set_header X-Real-IP
> $remote_addr;
> proxy_set_header X-Forwarded-Proto
> $scheme;
> proxy_set_header X-Forwarded-For
> $proxy_add_x_forwarded_for;
> proxy_set_header Upgrade
> $http_upgrade;
> proxy_set_header Connection
> 'upgrade';
> proxy_set_header Host $host;
> proxy_pass https://cloud_ssl/;
> }
>
> And check the configuration file (nginx -t).
> After this, please send me more access and error log for this.
>
>
> Best regards,
> Goce Joncheski
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,279665,279710#msg-279710
>
>
From jfjm2002 at gmail.com Wed May 9 11:32:51 2018
From: jfjm2002 at gmail.com (Joe Doe)
Date: Wed, 9 May 2018 04:32:51 -0700
Subject: inheritance of proxy_http_version and proxy_set_header
Message-ID:
I have many mirrors for incoming requests. To keep the config
clean, I set:
proxy_http_version 1.1;
proxy_set_header Connection "";
in the http context. This worked for us (verified that keep-alive is working),
and it is inherited by all the mirror proxy_pass locations.
However, I recently added a mirror that used https, and I notice these
settings no longer inherit to this mirror. At least keep-alive was not
working. To address this, I had to add these 2 settings into the location
specific to the mirror. (adding to the server context didn't work either)
According to the documentation, these 2 settings can be in http, server and
location context. And I assume if it's in http context, it would inherit to
all the sub-blocks (and it did work for all the other http mirrors). Is
this assumption incorrect and I should add these 2 settings to all the
locations where I want to use keep-alive?
From nginx-forum at forum.nginx.org Wed May 9 18:19:23 2018
From: nginx-forum at forum.nginx.org (snir)
Date: Wed, 09 May 2018 14:19:23 -0400
Subject: Set real ip not working
Message-ID:
Hello
I want to get the real IP of the client, but I'm always getting the IP of
the nginx server.
I tried using set_real_ip_from:
http {
upstream myapp1 {
server 177.17.777.13:8080;
}
server {
listen 80;
real_ip_recursive on;
set_real_ip_from 177.17.777.13;
real_ip_header X-Forwarded-For;
location / {
proxy_pass http://myapp1;
}
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279736,279736#msg-279736
From francis at daoine.org Wed May 9 20:10:58 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 9 May 2018 21:10:58 +0100
Subject: Problem with to multiple virtual hosts
In-Reply-To: <6febf3fdfc5a52e635cff33c75b4c92b.NginxMailingListEnglish@forum.nginx.org>
References: <6febf3fdfc5a52e635cff33c75b4c92b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180509201058.GE19311@daoine.org>
On Tue, May 08, 2018 at 01:46:07AM -0400, auto wrote:
Hi there,
> Today we create 2 new config-files on the nginx, copy the file to
> sites-enabled and make a nginx reload.
>
> Now, no sites works again. But there was no error after the nginx reload.
>
> In the Browser we get the error that the Site is not available. And we get
> this error at all Sites.
>
> In the nginx error.log we get the message *2948... no "ssl_certificate" is
> defined in server listening on SSL port while SSL handshaking, client:
> 178...., server 0.0.0.0:443
The error message refers to something to do with ssl.
The example config files you show do not mention ssl.
Does the actual config that you are writing to the new file that leads
to the failure, refer to ssl at all?
Is the new file name alphabetically first in the list of files?
Do you have the word "default_server" on any "listen" line in any file?
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed May 9 20:17:50 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 9 May 2018 21:17:50 +0100
Subject: Restricting access by public IP blocking remote content
In-Reply-To: <71eae06a168b0a2f829bcd05f5976158.NginxMailingListEnglish@forum.nginx.org>
References: <71eae06a168b0a2f829bcd05f5976158.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180509201750.GF19311@daoine.org>
On Tue, May 08, 2018 at 04:28:54PM -0400, pkris wrote:
Hi there,
> As the subject states when I restrict access to a subdirectory via IP,
> remote content like Google fonts, and Favicons are blocked.
I don't understand what you are reporting there. Can you give one
specific example?
It looks like you are saying that when you intentionally block access to
/billingadmin, you also accidentally block access to /favicon.ico and
to totally unrelated urls like https://fonts.google.com/. That seems
very strange to me, so I suspect that I am missing something.
> This of course makes sense, but without adding those hostnames to my
> admin-ip's file I use to allow IP's (explained below), can remote content
> like this be allowed by the actual web traffic I'm attempting to restrict to
> my VPN IP be filtered?
Maybe it is clear to someone else, what you mean by this. If so, perhaps
they will respond.
But it might be helpful if you can rephrase your question, perhaps
including an example request that does not get the response that you
expect (and including the relevant nginx config).
Good luck,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed May 9 20:25:03 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 9 May 2018 21:25:03 +0100
Subject: inheritance of proxy_http_version and proxy_set_header
In-Reply-To:
References:
Message-ID: <20180509202503.GG19311@daoine.org>
On Wed, May 09, 2018 at 04:32:51AM -0700, Joe Doe wrote:
Hi there,
> I have many multiple mirrors for incoming request. To keep the config
> clean, I set:
> proxy_http_version 1.1;
> proxy_set_header Connection "";
>
> in the http context. This worked for us (verified keep-alive is working),
> and it will inherit to all the mirror proxy_pass.
Those config directives (corrected) will inherit to any "location" which
does not have a "proxy_http_version" directive or a "proxy_set_header"
directive, respectively. (Assuming that neither are set at "server"
level either.)
> However, I recently added a mirror that used https, and I notice these
> settings no longer inherit to this mirror. At least keep-alive was not
> working. To address this, I had to add these 2 settings into the location
> specific to the mirror. (adding to the server context didn't work either)
Can you show the config that does not react the way that you want it to?
If you get the upstream (proxy_pass) server to "echo" the incoming
request, can you see what http version and http headers are sent by nginx?
> According to the documentation, these 2 settings can be in http, server and
> location context. And I assume if it's in http context, it would inherit to
> all the sub-blocks (and it did work for all the other http mirrors). Is
> this assumption incorrect and I should add these 2 settings to all the
> locations where I want to use keep-alive?
Directive inheritance follows the rules, or there is a bug. If these two
settings mean that keep-alive works for you, then you must make sure
that these two settings are in, or inherited into, each location that
you care about.
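To illustrate the inheritance rule (a hypothetical config; the upstream names
are made up): a single proxy_set_header in a location discards everything
inherited from above, so the keep-alive settings must be repeated there:

```nginx
http {
    proxy_http_version 1.1;
    proxy_set_header Connection "";        # inherited by locations below...

    server {
        location /a/ {
            proxy_pass http://mirror_a;    # ...inherits both settings
        }

        location /b/ {
            # defining ANY proxy_set_header here drops ALL inherited ones,
            # so Connection "" must be repeated alongside it:
            proxy_set_header Host example.com;
            proxy_set_header Connection "";
            proxy_pass https://mirror_b;
        }
    }
}
```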
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed May 9 20:36:24 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 9 May 2018 21:36:24 +0100
Subject: Set real ip not working
In-Reply-To:
References:
Message-ID: <20180509203624.GH19311@daoine.org>
On Wed, May 09, 2018 at 02:19:23PM -0400, snir wrote:
Hi there,
> I want to get the real ip of the client but I'm all ways getting the ip of
> the ngnix server.
What, specifically, do you mean by "getting the ip"?
> I trayed using set_real_ip:
The tcp connection from nginx to upstream will (almost) always come from
an IP address of the nginx machine.
It is possible that nginx can be configured to write a client IP address
into a http header, that the upstream server can then be invited to read.
For that, you will want to make sure to write the client IP address into
a http header (proxy_set_header, perhaps $proxy_add_x_forwarded_for) and
you will want to make sure to configure your upstream server to read it.
For one test request, what is the client IP address that you care
about? Do you see that IP address anywhere in the request from nginx to
upstream? If not, fix that. If so: do you see upstream doing anything
with that part of the request? If not, fix that.
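A minimal sketch of the two halves described above (header names are real
nginx/realip directives; the addresses are illustrative assumptions):

```nginx
# On the proxying nginx: write the client address into a request header
location / {
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://upstream_app;      # hypothetical upstream
}

# On the upstream nginx: trust the proxy and restore the client address
set_real_ip_from  10.0.0.1;              # address of the proxying nginx
real_ip_header    X-Forwarded-For;
real_ip_recursive on;
```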
Good luck with it,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Thu May 10 08:11:41 2018
From: nginx-forum at forum.nginx.org (Joncheski)
Date: Thu, 10 May 2018 04:11:41 -0400
Subject: Proxy pass and SSL certificates
In-Reply-To:
References:
Message-ID:
Hello Meph,
No, this setup still uses SSL to the target.
Here is a suggested configuration:
nginx.conf:
------------------------------------------------------------------------------------------------------
user nginx;
worker_processes auto;
error_log /var/log/nginx/cloudssl.diakont.it.error.log;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/cloudssl.diakont.it.access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
upstream cloud {
server 10.39.0.52;
}
upstream cloud_ssl {
server 10.39.0.52:443;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name cloud.diakont.it cloud.diakont.srl;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name cloud.diakont.it;
#HTTPS-and-SSL
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
proxy_ssl_trusted_certificate ;
include snippets/cloud.diakont.it.conf;
include snippets/ssl-params.conf;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_pass https://cloud_ssl/;
}
}
}
------------------------------------------------------------------------------------------------------
cloud.diakont.it.conf:
------------------------------------------------------------------------------------------------------
ssl_certificate #PATH OF PUBLIC CERTIFICATE FROM SDP GATEWAY#;
ssl_certificate_key #PATH OF PRIVATE KEY FROM SDP GATEWAY#;
ssl_trusted_certificate #PATH OF PUBLIC CA CERTIFICATE#;
------------------------------------------------------------------------------------------------------
ssl-params.conf:
------------------------------------------------------------------------------------------------------
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers
'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# the resolver and resolver_timeout below may be commented out
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 10s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
------------------------------------------------------------------------------------------------------
Test this configuration and tell me :)
Best regards,
Goce Joncheski
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279665,279741#msg-279741
From danny at trisect.uk Thu May 10 09:34:27 2018
From: danny at trisect.uk (Danny Horne)
Date: Thu, 10 May 2018 10:34:27 +0100
Subject: Possible to use RHEL / CentOS repo on Fedora 28?
Message-ID:
Hi all,
I'm running Fedora 28 Server, and in the default repos Nginx is lagging
behind at 1.12.1. I found the following on the Nginx website:
To set up the yum repository for RHEL/CentOS, create the file named
/etc/yum.repos.d/nginx.repo with the following contents:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/OS/OSRELEASE/$basearch/
gpgcheck=0
enabled=1
Replace "OS" with "rhel" or "centos", depending on the
distribution used, and "OSRELEASE" with "6" or "7", for 6.x or 7.x
versions, respectively.
Could I set up this repo to upgrade Nginx? And if so, what would I use
for OS and OSRELEASE?
Thanks for looking
From nginx-forum at forum.nginx.org Thu May 10 12:04:59 2018
From: nginx-forum at forum.nginx.org (snir)
Date: Thu, 10 May 2018 08:04:59 -0400
Subject: Set real ip not working
In-Reply-To: <20180509203624.GH19311@daoine.org>
References: <20180509203624.GH19311@daoine.org>
Message-ID: <959dadecae8c7cd346176de22e7123ae.NginxMailingListEnglish@forum.nginx.org>
Thanks, that's what I needed:
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://myapp1;
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279736,279745#msg-279745
From hemelaar at desikkel.nl Thu May 10 12:54:28 2018
From: hemelaar at desikkel.nl (Jean-Paul Hemelaar)
Date: Thu, 10 May 2018 14:54:28 +0200
Subject: No live upstreams
Message-ID:
Hi!
I'm using Nginx as a proxy to Apache.
I noticed some messages in my error.log that I cannot explain:
27463#0: *125209 no live upstreams while connecting to upstream, client:
x.x.x.x, server: www.xxx.com, request: "GET /xxx/ HTTP/1.1", upstream: "
http://backend/xxx/", host: "www.xxx.com"
The errors appear after Apache returned some 502-errors; however in the
configuration I have set the following:
upstream backend {
server 10.0.0.2:8080 max_fails=3 fail_timeout=10;
server 127.0.0.1:8000 backup;
keepalive 6;
}
server {
location / {
proxy_pass http://backend;
proxy_next_upstream error timeout invalid_header;
etc.
}
I expected that, if Apache returns a few 502s:
- Nginx will not try to proceed to the next upstream, since proxy_next_upstream
doesn't mention http_502, but will just forward the 502 to the client
- if the upstream is marked as failed (which I didn't expect to happen), the
server will try the backup server instead
What may be happening:
- If the primary server sends a 502, it tries the backup, which will send a
502 as well. Because max_fails is not defined, it will be marked as
failed after the first failure.
Not sure if the above assumption is true. If it is, why are they marked as
failed even when http_502 is not mentioned?
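As a debugging sketch (an assumption, not a confirmed diagnosis of the behavior above): disabling failure accounting on the primary with max_fails=0 would rule out the marking logic entirely, since by default only connect errors and timeouts, the conditions listed in proxy_next_upstream, count toward max_fails:

```
upstream backend {
    # max_fails=0 disables failure accounting for this server, so it is
    # never marked unavailable (at the cost of automatic failover)
    server 10.0.0.2:8080 max_fails=0;
    server 127.0.0.1:8000 backup;
    keepalive 6;
}
```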
Thanks!
JP
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mdounin at mdounin.ru Thu May 10 13:09:16 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 May 2018 16:09:16 +0300
Subject: No shared cipher
In-Reply-To: <92a86c1b805c7a584f20056a7ee8fef2.NginxMailingListEnglish@forum.nginx.org>
References: <92a86c1b805c7a584f20056a7ee8fef2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180510130916.GV32137@mdounin.ru>
Hello!
On Wed, May 09, 2018 at 02:10:04AM -0400, _gg_ wrote:
> Not sure if it's not more of an openssl/TLS 'issue'/question...
> For some time I've been observing
>
> SSL_do_handshake() failed (SSL: error:1408A0C1:SSL
> routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking
>
> in error.log while having
>
> ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
> ssl_ciphers ALL:!aNULL;
>
> in configuration.
>
> Examining Client Hello packet reveals client supported ciphers:
> Cipher Suites (9 suites)
> Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8)
> Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13)
> Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
> Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
> Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
> Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
> Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
> Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
> Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
>
> I'm running
> nginx version: nginx/1.12.1
> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
> built with OpenSSL 1.0.2k-fips 26 Jan 2017
> TLS SNI support enabled
>
> According to 'openssl ciphers' the third cipher on the list is supported and
> yet server responds with:
> TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Handshake Failure)
> Content Type: Alert (21)
> Version: TLS 1.2 (0x0303)
> Length: 2
> Alert Message
> Level: Fatal (2)
> Description: Handshake Failure (40)
>
> Either I've messed up my investigation or I'm completely misunderstanding
> something here.
> Why despite having a common cipher with a client server denies to handshake
> a connection?
Whether a cipher suite can be used or not depends on various
factors. In particular:
- list of ciphers the client supports;
- list of ciphers the server supports;
- the certificate used by the server (e.g., you won't be able to
use RSA cipher suites with an ECDSA certificate);
- when using ECDHE ciphers or ECDSA certificates - supported EC curves on both
client and server;
In this particular case the client supports only RSA ciphers, so,
for example, there will be no shared cipher if you are using ECDSA
certificate.
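(A quick local check, as a sketch: the openssl ciphers command shows which suites a given ssl_ciphers string expands to on your build, and the aRSA keyword selects the RSA-authenticated subset, i.e. what a client offering only *_RSA_* suites can use.)

```shell
# Suites matched by the configured ssl_ciphers string, with
# key-exchange/authentication details shown by -v
openssl ciphers -v 'ALL:!aNULL'

# Only RSA-authenticated suites; an ECDSA-only certificate
# can satisfy none of these
openssl ciphers -v 'aRSA'
```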
--
Maxim Dounin
http://mdounin.ru/
From michael.friscia at yale.edu Thu May 10 13:17:42 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 10 May 2018 13:17:42 +0000
Subject: Load balancing
Message-ID: <86F468E0-8465-470A-8FF2-27E976614147@yale.edu>
I'm working on a project to perform A/B testing with our web hosting platform. The simple version is that we host everything on Azure and want to compare using their Web Apps versus running a VM with IIS.
My question is about load balancing, since there seem to be two ways to go about this. First is to use a simple config where I set up the three hosts I'm testing like this:
upstream ym-host
{
least_conn;
server ysm-iis-prod1.northcentralus.cloudapp.azure.com;
server ysm-iis-prod2.northcentralus.cloudapp.azure.com;
server ysm-ym-live-prod.trafficmanager.net;
}
This works but I am not sure how to set a header to indicate which host is being used.
The alternative is to use split_client and the same configuration looks like this:
upstream ym_host1
{
server ysm-iis-prod1.northcentralus.cloudapp.azure.com;
}
upstream ym_host2
{
server ysm-iis-prod2.northcentralus.cloudapp.azure.com;
}
upstream ym_host3
{
server ysm-ym-live-prod.trafficmanager.net;
}
split_clients "$arg_token" $ymhost
{
25% ym_host1;
25% ym_host2;
50% ym_host3;
}
Granted, the $arg_token will change to something else, but for now I use it since I can manipulate it more easily.
The benefit of the second is that I can add a header like X-UpstreamHost $ymhost and then I can see which host I am hitting.
The benefit of the first is using the least-connected round-robin approach, but I can't add a header to indicate which host is being hit. For good reasons I won't get into, adding the header at the web app is not an option.
My question is three part
1. Which is considered the best approach to load balance for this sort of testing?
2. Is there a way to get the name of the host being used if I stick with the simpler approach that uses just the single upstream configuration?
3. What would be the best variable to use for the split_client approach to achieve closest to a round robin?
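(For question 2, one sketch, with a made-up header name: the $upstream_addr variable holds the address of the server that actually handled the request, so it can be exposed even with the single least_conn upstream. Note it is the resolved address:port rather than the hostname.)

```
upstream ym-host {
    least_conn;
    server ysm-iis-prod1.northcentralus.cloudapp.azure.com;
    server ysm-iis-prod2.northcentralus.cloudapp.azure.com;
    server ysm-ym-live-prod.trafficmanager.net;
}
server {
    location / {
        proxy_pass http://ym-host;
        # $upstream_addr: address of the upstream that served this request
        add_header X-Upstream-Addr $upstream_addr always;
    }
}
```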
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu
From nginx-forum at forum.nginx.org Fri May 11 01:30:54 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Thu, 10 May 2018 21:30:54 -0400
Subject: Nginx Proxy/FastCGI Caching X-Accel-Expires 0 or Off ?
Message-ID: <4d81a6d8a6676539ddb24520ae9e58e9.NginxMailingListEnglish@forum.nginx.org>
So in order for my web application to tell Nginx not to cache a page, what
header response should I be sending?
X-Accel-Expires: 0
X-Accel-Expires: Off
I read here it should be "OFF"
https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/#x-accel-expires
But it does not mention if the numeric value "0" has the same effect, nor does
it mention if the "off" value is case-sensitive or not.
I am hoping case sensitivity does not matter.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279752,279752#msg-279752
From nginx-forum at forum.nginx.org Fri May 11 06:04:35 2018
From: nginx-forum at forum.nginx.org (_gg_)
Date: Fri, 11 May 2018 02:04:35 -0400
Subject: No shared cipher
In-Reply-To: <20180510130916.GV32137@mdounin.ru>
References: <20180510130916.GV32137@mdounin.ru>
Message-ID: <28c21dfbd923fb1dab0312e9985568ef.NginxMailingListEnglish@forum.nginx.org>
Indeed, I have an EC certificate.
Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279727,279754#msg-279754
From nginx-forum at forum.nginx.org Fri May 11 06:42:29 2018
From: nginx-forum at forum.nginx.org (Dhinesh Kumar T)
Date: Fri, 11 May 2018 02:42:29 -0400
Subject: How to enable 3des in TLS 1.0 and Disable 3des TLS 1.1 and above in
Nginx
Message-ID: <26905b3b1deada448aebb9267385f695.NginxMailingListEnglish@forum.nginx.org>
How can nginx enable 3DES in TLS 1.0 and disable 3DES in TLS 1.1 and above?
Nginx: 1.12.2-1
OpenSSL: 1.0.2k-8
I have tried creating multiple server blocks, but that didn't help. Is there a
way to do this?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279755,279755#msg-279755
From nginx-forum at forum.nginx.org Fri May 11 08:17:21 2018
From: nginx-forum at forum.nginx.org (auto)
Date: Fri, 11 May 2018 04:17:21 -0400
Subject: Problem with to multiple virtual hosts
In-Reply-To: <20180509201058.GE19311@daoine.org>
References: <20180509201058.GE19311@daoine.org>
Message-ID: <956def47bd86121839b3ed3573431044.NginxMailingListEnglish@forum.nginx.org>
@Francis: so this is the big question: we only want to include 2 new sites
that are available only without SSL, so we included the files without the
SSL part. But if we include them, we get an SSL error?!
No, the new files are somewhere among the other files; these files are not
the first files in the alphabetical list.
At the moment I don't know if we have the word "default_server" in any
of the virtual-host files.
There are 196 files in the sites-enabled directory; maybe I will have a look in
the next days to see whether the word "default_server" appears anywhere.
A few days ago we created an additional directory for the virtual-host
files and put the new virtual-host files there.
We included the new directory in nginx.conf, and now
it works!
We don't know why; we think that the number of files in the "normal"
sites-enabled directory is the problem.
With this solution, it works correctly without any errors.
These files are the same files we had first included in the "normal"
sites-enabled directory.
We think that the 195 virtual-host files in one directory are the problem,
but we don't know it, we only believe it.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279708,279756#msg-279756
From pluknet at nginx.com Fri May 11 10:17:55 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Fri, 11 May 2018 13:17:55 +0300
Subject: Nginx Proxy/FastCGI Caching X-Accel-Expires 0 or Off ?
In-Reply-To: <4d81a6d8a6676539ddb24520ae9e58e9.NginxMailingListEnglish@forum.nginx.org>
References: <4d81a6d8a6676539ddb24520ae9e58e9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <72A2DFDC-8205-414A-9E98-8FB498A82BF5@nginx.com>
> On 11 May 2018, at 04:30, c0nw0nk wrote:
>
> So in order for my web application to tell Nginx not to cache a page what
> header response should I be sending ?
>
> X-Accel-Expires: 0
> X-Accel-Expires: Off
>
> I read here it should be "OFF"
> https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/#x-accel-expires
>
> But it does not mention if numeric value "0" has the same effect Nor does it
> mention if the "off" value is case sensitive or not.
Wiki materials are updated by their users and thus may not always
contain up-to-date and correct information.
See reference documentation:
http://nginx.org/r/proxy_cache_valid
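(Per that reference, the numeric form is the documented one: "X-Accel-Expires: 0" disables caching of the response. As a sketch, a backend nginx could send it like this; the location name is an example.)

```
location /private/ {
    # Tell the caching front end not to store this response
    add_header X-Accel-Expires 0;
}
```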
--
Sergey Kandaurov
From nginx-forum at forum.nginx.org Fri May 11 15:54:17 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 11 May 2018 11:54:17 -0400
Subject: Nginx Proxy/FastCGI Caching X-Accel-Expires 0 or Off ?
In-Reply-To: <72A2DFDC-8205-414A-9E98-8FB498A82BF5@nginx.com>
References: <72A2DFDC-8205-414A-9E98-8FB498A82BF5@nginx.com>
Message-ID:
Sergey Kandaurov Wrote:
-------------------------------------------------------
> > On 11 May 2018, at 04:30, c0nw0nk
> wrote:
> >
> > So in order for my web application to tell Nginx not to cache a page
> what
> > header response should I be sending ?
> >
> > X-Accel-Expires: 0
> > X-Accel-Expires: Off
> >
> > I read here it should be "OFF"
> >
> https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/#x-
> accel-expires
> >
> > But it does not mention if numeric value "0" has the same effect Nor
> does it
> > mention if the "off" value is case sensitive or not.
>
> Wiki materials are updated by its users and thus may not always
> contain up-to-date and correct information.
>
> See reference documentation:
> http://nginx.org/r/proxy_cache_valid
>
> --
> Sergey Kandaurov
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Thank you for the information and help :)
I am now using the "0" value and my header responses say "STALE" so it
appears to be working well.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279752,279759#msg-279759
From mdounin at mdounin.ru Fri May 11 18:36:02 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 11 May 2018 21:36:02 +0300
Subject: How to enable 3des in TLS 1.0 and Disable 3des TLS 1.1 and above
in Nginx
In-Reply-To: <26905b3b1deada448aebb9267385f695.NginxMailingListEnglish@forum.nginx.org>
References: <26905b3b1deada448aebb9267385f695.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180511183602.GZ32137@mdounin.ru>
Hello!
On Fri, May 11, 2018 at 02:42:29AM -0400, Dhinesh Kumar T wrote:
> How nginx enable 3des in TLS 1.0 and Disable 3des TLS 1.1 and above?
>
> Nginx: 1.12.2-1
> OpenSSL: 1.0.2k-8
>
> I have tried with creating multiple server, but that dint help. is there a
> way to do this?
No. Currently OpenSSL provides no mechanisms to selectively
enable or disable ciphers depending on the protocol negotiated.
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Sat May 12 04:05:51 2018
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Sat, 12 May 2018 00:05:51 -0400
Subject: Nginx Cache | @ prefix example
Message-ID:
So it says this on the docs :
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid
The "X-Accel-Expires" header field sets caching time of a response in
seconds. The zero value disables caching for a response. If the value starts
with the @ prefix, it sets an absolute time in seconds since Epoch, up to
which the response may be cached.
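As a sketch of the @ form (values here are examples): the number after @ is an absolute Unix timestamp, so a backend would typically compute it, e.g. in shell:

```shell
# Cache until an absolute moment 300 seconds from now, expressed as
# seconds since the Unix epoch (GNU date; on BSD/macOS: date -v +300S +%s)
expires=$(date -d '+300 seconds' +%s)
echo "X-Accel-Expires: @$expires"
```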
Can someone give an example of how this should look, and if I set it as
zero, what is the outcome?
//unknown outcome / result...?
X-Accel-Expires: @0
//Expire cache straight away.
X-Accel-Expires: 0
//Expire cache in 5 seconds
X-Accel-Expires: 5
//Expire cache in 5 seconds and allow "STALE" cache responses to be stored
for 5 seconds ?????
X-Accel-expires: @5 5
Hopefully I am right in thinking that the above would work like this; I need
some clarification.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279762,279762#msg-279762
From quintinpar at gmail.com Sat May 12 16:26:07 2018
From: quintinpar at gmail.com (Quintin Par)
Date: Sat, 12 May 2018 10:26:07 -0600
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite
high proxy valid
Message-ID:
My proxy cache path is set to a very high size
proxy_cache_path /var/lib/nginx/cache levels=1:2
keys_zone=staticfilecache:180m max_size=700m;
and the size used is only
sudo du -sh *
14M cache
4.0K proxy
Proxy cache valid is set to
proxy_cache_valid 200 120d;
I track HIT and MISS via
add_header X-Cache-Status $upstream_cache_status;
Despite these settings I am seeing a lot of MISSes. And this is for pages I
intentionally ran a cache warmer an hour ago.
How do I debug why these MISSes are happening? How do I find out if the
miss was due to eviction, expiration, some rogue header etc? Does Nginx
provide commands for this?
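(One sketch, with made-up names: nginx does not log a miss reason, but $upstream_cache_status can be written to the access log alongside the cache key, which makes it easier to spot key variations, expirations, or bypasses per request.)

```
# Hypothetical log format for cache debugging
log_format cache_debug '$remote_addr "$request" status=$status '
                       'cache=$upstream_cache_status '
                       'key="$scheme://$host$request_uri"';
access_log /var/log/nginx/cache_debug.log cache_debug;
```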
- Quintin
From lucas at lucasrolff.com Sat May 12 16:29:43 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sat, 12 May 2018 16:29:43 +0000
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite
high proxy valid
In-Reply-To:
References:
Message-ID:
It can be as simple as doing a curl to your "origin" URL (the one you proxy_pass to) for the files that get a lot of MISSes; if there are odd headers such as cookies etc., then you'll most likely experience a bad cache if your nginx is configured not to ignore those headers.
From: nginx on behalf of Quintin Par
Reply-To: "nginx at nginx.org"
Date: Saturday, 12 May 2018 at 18.26
To: "nginx at nginx.org"
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite high proxy valid
My proxy cache path is set to a very high size
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=staticfilecache:180m max_size=700m;
and the size used is only
sudo du -sh *
14M cache
4.0K proxy
Proxy cache valid is set to
proxy_cache_valid 200 120d;
I track HIT and MISS via
add_header X-Cache-Status $upstream_cache_status;
Despite these settings I am seeing a lot of MISSes. And this is for pages I intentionally ran a cache warmer an hour ago.
How do I debug why these MISSes are happening? How do I find out if the miss was due to eviction, expiration, some rogue header etc? Does Nginx provide commands for this?
- Quintin
From quintinpar at gmail.com Sat May 12 17:32:13 2018
From: quintinpar at gmail.com (Quintin Par)
Date: Sat, 12 May 2018 11:32:13 -0600
Subject: Debugging Nginx Cache Misses: Hitting high number of MISS despite
high proxy valid
In-Reply-To:
References:
Message-ID:
That's the tricky part. These MISSes are intermittent. Whenever I run curl
I get HITs, but I end up seeing a lot of MISSes in the logs.
How do I log these MISSes with the reason? I want to know which headers
ended up bypassing the cache.
Here's my caching config:
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Port 443;
# If logged in, don't cache.
if ($http_cookie ~*
"comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) {
set $do_not_cache 1;
}
proxy_cache_key "$scheme://$host$request_uri$do_not_cache";
proxy_cache staticfilecache;
add_header Cache-Control public;
proxy_cache_valid 200 120d;
proxy_hide_header "Set-Cookie";
proxy_ignore_headers "Set-Cookie";
proxy_ignore_headers "Cache-Control";
proxy_hide_header "Cache-Control";
proxy_pass_header X-Accel-Expires;
proxy_set_header Accept-Encoding "";
proxy_ignore_headers Expires;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_use_stale timeout;
proxy_cache_bypass $arg_nocache $do_not_cache;
- Quintin
On Sat, May 12, 2018 at 10:29 AM Lucas Rolff wrote:
> It can be as simple as doing a curl to your "origin" URL (the one you
> proxy_pass to) for the files that get a lot of MISSes; if there are
> odd headers such as cookies etc., then you'll most likely experience a bad
> cache if your nginx is configured not to ignore those headers.
>
>
>
> *From: *nginx on behalf of Quintin Par <
> quintinpar at gmail.com>
> *Reply-To: *"nginx at nginx.org"
> *Date: *Saturday, 12 May 2018 at 18.26
> *To: *"nginx at nginx.org"
> *Subject: *Debugging Nginx Cache Misses: Hitting high number of MISS
> despite high proxy valid
>
>
>
>
> My proxy cache path is set to a very high size
>
>
>
> proxy_cache_path /var/lib/nginx/cache levels=1:2
> keys_zone=staticfilecache:180m max_size=700m;
>
> and the size used is only
>
>
>
> sudo du -sh *
>
> 14M cache
>
> 4.0K proxy
>
> Proxy cache valid is set to
>
>
>
> proxy_cache_valid 200 120d;
>
> I track HIT and MISS via
>
>
>
> add_header X-Cache-Status $upstream_cache_status;
>
> Despite these settings I am seeing a lot of MISSes. And this is for pages
> I intentionally ran a cache warmer an hour ago.
>
>
>
> How do I debug why these MISSes are happening? How do I find out if the
> miss was due to eviction, expiration, some rogue header etc? Does Nginx
> provide commands for this?
>
>
>
> - Quintin
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx