# set search paths for Lua external libraries written in C (can also use ';;'):
lua_package_cpath '/bar/baz/?.so;/blah/blah/?.so;;';

server {
    location /inline_concat {
        # MIME type determined by default_type:
        default_type 'text/plain';

        set $a "hello";
        set $b "world";
        # inline Lua script
        set_by_lua $res "return ngx.arg[1]..ngx.arg[2]" $a $b;
        echo $res;
    }

    location /rel_file_concat {
        set $a "foo";
        set $b "bar";
        # script path relative to nginx prefix
        # $ngx_prefix/conf/concat.lua contents:
        #
        #    return ngx.arg[1]..ngx.arg[2]
        #
        set_by_lua_file $res conf/concat.lua $a $b;
        echo $res;
    }

    location /abs_file_concat {
        set $a "fee";
        set $b "baz";
        # absolute script path not modified
        set_by_lua_file $res /usr/nginx/conf/concat.lua $a $b;
        echo $res;
    }

    location /lua_content {
        # MIME type determined by default_type:
        default_type 'text/plain';
        content_by_lua "ngx.say('Hello,world!')";
    }

    location /nginx_var {
        # MIME type determined by default_type:
        default_type 'text/plain';
        # try access /nginx_var?a=hello,world
        content_by_lua "ngx.print(ngx.var['arg_a'], '\\n')";
    }

    location /request_body {
        # force reading request body (default off)
        lua_need_request_body on;
        client_max_body_size 50k;
        client_body_buffer_size 50k;
        content_by_lua 'ngx.print(ngx.var.request_body)';
    }

    # transparent non-blocking I/O in Lua via subrequests
    location /lua {
        # MIME type determined by default_type:
        default_type 'text/plain';
        content_by_lua '
            local res = ngx.location.capture("/some_other_location")
            if res.status == 200 then
                ngx.print(res.body)
            end';
    }

    # GET /recur?num=5
    location /recur {
        # MIME type determined by default_type:
        default_type 'text/plain';
        content_by_lua '
            local num = tonumber(ngx.var.arg_num) or 0
            if num > 50 then
                ngx.say("num too big")
                return
            end

            ngx.say("num is: ", num)

            if num > 0 then
                res = ngx.location.capture("/recur?num=" .. tostring(num - 1))
                ngx.print("status=", res.status, " ")
                ngx.print("body=", res.body)
            else
                ngx.say("end")
            end
        ';
    }

    location /foo {
        rewrite_by_lua '
            res = ngx.location.capture("/memc",
                { args = { cmd = "incr", key = ngx.var.uri } }
            )
        ';

        proxy_pass http://blah.blah.com;
    }

    location /blah {
        access_by_lua '
            local res = ngx.location.capture("/auth")

            if res.status == ngx.HTTP_OK then
                return
            end

            if res.status == ngx.HTTP_FORBIDDEN then
                ngx.exit(res.status)
            end

            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        ';

        # proxy_pass/fastcgi_pass/postgres_pass/...
    }

    location /mixed {
        rewrite_by_lua_file /path/to/rewrite.lua;
        access_by_lua_file /path/to/access.lua;
        content_by_lua_file /path/to/content.lua;
    }

    # use nginx var in code path
    # WARN: contents in nginx var must be carefully filtered,
    # otherwise there'll be great security risk!
    location ~ ^/app/(.+) {
        content_by_lua_file /path/to/lua/app/root/$1.lua;
    }

    location / {
        lua_need_request_body on;

        client_max_body_size 100k;
        client_body_buffer_size 100k;

        access_by_lua '
            -- check the client IP address is in our black list
            if ngx.var.remote_addr == "132.5.72.3" then
                ngx.exit(ngx.HTTP_FORBIDDEN)
            end

            -- check if the request body contains bad words
            if ngx.var.request_body and
                     string.match(ngx.var.request_body, "fsck")
            then
                return ngx.redirect("/terms_of_use.html")
            end

            -- tests passed
        ';

        # proxy_pass/fastcgi_pass/etc settings
    }
}

</geshi>

= Description =

This module embeds Lua, via the standard Lua interpreter or [http://luajit.org/luajit.html LuaJIT 2.0], into Nginx and by leveraging Nginx's subrequests, allows the integration of the powerful Lua threads (Lua coroutines) into the Nginx event model.

Unlike [http://httpd.apache.org/docs/2.3/mod/mod_lua.html Apache's mod_lua] and [http://redmine.lighttpd.net/wiki/1/Docs:ModMagnet Lighttpd's mod_magnet], Lua code executed using this module can be ''100% non-blocking'' on network traffic as long as the [[#Nginx API for Lua|Nginx API for Lua]] provided by this module is used to handle requests to upstream services.

Almost all the Nginx modules can be used with this ngx_lua module by means of [[#ngx.location.capture|ngx.location.capture]] or [[#ngx.location.capture_multi|ngx.location.capture_multi]], but it is recommended to use the <code>lua-resty-*</code> libraries instead of creating subrequests to access the Nginx upstream modules, because the former are usually much more flexible and memory-efficient.

The Lua interpreter or LuaJIT instance is shared across all the requests in a single nginx worker process but request contexts are segregated using lightweight Lua coroutines.

Loaded Lua modules persist in the nginx worker process level resulting in a small memory footprint in Lua even when under heavy loads.

= Directives =

== lua_code_cache ==

'''syntax:''' ''lua_code_cache on | off''

'''default:''' ''lua_code_cache on''

'''context:''' ''main, server, location, location if''

Enables or disables the Lua code cache for [[#set_by_lua_file|set_by_lua_file]], [[#content_by_lua_file|content_by_lua_file]], [[#rewrite_by_lua_file|rewrite_by_lua_file]], and [[#access_by_lua_file|access_by_lua_file]], and also forces Lua module reloading on a per-request basis. When the cache is disabled, the Lua files referenced by these directives are reloaded from scratch on every single request (and the Lua modules loaded by them will not be cached either). With this in place, developers can adopt an edit-and-refresh approach.

Please note however, that Lua code written inline within nginx.conf

such as those specified by [[#set_by_lua|set_by_lua]], [[#content_by_lua|content_by_lua]],

[[#access_by_lua|access_by_lua]], and [[#rewrite_by_lua|rewrite_by_lua]] will ''always'' be

cached because only the Nginx config file parser can correctly parse the <code>nginx.conf</code>

file and the only ways to reload the config file

are to send a <code>HUP</code> signal or to restart Nginx.

The ngx_lua module does not currently support the <code>stat</code> mode available with the

Apache <code>mod_lua</code> module but this is planned for implementation in the future.

Disabling the Lua code cache is strongly

discouraged for production use and should only be used during

development as it has a significant negative impact on overall performance.

In addition, race conditions when reloading Lua modules are common for concurrent requests

when the code cache is disabled.
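During development, the cache can be switched off in <code>nginx.conf</code> like this (a minimal sketch; never do this in production):

<geshi lang="nginx">
http {
    # development only: reload Lua files and modules on every request
    lua_code_cache off;

    server {
        ...
    }
}
</geshi>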

== lua_regex_cache_max_entries ==

'''syntax:''' ''lua_regex_cache_max_entries <num>''

'''default:''' ''lua_regex_cache_max_entries 1024''

'''context:''' ''http''

Specifies the maximum number of entries allowed in the worker process level compiled regex cache.

The regular expressions used in [[#ngx.re.match|ngx.re.match]], [[#ngx.re.gmatch|ngx.re.gmatch]], [[#ngx.re.sub|ngx.re.sub]], and [[#ngx.re.gsub|ngx.re.gsub]] will be cached within this cache if the regex option <code>o</code> (i.e., compile-once flag) is specified.

The default number of entries allowed is 1024 and when this limit is reached, new regular expressions will not be cached (as if the <code>o</code> option was not specified) and there will be one, and only one, warning in the <code>error.log</code> file about exceeding the regex cache max entries.

Do not activate the <code>o</code> option for regular expressions (and/or <code>replace</code> string arguments for [[#ngx.re.sub|ngx.re.sub]] and [[#ngx.re.gsub|ngx.re.gsub]]) that are generated ''on the fly'' and give rise to infinite variations to avoid hitting the specified limit.
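For instance, a constant regex can safely use the <code>o</code> option so that it is compiled once and cached (a minimal sketch; the location name is made up):

<geshi lang="nginx">
location = /re-demo {
    content_by_lua '
        -- the "o" option compiles this constant regex once per worker
        local m = ngx.re.match(ngx.var.request_uri, "^/([a-z]+)-demo", "o")
        if m then
            ngx.say("matched: ", m[1])
        else
            ngx.say("no match")
        end
    ';
}
</geshi>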

== lua_package_path ==

'''syntax:''' ''lua_package_path <lua-style-path-str>''

'''default:''' ''The content of LUA_PATH environment variable or Lua's compiled-in defaults.''

'''context:''' ''http''

Sets the Lua module search path used by scripts specified by [[#set_by_lua|set_by_lua]], [[#content_by_lua|content_by_lua]] and others. The path string is in standard Lua path form, and <code>;;</code> can be used to stand for the original search paths.

As from the <code>v0.5.0rc29</code> release, the special notation <code>$prefix</code> or <code>${prefix}</code> can be used in the search path string to indicate the path of the <code>server prefix</code> usually determined by the <code>-p PATH</code> command-line option while starting the Nginx server.
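For example (the directory names here are hypothetical):

<geshi lang="nginx">
http {
    # search $prefix/lua/ first, then fall back to the default Lua search paths
    lua_package_path "$prefix/lua/?.lua;;";
    ...
}
</geshi>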

== lua_package_cpath ==

'''syntax:''' ''lua_package_cpath <lua-style-cpath-str>''

'''default:''' ''The content of LUA_CPATH environment variable or Lua's compiled-in defaults.''

'''context:''' ''http''

Sets the Lua C-module search path used by scripts specified by [[#set_by_lua|set_by_lua]], [[#content_by_lua|content_by_lua]] and others. The cpath string is in standard Lua cpath form, and <code>;;</code> can be used to stand for the original cpath.

As from the <code>v0.5.0rc29</code> release, the special notation <code>$prefix</code> or <code>${prefix}</code> can be used in the search path string to indicate the path of the <code>server prefix</code> usually determined by the <code>-p PATH</code> command-line option while starting the Nginx server.
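For example (the directory names here are hypothetical):

<geshi lang="nginx">
http {
    # search $prefix/clib/ first, then fall back to the default Lua C search paths
    lua_package_cpath "$prefix/clib/?.so;;";
    ...
}
</geshi>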

== init_by_lua ==

'''syntax:''' ''init_by_lua <lua-script-str>''

'''context:''' ''http''

'''phase:''' ''loading-config''

Runs the Lua code specified by the argument <code><lua-script-str></code> on the global Lua VM level when the Nginx master process (if any) is loading the Nginx config file.

When Nginx receives the <code>HUP</code> signal and starts reloading the config file, the Lua VM will also be re-created and <code>init_by_lua</code> will run again on the new Lua VM.

Usually you can register (true) Lua global variables or pre-load Lua modules at server start-up by means of this hook. Here is an example for pre-loading Lua modules:

<geshi lang="nginx">
init_by_lua 'cjson = require "cjson"';

server {
    location = /api {
        content_by_lua '
            ngx.say(cjson.encode({dog = 5, cat = 6}))
        ';
    }
}
</geshi>

You can also initialize the [[#lua_shared_dict|lua_shared_dict]] shm storage at this phase. Here is an example for this:

<geshi lang="nginx">
lua_shared_dict dogs 1m;

init_by_lua '
    local dogs = ngx.shared.dogs;
    dogs:set("Tom", 56)
';

server {
    location = /api {
        content_by_lua '
            local dogs = ngx.shared.dogs;
            ngx.say(dogs:get("Tom"))
        ';
    }
}
</geshi>

Note, however, that the [[#lua_shared_dict|lua_shared_dict]]'s shm storage will not be cleared through a config reload (via the <code>HUP</code> signal, for example). So if you do ''not'' want to re-initialize the shm storage in your <code>init_by_lua</code> code in this case, then you just need to set a custom flag in the shm storage and always check the flag in your <code>init_by_lua</code> code.

Because the Lua code in this context runs before Nginx forks its worker processes (if any), data or code loaded here will enjoy the [http://en.wikipedia.org/wiki/Copy-on-write Copy-on-write (COW)] feature provided by many operating systems among all the worker processes, thus saving a lot of memory.

Only a small set of the [[#Nginx API for Lua|Nginx API for Lua]] is supported in this context:

* Logging APIs: [[#ngx.log|ngx.log]] and [[#print|print]],

* Shared Dictionary API: [[#ngx.shared.DICT|ngx.shared.DICT]].

More Nginx APIs for Lua may be supported in this context upon future user requests.

Basically you can safely use Lua libraries that do blocking I/O in this very context because blocking the master process during server start-up is completely okay. Even the Nginx core does blocking I/O (at least on resolving upstream's host names) at the configure-loading phase.

You should be very careful about potential security vulnerabilities in your Lua code registered in this context because the Nginx master process is often run under the <code>root</code> account.

This directive was first introduced in the <code>v0.5.5</code> release.

== init_by_lua_file ==

'''syntax:''' ''init_by_lua_file <path-to-lua-script-file>''

'''context:''' ''http''

'''phase:''' ''loading-config''

Equivalent to [[#init_by_lua|init_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code or [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

This directive was first introduced in the <code>v0.5.5</code> release.

== set_by_lua ==

'''syntax:''' ''set_by_lua $res <lua-script-str> [$arg1 $arg2 ...]''

'''context:''' ''server, server if, location, location if''

'''phase:''' ''rewrite''

Executes code specified in <code><lua-script-str></code> with optional input arguments <code>$arg1 $arg2 ...</code>, and returns string output to <code>$res</code>. The code in <code><lua-script-str></code> can make [[#Nginx API for Lua|API calls]] and can retrieve input arguments from the <code>ngx.arg</code> table (index starts from <code>1</code> and increases sequentially).

This directive is designed to execute short, fast running code blocks as the Nginx event loop is blocked during code execution. Time consuming code sequences should therefore be avoided.

Note that the following API functions are currently disabled within this context:

* Output API functions (e.g., [[#ngx.say|ngx.say]] and [[#ngx.send_headers|ngx.send_headers]]),
* Control API functions (e.g., [[#ngx.exit|ngx.exit]]),
* Subrequest API functions (e.g., [[#ngx.location.capture|ngx.location.capture]] and [[#ngx.location.capture_multi|ngx.location.capture_multi]]),
* Cosocket API functions (e.g., [[#ngx.socket.tcp|ngx.socket.tcp]] and [[#ngx.req.socket|ngx.req.socket]]).

In addition, note that this directive can only write out a value to a single Nginx variable at

a time. However, a workaround is possible using the [[#ngx.var.VARIABLE|ngx.var.VARIABLE]] interface.

<geshi lang="nginx">
location /foo {
    set $diff ''; # we have to predefine the $diff variable here

    set_by_lua $sum '
        local a = 32
        local b = 56

        ngx.var.diff = a - b;  -- write to $diff directly
        return a + b;          -- return the $sum value normally
    ';

    echo "sum = $sum, diff = $diff";
}
</geshi>

This directive can be freely mixed with all directives of the [[HttpRewriteModule]], [[HttpSetMiscModule]], and [[HttpArrayVarModule]] modules. All of these directives will run in the same order as they appear in the config file.

<geshi lang="nginx">
set $foo 32;
set_by_lua $bar 'tonumber(ngx.var.foo) + 1';
set $baz "bar: $bar"; # $baz == "bar: 33"
</geshi>

As from the <code>v0.5.0rc29</code> release, Nginx variable interpolation is disabled in the <code><lua-script-str></code> argument of this directive and therefore, the dollar sign character (<code>$</code>) can be used directly.

== set_by_lua_file ==

'''syntax:''' ''set_by_lua_file $res <path-to-lua-script-file> [$arg1 $arg2 ...]''

'''context:''' ''server, server if, location, location if''

'''phase:''' ''rewrite''

Equivalent to [[#set_by_lua|set_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or, as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

Nginx variable interpolation is supported in the <code><path-to-lua-script-file></code> argument string of this directive. But special care must be taken for injection attacks.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

When the Lua code cache is turned on (by default), the user code is loaded once at the first request and cached, and the Nginx config must be reloaded each time the Lua source file is modified. The Lua code cache can be temporarily disabled during development by switching [[#lua_code_cache|lua_code_cache]] <code>off</code> in <code>nginx.conf</code> to avoid reloading Nginx.

== content_by_lua ==

'''syntax:''' ''content_by_lua <lua-script-str>''

'''context:''' ''location, location if''

'''phase:''' ''content''

Acts as a "content handler" and executes the Lua code string specified in <code><lua-script-str></code> for every request.

The Lua code may make [[#Nginx API for Lua|API calls]] and is executed as a new spawned coroutine in an independent global environment (i.e. a sandbox).

Do not use this directive and other content handler directives in the same location. For example, this directive and the [[HttpProxyModule#proxy_pass|proxy_pass]] directive should not be used in the same location.

== content_by_lua_file ==

'''syntax:''' ''content_by_lua_file <path-to-lua-script-file>''

'''context:''' ''location, location if''

'''phase:''' ''content''

Equivalent to [[#content_by_lua|content_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or, as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

Nginx variables can be used in the <code><path-to-lua-script-file></code> string to provide flexibility. This however carries some risks and is not ordinarily recommended.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

When the Lua code cache is turned on (by default), the user code is loaded once at the first request and cached, and the Nginx config must be reloaded each time the Lua source file is modified. The Lua code cache can be temporarily disabled during development by switching [[#lua_code_cache|lua_code_cache]] <code>off</code> in <code>nginx.conf</code> to avoid reloading Nginx.
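A minimal usage sketch (the file name here is made up); the relative path is resolved against the server prefix:

<geshi lang="nginx">
location = /hello {
    default_type 'text/plain';
    # loads $prefix/conf/hello.lua, e.g. a file containing: ngx.say("hello")
    content_by_lua_file conf/hello.lua;
}
</geshi>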

== rewrite_by_lua ==

'''syntax:''' ''rewrite_by_lua <lua-script-str>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''rewrite tail''

Acts as a rewrite phase handler and executes the Lua code string specified in <code><lua-script-str></code> for every request. The Lua code may make [[#Nginx API for Lua|API calls]] and is executed as a new spawned coroutine in an independent global environment (i.e. a sandbox).

Note that Nginx variables assigned by this handler cannot be tested by the [[HttpRewriteModule]]'s <code>if</code> directive, because <code>if</code> runs ''before'' [[#rewrite_by_lua|rewrite_by_lua]] even if it is placed after [[#rewrite_by_lua|rewrite_by_lua]] in the config.

The right way of doing this is as follows:

<geshi lang="nginx">
location /foo {
    set $a 12; # create and initialize $a
    set $b ''; # create and initialize $b

    rewrite_by_lua '
        ngx.var.b = tonumber(ngx.var.a) + 1
        if tonumber(ngx.var.b) == 13 then
            return ngx.redirect("/bar");
        end
    ';

    echo "res = $b";
}
</geshi>

Note that the [http://www.grid.net.ru/nginx/eval.en.html ngx_eval] module can be approximated by using [[#rewrite_by_lua|rewrite_by_lua]]. For example,

<geshi lang="nginx">
location / {
    eval $res {
        proxy_pass http://foo.com/check-spam;
    }

    if ($res = 'spam') {
        rewrite ^ /terms-of-use.html redirect;
    }

    fastcgi_pass ...;
}
</geshi>

can be implemented in ngx_lua as:

<geshi lang="nginx">
location = /check-spam {
    internal;
    proxy_pass http://foo.com/check-spam;
}

location / {
    rewrite_by_lua '
        local res = ngx.location.capture("/check-spam")
        if res.body == "spam" then
            ngx.redirect("/terms-of-use.html")
        end
    ';

    fastcgi_pass ...;
}
</geshi>

Just as any other rewrite phase handlers, [[#rewrite_by_lua|rewrite_by_lua]] also runs in subrequests.

Note that when calling <code>ngx.exit(ngx.OK)</code> within a [[#rewrite_by_lua|rewrite_by_lua]] handler, the nginx request processing control flow will still continue to the content handler. To terminate the current request from within a [[#rewrite_by_lua|rewrite_by_lua]] handler, call [[#ngx.exit|ngx.exit]] with status >= 200 (<code>ngx.HTTP_OK</code>) and status < 300 (<code>ngx.HTTP_SPECIAL_RESPONSE</code>) for successful quits, or <code>ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)</code> (or its friends) for failures.
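A minimal sketch of this convention (the location name is made up):

<geshi lang="nginx">
location = /quit {
    rewrite_by_lua '
        -- terminate the whole request right away with a 403 response;
        -- ngx.exit(ngx.OK) here would continue to the content handler instead
        ngx.exit(ngx.HTTP_FORBIDDEN)
    ';
    echo "never reached";
}
</geshi>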

If the [[HttpRewriteModule]]'s [[HttpRewriteModule#rewrite|rewrite]] directive is used to change the URI and initiate location re-lookups (internal redirections), then any [[#rewrite_by_lua|rewrite_by_lua]] or [[#rewrite_by_lua_file|rewrite_by_lua_file]] code sequences within the current location will not be executed. For example,

<geshi lang="nginx">
location /foo {
    rewrite ^ /bar;
    rewrite_by_lua 'ngx.exit(503)';
}

location /bar {
    ...
}
</geshi>

Here the Lua code <code>ngx.exit(503)</code> will never run. This will be the case if <code>rewrite ^ /bar last</code> is used as this will similarly initiate an internal redirection. If the <code>break</code> modifier is used instead, there will be no internal redirection and the <code>rewrite_by_lua</code> code will be executed.

The <code>rewrite_by_lua</code> code will always run at the end of the <code>rewrite</code> request-processing phase unless [[#rewrite_by_lua_no_postpone|rewrite_by_lua_no_postpone]] is turned on.

== rewrite_by_lua_file ==

'''syntax:''' ''rewrite_by_lua_file <path-to-lua-script-file>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''rewrite tail''

Equivalent to [[#rewrite_by_lua|rewrite_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or, as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

Nginx variables can be used in the <code><path-to-lua-script-file></code> string to provide flexibility. This however carries some risks and is not ordinarily recommended.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

When the Lua code cache is turned on (by default), the user code is loaded once at the first request and cached and the Nginx config must be reloaded each time the Lua source file is modified. The Lua code cache can be temporarily disabled during development by switching [[#lua_code_cache|lua_code_cache]] <code>off</code> in <code>nginx.conf</code> to avoid reloading Nginx.

The <code>rewrite_by_lua_file</code> code will always run at the end of the <code>rewrite</code> request-processing phase unless [[#rewrite_by_lua_no_postpone|rewrite_by_lua_no_postpone]] is turned on.

== access_by_lua ==

'''syntax:''' ''access_by_lua <lua-script-str>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''access tail''

Acts as an access phase handler and executes Lua code string specified in <code><lua-script-str></code> for every request.

The Lua code may make [[#Nginx API for Lua|API calls]] and is executed as a new spawned coroutine in an independent global environment (i.e. a sandbox).

Note that this handler always runs ''after'' the standard [[HttpAccessModule]]. So the following will work as expected:

<geshi lang="nginx">
location / {
    deny  192.168.1.1;
    allow 192.168.1.0/24;
    allow 10.1.1.0/16;
    deny  all;

    access_by_lua '
        local res = ngx.location.capture("/mysql", { ... })
        ...
    ';

    # proxy_pass/fastcgi_pass/...
}
</geshi>

That is, if a client IP address is in the blacklist, it will be denied before the MySQL query for more complex authentication is executed by [[#access_by_lua|access_by_lua]].

Note that the [http://mdounin.ru/hg/ngx_http_auth_request_module/ ngx_auth_request] module can be approximated by using [[#access_by_lua|access_by_lua]]:

<geshi lang="nginx">
location / {
    auth_request /auth;

    # proxy_pass/fastcgi_pass/postgres_pass/...
}
</geshi>

can be implemented in ngx_lua as:

<geshi lang="nginx">
location / {
    access_by_lua '
        local res = ngx.location.capture("/auth")

        if res.status == ngx.HTTP_OK then
            return
        end

        if res.status == ngx.HTTP_FORBIDDEN then
            ngx.exit(res.status)
        end

        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    ';

    # proxy_pass/fastcgi_pass/postgres_pass/...
}
</geshi>

As with other access phase handlers, [[#access_by_lua|access_by_lua]] will ''not'' run in subrequests.

Note that when calling <code>ngx.exit(ngx.OK)</code> within an [[#access_by_lua|access_by_lua]] handler, the nginx request processing control flow will still continue to the content handler. To terminate the current request from within an [[#access_by_lua|access_by_lua]] handler, call [[#ngx.exit|ngx.exit]] with status >= 200 (<code>ngx.HTTP_OK</code>) and status < 300 (<code>ngx.HTTP_SPECIAL_RESPONSE</code>) for successful quits, or <code>ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)</code> (or its friends) for failures.

== access_by_lua_file ==

'''syntax:''' ''access_by_lua_file <path-to-lua-script-file>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''access tail''

Equivalent to [[#access_by_lua|access_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or, as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

Nginx variables can be used in the <code><path-to-lua-script-file></code> string to provide flexibility. This however carries some risks and is not ordinarily recommended.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

When the Lua code cache is turned on (by default), the user code is loaded once at the first request and cached, and the Nginx config must be reloaded each time the Lua source file is modified.

The Lua code cache can be temporarily disabled during development by switching [[#lua_code_cache|lua_code_cache]] <code>off</code> in <code>nginx.conf</code> to avoid repeatedly reloading Nginx.
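A minimal usage sketch (the file name here is made up); the relative path is resolved against the server prefix:

<geshi lang="nginx">
location / {
    # $prefix/conf/access_check.lua holds the access-phase Lua code
    access_by_lua_file conf/access_check.lua;

    # proxy_pass/fastcgi_pass/...
}
</geshi>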

== header_filter_by_lua ==

'''syntax:''' ''header_filter_by_lua <lua-script-str>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''output-header-filter''

Uses Lua code specified in <code><lua-script-str></code> to define an output header filter.

Here is an example of overriding a response header (or adding one if absent) in our Lua header filter:

<geshi lang="nginx">
location / {
    proxy_pass http://mybackend;
    header_filter_by_lua 'ngx.header.Foo = "blah"';
}
</geshi>

This directive was first introduced in the <code>v0.2.1rc20</code> release.

== header_filter_by_lua_file ==

'''syntax:''' ''header_filter_by_lua_file <path-to-lua-script-file>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''output-header-filter''

Equivalent to [[#header_filter_by_lua|header_filter_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

This directive was first introduced in the <code>v0.2.1rc20</code> release.

== body_filter_by_lua ==

'''syntax:''' ''body_filter_by_lua <lua-script-str>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''output-body-filter''

Uses Lua code specified in <code><lua-script-str></code> to define an output body filter.

The input data chunk is passed via [[#ngx.arg|ngx.arg]][1] (as a Lua string value) and the "eof" flag indicating the end of the response body data stream is passed via [[#ngx.arg|ngx.arg]][2] (as a Lua boolean value).

Behind the scenes, the "eof" flag is just the <code>last_buf</code> flag of the nginx chain link buffers. In the context of an Nginx subrequest, there is no "eof" flag at all, due to an underlying limitation in the Nginx core.

The output data stream can be aborted immediately by running the following Lua statement:

<geshi lang="lua">

return ngx.ERROR

</geshi>

This will truncate the response body and usually result in incomplete and also invalid responses.

The Lua code can pass its own modified version of the input data chunk to the downstream Nginx output body filters by overriding [[#ngx.arg|ngx.arg]][1] with a Lua string or a Lua table of strings. For example, to convert all the letters in the response body to uppercase, we can just write:

<geshi lang="nginx">
location / {
    proxy_pass http://mybackend;
    body_filter_by_lua 'ngx.arg[1] = string.upper(ngx.arg[1])';
}
</geshi>

When <code>ngx.arg[1]</code> is set to <code>nil</code> or an empty Lua string, no data chunk will be passed to the downstream Nginx output filters at all.

Likewise, a new "eof" flag can also be specified by setting a boolean value to [[#ngx.arg|ngx.arg]][2]. For example,

<geshi lang="nginx">
location /t {
    echo hello world;
    echo hiya globe;

    body_filter_by_lua '
        local chunk = ngx.arg[1]
        if string.match(chunk, "hello") then
            ngx.arg[2] = true  -- new eof
            return
        end

        -- just throw away any remaining chunk data
        ngx.arg[1] = nil
    ';
}
</geshi>

Then <code>GET /t</code> will just return the output

<geshi lang="text">

hello world

</geshi>

That is, when the body filter sees a chunk containing the word "hello", then it will set the "eof" flag to true immediately, resulting in truncated but still valid responses.

If the Lua code may change the length of the response body, it is required to always clear out the <code>Content-Length</code> response header (if any) in a header filter to enforce streaming output, as in

<geshi lang="nginx">
location /foo {
    # fastcgi_pass/proxy_pass/...

    header_filter_by_lua 'ngx.header.content_length = nil';
    body_filter_by_lua 'ngx.arg[1] = {string.len(ngx.arg[1]), "\\n"}';
}
</geshi>

Note that the following API functions are currently disabled within this context:

* Output API functions (e.g., [[#ngx.say|ngx.say]] and [[#ngx.send_headers|ngx.send_headers]]),
* Control API functions (e.g., [[#ngx.exit|ngx.exit]]),
* Subrequest API functions (e.g., [[#ngx.location.capture|ngx.location.capture]] and [[#ngx.location.capture_multi|ngx.location.capture_multi]]),
* Cosocket API functions (e.g., [[#ngx.socket.tcp|ngx.socket.tcp]] and [[#ngx.req.socket|ngx.req.socket]]).

Nginx output filters may be called multiple times for a single request because the response body may be delivered in chunks. Thus, the Lua code specified in this directive may also run multiple times in the lifetime of a single HTTP request.

This directive was first introduced in the <code>v0.5.0rc32</code> release.

== body_filter_by_lua_file ==

'''syntax:''' ''body_filter_by_lua_file <path-to-lua-script-file>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''output-body-filter''

Equivalent to [[#body_filter_by_lua|body_filter_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or, as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

This directive was first introduced in the <code>v0.5.0rc32</code> release.

== log_by_lua ==

'''syntax:''' ''log_by_lua <lua-script-str>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''log''

Runs the Lua source code inlined as <code><lua-script-str></code> at the <code>log</code> request processing phase. This does not replace the current access logs, but runs after them.

Note that the following API functions are currently disabled within this context:

* Output API functions (e.g., [[#ngx.say|ngx.say]] and [[#ngx.send_headers|ngx.send_headers]]),
* Control API functions (e.g., [[#ngx.exit|ngx.exit]]),
* Subrequest API functions (e.g., [[#ngx.location.capture|ngx.location.capture]] and [[#ngx.location.capture_multi|ngx.location.capture_multi]]),
* Cosocket API functions (e.g., [[#ngx.socket.tcp|ngx.socket.tcp]] and [[#ngx.req.socket|ngx.req.socket]]).

Here is an example of gathering average data for [[HttpUpstreamModule#$upstream_response_time|$upstream_response_time]]:

<geshi lang="nginx">
lua_shared_dict log_dict 5M;

server {
    location / {
        proxy_pass http://mybackend;

        log_by_lua '
            local log_dict = ngx.shared.log_dict
            local upstream_time = tonumber(ngx.var.upstream_response_time)

            local sum = log_dict:get("upstream_time-sum") or 0
            sum = sum + upstream_time
            log_dict:set("upstream_time-sum", sum)

            local newval, err = log_dict:incr("upstream_time-nb", 1)
            if not newval and err == "not found" then
                log_dict:add("upstream_time-nb", 0)
                log_dict:incr("upstream_time-nb", 1)
            end
        ';
    }

    location = /status {
        content_by_lua '
            local log_dict = ngx.shared.log_dict
            local sum = log_dict:get("upstream_time-sum")
            local nb = log_dict:get("upstream_time-nb")

            if nb and sum then
                ngx.say("average upstream response time: ", sum / nb,
                        " (", nb, " reqs)")
            else
                ngx.say("no data yet")
            end
        ';
    }
}
</geshi>

This directive was first introduced in the <code>v0.5.0rc31</code> release.

== log_by_lua_file ==

'''syntax:''' ''log_by_lua_file <path-to-lua-script-file>''

'''context:''' ''http, server, location, location if''

'''phase:''' ''log''

Equivalent to [[#log_by_lua|log_by_lua]], except that the file specified by <code><path-to-lua-script-file></code> contains the Lua code, or, as from the <code>v0.5.0rc32</code> release, the [[#Lua/LuaJIT bytecode support|Lua/LuaJIT bytecode]] to be executed.

When a relative path like <code>foo/bar.lua</code> is given, it will be turned into the absolute path relative to the <code>server prefix</code> path determined by the <code>-p PATH</code> command-line option while starting the Nginx server.

This directive was first introduced in the <code>v0.5.0rc31</code> release.

== lua_need_request_body ==

'''syntax:''' ''lua_need_request_body <on|off>''

'''default:''' ''off''

'''context:''' ''http, server, location''

'''phase:''' ''depends on usage''

Determines whether to force the request body data to be read before running rewrite/access/content_by_lua* or not. The Nginx core does not read the client request body by default and if request body data is required, then this directive should be turned <code>on</code> or the [[#ngx.req.read_body|ngx.req.read_body]] function should be called within the Lua code.

To read the request body data within the [[HttpCoreModule#$request_body|$request_body]] variable, [[HttpCoreModule#client_body_buffer_size|client_body_buffer_size]] must have the same value as [[HttpCoreModule#client_max_body_size|client_max_body_size]]. This is because when the content length exceeds [[HttpCoreModule#client_body_buffer_size|client_body_buffer_size]] but is less than [[HttpCoreModule#client_max_body_size|client_max_body_size]], Nginx will buffer the data in a temporary file on disk, which will lead to an empty value in the [[HttpCoreModule#$request_body|$request_body]] variable.

If the current location includes [[#rewrite_by_lua|rewrite_by_lua]] or [[#rewrite_by_lua_file|rewrite_by_lua_file]] directives,

then the request body will be read just before the [[#rewrite_by_lua|rewrite_by_lua]] or [[#rewrite_by_lua_file|rewrite_by_lua_file]] code is run (and also at the

<code>rewrite</code> phase). Similarly, if only [[#content_by_lua|content_by_lua]] is specified,

the request body will not be read until the content handler's Lua code is

about to run (i.e., the request body will be read during the content phase).

However, it is recommended to use the [[#ngx.req.read_body|ngx.req.read_body]] and [[#ngx.req.discard_body|ngx.req.discard_body]] functions instead for finer control over the request body reading process.

This also applies to [[#access_by_lua|access_by_lua]] and [[#access_by_lua_file|access_by_lua_file]].
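The finer-grained alternative mentioned above can be sketched like this (reading the body on demand inside the content handler instead of turning this directive on):

<geshi lang="lua">
-- read the request body on demand instead of lua_need_request_body on
ngx.req.read_body()

-- nil is returned when the body is empty or was buffered to a disk file
local data = ngx.req.get_body_data()
if data then
    ngx.print(data)
end
</geshi>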

== lua_shared_dict ==

'''syntax:''' ''lua_shared_dict <name> <size>''

'''default:''' ''no''

'''context:''' ''http''

'''phase:''' ''depends on usage''

Declares a shared memory zone, <code><name></code>, to serve as storage for the shm based Lua dictionary <code>ngx.shared.<name></code>.

The <code><size></code> argument accepts size units such as <code>k</code> and <code>m</code>:

<geshi lang="nginx">

http {

lua_shared_dict dogs 10m;

...

}

</geshi>

See [[#ngx.shared.DICT|ngx.shared.DICT]] for details.
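As a brief sketch of the resulting Lua-side API (see [[#ngx.shared.DICT|ngx.shared.DICT]] for the authoritative method list), the <code>dogs</code> zone declared above can be accessed like this:

<geshi lang="lua">
-- the "dogs" zone must be declared via lua_shared_dict in nginx.conf
local dogs = ngx.shared.dogs

-- store a value with a 60-second expiration time
local ok, err = dogs:set("Jim", 8, 60)
if not ok then
    ngx.log(ngx.ERR, "failed to set key: ", err)
end

-- read it back (returns nil when the key is missing or expired)
local count = dogs:get("Jim")
</geshi>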

This directive was first introduced in the <code>v0.3.1rc22</code> release.

== lua_socket_connect_timeout ==

'''syntax:''' ''lua_socket_connect_timeout <time>''

'''default:''' ''lua_socket_connect_timeout 60s''

'''context:''' ''http, server, location''

This directive controls the default timeout value used in TCP/unix-domain socket object's [[#tcpsock:connect|connect]] method and can be overridden by the [[#tcpsock:settimeout|settimeout]] method.

The <code><time></code> argument can be an integer, with an optional time unit, like <code>s</code> (second), <code>ms</code> (millisecond), <code>m</code> (minute). The default time unit is <code>s</code>, i.e., "second". The default setting is <code>60s</code>.
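For instance, the directive-level default can be overridden per socket object via [[#tcpsock:settimeout|settimeout]] (the host, port, and 500 ms value here are illustrative):

<geshi lang="lua">
local sock = ngx.socket.tcp()

-- override lua_socket_connect_timeout for this socket: 500 ms
sock:settimeout(500)

local ok, err = sock:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("failed to connect: ", err)
    return
end
</geshi>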

This directive was first introduced in the <code>v0.5.0rc1</code> release.

== lua_socket_send_timeout ==

'''syntax:''' ''lua_socket_send_timeout <time>''

'''default:''' ''lua_socket_send_timeout 60s''

'''context:''' ''http, server, location''

Controls the default timeout value used in TCP/unix-domain socket object's [[#tcpsock:send|send]] method and can be overridden by the [[#tcpsock:settimeout|settimeout]] method.

The <code><time></code> argument can be an integer, with an optional time unit, like <code>s</code> (second), <code>ms</code> (millisecond), <code>m</code> (minute). The default time unit is <code>s</code>, i.e., "second". The default setting is <code>60s</code>.

This directive was first introduced in the <code>v0.5.0rc1</code> release.

== lua_socket_read_timeout ==

'''syntax:''' ''lua_socket_read_timeout <time>''

'''default:''' ''lua_socket_read_timeout 60s''

'''context:''' ''http, server, location''

This directive controls the default timeout value used in TCP/unix-domain socket object's [[#tcpsock:receive|receive]] method and iterator functions returned by the [[#tcpsock:receiveuntil|receiveuntil]] method. This setting can be overridden by the [[#tcpsock:settimeout|settimeout]] method.

The <code><time></code> argument can be an integer, with an optional time unit, like <code>s</code> (second), <code>ms</code> (millisecond), <code>m</code> (minute). The default time unit is <code>s</code>, i.e., "second". The default setting is <code>60s</code>.

This directive was first introduced in the <code>v0.5.0rc1</code> release.

== lua_socket_buffer_size ==

'''syntax:''' ''lua_socket_buffer_size <size>''

'''default:''' ''lua_socket_buffer_size 4k/8k''

'''context:''' ''http, server, location''

Specifies the buffer size used by cosocket reading operations.

This buffer does not have to be big enough to hold everything at the same time because cosocket supports 100% non-buffered reading and parsing. So even a <code>1</code> byte buffer size should still work everywhere, but the performance could be terrible.

This directive was first introduced in the <code>v0.5.0rc1</code> release.

== lua_socket_pool_size ==

'''syntax:''' ''lua_socket_pool_size <size>''

'''default:''' ''lua_socket_pool_size 30''

'''context:''' ''http, server, location''

Specifies the size limit (in terms of connection count) for every cosocket connection pool associated with every remote server (i.e., identified by either the host-port pair or the unix domain socket file path).

Defaults to 30 connections for every pool.

When the connection pool exceeds the available size limit, the least recently used (idle) connection already in the pool will be closed to make room for the current connection.

Note that the cosocket connection pool is per Nginx worker process rather than per Nginx server instance, so the size limit specified here also applies to every single Nginx worker process.

This directive was first introduced in the <code>v0.5.0rc1</code> release.

== lua_socket_keepalive_timeout ==

'''syntax:''' ''lua_socket_keepalive_timeout <time>''

'''default:''' ''lua_socket_keepalive_timeout 60s''

'''context:''' ''http, server, location''

This directive controls the default maximal idle time of the connections in the cosocket built-in connection pool. When this timeout is reached, idle connections will be closed and removed from the pool. This setting can be overridden by cosocket objects' [[#tcpsock:setkeepalive|setkeepalive]] method.

The <code><time></code> argument can be an integer, with an optional time unit, like <code>s</code> (second), <code>ms</code> (millisecond), <code>m</code> (minute). The default time unit is <code>s</code>, i.e., "second". The default setting is <code>60s</code>.
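A per-connection idle timeout can also be passed directly to [[#tcpsock:setkeepalive|setkeepalive]], overriding this directive (the 10-second value and pool size below are illustrative):

<geshi lang="lua">
-- sock is a connected cosocket object from an earlier ngx.socket.tcp() call;
-- put the connection into the pool with a 10 s idle timeout
-- and a pool size of 100 connections
local ok, err = sock:setkeepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
</geshi>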

This directive was first introduced in the <code>v0.5.0rc1</code> release.

== lua_socket_log_errors ==

'''syntax:''' ''lua_socket_log_errors on|off''

'''default:''' ''lua_socket_log_errors on''

'''context:''' ''http, server, location''

This directive can be used to toggle error logging when a failure occurs for the TCP or UDP cosockets. If you are already doing proper error handling and logging in your Lua code, then it is recommended to turn this directive off to prevent data flushing in your nginx error log files (which is usually rather expensive).

This directive was first introduced in the <code>v0.5.13</code> release.

== lua_http10_buffering ==

'''syntax:''' ''lua_http10_buffering on|off''

'''default:''' ''lua_http10_buffering on''

'''context:''' ''http, server, location, location-if''

Enables or disables response buffering for HTTP 1.0 (or older) requests. This buffering mechanism is mainly used for HTTP 1.0 keep-alive, which relies on a proper <code>Content-Length</code> response header.

If the Lua code explicitly sets a <code>Content-Length</code> response header before sending the headers (either explicitly via [[#ngx.send_headers|ngx.send_headers]] or implicitly via the first [[#ngx.say|ngx.say]] or [[#ngx.print|ngx.print]] call), then the HTTP 1.0 response buffering will be disabled even when this directive is turned on.

To output very large response data in a streaming fashion (via the [[#ngx.flush|ngx.flush]] call, for example), this directive MUST be turned off to minimize memory usage.

This directive is turned <code>on</code> by default.

This directive was first introduced in the <code>v0.5.0rc19</code> release.

== rewrite_by_lua_no_postpone ==

'''syntax:''' ''rewrite_by_lua_no_postpone on|off''

'''default:''' ''rewrite_by_lua_no_postpone off''

'''context:''' ''http, server, location, location-if''

Controls whether or not to disable postponing [[#rewrite_by_lua|rewrite_by_lua]] and [[#rewrite_by_lua_file|rewrite_by_lua_file]] directives to run at the end of the <code>rewrite</code> request-processing phase. By default, this directive is turned off and the Lua code is postponed to run at the end of the <code>rewrite</code> phase.

This directive was first introduced in the <code>v0.5.0rc29</code> release.

== lua_transform_underscores_in_response_headers ==

'''syntax:''' ''lua_transform_underscores_in_response_headers on|off''

'''default:''' ''lua_transform_underscores_in_response_headers on''

'''context:''' ''http, server, location, location-if''

Controls whether to transform underscores (<code>_</code>) in the response header names specified in the [[#ngx.header.HEADER|ngx.header.HEADER]] API to hyphens (<code>-</code>).

This directive was first introduced in the <code>v0.5.0rc32</code> release.

= Nginx API for Lua =

== Introduction ==

The various <code>*_by_lua</code> and <code>*_by_lua_file</code> configuration directives serve as gateways to the Lua API within the <code>nginx.conf</code> file. The Nginx Lua API described below can only be called within the user Lua code run in the context of these configuration directives.

The API is exposed to Lua in the form of two standard packages <code>ngx</code> and <code>ndk</code>. These packages are in the default global scope within ngx_lua and are always available within ngx_lua directives.

The packages can be introduced into external Lua modules by using the [http://www.lua.org/manual/5.1/manual.html#pdf-package.seeall package.seeall] option:

<geshi lang="lua">

module("my_module", package.seeall)

function say(a) ngx.say(a) end

</geshi>

Alternatively, they can be imported to external Lua modules by using file scoped local Lua variables:

<geshi lang="lua">

local ngx = ngx

module("my_module")

function say(a) ngx.say(a) end

</geshi>

It is also possible to directly require the packages in external Lua modules:

<geshi lang="lua">

local ngx = require "ngx"

local ndk = require "ndk"

</geshi>

The ability to require these packages was introduced in the <code>v0.2.1rc19</code> release.

Network I/O operations in user code should only be done through the Nginx Lua API calls as the Nginx event loop may be blocked and performance drop off dramatically otherwise. Disk operations with relatively small amount of data can be done using the standard Lua <code>io</code> library but huge file reading and writing should be avoided wherever possible as they may block the Nginx process significantly. Delegating all network and disk I/O operations to Nginx's subrequests (via the [[#ngx.location.capture|ngx.location.capture]] method and similar) is strongly recommended for maximum performance.

== ngx.arg ==

'''syntax:''' ''val = ngx.arg[index]''

'''context:''' ''set_by_lua*, body_filter_by_lua*''

When this is used in the context of the [[#set_by_lua|set_by_lua]] or [[#set_by_lua_file|set_by_lua_file]] directives, this table is read-only and holds the input arguments to the config directives:

<geshi lang="lua">

value = ngx.arg[n]

</geshi>

Here is an example:

<geshi lang="nginx">

location /foo {

set $a 32;

set $b 56;

set_by_lua $res

'return tonumber(ngx.arg[1]) + tonumber(ngx.arg[2])'

$a $b;

echo $res;

}

</geshi>

that writes out <code>88</code>, the sum of <code>32</code> and <code>56</code>.

When this table is used in the context of [[#body_filter_by_lua|body_filter_by_lua]] or [[#body_filter_by_lua_file|body_filter_by_lua_file]], the first element holds the input data chunk to the output filter code and the second element holds the boolean flag for the "eof" flag indicating the end of the whole output data stream.

The data chunk and "eof" flag passed to the downstream Nginx output filters can also be overridden by assigning values directly to the corresponding table elements. When setting <code>nil</code> or an empty Lua string value to <code>ngx.arg[1]</code>, no data chunk will be passed to the downstream Nginx output filters at all.
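For example, a minimal output filter that uppercases each body chunk might look like this (the <code>backend</code> upstream name is illustrative, and any content handler producing a response body would do):

<geshi lang="nginx">
location /t {
    proxy_pass http://backend;

    body_filter_by_lua '
        -- ngx.arg[1] is the current chunk; reassigning it
        -- overrides the data passed downstream
        ngx.arg[1] = string.upper(ngx.arg[1])
    ';
}
</geshi>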

== ngx.var.VARIABLE ==

'''syntax:''' ''ngx.var.VAR_NAME''

Read and write Nginx variable values. Note that only already defined Nginx variables can be written to. For example:

<geshi lang="nginx">

location /foo {

set $my_var ''; # this line is required to create $my_var at config time

content_by_lua '

ngx.var.my_var = 123;

...

';

}

</geshi>

That is, nginx variables cannot be created on-the-fly.

Some special Nginx variables like <code>$args</code> and <code>$limit_rate</code> can be assigned a value, while others, like <code>$arg_PARAMETER</code>, cannot.

Nginx regex group capturing variables <code>$1</code>, <code>$2</code>, <code>$3</code>, and so on can also be read by this interface, by writing <code>ngx.var[1]</code>, <code>ngx.var[2]</code>, <code>ngx.var[3]</code>, and so on.

Setting <code>ngx.var.Foo</code> to a <code>nil</code> value will unset the <code>$Foo</code> Nginx variable.

<geshi lang="lua">

ngx.var.args = nil

</geshi>

'''WARNING''' When reading from an Nginx variable, Nginx will allocate memory in the per-request memory pool which is freed only at request termination. So when you need to read from an Nginx variable repeatedly in your Lua code, cache the Nginx variable value to your own Lua variable, for example,

<geshi lang="lua">

local val = ngx.var.some_var

-- use the val repeatedly later

</geshi>

to prevent (temporary) memory leaking within the current request's lifetime.

== Core constants ==

The core constants <code>ngx.OK</code>, <code>ngx.ERROR</code>, <code>ngx.AGAIN</code>, <code>ngx.DONE</code>, and <code>ngx.DECLINED</code> map to the corresponding Nginx core return codes.

Note that only three of these constants are utilized by the [[#Nginx API for Lua|Nginx API for Lua]] (i.e., [[#ngx.exit|ngx.exit]] accepts <code>NGX_OK</code>, <code>NGX_ERROR</code>, and <code>NGX_DECLINED</code> as input).

<geshi lang="lua">

ngx.null

</geshi>

The <code>ngx.null</code> constant is a <code>NULL</code> light userdata usually used to represent nil values in Lua tables etc and is similar to the [http://www.kyne.com.au/~mark/software/lua-cjson.php lua-cjson] library's <code>cjson.null</code> constant. This constant was first introduced in the <code>v0.5.0rc5</code> release.

The <code>ngx.DECLINED</code> constant was first introduced in the <code>v0.5.0rc19</code> release.

== print ==

'''syntax:''' ''print(...)''

Writes argument values into the Nginx <code>error.log</code> file with the <code>ngx.NOTICE</code> log level.

It is equivalent to

<geshi lang="lua">

ngx.log(ngx.NOTICE, ...)

</geshi>

Lua <code>nil</code> arguments are accepted and result in literal <code>"nil"</code> strings while Lua booleans result in literal <code>"true"</code> or <code>"false"</code> strings. And the <code>ngx.null</code> constant will yield the <code>"null"</code> string output.

There is a hard coded <code>2048</code> byte limitation on error message lengths in the Nginx core. This limit includes trailing newlines and leading time stamps. If the message size exceeds this limit, Nginx will truncate the message text accordingly. This limit can be manually modified by editing the <code>NGX_MAX_ERROR_STR</code> macro definition in the <code>src/core/ngx_log.h</code> file in the Nginx source tree.
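A small sketch of typical usage (the rendered log line format depends on the error log configuration):

<geshi lang="lua">
-- writes a NOTICE-level line into error.log
print("processing request for ", ngx.var.uri)

-- nil, booleans, and ngx.null are rendered as literal strings,
-- per the rendering rules described above
print("values: ", nil, " ", true, " ", ngx.null)
</geshi>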

== ngx.ctx ==

This table can be used to store per-request Lua context data and has a life time identical to the current request (as with the Nginx variables).

Consider the following example,

<geshi lang="nginx">

location /test {

rewrite_by_lua '

ngx.say("foo = ", ngx.ctx.foo)

ngx.ctx.foo = 76

';

access_by_lua '

ngx.ctx.foo = ngx.ctx.foo + 3

';

content_by_lua '

ngx.say(ngx.ctx.foo)

';

}

</geshi>

Then <code>GET /test</code> will yield the output

<geshi lang="bash">

foo = nil

79

</geshi>

That is, the <code>ngx.ctx.foo</code> entry persists across the rewrite, access, and content phases of a request.

Every request, including subrequests, has its own copy of the table. For example:

<geshi lang="nginx">

location /sub {

content_by_lua '

ngx.say("sub pre: ", ngx.ctx.blah)

ngx.ctx.blah = 32

ngx.say("sub post: ", ngx.ctx.blah)

';

}

location /main {

content_by_lua '

ngx.ctx.blah = 73

ngx.say("main pre: ", ngx.ctx.blah)

local res = ngx.location.capture("/sub")

ngx.print(res.body)

ngx.say("main post: ", ngx.ctx.blah)

';

}

</geshi>

Then <code>GET /main</code> will give the output

<geshi lang="bash">

main pre: 73

sub pre: nil

sub post: 32

main post: 73

</geshi>

Here, modification of the <code>ngx.ctx.blah</code> entry in the subrequest does not affect the one in the parent request. This is because they have two separate versions of <code>ngx.ctx.blah</code>.

Internal redirection will destroy the original request <code>ngx.ctx</code> data (if any) and the new request will have an empty <code>ngx.ctx</code> table. For instance,

<geshi lang="nginx">

location /new {

content_by_lua '

ngx.say(ngx.ctx.foo)

';

}

location /orig {

content_by_lua '

ngx.ctx.foo = "hello"

ngx.exec("/new")

';

}

</geshi>

Then <code>GET /orig</code> will give

<geshi lang="bash">

nil

</geshi>

rather than the original <code>"hello"</code> value.

Arbitrary data values, including Lua closures and nested tables, can be inserted into this "magic" table. It also allows the registration of custom meta methods.

Overriding <code>ngx.ctx</code> with a new Lua table is also supported, for example,

<geshi lang="lua">

ngx.ctx = { foo = 32, bar = 54 }

</geshi>

== ngx.location.capture ==

'''syntax:''' ''res = ngx.location.capture(uri, options?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Issue a synchronous but still non-blocking ''Nginx Subrequest'' using <code>uri</code>.

Nginx's subrequests provide a powerful way to make non-blocking internal requests to other locations configured with disk file serving or ''any'' other Nginx C modules like <code>ngx_proxy</code>, <code>ngx_fastcgi</code>, <code>ngx_memc</code>, <code>ngx_postgres</code>, <code>ngx_drizzle</code>, and even ngx_lua itself.

Also note that subrequests just mimic the HTTP interface but there is ''no'' extra HTTP/TCP traffic ''nor'' IPC involved. Everything works internally, efficiently, on the C level.

* <code>ctx</code>

: specify a Lua table to be the [[#ngx.ctx|ngx.ctx]] table for the subrequest. It can be the current request's [[#ngx.ctx|ngx.ctx]] table, which effectively makes the parent and its subrequest share exactly the same context table. This option was first introduced in the <code>v0.3.1rc25</code> release.

* <code>vars</code>

: takes a Lua table holding the values of the specified Nginx variables to be set in the subrequest. This option was first introduced in the <code>v0.3.1rc31</code> release.

* <code>copy_all_vars</code>

: specify whether to copy over all the Nginx variable values of the current request to the subrequest in question. Modifications of the Nginx variables in the subrequest will not affect the current (parent) request. This option was first introduced in the <code>v0.3.1rc31</code> release.

* <code>share_all_vars</code>

: specify whether to share all the Nginx variables of the subrequest with the current (parent) request. Modifications of the Nginx variables in the subrequest will affect the current (parent) request.

Issuing a POST subrequest, for example, can be done as follows

<geshi lang="lua">

res = ngx.location.capture(

'/foo/bar',

{ method = ngx.HTTP_POST, body = 'hello, world' }

)

</geshi>

See [[#HTTP method constants|HTTP method constants]] for methods other than POST.

The <code>method</code> option is <code>ngx.HTTP_GET</code> by default.

The <code>args</code> option can specify extra URI arguments, for instance,

<geshi lang="lua">

ngx.location.capture('/foo?a=1',

{ args = { b = 3, c = ':' } }

)

</geshi>

is equivalent to

<geshi lang="lua">

ngx.location.capture('/foo?a=1&b=3&c=%3a')

</geshi>

that is, this method will escape argument keys and values according to URI rules and concatenate them together into a complete query string. The format for the Lua table passed as the <code>args</code> argument is identical to the format used in the [[#ngx.encode_args|ngx.encode_args]] method.

The <code>args</code> option can also take plain query strings:

<geshi lang="lua">

ngx.location.capture('/foo?a=1',

{ args = 'b=3&c=%3a' }

)

</geshi>

This is functionally identical to the previous examples.

The <code>share_all_vars</code> option controls whether to share nginx variables among the current request and its subrequests.

If this option is set to <code>true</code>, then the current request and associated subrequests will share the same Nginx variable scope. Hence, changes to Nginx variables made by a subrequest will affect the current request.

Care should be taken in using this option as variable scope sharing can have unexpected side effects. The <code>args</code>, <code>vars</code>, or <code>copy_all_vars</code> options are generally preferable instead.

This option is set to <code>false</code> by default, for example:

<geshi lang="nginx">

location /other {

set $dog "$dog world";

echo "$uri dog: $dog";

}

location /lua {

set $dog 'hello';

content_by_lua '

res = ngx.location.capture("/other",

{ share_all_vars = true });

ngx.print(res.body)

ngx.say(ngx.var.uri, ": ", ngx.var.dog)

';

}

</geshi>

Accessing location <code>/lua</code> gives

<geshi lang="text">

/other dog: hello world

/lua: hello world

</geshi>

The <code>copy_all_vars</code> option provides a copy of the parent request's Nginx variables to subrequests when such subrequests are issued. Changes made to these variables by such subrequests will not affect the parent request or any other subrequests sharing the parent request's variables.

<geshi lang="nginx">

location /other {

set $dog "$dog world";

echo "$uri dog: $dog";

}

location /lua {

set $dog 'hello';

content_by_lua '

res = ngx.location.capture("/other",

{ copy_all_vars = true });

ngx.print(res.body)

ngx.say(ngx.var.uri, ": ", ngx.var.dog)

';

}

</geshi>

Request <code>GET /lua</code> will give the output

<geshi lang="text">

/other dog: hello world

/lua: hello

</geshi>

Note that if both <code>share_all_vars</code> and <code>copy_all_vars</code> are set to true, then <code>share_all_vars</code> takes precedence.

In addition to the two settings above, it is possible to specify values for variables in the subrequest using the <code>vars</code> option. These variables are set after the sharing or copying of variables has been evaluated, and provide a more efficient method of passing specific values to a subrequest than encoding them as URL arguments and unescaping them in the Nginx config file.

<geshi lang="nginx">

location /other {

content_by_lua '

ngx.say("dog = ", ngx.var.dog)

ngx.say("cat = ", ngx.var.cat)

';

}

location /lua {

set $dog '';

set $cat '';

content_by_lua '

res = ngx.location.capture("/other",

{ vars = { dog = "hello", cat = 32 }});

ngx.print(res.body)

';

}

</geshi>

Accessing <code>/lua</code> will yield the output

<geshi lang="text">

dog = hello

cat = 32

</geshi>

The <code>ctx</code> option can be used to specify a custom Lua table to serve as the [[#ngx.ctx|ngx.ctx]] table for the subrequest.

<geshi lang="nginx">

location /sub {

content_by_lua '

ngx.ctx.foo = "bar";

';

}

location /lua {

content_by_lua '

local ctx = {}

res = ngx.location.capture("/sub", { ctx = ctx })

ngx.say(ctx.foo);

ngx.say(ngx.ctx.foo);

';

}

</geshi>

Then request <code>GET /lua</code> gives

<geshi lang="text">

bar

nil

</geshi>

It is also possible to use this <code>ctx</code> option to share the same [[#ngx.ctx|ngx.ctx]] table between the current (parent) request and the subrequest:

<geshi lang="nginx">

location /sub {

content_by_lua '

ngx.ctx.foo = "bar";

';

}

location /lua {

content_by_lua '

res = ngx.location.capture("/sub", { ctx = ngx.ctx })

ngx.say(ngx.ctx.foo);

';

}

</geshi>

Request <code>GET /lua</code> yields the output

<geshi lang="text">

bar

</geshi>

Note that subrequests issued by [[#ngx.location.capture|ngx.location.capture]] inherit all the request headers of the current request by default and that this may have unexpected side effects on the subrequest responses. For example, when using the standard <code>ngx_proxy</code> module to serve subrequests, an "Accept-Encoding: gzip" header in the main request may result in gzipped responses that cannot be handled properly in Lua code. Original request headers should be ignored by setting [[HttpProxyModule#proxy_pass_request_headers|proxy_pass_request_headers]] to <code>off</code> in subrequest locations.

There is a hard-coded upper limit on the number of concurrent subrequests possible for every main request. In older versions of Nginx, the limit was <code>50</code> concurrent subrequests and in more recent versions, Nginx <code>1.1.x</code> onwards, this was increased to <code>200</code> concurrent subrequests. When this limit is exceeded, the following error message is added to the <code>error.log</code> file:

<geshi lang="text">

[error] 13983#0: *1 subrequests cycle while processing "/uri"

</geshi>

The limit can be manually modified if required by editing the definition of the <code>NGX_HTTP_MAX_SUBREQUESTS</code> macro in the <code>nginx/src/http/ngx_http_request.h</code> file in the Nginx source tree.

Please also refer to restrictions on capturing locations configured by [[#Locations_Configured_by_Subrequest_Directives_of_Other_Modules|subrequest directives of other modules]].

== ngx.header.HEADER ==

'''syntax:''' ''ngx.header.HEADER = VALUE''

'''syntax:''' ''value = ngx.header.HEADER''

Set, add to, or clear the current request's <code>HEADER</code> response header that is to be sent.

Underscores (<code>_</code>) in the header names will be replaced by hyphens (<code>-</code>) by default. This transformation can be turned off via the [[#lua_transform_underscores_in_response_headers|lua_transform_underscores_in_response_headers]] directive.

The header names are matched case-insensitively.

<geshi lang="lua">

-- equivalent to ngx.header["Content-Type"] = 'text/plain'

ngx.header.content_type = 'text/plain';

ngx.header["X-My-Header"] = 'blah blah';

</geshi>

Multi-value headers can be set this way:

<geshi lang="lua">

ngx.header['Set-Cookie'] = {'a=32; path=/', 'b=4; path=/'}

</geshi>

will yield

<geshi lang="bash">

Set-Cookie: a=32; path=/

Set-Cookie: b=4; path=/

</geshi>

in the response headers.

Only Lua tables are accepted (only the last element in the table will take effect for standard headers such as <code>Content-Type</code> that only accept a single value).

<geshi lang="lua">

ngx.header.content_type = {'a', 'b'}

</geshi>

is equivalent to

<geshi lang="lua">

ngx.header.content_type = 'b'

</geshi>

Setting a slot to <code>nil</code> effectively removes it from the response headers:

<geshi lang="lua">

ngx.header["X-My-Header"] = nil;

</geshi>

The same applies to assigning an empty table:

<geshi lang="lua">

ngx.header["X-My-Header"] = {};

</geshi>

Setting <code>ngx.header.HEADER</code> after sending out response headers (either explicitly with [[#ngx.send_headers|ngx.send_headers]] or implicitly with [[#ngx.print|ngx.print]] and similar) will throw a Lua exception.

Reading <code>ngx.header.HEADER</code> will return the value of the response header named <code>HEADER</code>.

Underscores (<code>_</code>) in the header names will also be replaced by dashes (<code>-</code>) and the header names will be matched case-insensitively. If the response header is not present at all, <code>nil</code> will be returned.

This is particularly useful in the context of [[#header_filter_by_lua|header_filter_by_lua]] and [[#header_filter_by_lua_file|header_filter_by_lua_file]], for example,

<geshi lang="nginx">

location /test {

set $footer '';

proxy_pass http://some-backend;

header_filter_by_lua '

if ngx.header["X-My-Header"] == "blah" then

ngx.var.footer = "some value"

end

';

echo_after_body $footer;

}

</geshi>

For multi-value headers, all the values of the header will be collected in order and returned as a Lua table. For example, response headers

<geshi lang="text">

Foo: bar

Foo: baz

</geshi>

will result in

<geshi lang="lua">

{"bar", "baz"}

</geshi>

being returned when reading <code>ngx.header.Foo</code>.

Note that <code>ngx.header</code> is not a normal Lua table and as such, it is not possible to iterate through it using the Lua <code>ipairs</code> function.

For reading ''request'' headers, use the [[#ngx.req.get_headers|ngx.req.get_headers]] function instead.

== ngx.req.set_method ==

'''syntax:''' ''ngx.req.set_method(method_id)''

Overrides the current request's request method with the <code>method_id</code> argument. Currently only numerical [[#HTTP method constants|method constants]] are supported, like <code>ngx.HTTP_POST</code> and <code>ngx.HTTP_GET</code>.

If the current request is an Nginx subrequest, then the subrequest's method will be overridden.
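A minimal sketch:

<geshi lang="lua">
-- force the current request to be treated as a POST
ngx.req.set_method(ngx.HTTP_POST)
</geshi>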

== ngx.req.set_uri ==

'''syntax:''' ''ngx.req.set_uri(uri, jump?)''

Rewrite the current request's (parsed) URI by the <code>uri</code> argument. The <code>uri</code> argument must be a Lua string and cannot be of zero length, or a Lua exception will be thrown.

The optional boolean <code>jump</code> argument can trigger location rematch (or location jump) as with [[HttpRewriteModule]]'s [[HttpRewriteModule#rewrite|rewrite]] directive. That is, when <code>jump</code> is <code>true</code> (it defaults to <code>false</code>), this function will never return; it tells Nginx to re-search locations with the new URI value at the later <code>post-rewrite</code> phase and to jump to the new location.

Location jump will not be triggered otherwise, and only the current request's URI will be modified, which is also the default behavior. This function will return, but with no return values, when the <code>jump</code> argument is <code>false</code> or absent altogether.

For example, the following nginx config snippet

<geshi lang="nginx">

rewrite ^ /foo last;

</geshi>

can be coded in Lua like this:

<geshi lang="lua">

ngx.req.set_uri("/foo", true)

</geshi>

Similarly, Nginx config

<geshi lang="nginx">

rewrite ^ /foo break;

</geshi>

can be coded in Lua as

<geshi lang="lua">

ngx.req.set_uri("/foo", false)

</geshi>

or equivalently,

<geshi lang="lua">

ngx.req.set_uri("/foo")

</geshi>

The <code>jump</code> argument can only be set to <code>true</code> in [[#rewrite_by_lua|rewrite_by_lua]] and [[#rewrite_by_lua_file|rewrite_by_lua_file]]. Use of jump in other contexts is prohibited and will throw a Lua exception.

A more sophisticated example involving regex substitutions is as follows

<geshi lang="nginx">

location /test {

rewrite_by_lua '

local uri = ngx.re.sub(ngx.var.uri, "^/test/(.*)", "$1", "o")

ngx.req.set_uri(uri)

';

proxy_pass http://my_backend;

}

</geshi>

which is functionally equivalent to

<geshi lang="nginx">

location /test {

rewrite ^/test/(.*) /$1 break;

proxy_pass http://my_backend;

}

</geshi>

Note that it is not possible to use this interface to rewrite URI arguments and that [[#ngx.req.set_uri_args|ngx.req.set_uri_args]] should be used for this instead. For instance, Nginx config

<geshi lang="nginx">

rewrite ^ /foo?a=3? last;

</geshi>

can be coded as

<geshi lang="lua">

ngx.req.set_uri_args("a=3")

ngx.req.set_uri("/foo", true)

</geshi>

or

<geshi lang="lua">

ngx.req.set_uri_args({a = 3})

ngx.req.set_uri("/foo", true)

</geshi>

This interface was first introduced in the <code>v0.3.1rc14</code> release.

== ngx.req.get_uri_args ==

'''syntax:''' ''args = ngx.req.get_uri_args(max_args?)''

Returns a Lua table holding all the current request URL query arguments.

Note that a maximum of 100 request arguments are parsed by default (including those with the same name) and that additional request arguments are silently discarded to guard against potential denial of service attacks.

However, the optional <code>max_args</code> function argument can be used to override this limit:

<geshi lang="lua">

local args = ngx.req.get_uri_args(10)

</geshi>

This argument can be set to zero to remove the limit and to process all request arguments received.
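For example, passing zero removes the cap entirely, though removing the <code>max_args</code> cap is strongly discouraged:

<geshi lang="lua">
local args = ngx.req.get_uri_args(0)
</geshi>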

== ngx.req.get_post_args ==

'''syntax:''' ''args = ngx.req.get_post_args(max_args?)''

Returns a Lua table holding all the current request POST query arguments (of the MIME type <code>application/x-www-form-urlencoded</code>). Call [[#ngx.req.read_body|ngx.req.read_body]] to read the request body first or turn on the [[#lua_need_request_body|lua_need_request_body]] directive to avoid Lua exception errors.

<geshi lang="nginx">

location = /test {

content_by_lua '

ngx.req.read_body()

local args = ngx.req.get_post_args()

for key, val in pairs(args) do

if type(val) == "table" then

ngx.say(key, ": ", table.concat(val, ", "))

else

ngx.say(key, ": ", val)

end

end

';

}

</geshi>

Then

<geshi lang="bash">

# Post request with the body 'foo=bar&bar=baz&bar=blah'

$ curl --data 'foo=bar&bar=baz&bar=blah' localhost/test

</geshi>

will yield the response body like

<geshi lang="bash">

foo: bar

bar: baz, blah

</geshi>

Multiple occurrences of an argument key will result in a table value holding all of the values for that key in order.

Keys and values will be unescaped according to URI escaping rules.

With the settings above,

<geshi lang="bash">

# POST request with body 'a%20b=1%61+2'

$ curl -d 'a%20b=1%61+2' localhost/test

</geshi>

will yield:

<geshi lang="bash">

a b: 1a 2

</geshi>

Arguments without the <code>=<value></code> parts are treated as boolean arguments. <code>GET /test?foo&bar</code> will yield:

<geshi lang="bash">

foo: true

bar: true

</geshi>

That is, they will take Lua boolean values <code>true</code>. However, they are different from arguments taking empty string values. <code>POST /test</code> with request body <code>foo=&bar=</code> will return empty-string values for <code>foo</code> and <code>bar</code> instead.

Note that a maximum of 100 request arguments are parsed by default (including those with the same name) and that additional request arguments are silently discarded to guard against potential denial of service attacks.

However, the optional <code>max_args</code> function argument can be used to override this limit:

<geshi lang="lua">

local args = ngx.req.get_post_args(10)

</geshi>

This argument can be set to zero to remove the limit and to process all request arguments received:
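<geshi lang="lua">

local args = ngx.req.get_post_args(0)

</geshi>

Removing the <code>max_args</code> cap is strongly discouraged.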

Note that the [[#ngx.var.VARIABLE|ngx.var.HEADER]] API call, which uses core [[HttpCoreModule#$http_HEADER|$http_HEADER]] variables, may be preferable for reading individual request headers.

For multiple instances of request headers such as:

<geshi lang="bash">

Foo: foo

Foo: bar

Foo: baz

</geshi>

the value of <code>ngx.req.get_headers()["Foo"]</code> will be a Lua (array) table such as:

<geshi lang="lua">

{"foo", "bar", "baz"}

</geshi>

Note that a maximum of 100 request headers are parsed by default (including those with the same name) and that additional request headers are silently discarded to guard against potential denial of service attacks.

However, the optional <code>max_headers</code> function argument can be used to override this limit:

<geshi lang="lua">

local args = ngx.req.get_headers(10)

</geshi>

This argument can be set to zero to remove the limit and to process all request headers received:

<geshi lang="lua">

local args = ngx.req.get_headers(0)

</geshi>

Removing the <code>max_headers</code> cap is strongly discouraged.

Since the <code>0.6.9</code> release, all the header names in the Lua table returned are converted to the pure lower-case form by default, unless the <code>raw</code> argument is set to <code>true</code> (it defaults to <code>false</code>).

Also, by default, an <code>__index</code> metamethod is added to the resulting Lua table and will normalize the keys to a pure lowercase form with all underscores converted to dashes in case of a lookup miss. For example, if a request header <code>My-Foo-Header</code> is present, then the following invocations will all pick up the value of this header correctly:

<geshi lang="lua">

local headers = ngx.req.get_headers()

ngx.say(headers.my_foo_header)

ngx.say(headers["My-Foo-Header"])

ngx.say(headers["my-foo-header"])

</geshi>

The <code>__index</code> metamethod will not be added when the <code>raw</code> argument is set to <code>true</code>.

If the request body is already read previously by turning on [[#lua_need_request_body|lua_need_request_body]] or by using other modules, then this function does not run and returns immediately.

If the request body has already been explicitly discarded, either by the [[#ngx.req.discard_body|ngx.req.discard_body]] function or other modules, this function does not run and returns immediately.

In case of errors, such as connection errors while reading the data, this method will throw a Lua exception ''or'' terminate the current request with a 500 status code immediately.

The request body data read using this function can be retrieved later via [[#ngx.req.get_body_data|ngx.req.get_body_data]]; alternatively, the name of the temporary file caching the body data on disk can be retrieved via [[#ngx.req.get_body_file|ngx.req.get_body_file]]. Which one applies depends on

# whether the current request body is already larger than the [[HttpCoreModule#client_body_buffer_size|client_body_buffer_size]],

# and whether [[HttpCoreModule#client_body_in_file_only|client_body_in_file_only]] has been switched on.

In cases where the current request may have a request body but the request body data is not required, the [[#ngx.req.discard_body|ngx.req.discard_body]] function must be used to explicitly discard the request body to avoid breaking things under HTTP 1.1 keepalive or HTTP 1.1 pipelining.

This function was first introduced in the <code>v0.3.1rc17</code> release.

== ngx.req.discard_body ==

'''syntax:''' ''ngx.req.discard_body()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Explicitly discard the request body, i.e., read the data on the connection and throw it away immediately. Please note that ignoring the request body is not the right way to discard it; this function must be called to avoid breaking things under HTTP 1.1 keepalive or HTTP 1.1 pipelining.

This function is an asynchronous call and returns immediately.

If the request body has already been read, this function does nothing and returns immediately.

This function was first introduced in the <code>v0.3.1rc17</code> release.

See also [[#ngx.req.read_body|ngx.req.read_body]].

== ngx.req.get_body_data ==

'''syntax:''' ''data = ngx.req.get_body_data()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Retrieves in-memory request body data. It returns a Lua string rather than a Lua table holding all the parsed query arguments. Use the [[#ngx.req.get_post_args|ngx.req.get_post_args]] function instead if a Lua table is required.

This function returns <code>nil</code> if

# the request body has not been read,

# the request body has been read into disk temporary files,

# or the request body has zero size.

If the request body has not been read yet, call [[#ngx.req.read_body|ngx.req.read_body]] first (or turn on [[#lua_need_request_body|lua_need_request_body]] to force this module to read the request body; this is not recommended, however).

If the request body has been read into disk files, try calling the [[#ngx.req.get_body_file|ngx.req.get_body_file]] function instead.

To force in-memory request bodies, try setting [[HttpCoreModule#client_body_buffer_size|client_body_buffer_size]] to the same size value as [[HttpCoreModule#client_max_body_size|client_max_body_size]].

Note that calling this function instead of using <code>ngx.var.request_body</code> or <code>ngx.var.echo_request_body</code> is more efficient because it can save one dynamic memory allocation and one data copy.

This function was first introduced in the <code>v0.3.1rc17</code> release.

See also [[#ngx.req.get_body_file|ngx.req.get_body_file]].

== ngx.req.get_body_file ==

'''syntax:''' ''file_name = ngx.req.get_body_file()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Retrieves the file name for the in-file request body data. Returns <code>nil</code> if the request body has not been read or has been read into memory.

The returned file is read-only and is usually cleaned up by Nginx's memory pool. It should not be manually modified, renamed, or removed in Lua code.

If the request body has not been read yet, call [[#ngx.req.read_body|ngx.req.read_body]] first (or turn on [[#lua_need_request_body|lua_need_request_body]] to force this module to read the request body; this is not recommended, however).

If the request body has been read into memory, try calling the [[#ngx.req.get_body_data|ngx.req.get_body_data]] function instead.

This function was first introduced in the <code>v0.3.1rc17</code> release.

See also [[#ngx.req.get_body_data|ngx.req.get_body_data]].

== ngx.req.set_body_data ==

'''syntax:''' ''ngx.req.set_body_data(data)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Set the current request's request body using the in-memory data specified by the <code>data</code> argument.

If the current request's request body has not been read, then it will be properly discarded. When the current request's request body has been read into memory or buffered into a disk file, then the old request body's memory will be freed or the disk file will be cleaned up immediately, respectively.

This function requires patching the Nginx core to function properly because the Nginx core does not allow modifying request bodies by the current design. Here is a patch for Nginx 1.0.11: [https://github.com/agentzh/ngx_openresty/blob/master/patches/nginx-1.0.11-allow_request_body_updating.patch nginx-1.0.11-allow_request_body_updating.patch], and this patch should apply cleanly to other releases of Nginx as well.

This patch has already been applied to [http://openresty.org/ ngx_openresty] 1.0.8.17 and above.

This function was first introduced in the <code>v0.3.1rc18</code> release.

See also [[#ngx.req.set_body_file|ngx.req.set_body_file]].

== ngx.req.set_body_file ==

'''syntax:''' ''ngx.req.set_body_file(file_name, auto_clean?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Set the current request's request body using the in-file data specified by the <code>file_name</code> argument.

If the optional <code>auto_clean</code> argument is given a <code>true</code> value, then this file will be removed at request completion or the next time this function or [[#ngx.req.set_body_data|ngx.req.set_body_data]] is called in the same request. The <code>auto_clean</code> argument defaults to <code>false</code>.

Please ensure that the file specified by the <code>file_name</code> argument exists and is readable by an Nginx worker process by setting its permission properly to avoid Lua exception errors.

If the current request's request body has not been read, then it will be properly discarded. When the current request's request body has been read into memory or buffered into a disk file, then the old request body's memory will be freed or the disk file will be cleaned up immediately, respectively.

This function requires patching the Nginx core to function properly because the Nginx core does not allow modifying request bodies by the current design. Here is a patch for Nginx 1.0.9: [https://github.com/agentzh/ngx_openresty/blob/master/patches/nginx-1.0.9-allow_request_body_updating.patch nginx-1.0.9-allow_request_body_updating.patch], and this patch should apply cleanly to other releases of Nginx as well.

This patch has already been applied to [http://openresty.org/ ngx_openresty] 1.0.8.17 and above.

This function was first introduced in the <code>v0.3.1rc18</code> release.

Creates a new blank request body for the current request and initializes the buffer for subsequent request body data writing via the [[#ngx.req.append_body|ngx.req.append_body]] and [[#ngx.req.finish_body|ngx.req.finish_body]] APIs.

If the <code>buffer_size</code> argument is specified, then its value will be used for the size of the memory buffer for body writing with [[#ngx.req.append_body|ngx.req.append_body]]. If the argument is omitted, then the value specified by the standard [[HttpCoreModule#client_body_buffer_size|client_body_buffer_size]] directive will be used instead.

When the data can no longer be held in the memory buffer for the request body, then the data will be flushed onto a temporary file just like the standard request body reader in the Nginx core.

It is important to always call [[#ngx.req.finish_body|ngx.req.finish_body]] after all the data has been appended onto the current request body. Also, when this function is used together with [[#ngx.req.socket|ngx.req.socket]], [[#ngx.req.socket|ngx.req.socket]] must be called ''before'' this function, or the "request body already exists" error message will result.

The usage of this function is often like this:

<geshi lang="lua">

ngx.req.init_body(128 * 1024)  -- buffer is 128KB

for chunk in next_data_chunk() do
    ngx.req.append_body(chunk) -- each chunk can be 4KB
end

ngx.req.finish_body()

</geshi>

This function can be used with [[#ngx.req.append_body|ngx.req.append_body]], [[#ngx.req.finish_body|ngx.req.finish_body]], and [[#ngx.req.socket|ngx.req.socket]] to implement efficient input filters in pure Lua (in the context of [[#rewrite_by_lua|rewrite_by_lua]]* or [[#access_by_lua|access_by_lua]]*), which can be used with other Nginx content handler or upstream modules like [[HttpProxyModule]] and [[HttpFastcgiModule]].

This function was first introduced in the <code>v0.5.11</code> release.

Appends a new data chunk specified by the <code>data_chunk</code> argument onto the existing request body created by the [[#ngx.req.init_body|ngx.req.init_body]] call.

When the data can no longer be held in the memory buffer for the request body, then the data will be flushed onto a temporary file just like the standard request body reader in the Nginx core.

It is important to always call [[#ngx.req.finish_body|ngx.req.finish_body]] after all the data has been appended onto the current request body.

This function can be used with [[#ngx.req.init_body|ngx.req.init_body]], [[#ngx.req.finish_body|ngx.req.finish_body]], and [[#ngx.req.socket|ngx.req.socket]] to implement efficient input filters in pure Lua (in the context of [[#rewrite_by_lua|rewrite_by_lua]]* or [[#access_by_lua|access_by_lua]]*), which can be used with other Nginx content handler or upstream modules like [[HttpProxyModule]] and [[HttpFastcgiModule]].

This function was first introduced in the <code>v0.5.11</code> release.

Completes the construction process of the new request body created by the [[#ngx.req.init_body|ngx.req.init_body]] and [[#ngx.req.append_body|ngx.req.append_body]] calls.

This function can be used with [[#ngx.req.init_body|ngx.req.init_body]], [[#ngx.req.append_body|ngx.req.append_body]], and [[#ngx.req.socket|ngx.req.socket]] to implement efficient input filters in pure Lua (in the context of [[#rewrite_by_lua|rewrite_by_lua]]* or [[#access_by_lua|access_by_lua]]*), which can be used with other Nginx content handler or upstream modules like [[HttpProxyModule]] and [[HttpFastcgiModule]].

This function was first introduced in the <code>v0.5.11</code> release.

See also [[#ngx.req.init_body|ngx.req.init_body]].

== ngx.req.socket ==

'''syntax:''' ''tcpsock, err = ngx.req.socket()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Returns a read-only cosocket object that wraps the downstream connection. Only [[#tcpsock:receive|receive]] and [[#tcpsock:receiveuntil|receiveuntil]] methods are supported on this object.

In case of error, <code>nil</code> will be returned as well as a string describing the error.

The socket object returned by this method is usually used to read the current request's body in a streaming fashion. Do not turn on the [[#lua_need_request_body|lua_need_request_body]] directive, and do not mix this call with [[#ngx.req.read_body|ngx.req.read_body]] and [[#ngx.req.discard_body|ngx.req.discard_body]].

If any request body data has been pre-read into the Nginx core request header buffer, the resulting cosocket object will take care of this to avoid potential data loss resulting from such pre-reading.

This function was first introduced in the <code>v0.5.0rc1</code> release.

Clear the current request's request header named <code>header_name</code>. None of the current request's subrequests will be affected.

== ngx.exec ==

'''syntax:''' ''ngx.exec(uri, args?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Does an internal redirect to <code>uri</code> with <code>args</code>.

<geshi lang="lua">

ngx.exec('/some-location');

ngx.exec('/some-location', 'a=3&b=5&c=6');

ngx.exec('/some-location?a=3&b=5', 'c=6');

</geshi>

Named locations are also supported, but query strings are ignored. For example,

<geshi lang="nginx">

location /foo {
    content_by_lua '
        ngx.exec("@bar");
    ';
}

location @bar {
    ...
}

</geshi>

The optional second argument, <code>args</code>, can be used to specify extra URI query arguments, for example:

<geshi lang="lua">

ngx.exec("/foo", "a=3&b=hello%20world")

</geshi>

Alternatively, a Lua table can be passed for the <code>args</code> argument for ngx_lua to carry out URI escaping and string concatenation.

<geshi lang="lua">

ngx.exec("/foo", { a = 3, b = "hello world" })

</geshi>

The result is exactly the same as the previous example. The format for the Lua table passed as the <code>args</code> argument is identical to the format used in the [[#ngx.encode_args|ngx.encode_args]] method.

Note that this is very different from [[#ngx.redirect|ngx.redirect]] in that it is just an internal redirect and no new HTTP traffic is involved.

This method never returns.

This method ''must'' be called before [[#ngx.send_headers|ngx.send_headers]] or explicit response body outputs by either [[#ngx.print|ngx.print]] or [[#ngx.say|ngx.say]].

It is strongly recommended to combine the <code>return</code> statement with this call, i.e., <code>return ngx.exec(...)</code>.

This method is similar to the [[HttpEchoModule#echo_exec|echo_exec]] directive of the [[HttpEchoModule]].

== ngx.redirect ==

'''syntax:''' ''ngx.redirect(uri, status?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Issue an <code>HTTP 301</code> or <code>302</code> redirection to <code>uri</code>.

The optional <code>status</code> parameter specifies whether <code>301</code> or <code>302</code> is to be used. It is <code>302</code> (<code>ngx.HTTP_MOVED_TEMPORARILY</code>) by default.

Here is an example assuming the current server name is <code>localhost</code> and that it is listening on port 1984:
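For instance (a minimal sketch; it assumes a <code>/foo</code> location exists on the same server):

<geshi lang="nginx">

location = /test {
    content_by_lua '
        return ngx.redirect("/foo")
    ';
}

</geshi>

Accessing <code>/test</code> then gives a response like:

<geshi lang="bash">

$ curl -i localhost:1984/test

HTTP/1.1 302 Moved Temporarily
Location: http://localhost:1984/foo

</geshi>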

Nested arrays of strings are permitted and the elements in the arrays will be sent one by one:

<geshi lang="lua">

local data = {
    "hello, ",
    {"world: ", true, " or ", false,
     {": ", nil}}
}

ngx.print(data)

</geshi>

will yield the output

<geshi lang="bash">

hello, world: true or false: nil

</geshi>

Non-array table arguments will cause a Lua exception to be thrown.

The <code>ngx.null</code> constant will yield the <code>"null"</code> string output.

This is an asynchronous call and will return immediately without waiting for all the data to be written into the system send buffer. To run in synchronous mode, call <code>ngx.flush(true)</code> after calling <code>ngx.print</code>. This can be particularly useful for streaming output. See [[#ngx.flush|ngx.flush]] for more details.

Lua <code>nil</code> arguments are accepted and result in literal <code>"nil"</code> string while Lua booleans result in literal <code>"true"</code> or <code>"false"</code> string outputs. And the <code>ngx.null</code> constant will yield the <code>"null"</code> string output.

The <code>log_level</code> argument can take constants like <code>ngx.ERR</code> and <code>ngx.WARN</code>. Check out [[#Nginx log level constants|Nginx log level constants]] for details.
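For instance (a minimal sketch; <code>err</code> here stands for a hypothetical error string produced by earlier code):

<geshi lang="lua">

ngx.log(ngx.ERR, "failed to connect: ", err)

</geshi>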

There is a hard-coded <code>2048</code> byte limitation on error message lengths in the Nginx core. This limit includes trailing newlines and leading time stamps. If the message size exceeds this limit, Nginx will truncate the message text accordingly. This limit can be manually modified by editing the <code>NGX_MAX_ERROR_STR</code> macro definition in the <code>src/core/ngx_log.h</code> file in the Nginx source tree.

== ngx.flush ==

'''syntax:''' ''ngx.flush(wait?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Flushes response output to the client.

<code>ngx.flush</code> accepts an optional boolean <code>wait</code> argument (default: <code>false</code>), first introduced in the <code>v0.3.1rc34</code> release. When called with the default argument, it issues an asynchronous call that returns immediately without waiting for the output data to be written into the system send buffer. Calling the function with the <code>wait</code> argument set to <code>true</code> switches to synchronous mode.

In synchronous mode, the function will not return until all output data has been written into the system send buffer or until the [[HttpCoreModule#send_timeout|send_timeout]] setting has expired. Note that using the Lua coroutine mechanism means that this function does not block the Nginx event loop even in the synchronous mode.

When <code>ngx.flush(true)</code> is called immediately after [[#ngx.print|ngx.print]] or [[#ngx.say|ngx.say]], it causes the latter functions to run in synchronous mode. This can be particularly useful for streaming output.

Note that <code>ngx.flush</code> has no effect in the HTTP 1.0 output buffering mode. See [[#HTTP 1.0 support|HTTP 1.0 support]].

== ngx.exit ==

'''syntax:''' ''ngx.exit(status)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

When <code>status >= 200</code> (i.e., <code>ngx.HTTP_OK</code> and above), it will interrupt the execution of the current request and return the status code to Nginx.

When <code>status == 0</code> (i.e., <code>ngx.OK</code>), it will only quit the current phase handler (or the content handler if the [[#content_by_lua|content_by_lua]] directive is used) and continue to run later phases (if any) for the current request.

The <code>status</code> argument can be <code>ngx.OK</code>, <code>ngx.ERROR</code>, <code>ngx.HTTP_NOT_FOUND</code>, <code>ngx.HTTP_MOVED_TEMPORARILY</code>, or other [[#HTTP status constants|HTTP status constants]].

To return an error page with custom contents, use code snippets like this:

<geshi lang="lua">

ngx.status = ngx.HTTP_GONE

ngx.say("This is our own content")

-- to quit the whole request rather than just the current phase handler

ngx.exit(ngx.HTTP_OK)

</geshi>

The effect in action:

<geshi lang="bash">

$ curl -i http://localhost/test

HTTP/1.1 410 Gone

Server: nginx/1.0.6

Date: Thu, 15 Sep 2011 00:51:48 GMT

Content-Type: text/plain

Transfer-Encoding: chunked

Connection: keep-alive

This is our own content

</geshi>

Number literals can be used directly as the argument, for instance,

<geshi lang="lua">

ngx.exit(501)

</geshi>

Note that while this method accepts all [[#HTTP status constants|HTTP status constants]] as input, it only accepts <code>ngx.OK</code> and <code>ngx.ERROR</code> of the [[#core constants|core constants]].

It is recommended, though not necessary, to combine the <code>return</code> statement with this call, i.e., <code>return ngx.exit(...)</code>, to give a visual hint to others reading the code.

== ngx.eof ==

'''syntax:''' ''ngx.eof()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Explicitly specify the end of the response output stream. In the case of HTTP 1.1 chunked encoded output, it will just trigger the Nginx core to send out the "last chunk".

When the HTTP 1.1 keep-alive feature is disabled for the downstream connections, decent HTTP clients will close the connection actively for you after this method is called. This trick can be used to do background jobs without making the HTTP clients wait on the connection, as in the following example:

<geshi lang="nginx">

location = /async {
    keepalive_timeout 0;
    content_by_lua '
        ngx.say("got the task!")
        ngx.eof()  -- a decent HTTP client will close the connection at this point
        -- access MySQL, PostgreSQL, Redis, Memcached, etc. here...
    ';
}

</geshi>

But if you create subrequests to access other locations configured by Nginx upstream modules, then you should configure those upstream modules to ignore client connection abortions if they do not already do so by default. For example, by default the standard [[HttpProxyModule]] will terminate both the subrequest and the main request as soon as the client closes the connection, so it is important to turn on the [[HttpProxyModule#proxy_ignore_client_abort|proxy_ignore_client_abort]] directive in your location block configured by [[HttpProxyModule]]:

<geshi lang="nginx">

proxy_ignore_client_abort on;

</geshi>

== ngx.sleep ==

'''syntax:''' ''ngx.sleep(seconds)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Sleeps for the specified number of seconds without blocking. The time resolution is up to 0.001 seconds (i.e., one millisecond).
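For example, the following call sleeps for half a second without blocking the Nginx event loop:

<geshi lang="lua">

ngx.sleep(0.5)

</geshi>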

Decodes a URI-encoded query-string into a Lua table. This is the inverse function of [[#ngx.encode_args|ngx.encode_args]].

The optional <code>max_args</code> argument can be used to specify the maximum number of arguments parsed from the <code>str</code> argument. By default, a maximum of 100 request arguments are parsed (including those with the same name), and additional URI arguments are silently discarded to guard against potential denial of service attacks.

This argument can be set to zero to remove the limit and to process all request arguments received:
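<geshi lang="lua">

-- str is the query-string being decoded; 0 removes the max_args cap
local args = ngx.decode_args(str, 0)

</geshi>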

This method performs better on relatively short <code>str</code> inputs (i.e., less than 30 ~ 60 bytes), as compared to [[#ngx.crc32_long|ngx.crc32_long]]. The result is exactly the same as [[#ngx.crc32_long|ngx.crc32_long]].

Behind the scenes, it is just a thin wrapper around the <code>ngx_crc32_short</code> function defined in the Nginx core.

This method performs better on relatively long <code>str</code> inputs (i.e., longer than 30 ~ 60 bytes), as compared to [[#ngx.crc32_short|ngx.crc32_short]]. The result is exactly the same as [[#ngx.crc32_short|ngx.crc32_short]].

Behind the scenes, it is just a thin wrapper around the <code>ngx_crc32_long</code> function defined in the Nginx core.

Returns a floating-point number for the elapsed time in seconds (including milliseconds as the decimal part) since the epoch, based on the Nginx cached time (no syscall involved, unlike Lua's date library).

Use the Nginx core [[CoreModule#timer_resolution|timer_resolution]] directive to adjust the accuracy or forcibly update the Nginx time cache by calling [[#ngx.update_time|ngx.update_time]] first.

Returns a formatted string that can be used as an HTTP header time (for example, in the <code>Last-Modified</code> header). The parameter <code>sec</code> is the timestamp in seconds (like those returned from [[#ngx.time|ngx.time]]).
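For example (the timestamp below is arbitrary):

<geshi lang="lua">

ngx.say(ngx.http_time(1290079655))
-- yields "Thu, 18 Nov 2010 11:27:35 GMT"

</geshi>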

Matches the <code>subject</code> string using the Perl compatible regular expression <code>regex</code> with the optional <code>options</code>.

Only the first occurrence of the match is returned, or <code>nil</code> if no match is found. In case of fatal errors, like seeing bad <code>UTF-8</code> sequences in <code>UTF-8</code> mode, a Lua exception will be raised.

When a match is found, a Lua table <code>captures</code> is returned, where <code>captures[0]</code> holds the whole substring being matched, <code>captures[1]</code> holds the first parenthesized sub-pattern's capture, <code>captures[2]</code> the second, and so on.

<geshi lang="lua">

local m = ngx.re.match("hello, 1234", "[0-9]+")

-- m[0] == "1234"

</geshi>

<geshi lang="lua">

local m = ngx.re.match("hello, 1234", "([0-9])[0-9]+")

-- m[0] == "1234"

-- m[1] == "1"

</geshi>

Unmatched sub-patterns will have <code>nil</code> values in their <code>captures</code> table fields.

<geshi lang="lua">

local m = ngx.re.match("hello, world", "(world)|(hello)")

-- m[0] == "hello"

-- m[1] == nil

-- m[2] == "hello"

</geshi>

Specify <code>options</code> to control how the match operation will be performed. The following option characters are supported:

<geshi lang="text">

a             anchored mode (only match from the beginning)

d             enable the DFA mode (or the longest token match semantics);
              this requires PCRE 6.0+ or else a Lua exception will be thrown.
              first introduced in ngx_lua v0.3.1rc30.

i             case insensitive mode (similar to Perl's /i modifier)

j             enable PCRE JIT compilation; this requires PCRE 8.21+ which
              must be built with the --enable-jit option. for optimum performance,
              this option should always be used together with the 'o' option.
              first introduced in ngx_lua v0.3.1rc30.

m             multi-line mode (similar to Perl's /m modifier)

o             compile-once mode (similar to Perl's /o modifier),
              to enable the worker-process-level compiled-regex cache

s             single-line mode (similar to Perl's /s modifier)

u             UTF-8 mode. this requires PCRE to be built with
              the --enable-utf8 option or else a Lua exception will be thrown.

x             extended mode (similar to Perl's /x modifier)

</geshi>

These options can be combined:

<geshi lang="lua">

local m = ngx.re.match("hello, world", "HEL LO", "ix")

-- m[0] == "hello"

</geshi>

<geshi lang="lua">

local m = ngx.re.match("hello, 美好生活", "HELLO, (.{2})", "iu")

-- m[0] == "hello, 美好"

-- m[1] == "美好"

</geshi>

The <code>o</code> option is useful for performance tuning, because the regex pattern in question will only be compiled once, cached in the worker-process level, and shared among all requests in the current Nginx worker process. The upper limit of the regex cache can be tuned via the [[#lua_regex_cache_max_entries|lua_regex_cache_max_entries]] directive.

The optional fourth argument, <code>ctx</code>, can be a Lua table holding an optional <code>pos</code> field. When the <code>pos</code> field in the <code>ctx</code> table argument is specified, <code>ngx.re.match</code> will start matching from that offset. Regardless of the presence of the <code>pos</code> field in the <code>ctx</code> table, <code>ngx.re.match</code> will always set this <code>pos</code> field to the position ''after'' the substring matched by the whole pattern in case of a successful match. When match fails, the <code>ctx</code> table will be left intact.

<geshi lang="lua">

local ctx = {}

local m = ngx.re.match("1234, hello", "[0-9]+", "", ctx)

-- m[0] == "1234"

-- ctx.pos == 4

</geshi>

<geshi lang="lua">

local ctx = { pos = 2 }

local m = ngx.re.match("1234, hello", "[0-9]+", "", ctx)

-- m[0] == "34"

-- ctx.pos == 4

</geshi>

The <code>ctx</code> table argument combined with the <code>a</code> regex modifier can be used to construct a lexer atop <code>ngx.re.match</code>.

Note that the <code>options</code> argument is not optional when the <code>ctx</code> argument is specified; the empty Lua string (<code>""</code>) must be used as a placeholder for <code>options</code> if no meaningful regex options are required.

To confirm that PCRE JIT is enabled, activate the Nginx debug log by adding the <code>--with-debug</code> option to Nginx or ngx_openresty's <code>./configure</code> script. Then, enable the "debug" error log level in <code>error_log</code> directive. The following message will be generated if PCRE JIT is enabled:
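<geshi lang="text">

pcre JIT compiling result: 1

</geshi>

(The exact wording may vary slightly across ngx_lua versions.)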

Similar to [[#ngx.re.match|ngx.re.match]], but returns a Lua iterator instead, so that the caller can iterate over all the matches of the PCRE <code>regex</code> within the <code>subject</code> string argument.

Here is a small example to demonstrate its basic usage:

<geshi lang="lua">

local iterator = ngx.re.gmatch("hello, world!", "([a-z]+)", "i")

local m

m = iterator() -- m[0] == m[1] == "hello"

m = iterator() -- m[0] == m[1] == "world"

m = iterator() -- m == nil

</geshi>

More often we just put it into a Lua <code>for</code> loop:

<geshi lang="lua">

for m in ngx.re.gmatch("hello, world!", "([a-z]+)", "i") do
    ngx.say(m[0])
    ngx.say(m[1])
end

</geshi>

The optional <code>options</code> argument takes exactly the same semantics as the [[#ngx.re.match|ngx.re.match]] method.

The current implementation requires that the iterator returned should only be used in a single request. That is, one should ''not'' assign it to a variable belonging to persistent namespace like a Lua package.

Substitutes the first match of the Perl compatible regular expression <code>regex</code> on the <code>subject</code> argument string with the string or function argument <code>replace</code>. The optional <code>options</code> argument has exactly the same meaning as in [[#ngx.re.match|ngx.re.match]].

This method returns the resulting new string as well as the number of successful substitutions, or throws a Lua exception when an error occurs (for example, syntax errors in the <code>replace</code> string argument).

When <code>replace</code> is a string, it is treated as a special template for string replacement. For example,
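<geshi lang="lua">

-- a sketch of the string-template form of the replace argument
local newstr, n = ngx.re.sub("hello, 1234", "([0-9])[0-9]", "[$0][$1]")
-- newstr == "hello, [12][1]34"
-- n == 1

</geshi>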

where <code>$0</code> refers to the whole substring matched by the pattern and <code>$1</code> refers to the first parenthesized capturing substring.

Curly braces can also be used to disambiguate variable names from the background string literals:

<geshi lang="lua">

local newstr, n = ngx.re.sub("hello, 1234", "[0-9]", "${0}00")

-- newstr == "hello, 100234"

-- n == 1

</geshi>

Literal dollar sign characters (<code>$</code>) in the <code>replace</code> string argument can be escaped by another dollar sign, for instance,

<geshi lang="lua">

local newstr, n = ngx.re.sub("hello, 1234", "[0-9]", "$$")

-- newstr == "hello, $234"

-- n == 1

</geshi>

Do not use backslashes to escape dollar signs; it will not work as expected.

When the <code>replace</code> argument is of type "function", then it will be invoked with the "match table" as the argument to generate the replace string literal for substitution. The "match table" fed into the <code>replace</code> function is exactly the same as the return value of [[#ngx.re.match|ngx.re.match]]. Here is an example:
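<geshi lang="lua">

local func = function (m)
    -- m is the "match table", same as ngx.re.match's return value
    return "[" .. m[0] .. "][" .. m[1] .. "]"
end

local newstr, n = ngx.re.sub("hello, 1234", "([0-9])[0-9]", func)
-- newstr == "hello, [12][1]34"
-- n == 1

</geshi>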

Fetches the shm-based Lua dictionary object for the shared memory zone named <code>DICT</code> defined by the [[#lua_shared_dict|lua_shared_dict]] directive.

The resulting object <code>dict</code> has the following methods:

* [[#ngx.shared.DICT.get|get]]

* [[#ngx.shared.DICT.set|set]]

* [[#ngx.shared.DICT.add|add]]

* [[#ngx.shared.DICT.replace|replace]]

* [[#ngx.shared.DICT.incr|incr]]

* [[#ngx.shared.DICT.delete|delete]]

* [[#ngx.shared.DICT.flush_all|flush_all]]

* [[#ngx.shared.DICT.flush_expired|flush_expired]]

Here is an example:

<geshi lang="nginx">

http {
    lua_shared_dict dogs 10m;

    server {
        location /set {
            content_by_lua '
                local dogs = ngx.shared.dogs
                dogs:set("Jim", 8)
                ngx.say("STORED")
            ';
        }

        location /get {
            content_by_lua '
                local dogs = ngx.shared.dogs
                ngx.say(dogs:get("Jim"))
            ';
        }
    }
}

</geshi>

Let us test it:

<geshi lang="bash">

$ curl localhost/set

STORED

$ curl localhost/get

8

$ curl localhost/get

8

</geshi>

The number <code>8</code> will be output consistently when accessing <code>/get</code> regardless of how many Nginx workers there are, because the <code>dogs</code> dictionary resides in shared memory and is visible to ''all'' of the worker processes.

The shared dictionary will retain its contents through a server config reload (either by sending the <code>HUP</code> signal to the Nginx process or by using the <code>-s reload</code> command-line option).

The contents in the dictionary storage will be lost, however, when the Nginx server quits.

This feature was first introduced in the <code>v0.3.1rc22</code> release.

* <code>forcible</code>: a boolean value to indicate whether other valid items have been removed forcibly when out of storage in the shared memory zone.

The <code>value</code> argument inserted can be Lua booleans, numbers, strings, or <code>nil</code>. The value type will also be stored into the dictionary so that the same data type can be retrieved later via the [[#ngx.shared.DICT.get|get]] method.

The optional <code>exptime</code> argument specifies expiration time (in seconds) for the inserted key-value pair. The time resolution is <code>0.001</code> seconds. If the <code>exptime</code> takes the value <code>0</code> (which is the default), then the item will never be expired.

The optional <code>flags</code> argument specifies a user flags value associated with the entry to be stored. It can also be retrieved later with the value. The user flags is stored as an unsigned 32-bit integer internally. Defaults to <code>0</code>. The user flags argument was first introduced in the <code>v0.5.0rc2</code> release.

When it fails to allocate memory for the current key-value item, <code>set</code> will try removing existing items from the storage according to the Least-Recently Used (LRU) algorithm. Note that LRU takes priority over expiration time here. If up to tens of existing items have been removed and the remaining storage is still insufficient (either due to the total capacity limit specified by [[#lua_shared_dict|lua_shared_dict]] or to memory segmentation), then the <code>err</code> return value will be <code>no memory</code> and <code>success</code> will be <code>false</code>.

If this method succeeds in storing the current item by forcibly removing other not-yet-expired items in the dictionary via LRU, the <code>forcible</code> return value will be <code>true</code>. If it stores the item without forcibly removing other valid items, then the return value <code>forcible</code> will be <code>false</code>.

The first argument to this method must be the dictionary object itself, for example,
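A minimal sketch (assuming a <code>lua_shared_dict cats 10m;</code> zone has been defined in <code>nginx.conf</code>):

<geshi lang="lua">
local cats = ngx.shared.cats
local succ, err, forcible = cats:set("Marry", "it is a nice cat!")
</geshi>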

Just like the [[#ngx.shared.DICT.set|set]] method, but only stores the key-value pair into the dictionary [[#ngx.shared.DICT|ngx.shared.DICT]] if the key does ''not'' exist.

If the <code>key</code> argument already exists in the dictionary (and not expired for sure), the <code>success</code> return value will be <code>false</code> and the <code>err</code> return value will be <code>"exists"</code>.

This feature was first introduced in the <code>v0.3.1rc22</code> release.

Just like the [[#ngx.shared.DICT.set|set]] method, but only stores the key-value pair into the dictionary [[#ngx.shared.DICT|ngx.shared.DICT]] if the key ''does'' exist.

If the <code>key</code> argument does ''not'' exist in the dictionary (or expired already), the <code>success</code> return value will be <code>false</code> and the <code>err</code> return value will be <code>"not found"</code>.

This feature was first introduced in the <code>v0.3.1rc22</code> release.

Increments the (numerical) value for <code>key</code> in the shm-based dictionary [[#ngx.shared.DICT|ngx.shared.DICT]] by the step value <code>value</code>. Returns the new resulting number if the operation is successfully completed or <code>nil</code> and an error message otherwise.

The key must already exist in the dictionary, otherwise it will return <code>nil</code> and <code>"not found"</code>.

If the original value is not a valid Lua number in the dictionary, it will return <code>nil</code> and <code>"not a number"</code>.

The <code>value</code> argument can be any valid Lua number, such as a negative number or a floating-point number.
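A minimal sketch (assuming a <code>dogs</code> shared dict zone) showing both a fractional step and the not-found case:

<geshi lang="lua">
local dogs = ngx.shared.dogs
dogs:set("visits", 10)

local newval, err = dogs:incr("visits", 0.5)
-- newval == 10.5

local res, err2 = dogs:incr("no_such_key", 1)
-- res == nil, err2 == "not found"
</geshi>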

This feature was first introduced in the <code>v0.3.1rc22</code> release.

Flushes out the expired items in the dictionary, up to the maximal number specified by the optional <code>max_count</code> argument. When the <code>max_count</code> argument is given <code>0</code> or not given at all, then it means unlimited. Returns the number of items that have actually been flushed.

This feature was first introduced in the <code>v0.6.3</code> release.

See also [[#ngx.shared.DICT.flush_all|ngx.shared.DICT.flush_all]] and [[#ngx.shared.DICT|ngx.shared.DICT]].

Fetches a list of the keys from the dictionary, up to <code><max_count></code>.

By default, only the first 1024 keys (if any) are returned. When the <code><max_count></code> argument is given the value <code>0</code>, then all the keys will be returned even if there are more than 1024 keys in the dictionary.
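A minimal sketch (assuming a <code>dogs</code> shared dict zone):

<geshi lang="lua">
local dogs = ngx.shared.dogs
dogs:set("Tom", 1)
dogs:set("Jerry", 2)

local keys = dogs:get_keys()       -- returns up to the first 1024 keys
local all_keys = dogs:get_keys(0)  -- returns all the keys
</geshi>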

'''WARNING''' Be careful when calling this method on dictionaries with a really huge number of keys. This method may lock the dictionary for quite a while and block all the nginx worker processes that are trying to access the dictionary.

This feature was first introduced in the <code>v0.7.3</code> release.

== ngx.socket.udp ==

'''syntax:''' ''udpsock = ngx.socket.udp()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Creates and returns a UDP or datagram-oriented unix domain socket object (also known as one type of the "cosocket" objects). The following methods are supported on this object:

* [[#udpsock:setpeername|setpeername]]

* [[#udpsock:send|send]]

* [[#udpsock:receive|receive]]

* [[#udpsock:close|close]]

* [[#udpsock:settimeout|settimeout]]

It is intended to be compatible with the UDP API of the [http://w3.impa.br/~diego/software/luasocket/udp.html LuaSocket] library but is 100% nonblocking out of the box.

Attempts to connect a UDP socket object to a remote server or to a datagram unix domain socket file. Because the datagram protocol is actually connection-less, this method does not really establish a "connection"; it merely sets the name of the remote peer for subsequent read/write operations.

Both IP addresses and domain names can be specified as the <code>host</code> argument. In case of domain names, this method will use Nginx core's dynamic resolver to parse the domain name without blocking and it is required to configure the [[HttpCoreModule#resolver|resolver]] directive in the <code>nginx.conf</code> file like this:

<geshi lang="nginx">

resolver 8.8.8.8; # use Google's public DNS nameserver

</geshi>

If the nameserver returns multiple IP addresses for the host name, this method will pick one at random.

In case of error, the method returns <code>nil</code> followed by a string describing the error. In case of success, the method returns <code>1</code>.
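For example, connecting to a datagram unix domain socket might be sketched as:

<geshi lang="lua">
local sock = ngx.socket.udp()
local ok, err = sock:setpeername("unix:/tmp/some-datagram-service.sock")
if not ok then
    ngx.say("failed to connect: ", err)
    return
end
</geshi>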

This assumes the datagram service is listening on the unix domain socket file <code>/tmp/some-datagram-service.sock</code>.

Calling this method on an already connected socket object will cause the original connection to be closed first.

This method was first introduced in the <code>v0.5.7</code> release.

== udpsock:send ==

'''syntax:''' ''ok, err = udpsock:send(data)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Sends data on the current UDP or datagram unix domain socket object.

In case of success, it returns <code>1</code>. Otherwise, it returns <code>nil</code> and a string describing the error.

The input argument <code>data</code> can either be a Lua string or a (nested) Lua table holding string fragments. In the case of table arguments, this method will copy all the string elements piece by piece into the underlying Nginx socket send buffers, which is usually faster than doing string concatenation operations in Lua land.
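As a sketch, a table of fragments (the host, port, and payload here are illustrative) can be sent without Lua-side concatenation:

<geshi lang="lua">
local sock = ngx.socket.udp()
local ok, err = sock:setpeername("127.0.0.1", 8125)
if not ok then
    ngx.say("failed to connect: ", err)
    return
end

-- the table's string fragments are copied directly into the send buffers
ok, err = sock:send({"gorets:", "1", "|c"})
if not ok then
    ngx.say("failed to send: ", err)
    return
end
</geshi>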

In case of success, it returns the data received; in case of error, it returns <code>nil</code> with a string describing the error.

If the <code>size</code> argument is specified, then this method will use this size as the receive buffer size. But when this size is greater than <code>8192</code>, then <code>8192</code> will be used instead.

If no argument is specified, then the maximal buffer size, <code>8192</code>, is assumed.

Timeout for the reading operation is controlled by the [[#lua_socket_read_timeout|lua_socket_read_timeout]] config directive and the [[#udpsock:settimeout|settimeout]] method. And the latter takes priority. For example:

<geshi lang="lua">

sock:settimeout(1000) -- one second timeout

local data, err = sock:receive()

if not data then
    ngx.say("failed to read a packet: ", err)
    return
end

ngx.say("successfully read a packet: ", data)

</geshi>

It is important here to call the [[#udpsock:settimeout|settimeout]] method ''before'' calling this method.

This feature was first introduced in the <code>v0.5.7</code> release.

== udpsock:close ==

'''syntax:''' ''ok, err = udpsock:close()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Closes the current UDP or datagram unix domain socket. It returns <code>1</code> in case of success and returns <code>nil</code> with a string describing the error otherwise.

Socket objects that have not invoked this method (and associated connections) will be closed when the socket object is released by the Lua GC (Garbage Collector) or the current client HTTP request finishes processing.

This feature was first introduced in the <code>v0.5.7</code> release.

== udpsock:settimeout ==

'''syntax:''' ''udpsock:settimeout(time)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Sets the timeout value in milliseconds for subsequent socket operations (like [[#udpsock:receive|receive]]).

Settings made by this method take priority over the corresponding config directives, such as [[#lua_socket_read_timeout|lua_socket_read_timeout]].

This feature was first introduced in the <code>v0.5.7</code> release.

== ngx.socket.tcp ==

'''syntax:''' ''tcpsock = ngx.socket.tcp()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Creates and returns a TCP or stream-oriented unix domain socket object (also known as one type of the "cosocket" objects). The following methods are supported on this object:

* [[#tcpsock:connect|connect]]

* [[#tcpsock:send|send]]

* [[#tcpsock:receive|receive]]

* [[#tcpsock:close|close]]

* [[#tcpsock:settimeout|settimeout]]

* [[#tcpsock:setoption|setoption]]

* [[#tcpsock:receiveuntil|receiveuntil]]

* [[#tcpsock:setkeepalive|setkeepalive]]

* [[#tcpsock:getreusedtimes|getreusedtimes]]

It is intended to be compatible with the TCP API of the [http://w3.impa.br/~diego/software/luasocket/tcp.html LuaSocket] library but is 100% nonblocking out of the box. Also, we introduce some new APIs to provide more functionalities.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

Attempts to connect a TCP socket object to a remote server or to a stream unix domain socket file without blocking.

Before actually resolving the host name and connecting to the remote backend, this method will always look up the connection pool for matched idle connections created by previous calls of this method (or the [[#ngx.socket.connect|ngx.socket.connect]] function).

Both IP addresses and domain names can be specified as the <code>host</code> argument. In case of domain names, this method will use Nginx core's dynamic resolver to parse the domain name without blocking and it is required to configure the [[HttpCoreModule#resolver|resolver]] directive in the <code>nginx.conf</code> file like this:

<geshi lang="nginx">

resolver 8.8.8.8; # use Google's public DNS nameserver

</geshi>

If the nameserver returns multiple IP addresses for the host name, this method will pick one at random.

In case of error, the method returns <code>nil</code> followed by a string describing the error. In case of success, the method returns <code>1</code>.

Timeout for the connecting operation is controlled by the [[#lua_socket_connect_timeout|lua_socket_connect_timeout]] config directive and the [[#tcpsock:settimeout|settimeout]] method. And the latter takes priority. For example:

<geshi lang="lua">

local sock = ngx.socket.tcp()

sock:settimeout(1000) -- one second timeout

local ok, err = sock:connect(host, port)

</geshi>

It is important here to call the [[#tcpsock:settimeout|settimeout]] method ''before'' calling this method.

Calling this method on an already connected socket object will cause the original connection to be closed first.

An optional Lua table can be specified as the last argument to this method to specify various connect options:

* <code>pool</code>

: specify a custom name for the connection pool being used. If omitted, then the connection pool name will be generated from the string template <code>"<host>:<port>"</code> or <code>"<unix-socket-path>"</code>.
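For example, a custom pool name might be specified like this (the socket path is illustrative):

<geshi lang="lua">
local sock = ngx.socket.tcp()
local ok, err = sock:connect("unix:/tmp/memcached.sock", { pool = "memcached" })
if not ok then
    ngx.say("failed to connect: ", err)
    return
end
</geshi>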

The support for the options table argument was first introduced in the <code>v0.5.7</code> release.

This method was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:send ==

'''syntax:''' ''bytes, err = tcpsock:send(data)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Sends data without blocking on the current TCP or Unix Domain Socket connection.

This method is a synchronous operation that will not return until ''all'' the data has been flushed into the system socket send buffer or an error occurs.

In case of success, it returns the total number of bytes that have been sent. Otherwise, it returns <code>nil</code> and a string describing the error.

The input argument <code>data</code> can either be a Lua string or a (nested) Lua table holding string fragments. In the case of table arguments, this method will copy all the string elements piece by piece into the underlying Nginx socket send buffers, which is usually faster than doing string concatenation operations in Lua land.

Timeout for the sending operation is controlled by the [[#lua_socket_send_timeout|lua_socket_send_timeout]] config directive and the [[#tcpsock:settimeout|settimeout]] method. And the latter takes priority. For example:

<geshi lang="lua">

sock:settimeout(1000) -- one second timeout

local bytes, err = sock:send(request)

</geshi>

It is important here to call the [[#tcpsock:settimeout|settimeout]] method ''before'' calling this method.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:receive ==

'''syntax:''' ''data, err, partial = tcpsock:receive(size)''

'''syntax:''' ''data, err, partial = tcpsock:receive(pattern?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Receives data from the connected socket according to the reading pattern or size.

This method is a synchronous operation just like the [[#tcpsock:send|send]] method and is 100% nonblocking.

In case of success, it returns the data received; in case of error, it returns <code>nil</code> with a string describing the error and the partial data received so far.

If a number-like argument is specified (including strings that look like numbers), then it is interpreted as a size. This method will not return until it reads exactly this size of data or an error occurs.

If a non-number-like string argument is specified, then it is interpreted as a "pattern". The following patterns are supported:

* <code>'*a'</code>: reads from the socket until the connection is closed. No end-of-line translation is performed;

* <code>'*l'</code>: reads a line of text from the socket. The line is terminated by a <code>Line Feed</code> (LF) character (ASCII 10), optionally preceded by a <code>Carriage Return</code> (CR) character (ASCII 13). The CR and LF characters are not included in the returned line. In fact, all CR characters are ignored by the pattern.

If no argument is specified, then it is assumed to be the pattern <code>'*l'</code>, that is, the line reading pattern.
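A sketch contrasting the size and pattern forms:

<geshi lang="lua">
-- read exactly 4 bytes
local data, err, partial = sock:receive(4)

-- read one line (also the default when no argument is given)
local line, err2 = sock:receive("*l")

-- read everything until the connection is closed
local all, err3 = sock:receive("*a")
</geshi>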

Timeout for the reading operation is controlled by the [[#lua_socket_read_timeout|lua_socket_read_timeout]] config directive and the [[#tcpsock:settimeout|settimeout]] method. And the latter takes priority. For example:

<geshi lang="lua">

sock:settimeout(1000) -- one second timeout

local line, err, partial = sock:receive()

if not line then
    ngx.say("failed to read a line: ", err)
    return
end

ngx.say("successfully read a line: ", line)

</geshi>

It is important here to call the [[#tcpsock:settimeout|settimeout]] method ''before'' calling this method.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:receiveuntil ==

'''syntax:''' ''iterator = tcpsock:receiveuntil(pattern, options?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

This method returns an iterator Lua function that can be called to read the data stream until it sees the specified pattern or an error occurs.

Here is an example for using this method to read a data stream with the boundary sequence <code>--abcedhb</code>:

<geshi lang="lua">

local reader = sock:receiveuntil("\r\n--abcedhb")

local data, err, partial = reader()

if not data then
    ngx.say("failed to read the data stream: ", err)
end

ngx.say("read the data stream: ", data)

</geshi>

When called without any argument, the iterator function returns the received data right ''before'' the specified pattern string in the incoming data stream. So for the example above, if the incoming data stream is <code>'hello, world! -agentzh\r\n--abcedhb blah blah'</code>, then the string <code>'hello, world! -agentzh'</code> will be returned.

In case of error, the iterator function will return <code>nil</code> along with a string describing the error and the partial data bytes that have been read so far.

The iterator function can be called multiple times and can be mixed safely with other cosocket method calls or other iterator function calls.

The iterator function behaves differently (i.e., like a real iterator) when it is called with a <code>size</code> argument. That is, it will read that <code>size</code> of data on each invocation and will return <code>nil</code> at the last invocation (either because it sees the boundary pattern or because it encounters an error). For the last successful invocation of the iterator function, the <code>err</code> return value will be <code>nil</code> too. The iterator function will be reset after the last successful invocation that returns <code>nil</code> data and <code>nil</code> error. Consider the following example:

<geshi lang="lua">

local reader = sock:receiveuntil("\r\n--abcedhb")

while true do
    local data, err, partial = reader(4)
    if not data then
        if err then
            ngx.say("failed to read the data stream: ", err)
            break
        end

        ngx.say("read done")
        break
    end

    ngx.say("read chunk: [", data, "]")
end

</geshi>

Then for the incoming data stream <code>'hello, world! -agentzh\r\n--abcedhb blah blah'</code>, we shall get the following output from the sample code above:

<geshi lang="text">

read chunk: [hell]

read chunk: [o, w]

read chunk: [orld]

read chunk: [! -a]

read chunk: [gent]

read chunk: [zh]

read done

</geshi>

Note that the actual data returned ''might'' be a little longer than the size limit specified by the <code>size</code> argument when the boundary pattern is ambiguous for streaming parsing. Near the boundary of the data stream, the data string actually returned could also be shorter than the size limit.

Timeout for the iterator function's reading operation is controlled by the [[#lua_socket_read_timeout|lua_socket_read_timeout]] config directive and the [[#tcpsock:settimeout|settimeout]] method. And the latter takes priority. For example:

<geshi lang="lua">

local readline = sock:receiveuntil("\r\n")

sock:settimeout(1000) -- one second timeout

line, err, partial = readline()

if not line then
    ngx.say("failed to read a line: ", err)
    return
end

ngx.say("successfully read a line: ", line)

</geshi>

It is important here to call the [[#tcpsock:settimeout|settimeout]] method ''before'' calling the iterator function (note that the <code>receiveuntil</code> call is irrelevant here).

As from the <code>v0.5.1</code> release, this method also takes an optional <code>options</code> table argument to control the behavior. The following options are supported:

* <code>inclusive</code>

The <code>inclusive</code> option takes a boolean value that controls whether to include the pattern string in the returned data string. Defaults to <code>false</code>. For example,

<geshi lang="lua">

local reader = tcpsock:receiveuntil("_END_", { inclusive = true })

local data = reader()

ngx.say(data)

</geshi>

For the input data stream <code>"hello world _END_ blah blah blah"</code>, the example above will output <code>hello world _END_</code>, including the pattern string <code>_END_</code> itself.

This method was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:close ==

'''syntax:''' ''ok, err = tcpsock:close()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Closes the current TCP or stream unix domain socket. It returns <code>1</code> in case of success and returns <code>nil</code> with a string describing the error otherwise.

Note that there is no need to call this method on socket objects that have invoked the [[#tcpsock:setkeepalive|setkeepalive]] method because the socket object is already closed (and the current connection is saved into the built-in connection pool).

Socket objects that have not invoked this method (and associated connections) will be closed when the socket object is released by the Lua GC (Garbage Collector) or the current client HTTP request finishes processing.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:settimeout ==

'''syntax:''' ''tcpsock:settimeout(time)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Sets the timeout value in milliseconds for subsequent socket operations ([[#tcpsock:connect|connect]], [[#tcpsock:receive|receive]], and iterators returned from [[#tcpsock:receiveuntil|receiveuntil]]).

Settings made by this method take priority over the corresponding config directives, i.e., [[#lua_socket_connect_timeout|lua_socket_connect_timeout]], [[#lua_socket_send_timeout|lua_socket_send_timeout]], and [[#lua_socket_read_timeout|lua_socket_read_timeout]].

Note that this method does ''not'' affect the [[#lua_socket_keepalive_timeout|lua_socket_keepalive_timeout]] setting; the <code>timeout</code> argument to the [[#tcpsock:setkeepalive|setkeepalive]] method should be used for this purpose instead.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:setoption ==

'''syntax:''' ''tcpsock:setoption(option, value?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

This function is added for [http://w3.impa.br/~diego/software/luasocket/tcp.html LuaSocket] API compatibility and does nothing for now. Its functionality will be implemented in the future.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:setkeepalive ==

'''syntax:''' ''ok, err = tcpsock:setkeepalive(timeout?, size?)''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

Puts the current socket's connection into the cosocket built-in connection pool and keeps it alive until other [[#tcpsock:connect|connect]] method calls request it or the associated maximal idle timeout expires.

The first optional argument, <code>timeout</code>, can be used to specify the maximal idle timeout (in milliseconds) for the current connection. If omitted, the default setting in the [[#lua_socket_keepalive_timeout|lua_socket_keepalive_timeout]] config directive will be used. If the <code>0</code> value is given, then the timeout interval is unlimited.

The second optional argument, <code>size</code>, can be used to specify the maximal number of connections allowed in the connection pool for the current server (i.e., the current host-port pair or the unix domain socket file path). Note that the size of the connection pool cannot be changed once the pool is created. When this argument is omitted, the default setting in the [[#lua_socket_pool_size|lua_socket_pool_size]] config directive will be used.

When the connection pool exceeds the available size limit, the least recently used (idle) connection already in the pool will be closed to make room for the current connection.

Note that the cosocket connection pool is per Nginx worker process rather than per Nginx server instance, so the size limit specified here also applies to every single Nginx worker process.

Idle connections in the pool will be monitored for any exceptional events like connection abortion or unexpected incoming data on the line, in which cases the connection in question will be closed and removed from the pool.

In case of success, this method returns <code>1</code>; otherwise, it returns <code>nil</code> and a string describing the error.

This method also makes the current cosocket object enter the "closed" state, so there is no need to manually call the [[#tcpsock:close|close]] method on it afterwards.
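A typical sketch puts the connection back into the pool right after the response has been read:

<geshi lang="lua">
-- keep the connection alive in the pool for up to 10 seconds,
-- with a pool size of 100 connections per worker process
local ok, err = sock:setkeepalive(10000, 100)
if not ok then
    ngx.say("failed to set keepalive: ", err)
    return
end
</geshi>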

This feature was first introduced in the <code>v0.5.0rc1</code> release.

== tcpsock:getreusedtimes ==

'''syntax:''' ''count, err = tcpsock:getreusedtimes()''

'''context:''' ''rewrite_by_lua*, access_by_lua*, content_by_lua*''

This method returns the (successfully) reused times for the current connection. In case of error, it returns <code>nil</code> and a string describing the error.

If the current connection does not come from the built-in connection pool, then this method always returns <code>0</code>, that is, the connection has never been reused (yet). If the connection comes from the connection pool, then the return value is always non-zero. So this method can also be used to determine if the current connection comes from the pool.
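For example, a fresh (non-pooled) connection can be detected in order to perform one-time setup such as authentication; a sketch:

<geshi lang="lua">
local count, err = sock:getreusedtimes()
if not count then
    ngx.say("failed to get reused times: ", err)
    return
end

if count == 0 then
    -- brand-new connection: perform the one-time handshake here
end
</geshi>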

This feature was first introduced in the <code>v0.5.0rc1</code> release.

This function is a shortcut for combining [[#ngx.socket.tcp|ngx.socket.tcp()]] and the [[#tcpsock:connect|connect()]] method call in a single operation. It is actually implemented like this:

<geshi lang="lua">

local sock = ngx.socket.tcp()

local ok, err = sock:connect(...)

if not ok then
    return nil, err
end

return sock

</geshi>

There is no way to use the [[#tcpsock:settimeout|settimeout]] method to specify the connect timeout for this function; the [[#lua_socket_connect_timeout|lua_socket_connect_timeout]] directive must be set at configure time instead.

This feature was first introduced in the <code>v0.5.0rc1</code> release.

Spawns a new user "light thread" with the Lua function <code>func</code> and the optional arguments <code>arg1</code>, <code>arg2</code>, etc. Returns a Lua thread (or Lua coroutine) object representing this "light thread".

"Light threads" are just a special kind of Lua coroutines that are scheduled by the ngx_lua module.

Before <code>ngx.thread.spawn</code> returns, the <code>func</code> will be called with those optional arguments until it returns, aborts with an error, or gets yielded due to I/O operations via the [[#Nginx API for Lua|Nginx API for Lua]] (like [[#tcpsock:receive|tcpsock:receive]]).

After <code>ngx.thread.spawn</code> returns, the newly-created "light thread" will keep running asynchronously, usually resumed upon various I/O events.

All the Lua code chunks run by [[#rewrite_by_lua|rewrite_by_lua]], [[#access_by_lua|access_by_lua]], and [[#content_by_lua|content_by_lua]] are in a boilerplate "light thread" created automatically by ngx_lua. Such boilerplate "light threads" are also called "entry threads".

By default, the corresponding Nginx handler (e.g., [[#rewrite_by_lua|rewrite_by_lua]] handler) will not terminate until

# both the "entry thread" and all the user "light threads" terminate,

# a "light thread" (either the "entry thread" or a user "light thread") aborts by calling [[#ngx.exit|ngx.exit]], [[#ngx.exec|ngx.exec]], [[#ngx.redirect|ngx.redirect]], or [[#ngx.req.set_uri|ngx.req.set_uri(uri, true)]], or

# the "entry thread" terminates with a Lua error.

When the user "light thread" terminates with a Lua error, however, it will not abort other running "light threads" like the "entry thread" does.

Due to a limitation in the Nginx subrequest model, it is generally not allowed to abort a running Nginx subrequest. So it is also prohibited to abort a running "light thread" that is pending on one or more Nginx subrequests. You must call [[#ngx.thread.wait|ngx.thread.wait]] to wait for those "light threads" to terminate before quitting the "world".

The "light threads" are not scheduled in a pre-emptive way. In other words, no time-slicing is performed automatically. A "light thread" will keep running exclusively on the CPU until

# a (nonblocking) I/O operation cannot be completed in a single run,

# it calls [[#coroutine.yield|coroutine.yield]] to actively give up execution, or

# it is aborted by a Lua error or an invocation of [[#ngx.exit|ngx.exit]], [[#ngx.exec|ngx.exec]], [[#ngx.redirect|ngx.redirect]], or [[#ngx.req.set_uri|ngx.req.set_uri(uri, true)]].

For the first two cases, the "light thread" will usually be resumed later by the ngx_lua scheduler unless a "stop-the-world" event happens.

User "light threads" can create "light threads" themselves, and normal user coroutines created by [[#coroutine.create|coroutine.create]] can also create "light threads". The coroutine (be it a normal Lua coroutine or a "light thread") that directly spawns a "light thread" is called the "parent coroutine" of the newly spawned "light thread".

The "parent coroutine" can call [[#ngx.thread.wait|ngx.thread.wait]] to wait on the termination of its child "light thread".

You can call <code>coroutine.status()</code> and <code>coroutine.yield()</code> on the "light thread" coroutines.

The status of the "light thread" coroutine can be "zombie" if

# the current "light thread" has already terminated (either successfully or with an error),

# its parent coroutine is still alive, and

# its parent coroutine is not waiting on it with [[#ngx.thread.wait|ngx.thread.wait]].

The following example demonstrates the use of <code>coroutine.yield()</code> in the "light thread" coroutines to do manual time-slicing:

<geshi lang="lua">

local yield = coroutine.yield

function f()
    local self = coroutine.running()
    ngx.say("f 1")
    yield(self)
    ngx.say("f 2")
    yield(self)
    ngx.say("f 3")
end

local self = coroutine.running()

ngx.say("0")

yield(self)

ngx.say("1")

ngx.thread.spawn(f)

ngx.say("2")

yield(self)

ngx.say("3")

yield(self)

ngx.say("4")

</geshi>

Then it will generate the output

<geshi lang="text">

0

1

f 1

2

f 2

3

f 3

4

</geshi>

"Light threads" are mostly useful for doing concurrent upstream requests in a single Nginx request handler, somewhat like a generalized version of [[#ngx.location.capture_multi|ngx.location.capture_multi]] that can work with all of the [[#Nginx API for Lua|Nginx API for Lua]]. The following example demonstrates parallel requests to MySQL, Memcached, and upstream HTTP services in a single Lua handler, outputting the results in the order that they actually return (very much like the Facebook BigPipe model):

<geshi lang="lua">
-- query mysql, memcached, and a remote http service at the same time,
-- and output the results in the order that they actually return.
-- (The backend connection details are omitted in this sketch; see the
-- lua-resty-mysql and lua-resty-memcached libraries for complete
-- cosocket-based client usage.)

local function query_mysql()
    -- talk to MySQL here, e.g. via the lua-resty-mysql client
    ngx.say("mysql done")
end

local function query_memcached()
    -- talk to Memcached here, e.g. via the lua-resty-memcached client
    ngx.say("memcached done")
end

local function query_http()
    -- issue a subrequest to an upstream HTTP service
    local res = ngx.location.capture("/some_http_backend")
    ngx.say("http done: status ", res.status)
end

ngx.thread.spawn(query_mysql)      -- create thread 1
ngx.thread.spawn(query_memcached)  -- create thread 2
ngx.thread.spawn(query_http)       -- create thread 3
</geshi>

== ngx.thread.wait ==

'''syntax:''' ''ok, res1, res2, ... = ngx.thread.wait(thread1, thread2, ...)''

Waits on one or more child "light threads" and returns the results of the first "light thread" that terminates (either successfully or with an error).

The arguments <code>thread1</code>, <code>thread2</code>, and so on are the Lua thread objects returned by earlier calls to [[#ngx.thread.spawn|ngx.thread.spawn]].

The return values have exactly the same meaning as those of [[#coroutine.resume|coroutine.resume]]: the first value is a boolean indicating whether the "light thread" terminated successfully, and the subsequent values are the return values of the user Lua function used to spawn the "light thread" (in case of success) or the error object (in case of failure).

Only the direct "parent coroutine" can wait on its child "light thread", otherwise a Lua exception will be raised.

The following example demonstrates the use of <code>ngx.thread.wait</code> and [[#ngx.location.capture|ngx.location.capture]] to emulate [[#ngx.location.capture_multi|ngx.location.capture_multi]]:
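A sketch of such an emulation might look like the following (the locations <code>/foo</code>, <code>/bar</code>, and <code>/baz</code> are placeholders):

<geshi lang="lua">
-- capture three subrequests concurrently and then collect the
-- results in request order, emulating ngx.location.capture_multi
local capture = ngx.location.capture
local spawn = ngx.thread.spawn
local wait = ngx.thread.wait
local say = ngx.say

local function fetch(uri)
    return capture(uri)
end

-- spawn one "light thread" per subrequest; each starts running
-- immediately and yields when its subrequest blocks on I/O
local threads = {
    spawn(fetch, "/foo"),
    spawn(fetch, "/bar"),
    spawn(fetch, "/baz"),
}

for i = 1, #threads do
    local ok, res = wait(threads[i])
    if not ok then
        say(i, ": failed to run: ", res)
    else
        say(i, ": status: ", res.status)
        say(i, ": body: ", res.body)
    end
end
</geshi>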

= Lua/LuaJIT bytecode support =

As of the <code>v0.5.0rc32</code> release, all <code>*_by_lua_file</code> configuration directives (such as [[#content_by_lua_file|content_by_lua_file]]) support loading Lua 5.1 and LuaJIT 2.0 raw bytecode files directly.

Please note that the bytecode format used by LuaJIT 2.0 is not compatible with that used by the standard Lua 5.1 interpreter. So if using LuaJIT 2.0 with ngx_lua, LuaJIT compatible bytecode files must be generated as shown:
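For example, assuming a LuaJIT 2.0 installation under <code>/path/to/luajit</code>, the <code>-b</code> option of the <code>luajit</code> binary saves bytecode to a file:

<geshi lang="bash">
/path/to/luajit/bin/luajit -b /path/to/input_file.lua /path/to/output_file.luac
</geshi>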

Please refer to the official LuaJIT documentation on the <code>-b</code> option for more details:

http://luajit.org/running.html#opt_b

Similarly, if using the standard Lua 5.1 interpreter with ngx_lua, Lua compatible bytecode files must be generated using the <code>luac</code> commandline utility as shown:

<geshi lang="bash">

luac -o /path/to/output_file.luac /path/to/input_file.lua

</geshi>

Unlike LuaJIT, the standard Lua 5.1 interpreter includes debug information in its bytecode files by default. This can be stripped out by specifying the <code>-s</code> option as shown:

<geshi lang="bash">

luac -s -o /path/to/output_file.luac /path/to/input_file.lua

</geshi>

Attempting to load standard Lua 5.1 bytecode files into ngx_lua instances linked against LuaJIT 2.0, or vice versa, will result in an error message being logged in the Nginx <code>error.log</code> file.

Loading bytecode files via the Lua primitives like <code>require</code> and <code>dofile</code> should always work as expected.

= HTTP 1.0 support =

The HTTP 1.0 protocol does not support chunked output and requires an explicit <code>Content-Length</code> header when the response body is not empty, in order to support HTTP 1.0 keep-alive.

So when an HTTP 1.0 request is made and the [[#lua_http10_buffering|lua_http10_buffering]] directive is turned <code>on</code>, ngx_lua will buffer the output of [[#ngx.say|ngx.say]] and [[#ngx.print|ngx.print]] calls and postpone sending response headers until all of the response body output has been received. At that point, ngx_lua can calculate the total length of the body and construct a proper <code>Content-Length</code> header to return to the HTTP 1.0 client.

If the <code>Content-Length</code> response header is set in the running Lua code, however, this buffering will be disabled even if the [[#lua_http10_buffering|lua_http10_buffering]] directive is turned <code>on</code>.

For large streaming output responses, it is important to disable the [[#lua_http10_buffering|lua_http10_buffering]] directive to minimise memory usage.
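As an illustrative sketch (the <code>/stream</code> location name and the loop below are hypothetical), a streaming endpoint might disable the buffering explicitly:

<geshi lang="nginx">
location /stream {
    # disable HTTP 1.0 response buffering for streaming output
    lua_http10_buffering off;

    content_by_lua '
        for i = 1, 10 do
            ngx.say("chunk ", i)
            ngx.flush(true)  -- flush output to the client immediately
        end
    ';
}
</geshi>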

Note that common HTTP benchmark tools such as <code>ab</code> and <code>http_load</code> issue HTTP 1.0 requests by default.

To force <code>curl</code> to send HTTP 1.0 requests, use the <code>-0</code> option.
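For example, assuming a local server listening on port 8080:

<geshi lang="bash">
# ab issues HTTP 1.0 requests by default:
ab -n 1000 -c 10 http://localhost:8080/lua

# force curl to issue an HTTP 1.0 request:
curl -0 http://localhost:8080/lua
</geshi>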

= Data Sharing within an Nginx Worker =

To globally share data among all the requests handled by the same nginx worker process, encapsulate the shared data into a Lua module, use the Lua <code>require</code> builtin to import the module, and then manipulate the shared data in Lua. This works because required Lua modules are loaded only once and all coroutines will share the same copy of the module. Note however that Lua global variables WILL NOT persist between requests because of the one-coroutine-per-request isolation design.

Here is a complete small example:

<geshi lang="lua">
-- mydata.lua
module("mydata", package.seeall)

local data = {
    dog = 3,
    cat = 4,
    pig = 5,
}

function get_age(name)
    return data[name]
end
</geshi>

and then accessing it from <code>nginx.conf</code>:

<geshi lang="nginx">
location /lua {
    content_by_lua '
        local mydata = require("mydata")
        ngx.say(mydata.get_age("dog"))
    ';
}
</geshi>

The <code>mydata</code> module in this example will only be loaded and run on the first request to the location <code>/lua</code>. All subsequent requests served by the same nginx worker process will use the already loaded instance of the module, along with the same copy of the data in it, until a <code>HUP</code> signal is sent to the Nginx master process to force a reload.

This data sharing technique is essential for high performance Lua applications based on this module.

Note that this data sharing is on a ''per-worker'' basis and not on a ''per-server'' basis. That is, when there are multiple nginx worker processes under an Nginx master, data sharing cannot cross the process boundary between these workers.

If server-wide data sharing is required, then use one or more of the following approaches:

# Use the [[#ngx.shared.DICT|ngx.shared.DICT]] API provided by this module.

# Use only a single nginx worker and a single server (this is, however, not recommended on machines with multiple CPU cores or multiple CPUs).

# Use data storage mechanisms such as <code>memcached</code>, <code>redis</code>, <code>MySQL</code> or