The configuration of query servers is done via environment variables.
Calling os:getenv every time we need a new query process is expensive.
Instead, we extract all configured query servers from the environment on
`couch_proc_manager` startup and cache them in an ETS table.
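The scan-once-and-cache idea can be sketched as follows. This is an illustrative Python sketch, not the Erlang implementation; it assumes the `COUCHDB_QUERY_SERVER_*` naming convention described later in this changelog.

```python
# Minimal sketch of the startup scan described above. The real code lives
# in couch_proc_manager (Erlang) and caches the result in an ETS table.
PREFIX = "COUCHDB_QUERY_SERVER_"

def scan_query_servers(environ):
    """One pass over the environment: lowercase language -> command line."""
    return {
        name[len(PREFIX):].lower(): cmd
        for name, cmd in environ.items()
        if name.startswith(PREFIX)
    }
```

Spawning a new query process then becomes a cheap cache lookup instead of an environment scan.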

Previously the compaction daemon looked for design docs in each shard file.
This worked well for versions < 2.x; however, for clustered databases design
documents will only be found in their respective shards based on the document
id hashing algorithm. This meant that in a default setup of Q=8 only the views
of the one shard range where the _design document lives would be compacted.

The fix for this issue is to use fabric to retrieve all the design documents
for a clustered database.

Force the couch_replicator_auth_session plugin to refresh the session
periodically. Normally this is not needed, as the session would be refreshed
when requests start failing with 401 (authentication) or 403 (authorization)
errors. However, in some cases, when anonymous writes are allowed to the
database and a VDU function is used to forbid writes based on the
authenticated username, requests with an expired session cookie will not fail
with a 401 and the session will not be refreshed.

The issue is fixed using these two approaches:

1. Use the cookie's max-age expiry time to schedule a refresh. To ensure that
time is provided in the cookie, switch the option to enable it by default. This
handles the issue for endpoints which are updated with this commit.

2. For endpoints which do not put a max-age time in the cookie, use a value
that's less than CouchDB's default auth timeout. If users changed their
auth timeout value, use VDUs in the pattern described above, and don't
update their endpoints to a version which sends max-age by default, they can
adjust `[replicator] session_refresh_interval_sec` to their auth timeout minus
some small delay.

Of course, refresh based on auth/authz failures still works as before.
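The two approaches above can be sketched as follows. This is an illustrative sketch, not the replicator's actual code; the constants are assumptions (chosen to sit just under a 600-second auth timeout), and `next_refresh_sec` is a hypothetical name.

```python
# Illustrative sketch of the session refresh scheduling described above.
# Constants are assumptions, not CouchDB's actual values.
DEFAULT_REFRESH_INTERVAL_SEC = 550  # assumed: just under a 600s auth timeout
MIN_UPDATE_INTERVAL_SEC = 5         # small delay so we refresh before expiry

def next_refresh_sec(max_age_sec=None):
    """Seconds until the next session refresh.

    Approach 1: schedule from the cookie's max-age when the endpoint
    provides it. Approach 2: otherwise fall back to a value below the
    default auth timeout (tunable via
    [replicator] session_refresh_interval_sec).
    """
    if max_age_sec is None:
        return DEFAULT_REFRESH_INTERVAL_SEC
    return max(max_age_sec - MIN_UPDATE_INTERVAL_SEC, 1)
```

Either way the session is refreshed proactively, so an expired cookie never silently lingers behind a permissive VDU.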

Previously there were quite a few problems with load_validation_funs
when a clustered database is deleted.

- the calls to load_validation_funs were failing with an `internal_server` error [1]
- the deleted database stayed open because:
  - the caller of load_validation_funs (update_doc) stayed alive
  - the main_pid of the deleted database wasn't killed either
- there was an infinite loop in ddoc_cache_entry trying to recover the ddoc
  from the deleted database

The solution is:
- do not call `recover` for a deleted database
- close the `main_pid`
- use `erlang:error` to crash the caller

Unfortunately, os:timestamp() on Windows only has millisecond
accuracy. That means that the two Mango tests checking for a positive
value of execution_time fail, since these tests appear to run
in <1ms on my test setup (a rather anemic Windows VM!).

This change disables only the check for execution_time in two tests,
and leaves the remainder of the execution_stats checks in place
on Windows.

It also introduces a convenience "make.cmd" file so you can
"make check" without typing "make -f Makefile.win check" all the time.

If configured, CouchDB will add every node in the seedlist to the _nodes
DB automatically, which will trigger a distributed Erlang connection and
a replication of the internal system databases to the local node. This
eliminates the need to explicitly add each node using the HTTP API.
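For example, a seedlist might be configured in the ini file like this (node names are illustrative, and the exact setting name should be checked against the configuration docs):

```
[cluster]
seedlist = couchdb@node1.example.com,couchdb@node2.example.com
```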

We also modify the /_up endpoint to reflect the progress of the initial seeding
of the node. If a seedlist is configured the endpoint will return 404 until the
local node has updated its local replica of each of the system databases from
one of the members of the seedlist. Once the status flips to "ok" the endpoint
will return 200 and it's safe to direct requests to the new node.

Previously, stats counts were reset between job runs. So if a job was stopped
and restarted by the scheduler, its docs_written, docs_read, doc_write_failures,
etc., counts would go back to 0. For doc_write_failures this was especially bad,
as it hid the fact that some documents were not replicated to the target
because either a VDU failed or one of the limits was hit.

This change preserves stats across job runs. Every time active tasks are
updated, the stats object in the rep record of each job in the scheduler's ETS
table is updated asynchronously. On the next start, the job will reinitialize
from the last saved stats.
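The save-and-reinitialize idea can be sketched as follows. This is a hypothetical Python sketch with illustrative names; the real implementation keeps the stats object in the rep record in the scheduler's ETS table.

```python
from collections import Counter

# Sketch of the fix: carry stats across job runs instead of resetting to 0.
class ReplicationJob:
    def __init__(self, saved_stats=None):
        # On restart, reinitialize from the last saved stats, not from zero.
        self.stats = Counter(saved_stats or {})

    def record(self, **deltas):
        # e.g. record(docs_read=10, docs_written=9, doc_write_failures=1)
        self.stats.update(deltas)

    def snapshot(self):
        # What the scheduler would save before the job is stopped.
        return dict(self.stats)
```

A doc_write_failures count accumulated in one run is therefore still visible after the scheduler restarts the job.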

The last segment in the environment variable matches the (always
lowercase!) query language in the design doc `language` field.

Multiple query servers can be configured by using more environment
variables.
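For example (paths illustrative), two query servers could be configured like this:

```
COUCHDB_QUERY_SERVER_JAVASCRIPT="/path/to/couchjs /path/to/main.js"
COUCHDB_QUERY_SERVER_COFFEESCRIPT="/path/to/couchjs /path/to/main-coffee.js"
couchdb
```

Here the `JAVASCRIPT` segment maps to design docs with `language = "javascript"`.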

Native Query Servers

The Mango query server continues to be enabled by default. The Erlang
query server continues to be disabled by default. This patch adds
a `[native_query_servers] enable_erlang_query_server = BOOL` setting
(defaults to `"false"`) to enable the Erlang query server.

If the legacy configuration for enabling the query server is detected,
that is counted as a `true` setting as well, so existing configurations
continue to work just fine.
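Enabling the Erlang query server with the new setting then looks like this in the ini file:

```
[native_query_servers]
enable_erlang_query_server = true
```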

Windows

Since the `./configure`-time `PREFIX` is set during
`make release`, I had to adapt the `couchdb` and `couchdb.cmd` scripts
to have the correct env vars set and the `PREFIX` replaced there.

I did this to the best of my abilities and research, but this needs
review from the Windows team (Hi Joan! :).

OS Daemons

Although deprecated in 2.2.0, we’re keeping support for this until 3.x,
but the configuration changes analogously to query servers.

Previously, configuration looked like this:

```
[os_daemons]
name = /path/to/daemon with args
```

With this patch, setup looks like this:

```
COUCHDB_OS_DAEMON_NAME="/path/to/daemon with args"
couchdb
```

Multiple OS Daemons can be started with multiple env vars. The final
segment in the env var, lowercased(!), becomes the daemon identifier
inside CouchDB.
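For example (daemon names and paths illustrative):

```
COUCHDB_OS_DAEMON_FOO="/path/to/foo"
COUCHDB_OS_DAEMON_BAR="/path/to/bar with args"
couchdb
```

This registers two daemons with identifiers `foo` and `bar`.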