Either use KatipT or KatipContextT for a pre-built
transformer stack or add Katip and KatipContext instances to
your own transformer stack. If you do the latter, you may want to
look in the examples dir for some tips on composing contexts and
namespaces.
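If you take the second route, a minimal Katip instance for a hypothetical ReaderT-based application monad might look like this sketch (the App type and its layout are illustrative, not part of katip):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader
import Katip

-- A hypothetical application monad that carries its LogEnv in a Reader.
newtype App a = App { unApp :: ReaderT LogEnv IO a }
  deriving (Functor, Applicative, Monad, MonadIO)

-- Katip only needs a way to read and locally modify the LogEnv.
instance Katip App where
  getLogEnv = App ask
  localLogEnv f (App m) = App (local f m)
```

A KatipContext instance additionally requires getKatipContext/getKatipNamespace (and their local variants), which you would typically store in the Reader environment alongside the LogEnv.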

Define some structured log data throughout your application and
implement ToObject and LogItem for them.

Framework Types

Represents a hierarchy of namespaces going from general to
specific. For instance: ["processname", "subsystem"]. Note that
single-segment namespaces can be created using
IsString/OverloadedStrings, so "foo" will result in Namespace
["foo"].
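For example (a small sketch; Namespace also has a Monoid instance for composing segments):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Katip (Namespace (..))

-- A single segment via OverloadedStrings
single :: Namespace
single = "foo"                       -- Namespace ["foo"]

-- Segments compose general-to-specific with (<>)
full :: Namespace
full = "processname" <> "subsystem"  -- Namespace ["processname", "subsystem"]
```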

Verbosity controls the amount of information (columns) a Scribe
emits during logging.

The convention is:
- V0 implies no additional payload information is included in the message.
- V3 implies the maximum amount of payload information.
- Anything in between is left to the discretion of the developer.

Katip requires JSON objects to be logged as context. This
typeclass provides a default instance which uses ToJSON and
produces an empty object if toJSON results in any type other than
object. If you have a type you want to log that produces an Array
or Number for example, you'll want to write an explicit instance
here. You can trivially add a ToObject instance for something with
a ToJSON instance like:
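For example, for a hypothetical type whose ToJSON instance already serializes to a JSON object, an empty instance body picks up the ToJSON-based default:

```haskell
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson (ToJSON)
import GHC.Generics (Generic)
import Katip (ToObject)

-- A hypothetical payload type whose toJSON produces a JSON object
data ConnectionInfo = ConnectionInfo
  { connHost :: String
  , connPort :: Int
  } deriving (Generic)

instance ToJSON ConnectionInfo

-- Uses the default implementation, which reuses toJSON
instance ToObject ConnectionInfo
```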

Payload objects need instances of this class. LogItem makes it so
that you can have very verbose items getting logged with lots of
extra fields but under normal circumstances, if your scribe is
configured for a lower verbosity level, it will only log a
selection of those keys. Furthermore, each Scribe can be
configured with a different Verbosity level. You could even use
registerScribe, unregisterScribe, and clearScribes to swap out
your existing scribes at runtime for more verbose debugging
scribes if you wanted to.

When defining payloadKeys, don't redundantly declare the same
keys for higher levels of verbosity. Each level of verbosity
automatically and recursively contains all keys from the level
before it.
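As a sketch, using a hypothetical ConnectionInfo payload: V0 keeps only the host, and since each higher verbosity inherits the keys below it, only the broader selection needs declaring:

```haskell
{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (ToJSON)
import GHC.Generics (Generic)
import Katip

-- Hypothetical payload type
data ConnectionInfo = ConnectionInfo
  { connHost :: String
  , connPort :: Int
  } deriving (Generic)

instance ToJSON ConnectionInfo
instance ToObject ConnectionInfo

-- V0 logs just the host; V1 and above log everything. There is no
-- need to restate "connHost" at the higher levels.
instance LogItem ConnectionInfo where
  payloadKeys V0 _ = SomeKeys ["connHost"]
  payloadKeys _  _ = AllKeys
```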

Severity is used to *exclude log messages* that are < the provided
Severity. For instance, if the user passes InfoS, DebugS items
should be ignored. Katip provides the permitItem utility for this.

Verbosity is used to select keys from the log item's payload. Each
LogItem instance describes what keys should be retained for each
Verbosity level. Use the payloadObject utility for extracting the keys
that should be permitted.
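Putting both together, a custom scribe might look like this minimal sketch (the record field names assume katip >= 0.8's Scribe shape; earlier versions differ):

```haskell
import Data.Aeson (encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import Katip

-- A sketch of a stdout scribe: permitItem implements the Severity
-- cutoff, and payloadObject applies the LogItem-driven key selection.
mkSketchScribe :: Severity -> Verbosity -> IO Scribe
mkSketchScribe sev verb = pure Scribe
  { liPush = \item ->
      BL.putStrLn (encode (payloadObject verb (_itemPayload item)))
  , scribeFinalizer  = pure ()
  , scribePermitItem = permitItem sev
  }
```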

There is no built-in mechanism in katip for telling a scribe that
it's time to shut down. unregisterScribe merely drops it from the
LogEnv. This means there are 2 ways to handle resources as a scribe:

Pass in the resource when the scribe is created. Handle
allocation and release of the resource elsewhere. This is what the
Handle scribe does.

Return a finalizing function that tells the scribe to shut
down. katip-elasticsearch's mkEsScribe returns an IO (Scribe,
IO ()). The finalizer will flush any queued log messages and shut
down gracefully before returning. This can be hooked into your
application's shutdown routine to ensure you never miss any log
messages on shutdown.
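The first pattern might look like this sketch (the PermitFunc argument to mkHandleScribe assumes katip >= 0.8):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Exception (bracket)
import Katip
import System.IO

-- The Handle is allocated and released outside of the scribe; the
-- scribe merely borrows it, as with the built-in Handle scribe.
main :: IO ()
main =
  bracket (openFile "app.log" AppendMode) hClose $ \h -> do
    scribe <- mkHandleScribe (ColorLog False) h (permitItem InfoS) V2
    le <- registerScribe "file" scribe defaultScribeSettings
            =<< initLogEnv "MyApp" "production"
    runKatipT le $ logMsg "main" InfoS "logging to a borrowed handle"
```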

Name of application. This will typically never change. This
field gets prepended to the namespace of your individual log
messages. For example, if your app is MyApp and you write a log
using "logItem" and the namespace WebServer, the final
namespace will be MyApp.WebServer

Action to fetch the timestamp. You can use something like
AutoUpdate for high volume logs but note that this may cause
some output forms to display logs out of order. Alternatively,
you could just use getCurrentTime.

Initializing Loggers

Create a reasonable default InitLogEnv. Uses an AutoUpdate with
the default settings as the timer. If you are concerned about
timestamp precision or event ordering in log outputs like
ElasticSearch, you should replace the timer with getCurrentTime.
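For instance (a sketch assuming the LogEnv record fields are in scope; katip also exports a logEnvTimer lens for the same purpose):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Time (getCurrentTime)
import Katip

-- Trade AutoUpdate's reduced syscall overhead for exact, ordered
-- timestamps by swapping the timer after initLogEnv.
mkPreciseLogEnv :: IO LogEnv
mkPreciseLogEnv = do
  le <- initLogEnv "MyApp" "production"
  pure le { _logEnvTimer = getCurrentTime }
```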

Dropping scribes temporarily

Remove a scribe from the environment. This does *not* finalize
the scribe. This is mainly useful with something like
MonadReader's local function to temporarily disavow a single
logger for a block of code.

Unregister *all* scribes. Note that this is *not* for closing or
finalizing scribes; use closeScribes for that. This is mainly
useful with something like MonadReader's local function to
temporarily disavow any loggers for a block of code.
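For example, combining clearScribes with localLogEnv yields a scoped "mute" combinator, which is essentially what katipNoLogging does:

```haskell
import Katip

-- Drop every scribe for the duration of one action; the original
-- LogEnv is restored when the block completes.
quietly :: Katip m => m a -> m a
quietly = localLogEnv clearScribes
```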

Finalizing scribes at shutdown

Call this at the end of your program. This is a blocking call
that stops writing to each scribe's queue, waits for the queues to
empty, finalizes each scribe in the log environment, and then
removes it. Finalizers are all run even if one of them throws, but
the exception will be re-thrown at the end.
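A typical shutdown-safe setup brackets the program body between scribe registration and closeScribes (a sketch assuming katip >= 0.8 signatures):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Exception (bracket)
import Katip
import System.IO (stdout)

main :: IO ()
main = do
  scribe <- mkHandleScribe ColorIfTerminal stdout (permitItem InfoS) V2
  let mkLogEnv = registerScribe "stdout" scribe defaultScribeSettings
                   =<< initLogEnv "MyApp" "production"
  -- closeScribes runs even if the body throws, so queued messages
  -- are flushed before exit.
  bracket mkLogEnv closeScribes $ \le ->
    runKatipT le $ logMsg "main" InfoS "shutting down cleanly"
```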

Finalize a scribe. The scribe is removed from the environment,
its finalizer is called, and it can never be written to again. Note
that this will rethrow any exceptions your finalizer throws, and
that since LogEnv is immutable, the scribe will not be removed in
that case.

These logging functions use the basic Katip constraint and thus
will require varying degrees of explicit detail such as Namespace
and individual log items to be passed in. These can be described as
the primitives of Katip logging. If you find yourself making multiple
log statements within a logical logging context for your app, you may
want to look into the KatipContext family of logging functions like
logFM and logTM. KatipContext in most applications should be
considered the default. Here's an example of the pain point:
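A sketch of the repetition this causes (the connection id and the sl-built payload here are illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Katip

doDatabaseThings :: Katip m => m ()
doDatabaseThings = do
  let ctx = sl "conn_id" (42 :: Int)  -- hypothetical connection id
  -- Every call site must restate the context and the namespace:
  logF ctx "database" InfoS "connecting"
  logF ctx "database" InfoS "running query"
  logF ctx "database" InfoS "done"
```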

Another pain point to look out for is nesting actions that log in
each other. Let's say you were writing a web app. You want to capture
some detail such as the user's ID in the logs, but you also want that
info to show up in doDatabaseThings' logs so you can associate those
two pieces of information:
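A sketch of that situation (the user id, connection id, and payloads are illustrative stand-ins):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Katip

routeRequest :: Katip m => m ()
routeRequest = do
  let userId = 7 :: Int  -- stand-in for a real lookup
  logF (sl "user_id" userId) "web" InfoS "routing request"
  -- doDatabaseThings logs under its own context and namespace, so
  -- user_id does not appear in its output:
  doDatabaseThings
  where
    doDatabaseThings =
      logF (sl "conn_id" (42 :: Int)) "database" InfoS "querying"
```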

In the above example, doDatabaseThings would replace the caller's
user-ID context with its own context and namespace. Sometimes this is
what you want, and that's why logF and other functions which only
require Katip exist. If you are interested in combining log
contexts and namespaces, see KatipContext.

Monads where katip logging actions can be performed. Katip is the
most basic logging monad. You will typically use this directly if
you either don't want to use namespaces/contexts heavily or if you
want to pass in specific contexts and/or namespaces at each log site.

For something more powerful, look at the docs for KatipContext,
which keeps a namespace and merged context. You can write simple
functions that add additional namespacing and merges additional
context on the fly.

localLogEnv was added to allow for lexically-scoped modifications
of the log env that are reverted when the supplied monad
completes. katipNoLogging, for example, uses this to temporarily
pause log outputs.

These logging functions use the KatipContext constraint which is a
superclass of Katip that also has a mechanism for keeping track of
the current context and namespace. This means a few things:

Functions that use KatipContext like logFM and logTM do not
require you to pass in LogItems or Namespaces; they pull them from
the monadic environment.

It becomes easy to add functions which add namespaces and/or
contexts to the current stack of them. You can (and should) make that
action scoped to a monadic action so that when it finishes, the
previous context and namespace will be automatically restored.

A monadic context that has an inherent way to get logging context
and namespace. Examples include a web application monad or database
monad. The local variants are just like local from Reader and
indeed you can easily implement them with local if you happen to
be using a Reader in your monad. These give us katipAddNamespace
and katipAddContext that work with *any* KatipContext, as
opposed to making users have to implement these functions on their
own in each app.
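For instance, a sketch of scoped layering using logFM (the application names and payloads are hypothetical):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Katip

handleRequest :: KatipContext m => Int -> m ()
handleRequest userId =
  katipAddNamespace "web" $
    katipAddContext (sl "user_id" userId) $ do
      logFM InfoS "routing request"  -- logged with user_id, under ...web
      doDatabaseThings               -- inherits both, and adds its own
  where
    doDatabaseThings =
      katipAddNamespace "database" $
        katipAddContext (sl "conn_id" (42 :: Int)) $
          logFM InfoS "querying"     -- carries both user_id and conn_id
```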

Heterogeneous list of log contexts that provides a smart
LogContext instance for combining multiple payload policies. This
is critical for log contexts deep down in a stack to be able to
inject their own context without worrying about other context that
has already been set. Also note that contexts are treated as a
sequence and <> will be appended to the right hand side of the
sequence. If there are conflicting keys in the contexts, the *right
side will take precedence*, which is counter to how monoid works
for Map and HashMap, so bear that in mind. The reasoning is
that if the user is sequentially adding contexts to the right
side of the sequence, on conflict the intent is to overwrite with
the newer value (i.e. the rightmost value).

Additional note: you should not mappend LogContexts in any sort of
infinite loop, as it retains all data, so that would be a memory
leak.
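A sketch of the right-bias on conflicting keys:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Katip

older, newer, merged :: LogContexts
older = liftPayload (sl "user_id" (1 :: Int))
newer = liftPayload (sl "user_id" (2 :: Int) <> sl "request_id" ("abc" :: String))

-- The right-hand side wins on conflict: merged carries
-- user_id = 2 along with request_id = "abc".
merged = older <> newer
```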

Append some context to the current context for the given monadic
action, then restore the previous state afterwards. Important note:
be careful using this in a loop. If you're using something like
forever or replicateM_ that does explicit sharing to avoid a
memory leak, you'll be fine as it will *sequence* calls to
katipAddContext, so each loop will get the same context
added. If you instead roll your own recursion and you're recursing
in the action you provide, you'll instead accumulate tons of
redundant contexts and even if they all merge on log, they are
stored in a sequence and will leak memory. Works with anything
implementing KatipContext.

A specialization of mkHandleScribe that takes a FilePath
instead of a Handle. It is responsible for opening the file in
AppendMode and will close the file handle on
'closeScribe'/'closeScribes'. Does not do log coloring. Sets handle
to LineBuffering mode.
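Usage sketch (the PermitFunc argument assumes katip >= 0.8):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Monad (void)
import Katip

main :: IO ()
main = do
  scribe <- mkFileScribe "app.log" (permitItem DebugS) V2
  le <- registerScribe "file" scribe defaultScribeSettings
          =<< initLogEnv "MyApp" "development"
  runKatipT le $ logMsg "main" DebugS "written to app.log"
  -- Finalizing closes the file handle that mkFileScribe opened.
  void $ closeScribes le
```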

Provides a simple transformer that defines a KatipContext
instance for a fixed namespace and context. Just like KatipT, you
should use this if you prefer an explicit transformer stack and
don't want to (or cannot) define KatipContext for your monad.
This is the slightly more powerful version of KatipT in that it
provides KatipContext instead of just Katip. For instance:
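A sketch of running a program in KatipContextT, using mempty :: LogContexts as the empty initial context:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Monad (void)
import Katip
import System.IO (stdout)

main :: IO ()
main = do
  scribe <- mkHandleScribe ColorIfTerminal stdout (permitItem InfoS) V2
  le <- registerScribe "stdout" scribe defaultScribeSettings
          =<< initLogEnv "MyApp" "development"
  runKatipContextT le (mempty :: LogContexts) "main" $ do
    logFM InfoS "hello"                      -- namespace MyApp.main
    katipAddNamespace "worker" $
      katipAddContext (sl "job_id" (1 :: Int)) $
        logFM InfoS "working"                -- MyApp.main.worker, job_id = 1
  void $ closeScribes le
```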