The collector collects trace events and keeps them ordered by their
timestamp. The timestamp may either reflect the time when the
actual trace data was generated (trace_ts) or the time when the
trace data was transformed into an event record (event_ts). If the
timestamp is missing from the trace data (the timestamp option to
erlang:trace/4 was not used), trace_ts is set to event_ts.

Events are reported to the collector directly with the report
function or indirectly via one or more trace clients. All reported
events are first filtered through the collector filter before they
are stored by the collector. By replacing the default collector
filter with a customized one, it is possible to allow any trace
data as input. The collector filter is a dictionary entry with the
predefined key {filter, collector}, whose value is a fun of
arity 1. See et_selector:make_event/1 for interface details,
such as which erlang:trace/1 tuples are accepted.
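
A custom filter can thus be installed by overwriting the
{filter, collector} dictionary entry. The following is a minimal
sketch, assuming the filter fun follows the et_selector:make_event/1
convention of returning false, true or {true, NewEvent}, and that
the collector is already started; the function name
install_label_filter is only an illustration.

    %% The #event{} record is defined in et/include/et.hrl.
    -include_lib("et/include/et.hrl").

    install_label_filter(CollectorPid, DroppedLabel) ->
        Filter = fun(E = #event{}) ->
                         case E#event.label of
                             DroppedLabel -> false; %% discard this event
                             _            -> true   %% keep it unchanged
                         end
                 end,
        ok = et_collector:dict_insert(CollectorPid,
                                      {filter, collector},
                                      Filter).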

The collector has a built-in dictionary service. Any term may be
stored as a value in the dictionary, bound to a unique key. When a
new value is inserted with an existing key, the new value
overwrites the existing one. Processes may subscribe to dictionary
updates by using {subscriber, pid()} as dictionary key. All
dictionary updates are propagated to the subscriber processes
matching the pattern {{subscriber, '_'}, '_'}, where the first '_'
is interpreted as a pid().
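
For example, a process can register itself as a subscriber and
handle the update messages in a plain receive loop. A minimal
sketch, assuming the message formats described above (the value
void is arbitrary, since the value of a subscriber entry is not
used):

    subscribe(CollectorPid) ->
        ok = et_collector:dict_insert(CollectorPid,
                                      {subscriber, self()}, void),
        update_loop().

    update_loop() ->
        receive
            {et, {dict_insert, Key, Val}} ->
                io:format("inserted ~p -> ~p~n", [Key, Val]),
                update_loop();
            {et, {dict_delete, Key}} ->
                io:format("deleted ~p~n", [Key]),
                update_loop()
        end.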

In global trace mode, the collector will automatically
start tracing on all connected Erlang nodes. When a node
connects, a port tracer will be started on that node and a
corresponding trace client on the collector node.
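
A minimal sketch of starting such a collector, assuming that
et_collector:start_link/1 accepts the trace_global and
trace_pattern options (check the option list of your et version):

    %% Register the collector globally and start tracing on all
    %% connected nodes. The trace pattern decides which trace
    %% messages et_selector lets through.
    {ok, CollectorPid} =
        et_collector:start_link([{trace_global, true},
                                 {trace_pattern, {et, max}}]).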

All events are filtered through the collector filter, which may
optionally transform or discard the event. The first
report_event/report call should use the pid of the collector
process as report handle, while subsequent calls should use the
returned table handle.
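
A minimal sketch of this calling pattern, assuming
et_collector:report_event/6 returns {ok, NewHandle} (the detail
levels, labels and process names below are only examples):

    %% First call: use the collector pid as handle.
    {ok, Handle1} =
        et_collector:report_event(CollectorPid, 60,
                                  my_shell, mnesia_tm,
                                  start_outer,
                                  "Start outer transaction"),
    %% Subsequent calls: reuse the returned table handle.
    {ok, _Handle2} =
        et_collector:report_event(Handle1, 40,
                                  mnesia_tm, my_shell,
                                  new_tid,
                                  "New transaction id").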

make_key(Type, Stuff) -> Key

Type = record(table_handle) | trace_ts | event_ts

Stuff = record(event) | Key

Key = record(event_ts) | record(trace_ts)

Make a key out of an event record or an old key.
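
A minimal sketch, assuming an #event{} record from
et/include/et.hrl is at hand (the function name key_of is only an
illustration):

    %% The table handle decides whether trace_ts or event_ts
    %% ordering is used for the key.
    key_of(CollectorPid, Event) ->
        TH = et_collector:get_table_handle(CollectorPid),
        et_collector:make_key(TH, Event).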

get_table_handle(CollectorPid) -> Handle

CollectorPid = pid()

Handle = record(table_handle)

Return a table handle.
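
The table handle gives direct access to the stored events. A
minimal sketch, assuming et_collector:iterate/5 is available in
this version of the module:

    -include_lib("et/include/et.hrl").

    %% Fold over all stored events and collect their labels,
    %% oldest event first.
    all_labels(CollectorPid) ->
        TH  = et_collector:get_table_handle(CollectorPid),
        Fun = fun(Event, Acc) -> [Event#event.label | Acc] end,
        lists:reverse(et_collector:iterate(TH, first, infinity, Fun, [])).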

get_global_pid() -> CollectorPid | exit(Reason)

CollectorPid = pid()

Reason = term()

Return the identity of the globally registered
collector, if there is one.
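
For example, a tool on any node can locate the global collector
and update its dictionary; note that the call exits if no
collector is registered globally (the key {my_app, title} is only
an illustration):

    CollectorPid = et_collector:get_global_pid(),
    ok = et_collector:dict_insert(CollectorPid, {my_app, title}, "demo").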

dict_insert(CollectorPid, {subscriber, SubscriberPid}, Void) -> ok

dict_insert(CollectorPid, Key, Val) -> ok

CollectorPid = pid()

SubscriberPid = pid()

Void = term()

Key = term()

Val = term()

Insert a dictionary entry
and send a {et, {dict_insert, Key, Val}} tuple
to all registered subscribers.

If the entry is a new subscriber, the new subscriber process
will first get one message for each dictionary entry that is
already stored, before it and all old subscribers get this
particular entry. The collector process links to and then
supervises the subscriber process. If the subscriber process
dies, it is unregistered just as with a normal dict_delete/2.
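
A minimal sketch that uses this replay to take a snapshot of the
current dictionary contents (the one second timeout is arbitrary
and assumes that the replay messages arrive within that time):

    %% Register as subscriber and collect the replayed entries,
    %% oldest first. The replay includes the {subscriber, Pid}
    %% entry itself.
    snapshot(CollectorPid) ->
        ok = et_collector:dict_insert(CollectorPid,
                                      {subscriber, self()}, void),
        snapshot_loop([]).

    snapshot_loop(Acc) ->
        receive
            {et, {dict_insert, Key, Val}} ->
                snapshot_loop([{Key, Val} | Acc])
        after 1000 ->
            lists:reverse(Acc)
        end.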

dict_lookup(CollectorPid, Key) -> [Val]

CollectorPid = pid()

Key = term()

Val = term()

Lookup a dictionary entry and return zero or one value.
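
For example, the currently installed collector filter can be
fetched like this (a sketch; matching on the returned list handles
both the empty and the single-element case):

    case et_collector:dict_lookup(CollectorPid, {filter, collector}) of
        [Filter] -> {ok, Filter};
        []       -> undefined
    end.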

dict_delete(CollectorPid, Key) -> ok

CollectorPid = pid()

SubscriberPid = pid()

Key = {subscriber, SubscriberPid} | term()

Delete a dictionary entry
and send a {et, {dict_delete, Key}} tuple
to all registered subscribers.

If the deleted entry is a registered subscriber, the subscriber
process is unregistered as a subscriber and receives its final
message.
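
A minimal sketch of a subscriber unregistering itself (the
function name unsubscribe and the timeout are only illustrations):

    unsubscribe(CollectorPid) ->
        ok = et_collector:dict_delete(CollectorPid,
                                      {subscriber, self()}),
        receive
            %% The final message for this subscriber.
            {et, {dict_delete, {subscriber, _}}} -> ok
        after 1000 -> ok
        end.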