This is the driver. The driver is not meant to know “how” the query resolves, but rather “when” to execute “what”.

On the other side are layers, which are responsible for dissecting the packets and informing the driver about the results. For example, a produce layer generates a query, and a consume layer validates an answer.

Tip

Layers are executed asynchronously by the driver. If you need some asset beforehand, you can signal the driver by returning a state or by setting flags on the current query. For example, setting the AWAIT_CUT flag forces the driver to fetch zone cut information before the packet is consumed; setting the RESOLVED flag makes it pop the query after the current set of layers is finished; returning the FAIL state makes it fail the current query.

Layers can also change course of resolution, for example by appending additional queries.

This doesn’t block the currently processed query, and the newly created sub-request will start as soon as the driver finishes processing the current one. In some cases you might need to issue a sub-request and process it before continuing with the current one, e.g. a validator may need a DNSKEY before it can validate signatures. In this case, layers can yield and resume afterwards.

The YIELD state is a bit special. When a layer returns it, it interrupts the current walk through the layers. When a layer receives it, it means that it yielded before and is now being resumed. This is useful in a situation where you need a sub-request to determine whether the current answer is valid or not.

Resolution plan - a list of partial queries (with hierarchy) sent in order to satisfy the original query. It contains information about the queries, nameserver choice, timing information, the answer and its class.

Nameservers - a reputation database of nameservers; it serves as an aid for nameserver choice.

A processing layer is going to be called by the query resolution driver for each query,
so you’re going to work with struct kr_request as your per-query context.
This structure contains pointers to resolution context, resolution plan and also the final answer.

This is only passive processing of the incoming answer. If you want to change the course of resolution, say, satisfy a query from a local cache before the library issues a query to the nameserver, you can use states (see the Static hints module, for example).

int produce(kr_layer_t *ctx, knot_pkt_t *pkt)
{
	struct kr_request *req = ctx->req;
	struct kr_query *qry = req->current_query;

	/* Query can be satisfied locally. */
	if (can_satisfy(qry)) {
		/* This flag makes the resolver move the query
		 * to the "resolved" list. */
		qry->flags.RESOLVED = true;
		return KR_STATE_DONE;
	}

	/* Pass-through. */
	return ctx->state;
}

It is possible to not only act during the query resolution, but also to view the complete resolution plan afterwards. This is useful for analysis-type tasks, or “per answer” hooks.

The APIs in the Lua world try to mirror the C APIs using LuaJIT FFI, with several differences and enhancements.
There is no comprehensive guide on the API yet, but you can have a look at the bindings file.

The packet is the data structure that you’re going to see in layers very often. It consists of a header and four sections: QUESTION, ANSWER, AUTHORITY, ADDITIONAL. The first section is special, as it contains the query name, type, and class; the rest of the sections contain RRSets.

First you need to convert it to a type known to FFI and check basic properties. Let’s start with a snippet of a consume layer.

During produce or begin, you might want to write to the packet. Keep in mind that you have to write packet sections in sequence,
e.g. you can’t write to ANSWER after writing AUTHORITY; it’s like stages where you can’t go back.

The request holds information about the currently processed query, enabled options, cache, and other extra data.
You primarily need to retrieve the currently processed query.

consume = function (state, req, pkt)
	print(req.options)
	print(req.state)

	-- Print information about current query
	local current = req:current()
	print(kres.dname2str(current.owner))
	print(current.stype, current.sclass, current.id, current.flags)
end

In layers that either begin or finalize, you can walk the list of resolved queries.

local last = req:resolved()
print(last.stype)

As described in the layers, you can not only retrieve information about current query, but also push new ones or pop old ones.

-- Push new query
local qry = req:push(pkt:qname(), kres.type.SOA, kres.class.IN)
qry.flags.AWAIT_CUT = true

-- Pop the query, this will erase it from resolution plan
req:pop(qry)

Some functions were inlined from headers, but you can use their kr_* clones:
kr_rrsig_sig_inception(), kr_rrsig_sig_expiration(), kr_rrsig_type_covered().
Note that these functions now accept knot_rdata_t* instead of a
knot_rdataset_t* and size_t pair; you can use knot_rdataset_at() for that.

knot_rrset_add_rdata() no longer takes a TTL parameter.

knot_rrset_init_empty() was inlined, but in Lua you can use the constructor.

knot_rrset_ttl() was inlined, but in Lua you can use the :ttl() method instead.

knot_pkt_qname(), _qtype(), _qclass(), _rr(), _section() were inlined,
but in Lua you can use methods instead, e.g. myPacket:qname().

knot_pkt_free() takes knot_pkt_t* instead of knot_pkt_t**, but from Lua
you probably didn’t want to use that; the constructor ensures garbage collection.

The library also provides a “consumer-producer”-like interface that enables the user to plug it into an existing event loop or I/O code.

Example usage of the iterative API:

// Create request and its memory pool
struct kr_request req = {
	.pool = {
		.ctx = mp_new(4096),
		.alloc = (mm_alloc_t) mp_alloc
	}
};

// Setup and provide input query
int state = kr_resolve_begin(&req, ctx, final_answer);
state = kr_resolve_consume(&req, query);

// Generate answer
while (state == KR_STATE_PRODUCE) {

	// Additional query generated, do the I/O and pass back answer
	state = kr_resolve_produce(&req, &addr, &type, query);
	while (state == KR_STATE_CONSUME) {
		int ret = sendrecv(addr, proto, query, resp);

		// If I/O fails, make "resp" empty
		state = kr_resolve_consume(&req, addr, resp);
		knot_pkt_clear(resp);
	}
	knot_pkt_clear(query);
}

// "state" is either DONE or FAIL
kr_resolve_finish(&req, state);

The rank consists of one independent flag, KR_RANK_AUTH; the remaining bits hold a value, of which only one can apply at any time. You can use one of the enums as a safe initial value, optionally | KR_RANK_AUTH; otherwise it’s best to manipulate ranks via the kr_rank_* functions.

If CONSUME is returned, then dst, type and packet are filled with appropriate values and the caller is responsible for sending them and receiving the answer. If any other state is returned, the content of those variables is undefined.

Keeps information about the current query processing between calls to the processing APIs, i.e. the currently resolved query, the resolution plan, etc. Use this instead of the simple interface if you want to implement multiplexing or custom I/O.

Values from kr_rank, currently just KR_RANK_SECURE and _INITIAL. Please only read this in the finish phase and after the validator. Meaning of _SECURE: all RRs in answer+authority are _SECURE, including any implied negative results (NXDOMAIN, NODATA).

If the RTT is greater than KR_NS_TIMEOUT, the address will be placed at the beginning of the nsrep list once every cache.ns_tout() milliseconds. Otherwise it will be sorted as if it had a cached RTT equal to KR_NS_MAX_SCORE + 1.

This small collection of “generics” was born out of frustration that no such thing could be found for C. Existing options are either bloated, have a poor interface, lack null-checking, or don’t allow a custom allocation scheme. BSD-licensed (or compatible) code is allowed here, as long as it comes with a test case in tests/test_generics.c.

array - a set of simple macros to make working with dynamic arrays easier.

C has no generics, so they are implemented mostly using macros. Be aware of that, as passing arguments with side effects to the evaluating macros may not behave as you expect:

They may evaluate the code twice, leading to unexpected behaviour. This is the price to pay for the absence of proper generics.

Example usage:

array_t(const char *) arr;
array_init(arr);

// Reserve memory in advance
if (array_reserve(arr, 2) < 0) {
	return ENOMEM;
}

// Already reserved, cannot fail
array_push(arr, "princess");
array_push(arr, "leia");

// Not reserved, may fail
if (array_push(arr, "han") < 0) {
	return ENOMEM;
}

// It does not hide what it really is
for (size_t i = 0; i < arr.len; ++i) {
	printf("%s\n", arr.at[i]);
}

// Random delete
array_del(arr, 0);

Both the head and tail of the queue can be accessed and pushed to, but only the head can be popped from.

Example usage:

// define new queue type, and init a new queue instance
typedef queue_t(int) queue_int_t;
queue_int_t q;
queue_init(q);

// do some operations
queue_push(q, 1);
queue_push(q, 2);
queue_push(q, 3);
queue_push(q, 4);
queue_pop(q);
assert(queue_head(q) == 2);
assert(queue_tail(q) == 4);

// you may iterate
typedef queue_it_t(int) queue_it_int_t;
for (queue_it_int_t it = queue_it_begin(q); !queue_it_finished(it); queue_it_next(it)) {
	++queue_it_val(it);
}
assert(queue_tail(q) == 5);

queue_push_head(q, 0);
++queue_tail(q);
assert(queue_tail(q) == 6);

// free it up
queue_deinit(q);

// you may use dynamic allocation for the type itself
queue_int_t *qm = malloc(sizeof(queue_int_t));
queue_init(*qm);
queue_deinit(*qm);
free(qm);

Note

The implementation uses a singly linked list of blocks where each block stores an array of values (for better efficiency).

The implementation tries to keep frequent keys and avoid others, even if “used recently”, so it may refuse to store a key on lru_get_new(). It uses hashing to split the problem pseudo-randomly into smaller groups, and within each group it tries to approximate the relative usage counts of several of the most frequent keys/hashes. This tracking is done for more keys than are actually stored.