Service

Its spec is completely unimportant, as Istio will ignore it. It just needs to
exist so that src can send events to it. If it doesn’t exist, it implies that
something went wrong during chan reconciliation. See
Channel Controller.

VirtualService

chan creates a VirtualService which redirects its hostname to the
in-memory-channel dispatcher.
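Next, verify where src is configured to send events by inspecting its sink. A
minimal sketch, assuming src is a KubernetesEventSource named src in the
knative-debug namespace:

```shell
# Print the sink that src is configured to send events to
# (the resource kind and name are assumptions based on this example).
kubectl --namespace knative-debug get kuberneteseventsource src \
  -o jsonpath='{.spec.sink}'
```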

Which should return
map[apiVersion:eventing.knative.dev/v1alpha1 kind:Channel name:chan]. If it
doesn’t, then src was set up incorrectly and its spec needs to be fixed.
Fixing should be as simple as updating its spec to have the correct sink
(see example.yaml).

Now that we know src is sending to chan, let’s verify that it is Ready.
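A hedged sketch of checking readiness, again assuming src is a
KubernetesEventSource named src; this prints each status condition with its
message:

```shell
# Print src's status conditions, one per line, as "Type=Status: message".
kubectl --namespace knative-debug get kuberneteseventsource src \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'
```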

You should see something like Updated deployment src-xz59f-hmtkp. Let’s see
the health of the Deployment that ContainerSource created (named in the
message, but we will get it directly in the following command):
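A sketch of looking the Deployment up directly; the assumption here is that its
generated name starts with the src- prefix seen in the message:

```shell
# Find the Deployment that ContainerSource created for src and
# inspect its status conditions.
srcDeployment=$(kubectl --namespace knative-debug get deployments \
  -o custom-columns='NAME:.metadata.name' --no-headers | grep '^src-')
kubectl --namespace knative-debug get deployment "$srcDeployment" \
  -o jsonpath='{.status.conditions}'
```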

If this is unhealthy, then it should tell you why. E.g.
'pods "src-xz59f-hmtkp-7bd4bc6964-" is forbidden: error looking up service account knative-debug/events-sa: serviceaccount "events-sa" not found'.
Fix any errors so that the Deployment is healthy.

The KubernetesEventSource is fairly simple, as it delegates all functionality
to an underlying ContainerSource, so there is likely no
useful information in its logs. Instead, more useful information is likely in
the ContainerSource Controller’s logs. If you want to look at the
KubernetesEventSource Controller’s logs anyway, they can be seen with:
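A sketch of fetching those logs; the namespace and label selector are
assumptions and may differ in your install:

```shell
# Tail the controller-manager logs in the sources namespace
# (namespace and label are assumptions; adjust to your install).
kubectl --namespace knative-sources logs -l control-plane=controller-manager
```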

Pay particular attention to any lines that have a logging level of warning or
error.

Data Plane

The entire Control Plane looks healthy, but we’re still not
getting any events. Now we need to investigate the data plane.

The Knative event takes the following path:

1. Event is generated by src.

   In this case, it is caused by having a Kubernetes Event trigger it, but
   as far as Knative is concerned, the Source is generating the event de novo
   (from nothing).

2. src POSTs the event to chan’s address,
   chan-channel-45k5h.knative-debug.svc.cluster.local.

3. src’s Istio proxy intercepts the request and sees that the Host matches
   a VirtualService. The request’s Host is rewritten to
   chan.knative-debug.channels.cluster.local and the request is sent to the
   Channel Dispatcher,
   in-memory-channel-dispatcher.knative-eventing.svc.cluster.local.

4. The Channel Dispatcher receives the request and introspects the Host header
   to determine which Channel it corresponds to. It sees that it corresponds
   to knative-debug/chan, so it forwards the request to the subscribers defined
   in sub, in particular svc, which is backed by fn.

5. fn receives the request and logs it.
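The POST from src to chan can also be simulated by hand from any Pod inside the
mesh, which helps isolate whether the Channel accepts events at all. A sketch;
the header names follow CloudEvents binary encoding, and the event attribute
values are invented for this test:

```shell
# Manually POST a minimal CloudEvent to chan's address. Run this from a
# Pod with an Istio sidecar so the VirtualService rewrite applies.
curl -v "http://chan-channel-45k5h.knative-debug.svc.cluster.local/" \
  -X POST \
  -H "Ce-Id: debug-1" \
  -H "Ce-Specversion: 0.2" \
  -H "Ce-Type: dev.knative.debug" \
  -H "Ce-Source: manual-test" \
  -H "Content-Type: application/json" \
  -d '{"msg": "test"}'
```

A 202 response indicates that chan accepted the event.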

We will investigate components in the order in which events should travel.
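The first component is src itself. A hedged sketch of viewing its logs, using
the Deployment name from earlier; the source container name is an assumption:

```shell
# Tail the logs of src's Pod via its Deployment
# (the "source" container name is an assumption).
kubectl --namespace knative-debug logs deployment/src-xz59f-hmtkp -c source
```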

Note that a few log lines within the first ~15 seconds of the Pod starting
like the following are fine. They represent the time waiting for the Istio proxy
to start. If you see these more than a few seconds after the Pod starts, then
something is wrong.

The success message is debug level, so we don’t expect to see anything. If you
see lines with a logging level of error, look at their msg. For example:

"msg":"[404] unexpected response \"\""

Which means that src correctly got the Kubernetes Event and tried to send it
to chan, but failed to do so. In this case, the response code was a 404. We
will look at the Istio proxy’s logs to see if we can get any further
information:

These are lines emitted by Envoy. The line format is documented in Envoy’s
Access Logging documentation.
That’s odd, we already verified that there is a
VirtualService for chan. In fact, we don’t expect to see
chan-channel-45k5h.knative-debug.svc.cluster.local at all, it should be
replaced with chan.knative-debug.channels.cluster.local. We keep looking in
the same Istio proxy logs and see:

This shows that the VirtualService created for chan,
which tries to map two hosts,
chan-channel-45k5h.knative-debug.svc.cluster.local and
chan.knative-debug.channels.cluster.local, is not working. The most likely
cause is duplicate VirtualServices that all try to rewrite those hosts. Look
at all the VirtualServices in the namespace and see what hosts they rewrite:
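A sketch of listing them with their hosts:

```shell
# List every VirtualService in the namespace and the hosts it matches.
kubectl --namespace knative-debug get virtualservice \
  -o custom-columns='NAME:.metadata.name,HOSTS:.spec.hosts'
```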

Note: This shouldn't happen normally. It only happened here because I had local edits to the Channel controller and created a bug. If you see this with any released Channel Controllers, open a bug with all relevant information (Channel Controller info and YAML of all the VirtualServices).

Both are owned by chan. Deleting both causes the
Channel Controller to recreate the correct one. After
deleting both, a single new one is created (same command as above):
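A sketch of the cleanup, assuming chan’s duplicates are the only
VirtualServices in the namespace; otherwise delete them by name:

```shell
# Delete the duplicate VirtualServices; the Channel Controller will
# recreate a single correct one. Then list them again with their hosts.
kubectl --namespace knative-debug delete virtualservice --all
kubectl --namespace knative-debug get virtualservice \
  -o custom-columns='NAME:.metadata.name,HOSTS:.spec.hosts'
```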

Which looks correct. Most importantly, when we recheck the Istio proxy logs,
the return code is now 202 Accepted. In addition, the request’s Host is being
correctly rewritten to chan.knative-debug.channels.cluster.local.

Channel Dispatcher

The Channel Dispatcher is the component that receives POSTs pushing events into
Channels and then POSTs to subscribers of those Channels when an event is
received. For the in-memory-channel used in this example, there is a single
binary that handles both the receiving and dispatching sides for all
in-memory-channel Channels.

First we will inspect the Dispatcher’s logs to see if there is anything obvious:
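A sketch, assuming the dispatcher runs as a Deployment matching the service
name seen earlier (in-memory-channel-dispatcher in knative-eventing):

```shell
# Tail the Channel Dispatcher's logs
# (the Deployment name is an assumption; adjust to your install).
kubectl --namespace knative-eventing logs \
  deployment/in-memory-channel-dispatcher
```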