gRPC is a high-performance, open-source, universal RPC framework; see https://grpc.io.
As such, it invites comparison with REST, as both gRPC and REST use the request/response
HTTP protocol to send and receive data. REST, as an architectural style, constrains client-server
communication over HTTP to a set of methods (GET, POST and so on) applied to resources or endpoints.
RPC instead generates client and server stubs that make it possible to call
procedures over a network as if they were local.
gRPC is built on HTTP/2 and hence benefits from features such
as bidirectional streaming, flow control, header compression and request multiplexing.
gRPC's default serialization protocol, Protocol Buffers, also transmits data in a binary format,
which is smaller and faster to process than JSON or XML.
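To make the size difference concrete, here is a small Node.js sketch that hand-encodes a one-field message in protocol buffer wire format (the field number and value are made up for illustration) and compares it with the JSON equivalent:

```javascript
// Minimal varint encoder, as used by the protobuf wire format:
// 7 bits per byte, least significant group first, MSB set on all but the last byte.
function encodeVarint(n) {
  const bytes = [];
  do {
    let b = n & 0x7f;
    n >>>= 7;
    if (n) b |= 0x80; // continuation bit
    bytes.push(b);
  } while (n);
  return Buffer.from(bytes);
}

// Encode the (made-up) message { id: 300 } with id as field 1, wire type 0 (varint):
const tag = Buffer.from([0x08]); // (field_number << 3) | wire_type = (1 << 3) | 0
const proto = Buffer.concat([tag, encodeVarint(300)]);

const json = Buffer.from(JSON.stringify({ id: 300 }));
// proto is 3 bytes (08 ac 02); the JSON form '{"id":300}' is 10 bytes.
```

The gap only widens for nested messages, repeated fields and field names longer than "id", since protobuf transmits numeric tags instead of names.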
The latest version of the protocol buffer language, proto3, makes it easy to
define services and automatically generate client libraries.
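For illustration, this is what a proto3 service definition looks like; the snippet below is the canonical hello-world Greeter service from the gRPC documentation, shown as a representative example rather than the exact .proto used here:

```proto
syntax = "proto3";

package helloworld;

// A simple unary RPC service.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```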

At this point it may be a good idea to take a Wireshark trace of the client-server communication.
You will need at least Wireshark 2.5.0 to be able to dissect gRPC; at the end of this forum thread https://www.eclipse.org/forums/index.php/t/1089118/
there is an entry by Gustavo Gonnet explaining how to compile and install Wireshark 2.5.0 on Ubuntu.

OK, now that we understand what is going on at the network level, let's try to replicate the client functionality in Titan.

To start with, we need to generate the TTCN-3 modules from the .proto file in the above article
(Titan's protocol module generator has recently been updated to support proto3):

This connection id has to be communicated to the port, then saved into a port variable and reused later for the same connection.
This is done here by sending the connection id to the port as an outgoing message, where it is saved:

Anyone who has taken a trace of the HTTP/2 exchange may have noticed that the HTTP/2 frames travel not alone but in groups;
this is why the port expects not a single frame but a record of frames, which are encoded one by one, concatenated,
and sent as a single TCP payload:
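To make this concrete, here is a simplified Node.js sketch of the same idea (assumptions: the HPACK header block is reduced to a single indexed field, and the connection preface and SETTINGS exchange are omitted), showing how individual HTTP/2 frames are encoded and then glued into one TCP payload:

```javascript
// Build one raw HTTP/2 frame: a 9-byte header followed by the payload
// (24-bit length, 8-bit type, 8-bit flags, 31-bit stream id; see RFC 7540).
function encodeFrame(type, flags, streamId, payload) {
  const header = Buffer.alloc(9);
  header.writeUIntBE(payload.length, 0, 3);       // 24-bit payload length
  header.writeUInt8(type, 3);                     // frame type
  header.writeUInt8(flags, 4);                    // flags
  header.writeUInt32BE(streamId & 0x7fffffff, 5); // reserved bit + stream id
  return Buffer.concat([header, payload]);
}

// A gRPC request typically travels as HEADERS + DATA on the same stream;
// both frames are concatenated and sent as one TCP payload.
const headersFrame = encodeFrame(0x1, 0x4, 1, Buffer.from([0x82]));  // HEADERS, END_HEADERS
const dataFrame    = encodeFrame(0x0, 0x1, 1, Buffer.from('hello')); // DATA, END_STREAM
const tcpPayload   = Buffer.concat([headersFrame, dataFrame]);
```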

In the receiving direction, the port will receive a payload consisting of a number of glued-together HTTP/2 frames, but it will
know how to slice them apart, as we register the message length function against the connection id:
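The length logic the port relies on can be sketched in Node.js as follows (the function names are illustrative, not the Titan API): each HTTP/2 frame announces its own payload length in the first three header bytes, so a glued payload can be cut into frames without any delimiter:

```javascript
// Total length of the frame starting at buf[0]: 9 header bytes plus the
// 24-bit payload length, or -1 if not even the header has arrived yet.
function frameLength(buf) {
  if (buf.length < 9) return -1;
  return 9 + buf.readUIntBE(0, 3);
}

// Split a payload of glued frames into individual frames.
function sliceFrames(payload) {
  const frames = [];
  let off = 0;
  while (off < payload.length) {
    const len = frameLength(payload.subarray(off));
    if (len < 0 || off + len > payload.length) break; // incomplete trailing frame
    frames.push(payload.subarray(off, off + len));
    off += len;
  }
  return frames;
}

// Demo: two glued frames with 3- and 5-byte payloads on stream 1.
const f1 = Buffer.concat([Buffer.from([0, 0, 3, 0x0, 0x0, 0, 0, 0, 1]), Buffer.from('abc')]);
const f2 = Buffer.concat([Buffer.from([0, 0, 5, 0x0, 0x1, 0, 0, 0, 1]), Buffer.from('hello')]);
const frames = sliceFrames(Buffer.concat([f1, f2]));
// frames.length === 2
```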

All the code, Node.js and TTCN-3, plus the logs and traces are attached.

One possibly relevant question the reader may ask at this point:
the Node.js client has about 45 lines of code, while the TTCN-3 code is roughly ten times longer. Is TTCN-3
really that much worse: harder to work with, poorly performing, and so on?

There are two pertinent answers to this. First, what we are comparing is only the tips of two icebergs; most of the code
at work here is under the surface (far less visible for Node.js, somewhat less visible for TTCN-3).
Second, with Titan and TTCN-3 the user can control, modify, twist and turn every bit of the messaging
involved, as that is what it is meant for; complexity is the price paid for this flexibility. Of course, the same flexibility can be achieved with Node.js or any other language as well, but in that case the Node.js (or whatever) code will inflate accordingly too.