My thoughts on gRPC

Recently I’ve been trying out gRPC. On first impression, it was really fun to work with. Once I got the hang of writing a .proto file and using the code generator protoc, all I had to do was implement the interface (the one with the Server suffix), which I find very easy to do with the IDE of my choice (GoLand): it’s just a case of pressing Alt+Enter and choosing “implement interface”, and it generates the boilerplate for you so you don’t have to write it yourself. Once I got past that and had the microservices all up and running, all I had to do was set up the client and execute the method. I didn’t even have to think about serialization, as protoc takes care of that for you, which is very nice; all I had to do was write the .proto file.
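As a taste, a service definition in a .proto file is only a few lines. This one is a made-up example, not from my actual project:

```proto
syntax = "proto3";

// Hypothetical service; protoc generates the client, the server
// interface, and all of the (de)serialization code from this.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest { string name = 1; }
message HelloReply   { string message = 1; }
```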

At work, I have worked with a large JSON REST API, where, as silly as it sounds, I had to write an XML file to aid in the deserialization of the JSON. I’m not going to lie: it was just plain stupid and not fun at all, and a real pain to debug as well. Here is an example.

As you can see, it’s not as painful as writing it in XML. But with gRPC, I don’t even have to think about serialization, as I mentioned earlier. You might want to check out quicktype.io: it converts JSON into gorgeous, typesafe code in any language, so you don’t have to do things the hard way. (They don’t support PHP at the time of this writing.)

One could argue that I should have used PHP’s json_encode and json_decode. No, those are terrible with large data structures, but that’s another story to tell.

I can see myself using gRPC for communication between the back-office app server and individual CLI applications run from cron; I believe that would be a better approach compared to Symfony’s one-size-fits-all bin/console.

Panoramic View of Elizabeth Castle

Revamp Infrastructure

I decided to revamp my existing infrastructure into something I can easily manage and set up backups and checkpoints for. I was running the now-discontinued Antergos, which is based on Arch Linux. Arch is a brilliant operating system, but it’s not very suitable for the enterprise, because enterprises often prefer mature applications; with Arch Linux you are always given the latest version of everything, which is too new for the enterprise, especially for databases. So I thought it would be better to have a new infrastructure setup.

I decided to replace Antergos with Microsoft Hyper-V Server 2019, which is a cut-down version of Windows Server with just Hyper-V and nothing else. Once I got that up and running, managing the server was a walk in the park.

As you can see in the screenshot, I have set up four different virtual machines. ArchApp obviously runs on Arch Linux and serves as the user-facing production application server; that’s where the code base for this website is hosted. ArchPortal is the backdoor for the administrator (that’s me) to get in behind the firewall from outside the premises, so the admin can manage the other servers on the network. The last two, UbuntuDev and UbuntuProd, are the data servers, one for development and the other for production. They both run Ubuntu 18.04 LTS (long-term support) and have Mongo, Postgres and Redis installed, all locked down to a specific version; I’ll only upgrade them when I’m ready to do so.

I did try to use Docker & Podman, but they kept breaking my development server when I tried to run a backup. They did run well on my production server, but I decided not to use them anymore, as I find them very difficult to monitor, and they probably won’t actually get used by the enterprise, especially Docker. The enterprise just prefers something that is easy to monitor and does not break down too easily.

I was able to run a backup on both UbuntuProd and UbuntuDev without any issues. As Mongo and Postgres run directly on the virtual machine, running the backup was easy; all I had to do was create two scripts, one on the server and one on the client.
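I won’t reproduce the exact scripts here, but the server-side half could be sketched like this (the paths, user, and cron setup are assumptions, not my real configuration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a date-stamped directory name for today's backup.
backup_dir() { printf '/var/backups/%s' "$(date +%F)"; }

# Dump Mongo and Postgres straight from the VM (hypothetical
# credentials and paths); run this from cron on the server, then
# let the client-side script pull the tarball.
run_backup() {
  dir="$(backup_dir)"
  mkdir -p "$dir"
  mongodump --quiet --out "$dir/mongo"
  pg_dumpall -U postgres > "$dir/postgres.sql"
  tar -czf "$dir.tar.gz" -C "$(dirname "$dir")" "$(basename "$dir")"
}
```

Because the databases run directly on the VM, there is no container layer to trip over; the dump tools see them as ordinary local services.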

Experiment with Microservices and a Message Bus (NSQ)

Because I didn’t have much practical experience with microservices and never had the opportunity to work with them at the company I currently work for, I thought I’d do a little experiment that involves the use of microservices and a message bus.

The waiter and the chef are microservices; the customer is just the CLI that talks to the waiter to make an order, and the waiter then sends the order to the chef using the message bus (NSQ). It’s pretty simple and basic really; all it does is send “Pepperoni Pizza” over the bus. But I only needed to prove to myself that I can build microservices, and I believe I have succeeded in that. YAY. 😄

What I like about NSQ compared to raw TCP/IP is that I don’t have to manually set up a listener and manage the buffer. I could do it, but it’s a little bit tedious; instead I can easily set up a Publisher (the one that sends the information) and a Consumer (the one that receives the information).
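The shape of the waiter/chef exchange is easy to sketch in Go. Here a buffered channel stands in for the NSQ topic (a deliberate simplification; the real thing uses the go-nsq library’s Producer and Consumer pointed at an nsqd instance):

```go
package main

import "fmt"

func main() {
	// A Go channel stands in for the NSQ topic; with real NSQ
	// you'd publish to and subscribe from nsqd instead.
	orders := make(chan string, 1)

	publish := func(order string) { orders <- order } // the waiter
	consume := func() string { return <-orders }      // the chef

	publish("Pepperoni Pizza")
	fmt.Println("Chef received:", consume()) // prints "Chef received: Pepperoni Pizza"
}
```

The point is the decoupling: the waiter never calls the chef directly, it just drops the order on the bus.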

Just a random update

It’s been a little while since I made my last blog post, well I have been a little busy lately
with work, working out at the gym, learning to play the guitar, learning a bit of Japanese
(trying to master Hiragana ひらがな and Katakana カタカナ is a little bit tricky, but hopefully I’ll get there),
playing a couple of video games, mainly Crash Team Racing Nitro-Fueled and Super Mario Maker 2.

I also took the time to learn Rust, and I enjoyed it; it’s just that the IDE I’m working with
struggles with external libraries, so I’m going to stay off Rust until that is fixed. It’s just
difficult for me to stay productive without auto-complete; I can’t keep looking back and forth
at the documentation, it will burn me out, and that’s no good when I need to work fast. 😄

I’ll take time to learn other programming languages; I just don’t want to be that person who uses
JavaScript for absolutely everything, it’s just unrealistic. If I were developing a video game,
I would use C++; it’s a big language with no garbage collection, so I will learn it when I get
the time and overcome the fear, but hopefully it will be fun. I will also take the time to learn Ruby.

Deploying Docker Git Containers Remotely

If I were to build a Docker image from a git repository that requires no credentials, I could use the following command

$ docker build -t example/image https://github.com/docker/rootfs.git

But what if it does require credentials and I want to use SSH public key authentication? The thing is, the Docker daemon might not have access to the private key used to log in over SSH. But there is a solution: one could use the git command to create the archive (or tarball) and then pass it to docker, for example.
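Something along these lines should work (the repository URL and names are placeholders): git runs as my user, so it can use my SSH key, and docker just reads the build context from stdin.

```shell
# git (not the docker daemon) does the SSH authentication here,
# then streams the repository as a tar archive into docker build,
# which reads the build context from stdin ("-").
build_from_git() {
  # $1 = SSH remote URL (placeholder), $2 = image tag
  git archive --format=tar --remote="$1" HEAD | docker build -t "$2" -
}

# Usage (hypothetical repository):
#   build_from_git ssh://git@example.com/me/app.git example/app
```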

Update: This command will not work with Podman; you have to extract the tarball.

I don’t need to set up port forwarding or do anything messy like running ssh in the background, which I’d have to close when I’m done with it; I’d just rather not do it that way. Using port forwarding to upload a tarball, honestly, I find clumsy. I prefer clean, simple and elegant solutions to complex problems.

Dealing with Dynamic IP Addresses

For dealing with dynamic IP addresses, the most elegant solution I could find is to place the IP address of the network into a simple file using a shell script and sync the folder across the different machines that you trust using file sync software like Syncthing or Resilio Sync. Here’s an example of a script:
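A script in that spirit could look like the following sketch (the api.ipify.org lookup service and the ~/IpAddress/Server path are my assumptions for illustration):

```shell
# Validate before publishing, so a bad response never lands
# in the synced file.
is_ipv4() {
  case "$1" in
    *[!0-9.]*|'') return 1 ;;
  esac
  # Exactly three dots, i.e. four dotted groups.
  [ "$(printf '%s' "$1" | tr -cd '.' | wc -c)" -eq 3 ]
}

# Fetch the network's public IP and drop it into the folder
# that Syncthing keeps in sync across trusted machines.
update_ip_file() {
  ip="$(curl -fsS https://api.ipify.org)" || return 1
  is_ipv4 "$ip" && printf '%s\n' "$ip" > "$HOME/IpAddress/Server"
}
```

Run it from cron and the synced file stays fresh without any third party holding your DNS.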

Yes, one could say you could use DynDNS, but there are a few disadvantages to that approach: for example, you can’t control where you’re sourcing the IP address from, you end up distributing the IP address globally, which may not be desirable, and you’re handing over control to a third party.

How would I use the file with SSH? That’s easy; I’ll show you an example.

$ ssh -o 'HostKeyAlias myhost' username@$(cat ~/IpAddress/Server)

It’s really that simple; just make sure you add the alias to ~/.ssh/known_hosts and you’re done 🙂

On Swapping Mongo Driver

I recently swapped the MongoDB driver from a third-party one to the official driver. The process mostly went smoothly, because I was well disciplined in writing high-quality code and sticking to good practices; otherwise it would have taken me a lot longer to complete.

I did have a few issues along the way.

Refactoring GridFS Collections

The third-party MongoDB driver somewhat conforms to the GridFS specification: it allows you to specify the collections for files and chunks, which is nice. The official driver does not allow you to specify the collections; instead you have to specify the bucket name, whose default value is ‘fs’, and I left it like that.

The naming convention for the collections is ‘bucketName.files’ and ‘bucketName.chunks’. As I mentioned earlier, I left the bucket name at the default, so I had to rename the collections to ‘fs.files’ and ‘fs.chunks’. I did that live and had to deploy immediately afterwards, so there was little downtime.

I also had a few data type mismatches, so I had to write a script and execute it manually in Robo 3T, replacing NumberInt (int32) with NumberLong (int64), and that fixed the mismatch.

The third-party driver allowed access to the metadata, but the official one did not, so I had to create a clone of ‘fs.files’ and call it ‘filesMeta’ so I can still access and update the metadata; I also kept the IDs 1:1 with each other.

Data type mismatch with the key of the map

The third-party driver allows you to use any data type as the key; the official one only allows strings. So I had to change the data type of the key from int to string, and problem solved. It’s not too bad after all; I had attached a method to the map’s type, so all I had to do was convert int to string in that method. I’m using Go, which is strongly typed, so I had to do the conversion explicitly, but I don’t mind; I like clarity, and clarity is always good.

// Before
typePageCollectionstruct{Refstring`bson:"Ref"`Collectionmap[int]Page`bson:"Collection"`}func(pPageCollection)GetPage(pageNumberint)Page{page,found:=p.Collection[pageNumber]checkIfFound(found)returnpage}// After
typePageCollectionstruct{Refstring`bson:"Ref"`Collectionmap[string]Page`bson:"Collection"`}func(pPageCollection)GetPage(pageNumberint)Page{page,found:=p.Collection[fmt.Sprint(pageNumber)]checkIfFound(found)returnpage}

The most complained-about language is JavaScript, and it’s loosely typed, which is the opposite of strongly typed; trust me, data type mismatches take longer to figure out in JS than they do in Go!

My own take on GVM

This is the script I use to manage Go’s SDK; it’s built on top of the official Google way of installing Go. I run it inside the Windows Subsystem for Linux (WSL), and it manages the SDK for both WSL and Windows itself in one call from inside WSL.