An open source framework inspired by Tornado, Sinatra and Flask for building web applications using idiomatic Go

Rationale – Do we really need another web framework?

I ask myself this question every time I'm lured by the next bright and shiny thing which comes along. There's going to be some effort which goes into the learning curve, and in the end the payback should be substantially more than the upfront investment. Here are some of the issues I consider when determining whether to commit to learning some new framework or technology:

A framework in general should give you a boost in velocity for getting important work done.

Good frameworks put the scaffolding in place needed to solve complex patterns.

They get out of the way, allowing you to focus on the business logic needed for your solution.

Good frameworks should be secure and extensible.

There should be vibrant community support, ongoing activity and adoption by projects.

With these top-level goals in mind, let's see if the Beego Framework can get you up and running quickly and have some fun at the same time.

Prerequisites

You’ll need to have Go installed and GOPATH configured so you can build and install Go applications and packages. If you’re not sure how to do this and wish to continue, start here.

Installation

Run these commands to install the Beego framework and the Command Line Interface (CLI) project tool.
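At the time this was written the packages lived under the astaxie and beego GitHub accounts; newer releases have since moved, so treat this as a sketch of the era's commands:

# install the Beego framework package
$ go get -u github.com/astaxie/beego
# install the bee CLI project tool
$ go get -u github.com/beego/bee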

We’re going to create a web application in Go using the Beego Framework. To give you an idea of the simplicity of creating a web application in Beego, we’ll create a site using one line of code in the main function.

Spoiler alert

Here’s the page you’ll get when you browse the website. Although you’ll get a 404 not found error when you visit the home page, the web site is up and running and you connected to it; it just doesn’t have a home page (yet).

http://localhost:8080

Beego default home page

Hope this spoiler alert didn’t ruin the fun. Let’s take a look now at the code you’ll need to create this simple site. Create the main.go application below and run the application using the command shown in the comments below.
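Here's a minimal sketch of that one-line application; the import path matches the Beego version that was current at the time:

// main.go
// run with: go run main.go, then browse to http://localhost:8080
package main

import "github.com/astaxie/beego"

func main() {
	// one line: start Beego's HTTP server on the default port 8080
	beego.Run()
}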

With the demonstration of a simple web application completed, let’s explore the Beego Framework a little deeper. If you’re familiar with the Model View Controller (MVC) web pattern, then you probably have enough background to get started with Beego. The main event loop waits for connections and hands off each route request to a controller, which may interact with an ORM. Views rendered by a template engine are returned to the requester.

Rendered views can be HTML pages or JSON payloads. Beego may be responding as a web server or acting in the role of a high-performance API server. Using the CLI we installed earlier, let’s generate a new project and see how it differs from our first website example.
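The generation step looks roughly like this; the project name is just an example:

# generate a new MVC web project with the bee CLI
$ bee new bee-web-gs
$ cd bee-web-gs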

The CLI will generate a number of default folders to help you get started. There will be top level folders for models, views and controllers as well as folders for routes, configuration and web support.

You can also use the CLI to run the project; when you do, it will watch for changes you make and reload the website automatically.

Note: when I tried running the code above generated by the CLI, I got the same 404 error we saw earlier. To resolve it, I had to tell the main function where to find the router config; see the sample below.
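A sketch of that change, assuming the example project name above; the blank import pulls in the generated routers package so its init() registers the routes:

// main.go
package main

import (
	_ "bee-web-gs/routers" // register the generated routes via the package's init()

	"github.com/astaxie/beego"
)

func main() {
	beego.Run()
}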

# run using the CLI
$ bee run
# view home page in the browser
http://localhost:8080/

This time when you browse to the home page you should see the welcome banner below. The basic project scaffolding has created a route for you to reach the controller and renders an HTML view from a Go template file which is returned to the browser. The hooks are in place for interacting with a backend data model, but those decisions are left for you.

Successful home page route

To add an additional route which handles separate concerns, you would create a new controller and reference the controller from the router. In the example below we’ll reuse the sample index.tpl and override its template parameters with our own. In the real world you might instead create a Single Page Application (SPA), but that would lead us down a completely different path, so we’ll constrain our focus.
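Here's a rough sketch of what that looks like; the controller name, route and template values are illustrative:

// controllers/about.go
package controllers

import "github.com/astaxie/beego"

type AboutController struct {
	beego.Controller
}

func (c *AboutController) Get() {
	// override the template parameters used by the sample index.tpl
	c.Data["Website"] = "bestow.info"
	c.Data["Email"] = "you@example.com"
	c.TplName = "index.tpl"
}

// routers/router.go -- reference the new controller from the router:
// beego.Router("/about", &controllers.AboutController{})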

When the server restarts you’ll see the Admin server running on the AdminPort number set in the project configuration. When you browse to the Admin server you’ll be able to review Health Check and other vital stats for the running server.
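Those admin settings live in conf/app.conf. A minimal sketch, assuming Beego's standard keys and the default admin port:

# conf/app.conf
appname = bee-web-gs
httpport = 8080
runmode = dev
EnableAdmin = true
AdminAddr = "localhost"
AdminPort = 8088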

Browse to Admin dashboard

http://localhost:8088/qps

API Scaffolding

Let’s create new Beego scaffolding for an API project. Change into the directory where you would like the new project created and use the CLI to create the project artifacts.

# create a new API project
$ bee api bee-api-gs
# change into the project folder and run the project
$ cd bee-api-gs
$ bee run

As with the earlier web project, the folder structure will be similar, but with no need for the static and views folders.

With the project running you can start playing with it by sending JSON REST requests. You can use curl if you like; I prefer httpie and will show a few interactions below.
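A few sample httpie interactions against the generated object endpoint; the field names come from the default scaffold, so your generated model may differ:

# create an object
$ http POST localhost:8080/v1/object Score:=1337 PlayerName=alice
# list all objects
$ http GET localhost:8080/v1/object
# fetch a single object using the id returned by the create call
$ http GET localhost:8080/v1/object/object_12345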

As of this post there appear to be fairly regular commits to the Beego repository, over 3,500 to date by about 300 contributors, and the repo is trending toward 24K stars.

Due to its high performance, clean code and ease of use, there seems to be growing interest in integrating the framework into cloud applications. A recent one which I reviewed is the CNCF Harbor project.

Strong community interest is a good sign that a project may be around for a while and worth your consideration and investment.

Summary

With the Beego Framework we’re able to rapidly create the scaffolding needed to build serious Web and REST API applications. The framework is built using Go, a relatively new language which is easy to learn and was built to solve common problems encountered in Cloud Computing. The scaffolding comes with a built-in analytics dashboard to give you insights into operations. It’s an active project and seems to have an enthusiastic community and growing project support.

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content.

Executive Summary

Harbor is an incubating project in the Cloud Native Computing Foundation (CNCF). Harbor extends the open source Docker Distribution by adding capabilities organizations need, such as security, identity and management.

Harbor is a cloud native registry providing support for both container images and Helm charts. Granular access control grants or restricts user access to different repositories at the project level. A user can have different permissions for images or Helm charts within a project.

Harbor service architecture for Kubernetes and Docker container management

Container images and Helm charts can be replicated (synchronized) between multiple registry instances based on policies. The policies can be filtered using tags and labels. If an error occurs during replication, Harbor will automatically retry. To ensure your container images are free from known Common Vulnerabilities and Exposures (CVEs), Harbor performs container image scans regularly and supports policy checks to prevent vulnerable images from being deployed.

Harbor leverages OpenID Connect (OIDC), a simple identity layer on top of the OAuth 2.0 protocol, to verify the identity of users authenticated by an external authorization server or identity provider. Single sign-on can be supported for users logging into the Harbor portal. Harbor provides support for existing enterprise LDAP/AD for user authentication and management, and supports importing LDAP groups into Harbor and granting them permissions to specific projects.

To support container image signing, Harbor integrates with Notary for managing trusted collections of content. Publishers can digitally sign collections and consumers can verify the integrity and origin of content.

Using the Harbor user portal, users can easily browse and search repositories and manage projects. All site operations on the repositories are audited and tracked through logs. Administrators can interact with the portal using REST APIs; the API definitions can be found in the Swagger doc here.

Harbor containers can be easily deployed into your Kubernetes cluster using Helm charts or with Docker Compose.
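A minimal sketch of the Helm route, using the chart repository from the Harbor docs; the release name and values are illustrative:

# add the Harbor chart repository and install a release
$ helm repo add harbor https://helm.goharbor.io
$ helm install my-harbor harbor/harbor --set expose.type=nodePort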

Conclusion

Harbor is well along the maturity curve in becoming a graduated CNCF project; the recent Oct 2019 pentest concluded that the number of findings was very low and that the overall results and general impression of the codebase were positive.

The capabilities gained by using an open source solution such as Harbor for container security scanning, role-based management, monitoring, auditing and logging can be a big win for your organization. Early adopters will be better positioned as the inevitable product hardening and maturity are realized.

A terminal multiplexer allows you to control a number of terminal sessions to other hosts from within a single screen. It’s also able to preserve session state in case you happen to lose your connection to the host. You simply reconnect to your session and everything remains just as you left it. It’s pretty amazing!

In this article, we’ll take a look at some of the capabilities in tmux to help you get up and running quickly, or to give you a chance to kick the tires and see if you find it suitable to your work habits. I’ll be using a small lab of Raspberry Pis that we set up in an earlier post: moving Kubernetes closer to the bare metal. Feel free to recreate that lab if you like, or just follow along; I think you’ll be able to get the gist of tmux’s capabilities either way.

Depending on the flavor of Linux you’re running, the package manager may vary.

Raspbian is based on a Debian release, so we’ll be using apt. You may need to replace this with dnf, yum, brew, pacman or another package manager depending on your Linux flavor.
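On Raspbian the install looks like this:

# install tmux and confirm the version
$ sudo apt update
$ sudo apt install -y tmux
$ tmux -V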

The screen will display lots of key bindings. You can override the defaults for these bindings as well as other default tmux behaviors. Enter the desired changes into ~/.tmux.conf to override defaults. We’ll explore some common overrides later in this article.

With tmux installed let’s go through some basic commands.

Start tmux with a session name

# name our sessions in case we need to resume it later
$ tmux new -s gs-tmux

We provided a session name to aid us if we need to recover from an error and readily find the session we would like to resume.

Initial tmux session showing shells, hostname and current datetime

The initial display looks like a standard shell with a green status bar showing:

Session Name

Pseudo terminal name

Hostname

Datetime

To interact with tmux you’ll enter a hotkey followed by a command. The default command key is Control-b, for which I’ll be using the shortcut form ^b. We’ll begin by creating new shells by typing ^bc until we have a total of 3. You’ll notice in the status bar that you now have 3 shells.

Return to the first shell; let’s rename it and create some panes.

# starting at shell-0 change the name to Control Plane
# using ^b, backspace to clear old name, enter new name
$ ^b,
Control Plane
$ ^bn
# repeat for shell-1 and shell-2, using names: Worker1 and Worker2
# if you deleted any shells, create new ones with ^bc
$ ^b,
Worker1
$ ^bn
$ ^b,
Worker2
# return to shell-0 and create some panes
# create a row pane with ^b"
# create a column pane with ^b%
$ ^b"
$ ^b%
# navigate your panes using ^b and the arrow keys
# you can zoom in and out of a pane with ^bz

If you’re still with me you should have a tmux session that looks like the picture below.

Dashboard view of K3s pods, performance and messaging

Note that the status bar now shows a more descriptive name for each shell. The Control Plane shell has 3 panes looking at 3 different views of the system. I hope you are getting a sense for the powerful dashboards you can create using tmux. In this session I went back to the demo lab we created for the project moving Kubernetes closer to the bare metal and am watching the performance using gotop while sending a JSON message to check the health of a custom pod we deployed.

We’ve just scratched the surface of what you can do in tmux, but I hope it’s enough to whet your appetite and encourage you to dig in further. Before we leave, you might consider playing with the ~/.tmux.conf file to override some default settings. Here are a few to get you started.
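A few common overrides, sketched as examples only; adjust them to taste:

# ~/.tmux.conf
# use Ctrl-a as the prefix instead of Ctrl-b
unbind C-b
set -g prefix C-a
# enable mouse support for selecting and resizing panes
set -g mouse on
# start window and pane numbering at 1 instead of 0
set -g base-index 1
setw -g pane-base-index 1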

Some assembly required

Building stuff on larger, more capable machines that will eventually run in smaller, more constrained environments

Statement of intent

As a hobbyist and tinkerer I would like to be able to assemble containers which, at the press of a button, can be run on a multitude of different target platforms.

Reifying the intent

My build box is a Windows 10 laptop running Docker Desktop version 19.03.8. Docker version 19.03 is a significant release; in particular, it includes buildx, an experimental feature. If you google for “docker buildx arm”, you’ll learn that about a year ago Docker and Arm announced a business relationship whereby Docker the company would provide a new capability, using the BuildKit engine, for creating cross-platform images that would run on Arm and other Linux machines.

How convenient is that?!

In this post we’ll be using the experimental buildx option, through the docker CLI, to leverage BuildKit to create a container image which will deploy to and run in a Raspberry Pi 4 Kubernetes cluster. There are a lot of behind-the-scenes details which you’ll probably want to know, which is why I mentioned the earlier google search terms. In the space below we will focus on getting it done rather than understanding how it works; that has been described numerous times already.

To get us started we’ll be using a project that we worked with earlier in Simple micro service in Go. If you haven’t done so already, go ahead and download it into your Go source folder. There are some minor changes I’ve added to the project; there’s now another file called Dockerfile-linux which I use locally to build and deploy to my Docker Hub account.

TARGETPLATFORM and BUILDPLATFORM are referenced during the build to aid with debugging. With Docker desktop v19.03 installed, you should enable the experimental feature, restart the Docker Engine and verify that buildx is working.
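Roughly, the enable-and-verify steps (and an example multi-architecture build) look like this; the image name is illustrative:

# enable experimental CLI features and verify buildx is available
$ export DOCKER_CLI_EXPERIMENTAL=enabled
$ docker buildx version
# create and select a multi-platform builder
$ docker buildx create --name multi --use
$ docker buildx inspect --bootstrap
# build and push an image for several architectures
$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
    -t yourhubuser/basic-svc:latest --push .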

With our Raspberry Pi 4 image created and deployed into Docker Hub, let’s see if we can push it out to the Kubernetes cluster we created in my post: moving Kubernetes closer to the bare metal. As we continue in this example we’re going to create a pod, service and gateway using our earlier Rancher K3S Kubernetes cluster from the link to the post above.

In the snippet below we’ll set a watch using a shell on the K3S server to verify our components are created.

$ sudo watch kubectl get all

In another shell we’ll run the yaml file we created above to deploy our pod and make it accessible to the outside. It may take a few minutes to download the basic-svc from Docker Hub and deploy it to a container in our K3S cluster, so be patient.

In my basic-svc git repository I’ve added a Helm chart which deploys the basic service as we just did above. Be sure to do a git pull if you have an older copy of the repository downloaded.

You might recall that one of the last steps we performed when we installed k3s in moving Kubernetes closer to bare metal was to install Helm. In that process we didn’t exercise Helm, so I’ve included some basic getting-started steps below.
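A sketch of those steps; the release name and chart path are illustrative, so adjust them to the repo layout:

# point helm at the k3s cluster, list releases, then install the chart from the local clone
$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
$ helm ls
$ helm install basic-svc ./basic-svc/chart
$ kubectl get pods,svc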

We’ve completed a lot. We now have a methodology for Docker which can create images for Linux amd64, Amazon EC2 A1 64-bit Arm, Raspberry Pis running armv7 and potentially more. Then we created some YAML and a Helm chart to push our Docker container into our K3S cluster. Lastly we interacted with our service pod.

With some basic equipment, a little bit of time and a passion for the very best – anyone can make mouth-watering, succulent ribs!

We all have our own preference when it comes to the best style of ribs. Some prefer Carolina style or Memphis style or Kansas City, and they’re all good. Having had some time to immerse myself riding the BBQ Trail, the hands-down winner for me is Texas style, dry rubbed with a Southwestern medley of the right blend of seasoning, herbs and spices. When I make a rub, I’ll scale up the recipe to get a good 3 – 4 uses from it. Here’s my basic go-to dry rub recipe:

Use on Ribs, Chicken, Flank Steak, whatever …

4 Tbs paprika

1.3 Tbs black pepper

4 Tbs garlic powder

1.3 Tbs oregano

3 Tbs kosher salt

1 Tbs cayenne

3 Tbs brown sugar

1 Tbs coriander

1.3 Tbs sage

1 Tbs cumin

The oregano and sage are picked and dried from my wife Diane’s herb garden outside of our front door. If you don’t have your own herb garden the store bought will do.

Ribs dry rub preparation

After removing the meat from the plastic wrapper I like to run it under the faucet to remove any leftover smells from packaging. Next I’ll pat the meat dry with paper towels to remove as much moisture as possible from the surface of the meat. Then it’s time to apply the dry rub. I’ll sprinkle the dry rub on both sides of the meat, applying a more generous portion to the meaty side. I apply the rub with a spoon to keep from contaminating unused mix. Using your hands, apply the rub to the meat as if you’re applying lotion to your hands.

Weber Chimney

Before applying the dry rub I usually load the charcoal into the chimney and light the fire. It can sometimes take 20 minutes for the coals to burn down to hot glowing embers so it’s good to have this task running in parallel.

In the bottom of the chimney I’ll crumple a few pages from the newspaper, not packing too tight, but just enough to catch fire easily when surrounded by pockets of air. I let the chimney rest in our fire-pit while the coals are turning into embers. You’ll need a safe place like this to keep it so as not to accidentally catch the back yard on fire. Nothing will irritate first responders more than coming out to extinguish your yard fire before your delicious ribs are cooked!

Staging the Smoker

Ready set smoke!

The smoker I use is a moderately priced Dyna Glo with an offset chamber which provides indirect heat. Our goal is to smoke at a temperature that we’ll hold to 225 – 250 degrees F (the thermometer’s ideal region); this technique is commonly referred to as cooking low and slow.

The reason for the lower temperature is to induce a chemical reaction between amino acids in proteins and reducing sugars in the meat. This reaction causes a browning of the meat known as the Maillard reaction. The slow cooking will lead to a crunchy crust and develop a richness and depth of flavors and texture. I use a long set of tongs for re-positioning the meat as it cooks and to push hot coals into the offset chamber. You want to be extra careful not to touch any of the hot surfaces.

Another thing I do is to add a drip container to the bottom of the chamber. It both helps to keep the floor of the smoker clean and provides additional moisture to the chamber as the food cooks. I fill the drip container about half way with a mixture of water and apple cider vinegar or a fruit juice. It’s up to you how you adjust the percentage of water to juice flavors. You might consider starting simple with 15 – 20% juice to water and increase from there if you would like more of the steamy fruity flavors.

Dyna Glo Smoker with offset chamber

The hot coals from the chimney should be the last thing which goes into the smoker before you close the door. As you can see from the picture I have some wood mulch under my smoker. As you add more coal to the chamber there will be small hot embers which fall to the ground. To ensure that I don’t cause an outside fire, I soak the area beneath the offset chamber with water before the hot coals go in. The last thing I do before the door is closed is to add some hardwood chips like apple, cedar, mesquite or hickory. It won’t take long before you see the smoke billowing out from the smoker. The external thermometer on the door begins to climb into the ideal range. You’ll want to keep a close eye on the temperature and feed the coals to keep the cooking temperature in the ideal region. I’ve found adding 10 – 15 briquettes and wood chips on the half hour to be a good average.

At this time you can go into maintenance mode, tending the fire, tending to your guests, enjoying some cold beer if it’s an especially hot day, or just enjoying the beer anyway.

How much smokiness will you need? This is a subjective question; the answer is it’s going to be however much you like. It might take you a few trial and error runs to figure this out. The next question you’re probably wondering is – when will the ribs be done?

You can’t tell when they’re done just by looking; you’ll need a temperature probe. Good, accurate thermometers can be obtained for less than $20. The USDA recommendation for food safety is 145 degrees F for ribs. Diane prefers wet ribs to dry, so after a rack reaches safe temperature I’ll pull a dry rack for myself and paint the remaining rack with a wet sauce.

The wet rack will go back into the smoker for an additional 20 – 30 minutes or until it begins to caramelize. Keep some of your favorite dipping sauce handy to drizzle over the top of them.

I hope all of this talk about smoking meats has inspired you to get out into your back yard and give it a try.

Our earlier Kubernetes examples work well in the Cloud, but can we run minimal HA clusters in smaller footprints or in embedded systems?

Culinary reduction

In cooking, reduction is the process of thickening and intensifying the flavor of a liquid mixture such as a soup, sauce, wine, or juice by simmering or boiling. In software engineering, reduction is a process of refactoring an application to minimize resource usage so the application can perform well in environments where resources or compute capacity are limited, while preserving or enhancing core capabilities.

Instead of bringing our kettles to a boil to reduce a sauce, we’ll take a look at an already reduced Kubernetes implementation capable of running in a minimal resource environment, such as an embedded processor. There are lots of great Kubernetes implementations in the market that you can get started with. My primary goal was to find one that I could experiment with using my Raspberry Pis in my mad scientist’s lab. My goal presented some interesting challenges, such as:

The solution must run on an ARM7 processor

It must be able to run in a small memory and resource footprint

The runtime services shouldn’t be resource intensive

K3S is distancing itself from the competition and moving closer toward the sweet spot

The implementation I chose is one which seems to be standing out more and more from the crowd, Rancher K3S.

Recent Forrester research shows Rancher pulling away from the pack when considering their strategic vision and capabilities. Rancher received Forrester’s leader rating based upon its runtime and orchestration, security features, image management, vision and future roadmap.

The most common use case for Rancher is its practical application to edge computing environments. Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.

To create a reduced Kubernetes implementation footprint, Rancher removed etcd and a large part of cloud support, these being the larger heavyweight components of most core Kubernetes implementations. They also replaced Docker with containerd and use flannel to create virtual networks that give a subnet to each host for use with container runtimes. In removing etcd, they needed a new mechanism for supporting a distributed key/value store. There are several ways K3S can be configured to support the distributed key/value store and high availability (HA).

In this post we’ll be installing the simpler embedded SQLite in order to focus our time on getting a K3S cluster up and running. In a future post we may explore the more battle-hardened implementations.

K3S Server with embedded SQLite

A complete K3S server implementation runs within a single application space. Agent nodes are registered using a websocket connection initiated by the K3S agent process. The connections are maintained by a client-side load balancer which runs as part of the agent process.

Agents register with the server using the node cluster secret along with a randomly generated password for the node. Passwords are stored in /etc/rancher/node/password. The server stores passwords for individual nodes in /var/lib/rancher/k3s/server/cred/node-passwd. Subsequent attempts must use the same password.

Here’s a great introduction to K3S by creator, Chief Architect and Rancher Labs co-founder Darren Shepherd.

We’ll look more at the configuration settings as we get into the installation, so let’s get started. The installation process couldn’t be much simpler. We’ll download and install the latest K3S application, which is a self-extracting binary that installs K3S and runs as a Linux service. The size of the binary download weighs in at slightly less than 50MB and the extracted runtime footprint consumes a tad less than 300MB.
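The server install is a one-liner from the K3S project; here's the gist:

# install the K3S server (control plane) and confirm the node is Ready
$ curl -sfL https://get.k3s.io | sh -
$ sudo systemctl status k3s
$ sudo kubectl get nodes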

Raspberry Pi Cluster

Name        Type     Notes
Viper       Pi 4     Server
Cobra       Pi 4     Worker
Adder       Pi 4     Worker
Chum        Pi 3     Worker
Nectarine   Pi Zero  Worker

Note: To install on Pi3 and Pi Zero I had to run a pre-requisite prior to running the worker install (below) that I didn’t run for the Pi4.
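The exact prerequisite isn't preserved here; a common requirement for K3S on Raspbian is enabling memory cgroups, roughly as shown in this sketch:

# append cgroup flags to the single line in /boot/cmdline.txt, then reboot
$ sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
$ sudo reboot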

With our server up and running let’s take a look at the performance characteristics. In the gotop snapshot below note that the quad-core CPUs are barely breathing hard, memory consumption is hovering around 20% and there’s ample room to scale up.

Next we’ll install our worker nodes. When installing K3S it checks for the presence of environment variables: K3S_URL and K3S_TOKEN. When it finds K3S_URL it assumes we’re installing a worker node and uses the K3S_TOKEN value to connect to the cluster. The token can be found on the server in this file: /var/lib/rancher/k3s/server/node-token .
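The worker install then looks like this; the server hostname and token value are placeholders:

# join a worker node to the cluster
$ curl -sfL https://get.k3s.io | K3S_URL=https://viper:6443 \
    K3S_TOKEN=<contents of the server's node-token file> sh -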

Note: Each machine must have a unique hostname. If your machines do not have unique hostnames, pass the K3S_NODE_NAME environment variable and provide a value with a valid and unique hostname for each node.

To install additional containers you might consider using Helm, the Kubernetes package manager. Helm comes in binary form for your target platform; you can download the latest release here. Be sure to install the ARM version, not the ARM64 version, if you’re running on a Pi 3 or Pi 4. While the armhf processor supports 64 bits, versions of Raspbian at the time of this writing are compiled to run 32-bit applications. Here’s how you can install Helm.

Installing Helm

# set the version you wish to install
export HELM_VERSION=3.0.2
# download helm and un-tar
wget https://get.helm.sh/helm-v$HELM_VERSION-linux-arm.tar.gz
tar xvf helm-v$HELM_VERSION-linux-arm.tar.gz
# see if it works
linux-arm/helm ls
# move helm to a location in your path
sudo mv linux-arm/helm /usr/local/bin
# cleanup
rm -rf helm-v$HELM_VERSION-linux-arm.tar.gz linux-arm
# Note: if you downloaded the arm64 bit version you would get this error
# linux-arm64/helm help
# -bash: linux-arm64/helm: cannot execute binary file: Exec format error

With helm installed you can configure it to reference the latest repositories and to work with the cluster you configured.
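A minimal sketch of that configuration; the stable repo URL is the one that was current when Helm 3.0.x shipped:

# point kubectl and helm at the k3s cluster
$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# add the then-current stable chart repository and refresh it
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ helm repo list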

With the Helm repositories configured we can now install applications into our cluster. If you would like to install the Kubernetes Dashboard follow the installation procedures here.

At this point you have everything you need to create, replicate, install and run your applications in K3S. After you’re done playing with your K3S cluster you can tear it down and cleanup artifacts using the following commands.
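The teardown scripts ship with the install:

# on the server
$ /usr/local/bin/k3s-uninstall.sh
# on each worker
$ /usr/local/bin/k3s-agent-uninstall.sh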

Home wine making

Both relaxing and rewarding, if you’re patient enough to wait for your pipeline to fill with homemade product.

I got my start into home wine making partly inspired by Ray Bradbury’s story Dandelion Wine. The thought of being able to capture time in a bottle, not just any time but summertime, caught my fancy. During the long, cold, dark nights of winter you could pop the cork, smell the sweet intoxicating scents of summer and dream of happier times.

One of the first signs of spring to make their appearance is those hearty yellow flowers beginning to bloom. I walked to the local library where they could be picked without fear of contaminants, such as weed killers. Here’s a dandelion wine recipe for the curious. Dandelions don’t make the best ingredient for wine making; they have none of the raw materials which fermentation depends upon, like sugars. While it’s fun, relatively easy and rewarding, dandelions aren’t in the same class as nature’s sweetest fruit, the grape.

Basic skills needed for home wine making

If you can follow a basic recipe and have the patience to wait for the wine to complete its rest in the bottle, then you can probably make good wine. Notice how I encapsulated the algorithm in an if/then statement.

With the successful venture of dandelion wine under my belt, I was ready to branch out into uncharted waters, but keeping my boat close to the shore. For me it was a step-by-step process, taking baby steps first. Scouring the internet for recipes I also found a number of home brew suppliers which also carried wine making equipment and juices.

Home wine making quickstart

My favorite among these was Midwest Brewing; they had a great catalog which I could browse through offline. While tempted to pick up home brewing, it was apparent from the 80+ pages that I could pump thousands of $’s into home brewing and would probably need to add an addition onto the hacienda to keep it all. No, wine making for me was a simpler proposition, taking up far less storage space, and if I decided to bail out on the hobby there would be less surplus to sell off on eBay.

After you have a basic starter kit, you can get a wine recipe kit which includes the pressed juice from some of the best wine regions around the world. There are usually 5-6 steps to follow and all of the ingredients needed to produce a batch of 30 bottles in 30 days. A little back-of-the-envelope math shows that you can get in for between $2 – $5 per bottle, depending on the quality of the juice you buy. Juices from world class regions will cost you about $5 per bottle, but if you prefer good wine it’s worth it.

If you’re like me, you’ll start with a modestly priced wine to get the kinks out of your manufacturing process, then increase the juice ingredient cost as your process improves. In about half a year you can have a substantial wine cellar. Shown on the left is a can of one-step cleaning powder, of which you mix 1 tbsp with a gallon of water. The cleaner is used on all our wine making equipment and bottles. It’s both sanitary and food safe. The worst enemy of wine making is uninvited contaminants, which the cleaning solution remedies. Also shown in the picture are a bottle corker and corks.

To save some $’s on bottles and their shipping cost, I discovered that if you over-tip the wait staff at wine bars they’re only too happy to save their empty bottles for you. I know that some of you are thinking you could leave the labels on the expensive bottles, refill them with your homemade wine, heat shrink a new hat over the cork and voila! If you’re thinking about trying something like this, my answer to that is: no, no, no and no. It’s been tried before, and aside from being unethical and illegal it’s also uncool, so don’t do it.

When your fermentation is complete and the wine has clarified and stabilized, you’re ready to begin bottling.

With cleaned bottles decorating the tree, we siphon the wine from the carboy into a container from which we’ll fill the bottles.

The bottle tree has a manual pump in the bowl on top; you add some of the cleaning solution and pump until the bottle is clean.

Decorate the tree with the bottles and the cleaner drains out. While the bottles are draining you can siphon the wine from the carboy into the primary fermentation bucket. It’s a food grade container and you shouldn’t try improvising with one which isn’t. I have an extra one with a spigot which I use to fill the bottles.

Preparing for bottling

During the bottle washing process I also added the corks to a bowl with the cleaning solution. The wet corks will be easier to get into the bottles. When bottling is complete they should be left in your basement standing up for a few days. Label your bottles, recording when they were bottled and what you made. Then you should forget about them for at least a year. Certainly any leftovers or bottles that can’t be filled up to the shoulders should be kept aside and tasted during the process.

There should be a small air gap between the bottom of the cork and the shoulders of the bottle. If you can’t fill the bottle you may as well drink it. Your wine will improve with time, so temper your expectations when consuming an immature bottle.

The wine shown in the pictures comes from our own modest vineyard; it’s a mix of Concord and muscadine grapes, which are indigenous to our region but not sought after in terms of quality wine. They’re better for jam, but I wanted to try a recipe from scratch. Last year the hot, dry season produced an exceptional crop of grapes, and I made the batch following these guidelines.

When you’re ready to try a recipe from scratch, consider buying 50 – 100 pounds of grapes from a local vineyard. You’ll need a fruit press to crush the grapes and a source for the other ingredients which come in the kit.

I hope you enjoyed this brief introduction and are ready to put these new skills to good use!

Istio is a Service Mesh, which is another term for a network of containerized applications working together to discover, measure, manage, load balance, monitor and possibly recover your application. In the setup which follows, we’ll go through the process of starting a GKE cluster and deploying a sample application connected through an Istio sidecar.

This article intends to give you an understanding of what it takes to run your Kubernetes applications injected with Istio instrumentation. In future posts we’ll get further into the details of building upon this base of knowledge. So let’s get the configurations out of the way first.

Download and install Istio for your operating system. If you’re installing on Windows, be sure to get the archive which contains the examples; we’ll be using them later.
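On Linux or Git Bash the download looks roughly like this; the version folder name will vary with the current release:

# download and extract the latest Istio release (includes the samples folder)
$ curl -L https://istio.io/downloadIstio | sh -
# add istioctl to your PATH
$ cd istio-1.5.0 && export PATH=$PWD/bin:$PATH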

As your Kubernetes deployments get more and more complex, you’re going to need a Service Mesh like Istio both to give you insights into operations and to manage your applications. In this post we’ll move a bit faster from a setup standpoint, having benefited from our earlier work in Terraforming K8S Cluster creation and its prerequisite posts.

I’ll be following Google’s Istio Install instructions, and have described below the configuration I’m using to create my Istio cluster in GKE. You can follow along or try your own configuration using Terraform or YAML scripting.

You might be wondering what all this new configuration and sidecar setup is doing for us. To that end, let’s take a deeper look into what a sidecar is and what capabilities it will add to our projects.

In our earlier posts on Kubernetes we talked about a 1-to-1 relationship between pods and containers; while this is generally the case, there are patterns where two or more containers may collaborate within a pod. The sidecar pattern is one such example. Istio deploys a sidecar container which essentially injects service mesh capabilities into your application container which aren’t there by default. It accomplishes this by creating a proxy between your container and the Kubernetes control plane when your container is started. Pilot manages and configures the proxies to route traffic. Istio also configures Mixer to enforce policies and collect telemetry. Citadel can be configured to provide secure transport between services.

The main purpose of the Istio sidecar is to provide your container with service mesh capabilities such as logging, monitoring, telemetry, instrumentation and more without you having to customize them in your application container. You leverage Kubernetes standards and best practices which have been hardened and tested in PROD by many other applications. Hopefully by now your cluster is created and running.

After your cluster has been started, click on Services and Gateways to see the Istio Service Pods and Ingress Gateway running.

We can now cut over from the Google Istio Install procedure we were following to the Istio Getting Started procedure to launch a demo app. We’ll be going through those steps below; the getting started link is provided for more detailed reference.

Istio provides several configuration profiles to help you get started; eventually you’ll create your own profile. To get started we’ll be using the demo profile, which gives us access to the largest set of potential services we may want to use. Istio profiles provide customization of the Istio control plane and of the sidecars in the Istio data plane, which becomes necessary as your application configurations grow more complex.

While the demo profile provides the greatest potential set of services, the default profile provides the fewest options and may be more appropriate for a PROD environment. This is something you’ll need to consider when you decide to leave our sandbox environment and need to think harder about securing your applications.
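Applying the profile is a single command; the first form matches Istio releases of this era, while newer releases use istioctl install:

# install the demo profile and enable automatic sidecar injection in the default namespace
$ istioctl manifest apply --set profile=demo
$ kubectl label namespace default istio-injection=enabled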

If you haven’t already done so, change your directory to the root folder where you installed Istio and the samples. On my Windows 10 laptop I installed into my C:\Tools directory. Our next configurations will be relative to this location and use the provided YAML configurations in that folder.

# deploy istio sample application
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

You will notice that the sample Bookinfo application should start. As each pod becomes ready, the Istio sidecar will deploy along with it. The unique names and possibly the cluster IP address will be different for you.

To run the Verification step described in the Istio Getting Started guide, you’ll need a Linux shell. But if you’re running from Windows 10 like I am, you’ll have better luck using a Git Bash shell. I’m assuming you have Git installed, otherwise this might be a good place to stop (developer humor ha ha, why else would you be reading this).

We’re almost ready to test our sample application from the browser, but first we’ll need to determine how to reach it running inside the GKE cluster. If you’re on Windows, run these commands from Git Bash. You can cheat and look for the IP address of the istio-ingressgateway in the browser on the Services & Ingress page, but you’ll need these environment variables later when you open the Kiali browser tab from your shell.
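These are the variables from the Istio getting started guide, shown here as a sketch:

# capture the ingress gateway address and port, then build the gateway URL
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
$ echo "http://$GATEWAY_URL/productpage"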

You should now be able to reach the product page through the gateway url. Your IP address will be different than mine, and mine will be destroyed after I complete the cleanup, but the url should look like the one below.

You can now get a sense for some default monitoring provided by the sidecar using Kiali. Kiali is a console for Istio with service mesh configuration capabilities. Let’s go ahead and open a browser tab to Kiali and log in using user admin and password admin.

# open browser to kiali
istioctl dashboard kiali

Your basic setup is now complete, you should be able to navigate through some of the screens in Kiali to get a better sense for default monitoring capabilities you can get from Istio out-of-the-box. Don’t be disappointed if you don’t see the graph page the getting started document shows, it has a dependency on Prometheus that the default install didn’t provide for us.

Terraform K8S cluster creation

In our article on Lifting Kubernetes up to the Cloud, we followed a step-by-step approach to get our Kubernetes clusters running in GCP. This approach is fine when you have one or two microservices. But when you have tens, thousands, or tens of thousands, you’ll need a better automated approach that scales. That’s where Terraform comes in.

With Terraform we define our infrastructure as code and can deploy into Azure, AWS, Google, Oracle and many other clouds. Terraform allows you to deploy, update and destroy infrastructure solutions without touching a web console. It integrates seamlessly with CICD pipelines. To get started you’ll need to download Terraform for your OS. After you’ve installed Terraform and configured your path, run terraform with no options to see the available commands. We’ll play with a handful of these when we create our GKE cluster.

Let’s get started by logging in to the Google Cloud Platform and creating a new project. Click on Select a project then New Project. For the project name enter terraform and keep the unique number Google provides so that the result looks something like this: terraform-15209. The parent organization can be left blank. Click Create and GCP will create your new project.

Next we’ll need to create a Service Account so that we can interact with GCP from a gcloud shell using Terraform. Service Accounts can be found under the IAM & Admin menu. In our service account we’re going to bind some access permissions. Under service accounts, select Create Service Account, then enter terraform for the service account name and leave the service account id as is. Go ahead and enter a Description and click Create.

Click Role and choose the Editor role, then add another role and choose Kubernetes Engine –> Kubernetes Engine Admin. Select Continue and Create a Key; create the JSON key type and it will download to your machine. Be sure to keep this key private; it provides access to your kingdom.

With your secret key downloaded, create a folder for your project and move your key into it. Rename the key to: service-account.json. Create the file below and name it main.tf.
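A minimal sketch of what main.tf might contain; the project id, region, zone, machine type and names are assumptions you should adjust to your own project:

# main.tf
provider "google" {
  credentials = file("service-account.json")
  project     = "terraform-15209"
  region      = "us-central1"
}

resource "google_container_cluster" "gke" {
  name                     = "terraform-gke"
  location                 = "us-central1-a"
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "nodes" {
  name       = "default-pool"
  location   = "us-central1-a"
  cluster    = google_container_cluster.gke.name
  node_count = 1

  node_config {
    machine_type = "n1-standard-1"
  }
}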

If you don’t already have the Google Cloud SDK (gcloud) installed go ahead and install it now. Open a gcloud shell and change into your project folder. Run the gcloud command to get a list of your projects, you should see the terraform project you created earlier.
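That listing command is:

# list your projects; the new terraform project should appear
$ gcloud projects list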

With the main.tf created, let’s initialize Terraform to pull down the artifacts it needs to build a cluster in GKE. After we run init, we’ll run terraform plan to get an idea of what Terraform is going to create for us. At this point the command is run locally and won’t interact with the cloud.
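Those two commands look like this:

# initialize the working folder and download the Google provider plugins
$ terraform init
# preview what Terraform will create (runs locally, nothing changes in the cloud yet)
$ terraform plan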

If all goes according to plan you’re ready to apply your plan and create a cluster.

# execute the plan
$ terraform apply

You should get an error telling you that the Kubernetes Engine API hasn’t been enabled yet in your new project, which is true, it hasn’t. Follow the link Google provides and the remedy they recommend, then run the apply command again once the Kubernetes Engine API is enabled.

It should take several minutes or so to create your cluster and node. You should get a message indicating: Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

At this point, you may need to run the kubectl version command. If you get an x509 certificate error, you can reload your cert for the new cluster you created.
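Reloading the credentials is one gcloud call; the cluster name, zone and project shown here are illustrative:

# refresh kubectl credentials for the new cluster, then retry
$ gcloud container clusters get-credentials terraform-gke --zone us-central1-a --project terraform-15209
$ kubectl version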

At this point you can go back to the article Lifting Kubernetes up to the Cloud, picking up at section: Deploy our basic service, if you would like to deploy our test dummy container, create gateway and scale it up.

When you’re done playing don’t forget to run the cleanup so the billing stops. To delete your cluster you can run terraform destroy.

I hope you’ve gotten a good feel for the power and capabilities of Terraform for creating infrastructure as code. The CLI API guide has an extensive, rich set of commands you can apply across a number of different clouds, helping make you a better DevSecOps professional.

In an earlier post we discussed Securing your Mule 4 config properties and your feedback was great. While most of you liked the approach, others felt it was a little too techie for their DevSecOps teams. The feeling was that there were too many moving parts, and the consensus was for a GUI to wrap the encode and decode details.

To that end, I created a GUI wrapper using TornadoFx. You will still need to download the Mule 4 secure-properties-tool.jar that I linked in my previous article. Also, I posted an Uber Jar (not the car hailing company) to my GitHub project packages. The Uber Jar will need to be run from the same folder where you downloaded the Mule jar, as shown in the code snippet below.
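Roughly, it's run like this; the uber jar file name is illustrative, so use the one from the GitHub packages page:

# run from the folder containing the Mule secure-properties-tool.jar
$ java -jar secureprops-encdec-uber.jar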

When you run the Uber Jar, the SecureProps EncDec application will start up, giving you the opportunity to encode or decode your secrets.

Wrapping Mule 4 secure properties with TornadoFx

In the Secure Properties GUI select the crypto algorithm and cipher you wish to use. The password is the same one Mule will use to decode the properties at startup. You’ll add your secret to the secret field and press Run to generate the encoding. The default encoding is for insertion in a .properties file. To produce an encoding for a YAML file, be sure to click on the checkbox before hitting the Run button. You can always reverse the process by adding the encoding to the secret field. Just be sure not to include the ![] which wraps the encoding.

Hopefully this will allay the earlier concerns by simplifying the encoding process. In the space that remains I’ll hit some of the high points about TornadoFx and the application.

There’s a lot of info packed into the links above, but probably the main question most people have is: how long will it take to learn a new framework and start building cool UIs using TornadoFx? The answer of course is it depends. It depends on whether you’re just starting or have some background in JavaFx, JavaScript, web development frameworks and the like.

The good news is that TornadoFx builds on the basic concepts of other UI development frameworks, so if you have some prior experience and an understanding of MVC patterns you should be in good shape.

To get you started, you can find the code for the Mule Secure Properties app in my GitHub repository. The Uber jar is also there if you would just like to use it for securing your Mule 4 properties. While useful, it’s still an immature version which doesn’t validate any of the parameters you send to the Mule jar. For example, it will be perfectly happy accepting a password length that’s unacceptable to algorithms like AES, which require 16 bytes, and you might get results like this: “Invalid AES key length: 8 bytes“. As long as you conform to the happy path it should work fine and is acceptable for a non-production-grade demo app.

Let’s take a look at some code. To run the application, we wrap the JavaFX Stage: the wrapper adds the dimensions for our window, overrides the default application icon with one of our own, and passes our MainView class as the view entry point.
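A sketch of that wrapper in TornadoFx; the class name, dimensions and icon path are illustrative, and MainView and Styles come from the project itself:

// SecurePropsApp.kt
import javafx.scene.image.Image
import javafx.stage.Stage
import tornadofx.App
import tornadofx.launch

class SecurePropsApp : App(MainView::class, Styles::class) {
    override fun start(stage: Stage) {
        // set the window dimensions and swap in our own icon
        stage.width = 640.0
        stage.height = 420.0
        stage.icons += Image("app-icon.png")
        super.start(stage)
    }
}

fun main(args: Array<String>) = launch<SecurePropsApp>(args)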

Our MainView and EncDecController fulfill the contractual obligations described by the MVC pattern. Our controller’s runExec method executes the Mule 4 secure properties config jar, encoding or decoding our secret, using an asynchronous pattern so that our UI view thread doesn’t block. It takes a string as an argument, the same one we entered manually when we first reviewed the Mule jar in our last article.

In MainView.kt you’ll find most of our UI controls and layout styles, though you will probably want to consolidate and centralize most of your layout definitions in a file like Styles.kt, which is similar in nature to a styles.css. For a better understanding of how layouts and controls work in TornadoFx, you’ll probably want to review Edvin’s TornadoFx guide. To make you even more productive, Edvin has created a TornadoFx plugin for IntelliJ IDEA.

Whether you’re inspired to create your own TornadoFx application or just use one, like this one, I hope you found this an interesting read and that the links help to get you up to speed quickly. As always, I look forward to your feedback and thoughts on improvements.