<h2>Working with JavaScript callback APIs from async/await (2018-01-27)</h2>
<p>To ignore Node.js as a possibility in certain problem domains, for which it is
the best tool for the job, is a tremendously silly and at times unprofessional
decision. While I don't delight in writing JavaScript, I must acknowledge that
JavaScript has matured quite nicely over the past ten years. Perhaps the most
helpful additions, for me at least, are the <code>async</code> and <code>await</code> keywords, which
aim to prevent the callback nightmare many casual JavaScript developers may
dread.</p>
<p>Particularly for Node applications, callbacks provided a mechanism through
which highly event-driven code could be executed. Inside the runtime, this generally
means the execution thread can defer certain slow operations, such as timers or
network I/O, until the timer fires or the socket's buffer has data available
for the application, all the while executing other "work" within the
application. I was first introduced to this cooperative multitasking
approach over a decade ago via "greenlets" in Python, but the tools and libraries
I used were hacks on top of CPython and never caught any significant adoption.
Node, however, is "just JavaScript" which practically every web application
must maintain some familiarity with anyways. This allowed Node to enter a
niche, which <a href="https://golang.org">Go</a> would later intrude upon, of lightweight
and high-connection-count services.</p>
<p>Unfortunately, callback-oriented code is fairly difficult to read and
understand, as its execution flow cannot be read linearly by scrolling down in
the text editor. This is why, in my opinion, the <code>async</code> and <code>await</code> syntactic
sugar is so valuable in JavaScript. Borrowing from
<a href="https://javascriptasyncfunction.com/">javascriptasyncfunction.com</a>,
callback-oriented code such as:</p>
<pre><code class="javascript">function foo(onSuccess) {
var request = new XMLHttpRequest();
request.open('GET', 'https://swapi.co/api/people/1/', true);
request.onload = function() {
if (request.status &gt;= 200 &amp;&amp; request.status &lt; 400) {
var data = JSON.parse(request.responseText);
onSuccess(data.name);
}
};
request.send();
}
</code></pre>
<p>Can be re-written as:</p>
<pre><code class="javascript">async function foo() {
const response = await fetch('https://swapi.co/api/people/1/');
const parsedResponse = await response.json();
return parsedResponse.name;
}
</code></pre>
<p>This is all well and good, but only works because the APIs underneath, e.g.
<code>fetch</code>, have been introduced to support it. For the unfortunate developer
(read: me) who must work with the legacy "callback-oriented" APIs, it might not
be obvious how to use <code>async</code> and <code>await</code> in an application which <em>must</em>
integrate with callback-driven libraries.</p>
<hr />
<p>While banging my head against this problem I learned that JavaScript engines
introduced the <code>Promise</code> API, which was somehow related, but it was never
succinctly clear how.</p>
<p>What I found so terribly confusing was: I had always seen the <code>async</code> and
<code>await</code> keywords used together but never with a callback-oriented API.</p>
<p>It helps to tease the two apart, and explain them separately:</p>
<p><strong>async</strong>: should be used with a function declaration to denote that it can be
deferred and will, in effect, implicitly return a <code>Promise</code>.</p>
<p><strong>await</strong>: should be used to block a sequential flow of execution until a
<code>Promise</code> can be resolved. <code>await</code> cannot be used unless the function
containing it is marked <code>async</code>.</p>
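<p>To make the first point concrete, here is a minimal sketch showing that an <code>async</code> function always hands back a <code>Promise</code>, even when its body returns a plain value:</p>
<pre><code class="javascript">async function fortyTwo() {
    return 42;
}

/* the caller receives a Promise, not the number itself */
fortyTwo().then((value) =&gt; console.log(value)); /* prints 42 */
</code></pre>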
<p>Let's say I want to take a function, which currently uses callbacks, and
incorporate it into the rest of my <code>async</code>/<code>await</code> application. The trick, it
turns out, is to wrap it with a <code>Promise</code>:</p>
<pre><code class="javascript">function sendMessage(payload) {
return new Promise((resolveFunction, rejectFunction) =&gt; {
clientAPI.send(payload, (error, response) =&gt; {
/* in the callback */
/* if there was an error, invoke the `reject` function as part of
the Promise API. */
if (error) { return rejectFunction(error); }
/* if there was a response, invoke the `resolve` function as part of
the Promise API */
resolveFunction(response);
});
});
}
</code></pre>
<p>This <code>sendMessage</code> function can then be used in other <code>async</code> type functions,
e.g.:</p>
<pre><code class="javascript">async function notifyBroker() {
let response = await sendMessage({ping: true});
/* do something with `response` */
}
</code></pre>
<p>This doesn't completely change the writing of JavaScript to a sequential model,
the top-level invocation of this function must treat it as a <code>Promise</code>, e.g.:
<code>notifyBroker().then(() =&gt; { /* callback when notifyBroker() completes */ });</code></p>
<p>It does, however, make it a lot easier to author non-blocking code without
a descent into callback hell.</p>
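<p>As an aside, for APIs which follow Node's "error-first" callback convention, the built-in <code>util.promisify</code> helper (available since Node 8) can do this wrapping for you. A rough sketch, assuming the same hypothetical <code>clientAPI</code> from above:</p>
<pre><code class="javascript">const util = require('util');

/* assumes clientAPI.send(payload, (error, response) =&gt; {}) follows the
   standard error-first callback convention */
const sendMessage = util.promisify(clientAPI.send.bind(clientAPI));

async function notifyBroker() {
    let response = await sendMessage({ping: true});
    /* do something with `response` */
}
</code></pre>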
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/AJ5nmAQ3OuY" height="1" width="1" alt=""/>http://unethicalblogger.com/2018/01/27/working-with-callbacks-and-asyncawait.htmlSupport Escalations to Engineering are Outages2018-01-14T00:00:00-08:00http://unethicalblogger.com/2018/01/14/enterprise-support-escalations<p>I have been thinking a lot about customer support over the past two years. My
role as "Director of Evangelism" has placed me at the leading edge of what
could be referred to as "customer success" or "user education." What I have
come to appreciate, especially in Enterprise-focused startup companies, is the
connected and complementary roles between Product, Engineering, Quality,
Evangelism, Customer Support, and Sales. In an Enterprise-focused organization
what defines the success for each of these groups is fundamentally the same,
but they are not all equally "connected" to the customer's feedback and
concerns.</p>
<p>My mental model is one of a line spiraling outward from the customer. The
Account Executive should have the highest understanding of what that
particular customer needs and wants. Moving outward, the Support team should
have a fairly good understanding of that customer's initiatives over time,
their stumbling blocks, and so on. Further away from a discrete customer,
Evangelism/Marketing/Advocacy should understand the general problem domains
that these "types" or "personas" of customers are facing, in order to tailor
education or marketing content to help inform them. Perhaps furthest out from
a single customer, Product and Engineering must understand classes of problems
faced by types of customers, and devise solutions for them. This of course is
not to say that Product and Engineering <em>should</em> be ignorant of the needs of
customers, but in order for a Software Business to scale, they may necessarily
focus less on individual customers' needs and instead try to create generalized
solutions to problem domains.</p>
<p>Each of the four companies I have worked at thus far had Customer Support in
some form or fashion, but only at the last two, those which turned their focus
more towards Enterprises, have I noticed patterns of "escalations" into the
Engineering teams. <strong>Escalations</strong> in Support, like those in Operations, are
the passing of tickets which require either more expertise, more authority, or
a larger response than the previous level of responsibility.</p>
<p>Suffice it to say, Support looks really a lot like an Operations team to me.
Looks like an Ops, complains like an Ops, drinks like an Ops, must be an Ops!</p>
<p>What tends to happen in Operations teams with regards to escalations is that
sometimes an incident requires custom knowledge by the person who is
responsible for the application to resolve. Those weird, yet-to-be-documented,
behaviors from an application which go bump in the night and degrade service.
When these things happen, typically somebody from Engineering is looped into
the discussion, some developer who is not accustomed to their phone ringing in
the night will sleepily answer only to be barraged with trivia about code they
have written. In high-performing and mature organizations, typically the next
day or whenever the incident has been resolved, people want to have
retrospectives. They want to perform a root-cause analysis and fix the root
cause so that next time they can sleep off their future hangovers in peace and
quiet.</p>
<p>From my observations of Enterprise support, something eerily similar to the
first part tends to occur. Somewhere between a customer's infrastructure and
our software, something goes wrong, or a weird yet-unknown use-case crops up
which is not well supported by our software, and causes grief for a customer.
Even the most stellar of Support teams will eventually need to escalate to
Engineering, if for no other reason than to ask "what the hell is <em>supposed</em> to
happen here?"</p>
<p>While I plead ignorance of what goes for best practices in Customer or
Technical Support circles, I wonder what would happen if we treated every
single escalation into Engineering like a "<strong>production outage</strong>?"</p>
<p>If the Support team is unable to resolve an issue for a customer, in the
strictest terms, to me that is either: an education problem to resolve within
the Support team or <strong>a bug</strong>.</p>
<p>The first option is easy to resolve, training, documentation, more mentorship
are all easily within reach for the savvy organization. The second one is a
<em>very</em> difficult pill to swallow, and where treating an escalation as an outage
offers the most rewards.</p>
<p>"The customer has done something wrong and this is a self-inflicted problem."</p>
<p>Bug. The software should not allow the customer to get into broken states.</p>
<p>"But the customer is using the software incorrectly!"</p>
<p>Bug. If the software cannot be easily used properly, then the design and user
experience are broken.</p>
<p>"But the customer applied local scripts and hacks, we cannot support those!"</p>
<p>Bug! If a customer has to further extend the software in order to make it
useful, then perhaps we're not solving the problems for the customer we thought
we were.</p>
<p>Perhaps my favorite part of the Outage Retrospective or Post-Incident Analysis
is that it forces an organization to pause and reflect on whether it is
successfully delivering the solutions it purports to deliver. Like an NTSB
Accident Report, walking an incident back, chronicling all the missed
opportunities for remediation, documenting the numerous fail-safes which didn't
help, and so on, when applied well can only lead to better software, a stronger
organization, and more satisfied customers.</p>
<p>I don't really know whether this is already done in some form within
organizations, including my own. I do know, however, that treating failures not
as inevitabilities but as opportunities to improve, is the only sure path
forward.</p>
<p>The fastest possible resolution for a customer support ticket is to prevent it
from ever needing to be filed.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/uLehdKBVDFU" height="1" width="1" alt=""/>http://unethicalblogger.com/2018/01/14/enterprise-support-escalations.htmlProvision a personal Kubernetes in 3 minutes on Azure2018-01-08T00:00:00-08:00http://unethicalblogger.com/2018/01/08/personal-kubernetes<p>At my previous company one frequent request made by developers was along the
lines of "I want to be able to run a development stack on my machine." Frankly,
I never understood this desire, and still don't. While I would agree that my
laptop is underpowered, running a stack of JVMs and other applications, in
addition to a web browser, would bring most machines to a crawl. An ideal
alternative is to simply operate a personal Kubernetes environment in a public
cloud. Fortunately, that is now a genuinely <strong>simple</strong> task.</p>
<p>Last year the team at Microsoft working on Azure introduced "AKS", a managed
<a href="https://kubernetes.io">Kubernetes</a> environment. They also introduced <a href="https://azure.microsoft.com/en-us/features/cloud-shell/">Cloud
Shell</a> which allows
for a quick shell in the Azure portal for running authenticated commands. What
they didn't talk too much about was that Cloud Shell comes pre-baked with:</p>
<ul>
<li><code>az</code> the Azure CLI tool, already authenticated.</li>
<li><code>kubectl</code> the Kubernetes command-line interface.</li>
<li><code>helm</code> a package manager for Kubernetes.</li>
</ul>
<p>With both of these, it's <strong>absurdly</strong> easy to provision a Kubernetes
environment in under 3 minutes.</p>
<ul>
<li>Create a resource group: <code>az group create -n &lt;name&gt; -l &lt;location&gt;</code></li>
<li>Create a Kubernetes environment: <code>az aks create -n &lt;name&gt; -l &lt;location&gt; -g
&lt;group&gt; -k &lt;kubernetes-version&gt; --node-count &lt;count&gt;</code></li>
</ul>
<p>Below is an example I just ran:</p>
<pre><code>tyler@Azure:~$ az group create -n unethicalblogger -l eastus
Location Name
---------- ----------------
eastus unethicalblogger
tyler@Azure:~$ az aks create -n ub -g unethicalblogger --node-count 1 --generate-ssh-keys -k 1.8.2 -l eastus
SSH key files '/home/tyler/.ssh/id_rsa' and '/home/tyler/.ssh/id_rsa.pub' have been generated under ~/.ssh to allow SSH access to the VM.
If using machines without permanent storage like Azure Cloud Shell without an attached file share, back up your keys to a safe location
AAD role propagation done[############################################] 100.0000%
DnsPrefix                   Fqdn                                                      KubernetesVersion  Location  Name  ProvisioningState  ResourceGroup
--------------------------  --------------------------------------------------------  -----------------  --------  ----  -----------------  ----------------
ub-unethicalblogger-be5308  ub-unethicalblogger-be5308-ccbb80df.hcp.eastus.azmk8s.io  1.8.2              eastus    ub    Succeeded          unethicalblogger
</code></pre>
<p><img src="/images/post-images/provision-kubernetes/cloud-shell.png" alt="Provisioning with Cloud Shell" /></p>
<hr />
<p>In the above example I only deployed one "node" (Virtual Machine), which means
the Kubernetes environment is going to cost about $50/month. Of course, I can
scale that up with <code>az aks scale</code> if I need more capacity, but for small
personal projects, this is more than enough.</p>
<p>With my own personal Kubernetes provisioned, I can start dropping Helm charts
into the environment without wasting any of my laptop's resources.</p>
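<p>As a rough sketch of what that looks like from Cloud Shell, assuming the same <code>ub</code> cluster and <code>unethicalblogger</code> resource group created above, and Helm 2 era tooling:</p>
<pre><code>tyler@Azure:~$ az aks get-credentials -n ub -g unethicalblogger
tyler@Azure:~$ kubectl get nodes
tyler@Azure:~$ helm init
tyler@Azure:~$ helm install stable/wordpress
</code></pre>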
<p>Quite fancy!</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/cFRtI7moDAE" height="1" width="1" alt=""/>http://unethicalblogger.com/2018/01/08/personal-kubernetes.htmlEnforcing administrative policy in Jenkins, the hard way2018-01-05T00:00:00-08:00http://unethicalblogger.com/2018/01/05/jenkins-policy-enforcement<p>One foggy morning a few weeks ago, I received a disk usage alert courtesy of
the Jenkins project's infrastructure on-call rotation. In every infrastructure
ever, disk usage alerts seem to be the most common alert to crop up; something
<em>somewhere</em> is not properly cleaning up after itself. This time, the alert was
from our own <a href="https://ci.jenkins.io/">Jenkins environment</a>. The logging
filesystem wasn't the problem; the filesystem hosting <code>JENKINS_HOME</code> was
perilously close to running out of space. The local time was about 6:20 in the
morning, and yours truly was quietly furious at the back of a bus headed into
San Francisco for the day.</p>
<p>To put it delicately, Jenkins has always been a pain for Systems
Administrators. What was originally a huge selling point, the WYSIWYG
configuration screens, has over time become a weakness, thanks to the healthy
adoption of "infrastructure as code" tooling such as Puppet. With the
introduction of "Pipeline as Code" as a core concept in Jenkins 2,
circa 2016, the problem was even further exacerbated. Empowering developers
with some level of code-driven autonomy is now a key aspect of any modern
development tool, but without corresponding tooling and controls for
administrators, such autonomy rapidly leads to chaos.</p>
<p>Back on the bus ride, the usage of <code>JENKINS_HOME</code> slowly inched towards 100%. A
quick analysis indicated that most of the disk space was being occupied by
what any capable Jenkins admin would expect:</p>
<ul>
<li>Old archived artifacts.</li>
<li>Old test reports.</li>
<li>Old console logs.</li>
</ul>
<p>With Jenkins Pipeline, developers have control, to the detriment of
administrators like me, who have no (<em>simple</em>) means to systematically enforce
things like log rotation.</p>
<p>That doesn't mean administrators are left entirely out in the cold, but rather
we have to enforce administrative policy <strong>the hard way</strong>.</p>
<h3>Scripting Jenkins</h3>
<p>Jenkins has support for built-in <a href="http://groovy-lang.org">Groovy</a> scripting,
which is the usual solution for enforcing administrative policy in Jenkins.
In order to rectify the disk usage situation, I wrote a little snippet of
Groovy which will forcefully purge <strong>all but the last 5 runs</strong> of every
Pipeline in the "Plugins" folder on the system:</p>
<pre><code class="groovy">Jenkins.instance.items.each { f -&gt;
if (f.name == 'Plugins') {
f.items.each { p -&gt;
/* each p is really a Multibranch Pipeline, which looks like a
* folder, so need to iterate over its items */
p.items.each { pipeline -&gt;
if (pipeline.builds.size() &gt; 5) {
println "Deleting from ${p}"
/* Delete runs older than the last five */
pipeline.builds[5 .. -1].each { it.delete() }
}
}
}
}
}
</code></pre>
<p>Scary! Right now I have only added this little Groovy script to the
infrastructure team's runbooks. If I wanted to enforce this more
systematically, I would add the file to the <code>init.groovy.d/</code> directory on the
Jenkins master.</p>
<h4>init.groovy.d</h4>
<p>Many administrators aren't aware of the <code>init.groovy.d/</code> directory, which can
be added to <code>JENKINS_HOME</code>. The <em>really really</em> useful characteristic of Groovy
scripts added to <code>init.groovy.d/</code> is that they are executed after Jenkins
plugins are loaded, but before Jenkins is "ready" and starts accepting web
requests or executing workloads. These qualities make <code>init.groovy.d/</code> an ideal
place to insert scripts which:</p>
<ul>
<li><strong>Clean up the filesystem</strong>, such as with my forceful log rotation script
referenced above.</li>
<li><strong>Enforce security policy</strong>, like my Groovy scripts which <a href="https://github.com/CodeValet/master/blob/master/init.groovy.d/disable-cli.groovy">disable the
Jenkins CLI</a>, or <a href="https://github.com/CodeValet/master/blob/master/init.groovy.d/setup-github-oauth.groovy">configure GitHub OAuth-based authentication and authorization</a>.</li>
<li><strong>Configure monitoring tooling</strong>, such as <a href="https://github.com/CodeValet/master/blob/master/init.groovy.d/configure-datadog.groovy">the Datadog
plugin</a></li>
<li><strong>Pre-configure Pipeline Libraries</strong>, like those which should be <a href="https://github.com/CodeValet/master/blob/master/init.groovy.d/pipeline-global-configuration.groovy">enabled
globally for all Pipelines</a></li>
</ul>
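<p>For a taste of what such a script looks like, here is a minimal sketch (the file name is arbitrary) which simply ensures the master itself runs no build executors:</p>
<pre><code class="groovy">/* JENKINS_HOME/init.groovy.d/no-executors-on-master.groovy */
import jenkins.model.Jenkins

/* runs after plugins have loaded, but before Jenkins starts accepting work */
Jenkins.instance.setNumExecutors(0)
Jenkins.instance.save()
</code></pre>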
<p>As I mentioned in my previous post <a href="/2017/07/24/groovy-automation-for-jenkins.html">Developing Groovy Scripts to Automate
Jenkins</a>, creating these
scripts requires a <strong>lot</strong> of knowledge about how Jenkins works on the inside.
While this is definitely "the hard way," the end result is a much more
automated and manageable Jenkins environment.</p>
<p>To learn more about scripting Jenkins, I highly recommend the talk embedded
below, given by my pal Sam Gleske at Jenkins World 2017.</p>
<center><iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/qaUPESDcsGg" frameborder="0" gesture="media" allow="encrypted-media" allowfullscreen></iframe><br/></center>
<h3>Scripting Pipeline</h3>
<p>In my previous post <a href="/2017/08/03/overriding-builtin-steps-pipeline.html">Overriding steps in Pipeline with Shared Library sleight
of hand</a>, I discussed another
option for enforcing administrative policy: overriding Pipeline steps. While I
won't repeat too much, I do wish to point out a very useful pattern to
consider: enforcing timeouts on built-in steps. Take the <code>sh</code> step as an
example: by default in Jenkins there is no built-in way, configurable or otherwise,
to constrain the time spent by a step. This means a malicious or
incompetent developer can run a script which performs an infinite loop,
wastefully tying up resources in the Jenkins environment.</p>
<p>By overriding the <code>sh</code> step, I can wrap it with a 2 hour timeout safe-guard as
is implemented below. Once the Shared Library has been implicitly loaded in the
Global Pipeline Libraries configuration, developers won't notice any changes,
but the beleaguered administrator will sleep a bit easier at night.</p>
<pre><code class="groovy">def call(Map params = [:]) {
String script = params.script
Boolean returnStatus = params.get('returnStatus', false)
Boolean returnStdout = params.get('returnStdout', false)
String encoding = params.get('encoding', null)
timeout(time: 2, unit: 'HOURS') {
/* invoke the built-in sh step */
return steps.sh(script: script,
returnStatus: returnStatus,
returnStdout: returnStdout,
encoding: encoding)
}
}
/* Convenience overload */
def call(String script) {
return call(script: script)
}
</code></pre>
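<p>From the developer's perspective nothing changes; a <code>Jenkinsfile</code> along the lines of the sketch below keeps calling <code>sh</code> as usual, and the overridden step from the implicitly loaded Shared Library is what actually runs:</p>
<pre><code class="groovy">node {
    checkout scm
    /* transparently wrapped in the two hour timeout by the Shared Library */
    sh 'make test'
}
</code></pre>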
<h3>An easier way?</h3>
<p>Work is currently being undertaken, spear-headed by <a href="https://github.com/ewelinawilkosz2">Ewelina
Wilkosz</a> at Praqma
under <a href="https://github.com/jenkinsci/jep/tree/master/jep/201">JEP-201</a> titled
"Configuration as Code."</p>
<blockquote><p>We want to introduce a simple way to define Jenkins configuration from a
declarative document that would be accessible even to newcomers. Such a
document should replicate the web UI user experience so the resulting structure
looks natural to end user. Jenkins components have to be identified by
convention or user-friendly names rather than by actual implementation class
name.</p></blockquote>
<p>While I haven't had the time to really dive deeper into what Ewelina and her
crew are proposing, they are certainly in the right ballpark for making Jenkins
easier to administer, and policies easier to enforce.</p>
<hr />
<p>Once you come to terms with scripting Jenkins, there are a number of ways in
which policy can be enforced using those scripts. My current preferred method
is to use <code>init.groovy.d/</code>, but those only apply during boot/restarts. It's
also possible to execute those very same scripts via the Jenkins CLI, which I
have done in the past. Through a clever combination of shell, Groovy, and
Puppet scripting, it's possible to write idempotent scripts which Puppet can
run every time the Puppet Agent runs, ensuring on-going compliance.</p>
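<p>For reference, running one of these scripts against a live master over the CLI looks something like the following sketch, where <code>cleanup-plugins-builds.groovy</code> is a hypothetical file containing the purge script from earlier (and assuming the CLI hasn't been disabled, as I do elsewhere):</p>
<pre><code>java -jar jenkins-cli.jar -s https://ci.jenkins.io/ groovy = &lt; cleanup-plugins-builds.groovy
</code></pre>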
<p>Just because it isn't easy doesn't mean it's impossible.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/85qz22BLjKQ" height="1" width="1" alt=""/>http://unethicalblogger.com/2018/01/05/jenkins-policy-enforcement.htmlTransparently supporting external Artifacts in Jenkins2018-01-04T00:00:00-08:00http://unethicalblogger.com/2018/01/04/external-artifacts-jenkins<p>One of the first pain points many organizations endure when scaling Jenkins is
the rapid accumulation of artifacts on their master's filesystem. Artifacts
are typically built packages such as <code>.jar</code>, <code>.tar.gz</code>, or <code>.img</code>
files, which are useful to persist after a Pipeline Run has completed for later
review as necessary. The problem that manifests over time is quite
predictable: archived artifacts incur significant disk usage on the master's
filesystem, and the network traffic necessary to store and serve the artifacts
becomes a non-trivial problem for the availability of the Jenkins master.</p>
<p>Perhaps one of my "favorite" (read: not my favorite) responses from the "Not
Actually Helpful Brigade" to questions or
concerns about scaling artifact storage on the
Jenkins mailing list is something along the lines of: "Archived artifacts aren't
supposed to be used like that, you should really be using Artifactory or
Nexus."</p>
<p><strong>Not. Actually. Helpful.</strong></p>
<p>One of my number one pet-peeves with any piece of software is when people tell
me that I'm using it wrong. <strong>No.</strong> If I'm not supposed to use Jenkins in this
fashion, and Jenkins doesn't prevent me from doing so, that's a bug in Jenkins,
full stop.</p>
<p>While discussing this a bit with my crazy-idea co-conspirator
<a href="https://github.com/i386">Jimbo</a>, I came to a delightfully devious idea: <strong>what
if I could transparently make artifact archival external to Jenkins?</strong></p>
<p>Traditionally in Jenkins people solve problems with plugins. I hate plugins. I
hate to write them. I hate managing their N-different upgrade lifecycles in
Jenkins environments I maintain. I hate that "write a plugin" is the de-facto
answer given to many who wish to do interesting things in Jenkins.</p>
<p>I do, however, love Jenkins Pipeline. I love writing Jenkins Pipeline. I love
that I can put Jenkins Pipelines in a <code>Jenkinsfile</code> and check it into my source
repo. I love that I can do many interesting things with Jenkins via Pipelines.</p>
<h3>Implementing crazy things</h3>
<p>Pipeline provides two steps, <code>archive</code> which was deprecated against all
sensible logic, and <code>archiveArtifacts</code> which does the exact same thing with
more arguments and verbosity. Starting with the <a href="/2017/08/03/overriding-builtin-steps-pipeline.html">overriding built-in
steps</a> pattern, which I
discussed last August, I set about re-implementing these two steps in a <a href="https://jenkins.io/doc/book/pipeline/shared-libraries/">Shared
Library</a></p>
<p>Part of the challenge with implementing a Pipeline Shared Library is that the
Groovy code implemented in them executes within the Jenkins <strong>master</strong> JVM,
whereas Pipeline <em>steps</em> execute within the Jenkins <strong>agent</strong> JVM. The
consequence of this is that I cannot simply load a Java library which supports
uploading files to Azure Blob Storage (for example) because when that code
would execute, it would be executing inside the Jenkins <strong>master</strong> rather than
the <strong>agent</strong> and therefore would not have access to the filesystem.</p>
<p>Approaching this problem from a slightly different angle: I need to be able to
get "my" Pipeline Shared Library code to execute on the <strong>agent</strong> in order to
have access to the filesystem. Reaching into my Pipeline bag of tricks, which
looks suspiciously similar to my Pipeline pit of despair, I grabbed the
built-in <code>libraryResource</code> step which can "Load a resource file from a shared
library." The following snippet of (Scripted Pipeline) code will allow me to drop code onto an
<strong>agent</strong> for execution:</p>
<pre><code> String uploadScript = libraryResource 'io/codevalet/externalartifacts/upload-file-azure.sh'
writeFile file: 'my-special-script', text: uploadScript
sh 'bash my-special-script'
</code></pre>
<p>Overriding <code>archiveArtifacts</code> is only half of the solution, however; from the
web UI in Jenkins, end-users should still be able to access the archived
artifacts.</p>
<p>Included in my
<a href="https://github.com/CodeValet/external-artifacts/blob/master/vars/archiveArtifacts.groovy">override</a>
is code which will generate an HTML file with a redirect to the artifact in
Azure, and use the <em>actual</em> built-in <code>archiveArtifacts</code> to store that.
Presently I don't have a more elegant solution for an "artifact pointer" but I'm
sure that could be solved via an actual plugin :).</p>
<p>By defining some environment variables and credentials at an administrative
level, to indicate where artifacts should be stored, and by using the "Load
Implicitly" pattern discussed in the <a href="/2017/08/03/overriding-builtin-steps-pipeline.html">overriding built-in
steps</a> blog post, I can
override the artifact archival for end-users in my Jenkins environment.</p>
<p><img src="/images/post-images/external-artifacts/finished-flow.png" alt="Finished product" /></p>
<h3>Future work</h3>
<p>My <a href="https://github.com/CodeValet/external-artifacts">current work-in-progress</a>
relies on a crazy Bash script for uploading files to Azure, which means it has
some system dependencies and does not work on Windows. I plan to work around
this by implementing the artifact upload with Go and embedding Go binaries in
the Shared Library for delivery with <code>libraryResource</code>.</p>
<p>The other bit of future work I would like to implement is <code>unarchive</code>, which is
actually a real built-in step in Pipeline, but doesn't seem to actually be
usable in any tangible sense. There are some cross-Pipeline use-cases for
"unarchiving" an artifact for re-use, which is currently not well supported in
Pipeline.</p>
<p>Another potential area of exploration would be overriding <code>stash</code> and <code>unstash</code>
steps to use this external artifact storage mechanism to avoid some of the
<a href="https://jenkins.io/projects/remoting/">Remoting</a> performance penalties which
are associated with larger stashes.</p>
<h3>Conclusion</h3>
<p>After a night of fervent hacking on this experiment, I cannot yet confidently
state whether it's a terrible or brilliant idea. I do think this approach has
the potential to be an "easy win" for making Jenkins more scalable, without
requiring significant surgery in Jenkins core or the surrounding plugins.</p>
<p>Assuming this pattern has potential, I can imagine it being trivial to support
S3, Azure Blob Storage, Swift, and any number of other storage backends. If
they can be supported via a simple Go program, then why not!</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/4J3LJA3fqVk" height="1" width="1" alt=""/>http://unethicalblogger.com/2018/01/04/external-artifacts-jenkins.htmlGoogle Hangouts is dead, long live Google Hangouts2017-12-06T00:00:00-08:00http://unethicalblogger.com/2017/12/06/dialing-in-for-google-meet<p>In this post I would like to share a handy little workaround for returning to
Google Hangouts, despite Google Meet. Having narrowly escaped working at Google
via acquisitions <em>twice</em>, I have stood by and watched as the Ad Words
money-pipe funded rewrite after boondoggle after rewrite. When Google
<a href="https://techcrunch.com/2017/02/28/google-quietly-launches-meet-an-enterprise-friendly-version-of-hangouts/">announced "Google
Meet"</a>
earlier this year as an "enterprise-friendly version" of Google Hangouts, I was
annoyed, but not surprised.</p>
<p>Google seems to be so systematically incapable of building a great product
experience, that it comes as no surprise that organizations continue to be
stuck in a weird limbo between Google Hangouts and Google Meet. Google Meet
somehow doesn't support nearly as many features as Google Hangouts, which I
guess makes it enterprise-friendlier, but it also is <em>broken</em> in ways that
Google Hangouts is not. Screen-sharing has never worked for me on any
Linux-based browser, Chrome included, and works only to varying degrees for my
colleagues on macOS or Windows. Unlike Google Hangouts, sometimes audio and
video stop working inexplicably, requiring in some cases fully quitting the
browser. Google Meet also removed the ability to <strong>dial-in telephones</strong>, which
to me is a <em>killer</em> feature for Google Hangouts; any conference phone, or
mobile user, regardless of customer site or location, I can at <em>least</em> dial-in
via Google Hangouts. In order to bring those users into a Google Meet, they
must be using a Google account under Google Chrome.</p>
<p>"Enterprise-friendly."</p>
<p>Fortunately, like most products at Google, Google Hangouts is not fully dead.
You can still create meetings with Google Hangouts, right from within Google
Meet even!</p>
<p><strong>Here's how you get to Google Hangouts:</strong></p>
<p><img src="/images/post-images/google-hangouts/meet-screen.png" alt="Google Meet" /></p>
<p>From the main Google Meet interface, click "Use Meeting Code."</p>
<p><img src="/images/post-images/google-hangouts/use-meeting-code.png" alt="Use Meeting Code" /></p>
<p>Enter in a clever name, in accordance with HR guidelines and policies, and then
start your meeting.</p>
<p><img src="/images/post-images/google-hangouts/hangout.png" alt="Viola Google Hangouts" /></p>
<p>Oh hey, this looks familiar! From the old Hangouts interface, if you click the
little "Add Participant" icon on the left of the top bar, you can enter in a
phone number to dial-in another participant.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/u7N1Fv3eKLQ" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/12/06/dialing-in-for-google-meet.htmlImplementing Virtual Hosts across Namespaces in Kubernetes2017-12-04T00:00:00-08:00http://unethicalblogger.com/2017/12/04/virtualhosts-across-namespaces-in-k8s<p>After learning how to build my first terrible website, in ye olden days,
perhaps the second useful thing I ever really learned was to run multiple websites on
a single server using
<a href="https://httpd.apache.org/docs/current/vhosts/index.html">Apache VirtualHosts</a>.
The novelty of being able to run more than one application on a server was
among the earliest things I recall being excited about. Fast forward to
the very different deployment environments we have available today, and I find
myself excited about the same basic kinds of things. Today I thought I
would share how one can implement a concept similar to Apache's VirtualHosts
across Namespaces in Kubernetes.</p>
<p><em>Note:</em> I won't cover too much about networking in Kubernetes in this blog post, but I
recommend Julia Evans' post titled <a href="https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/">Operating a Kubernetes Network</a>,
which is a good, albeit advanced, overview of some of the things going on in
Kubernetes behind the scenes to make networking in Kubernetes "work."</p>
<p>The basic gist of what I was trying to accomplish was as follows: deploy
multiple application stacks, each separated into a Kubernetes
<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/">Namespaces</a>
unto itself, and mount those all under the same public IP. Exposing
applications to "the outside world" is relatively simple, numerous blog posts,
documentation, and Stack Overflow snippets demonstrate this. Segregating
workloads into Namespaces seems to be less popular however.</p>
<p>The most important component for making applications "serve things" from a
Kubernetes cluster is the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress
Resource</a>.</p>
<p>Well, that's not entirely true.</p>
<p>The most important component for making applications "serve things" from a
Kubernetes cluster is the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers">Ingress
Controller</a>.</p>
<p>Two different things are coarsely referred to as "Ingresses" in examples and blog posts,
which typically never discuss Namespaces, and that made searching for and understanding
the bits of information I needed rather tricky. As <a href="https://twitter.com/evnsio/status/928284557060780032">@evnsio
highlights</a>: "Explaining
Kubernetes ingress controllers is hard".</p>
<p>Deep within a GitHub issue, which has been lost to the sands of time in my
browser history, a passing comment revealed what I had been misunderstanding:
<strong>a Kubernetes cluster will have one Ingress Controller, but can have many
Ingress Resources.</strong></p>
<p>Instead of trying to deploy an Ingress Controller to each namespace along with
my application, this informed me that I only needed to deploy one controller,
and then add an Ingress Resource for each "VirtualHost" in my environment. I
was also confused by the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting">Name-based virtual hosting
documentation</a>
which uses a single <code>.yaml</code> configuration to describe multiple hosts. I found
myself exploring how to dynamically generate a mega-yaml-configuration
including all my "virtual hosts" across Namespaces.</p>
<p>As luck would have it, this is <strong>completely unnecessary</strong>! The nginx Ingress
Controller pays attention to the entire Kubernetes object space, not just its
own Namespace, for Ingress Resources. In effect this means that any Ingress
Resource, in <strong>any</strong> Namespace, is going to be picked up and have nginx rules
generated for it.</p>
<p><strong>Eureka!</strong></p>
<p>With this knowledge in hand, not only could I separate name-based virtual hosts
into their own Namespaces with little bits of Ingress Resource configuration,
as shown below, but I could <em>also</em> use the same host name and different
<em>paths</em> across Namespaces. In essence, one Namespace could have an Ingress Resource
mapping a <code>path</code> of <code>/</code>, while another Namespace with its application stack
might have an Ingress Resource mapping a <code>path</code> of <code>/blog</code>. The <em>one</em> Ingress
<strong>Controller</strong> amalgamates those into a single nginx configuration to handle
inbound traffic.</p>
<h3>Configurations</h3>
<p>Below is an example of an nginx Ingress Controller and a single Ingress
Resource which is applied to a single Namespace.</p>
<p><strong>ingress resource</strong></p>
<pre><code class="yaml">apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: 'http-ingress'
namespace: 'jenkins-rtyler'
annotations:
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: "nginx"
spec:
tls:
- hosts:
- rtyler.codevalet.io
secretName: ingress-tls
rules:
- host: 'rtyler.codevalet.io'
http:
paths:
- path: '/'
backend:
serviceName: 'jenkins-rtyler'
servicePort: 80
</code></pre>
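<p>To illustrate the cross-Namespace <code>path</code> mapping described above, a second Ingress Resource living in a hypothetical <code>blog-rtyler</code> Namespace (the Namespace, Service name, and path here are invented for the sketch) might look like:</p>
<pre><code class="yaml">apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: 'blog-ingress'
  namespace: 'blog-rtyler'
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: 'rtyler.codevalet.io'
      http:
        paths:
          - path: '/blog'
            backend:
              serviceName: 'blog'
              servicePort: 80
</code></pre>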
<p><strong>ingress controller</strong></p>
<pre><code class="yaml">---
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Namespace
metadata:
name: 'nginx-ingress'
- apiVersion: v1
kind: Service
metadata:
name: 'nginx'
namespace: 'nginx-ingress'
spec:
type: LoadBalancer
ports:
- port: 80
name: http
- port: 443
name: https
sessionAffinity: 'ClientIP'
selector:
app: 'nginx'
- apiVersion: v1
kind: ConfigMap
metadata:
namespace: 'nginx-ingress'
name: 'nginx'
data:
proxy-connect-timeout: "15"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
hsts-include-subdomains: "false"
body-size: "64m"
server-name-hash-bucket-size: "256"
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: 'nginx'
namespace: 'nginx-ingress'
spec:
replicas: 1
template:
metadata:
labels:
app: 'nginx'
spec:
containers:
- image: 'gcr.io/google_containers/nginx-ingress-controller:0.8.3'
name: 'nginx'
imagePullPolicy: Always
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 80
- containerPort: 443
args:
- /nginx-ingress-controller
- --default-backend-service=webapp/webapp
- --nginx-configmap=nginx-ingress/nginx
</code></pre>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/7tCKMHtH9Aw" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/12/04/virtualhosts-across-namespaces-in-k8s.htmlJenkins on Kubernetes with Azure storage2017-12-01T00:00:00-08:00http://unethicalblogger.com/2017/12/01/aks-storage-research<p><em>This research was funded by <a href="https://cloudbees.com/">CloudBees</a> as part of my
work in the CTO's Office with the vague guideline of "ask interesting
questions and then answer them." It does not represent any specific product
direction by CloudBees and was performed with
<a href="https://jenkins.io">Jenkins</a>, rather than CloudBees products, and Kubernetes
1.8.1 on Azure.</em></p>
<p>At <a href="/tag/azure.html">this point</a> it is certainly no secret that I am fond of the
work the Microsoft Azure team have been doing over the past couple years. While
I was excited to announce <a href="https://jenkins.io/blog/2016/05/18/announcing-azure-partnership/">we had
partnered</a> to
run Jenkins project infrastructure on Azure, things didn't start to get <em>really</em>
interesting until they announced <a href="https://azure.microsoft.com/en-us/services/container-service/">Azure Container
Service</a>. A
mostly-turn-key Kubernetes service alone was pretty interesting, but then
"<a href="https://azure.microsoft.com/en-us/blog/introducing-azure-container-service-aks-managed-kubernetes-and-azure-container-registry-geo-replication/">AKS</a>"
was announced, bringing a much-needed <em>managed</em> Kubernetes resource into the
Azure ecosystem. Long story short, thanks to Azure, I'm quite the fan of
Kubernetes now too.</p>
<p>Kubernetes is brilliant at a lot of things. It's easy to use, has some really
great abstractions for common orchestration patterns, and is superb for running
stateless applications. State<strong>ful</strong> applications also run fairly well on
Kubernetes, but the challenge usually has <em>much</em> more to do with the
application, rather than Kubernetes. Jenkins is one of those challenging
applications.</p>
<p>Since Jenkins is my jam, this post covers the ins-and-outs of deploying a
Jenkins master on Kubernetes, specifically through the lens of Azure Container
Service (AKS). This will cover the basic gist of running a Jenkins environment
on Kubernetes, evaluating the different storage options for "Persistent
Volumes" available in Azure, outlining their limitations for stateful
applications such as Jenkins, and will conclude with some recommendations.</p>
<ul>
<li><a href="#filesystem">Jenkins and the File System</a></li>
<li><a href="#k8s-storage">Kubernetes Storage</a></li>
<li><a href="#azure-disk">Azure Disk</a></li>
<li><a href="#azure-file">Azure File</a></li>
<li><a href="#conclusions">Conclusions</a></li>
</ul>
<p><a name="filesystem"></a></p>
<h2>Jenkins and the File System</h2>
<p>To understand how Jenkins relates to storage in Kubernetes, it's useful to
first review how Jenkins utilizes its backing file system. Unlike many
contemporary web applications, Jenkins does not make use of a relational
database or any other off-host storage layer, but rather writes a number of
files to the file system of the host running the master process.</p>
<p>These files are not data files, or configuration files, in the traditional
sense. The Jenkins master maintains an internal tree-like object model, wherein
generally each node (object) in that tree is serialized in an XML format to the
file system. This does not mean that every single object in memory is written
to an XML file, but a non-trivial number of "live" objects representing
Credentials, Agents, Projects, and other configurations, may be periodically
written to disk at any given time.</p>
<p>A concrete example would be: when an administrator navigates to
<code>http://JENKINS_URL/manage</code> and changes a setting such as "Quiet Period" and
clicks "Save", the <code>config.xml</code> file (typically) in <code>/var/lib/jenkins</code> will be
rewritten.</p>
<p>These files aren't typically read in any periodic fashion; they're usually
only read when objects are loaded into memory during the initialization of Jenkins.</p>
<p>Additionally, XML files will span a number of levels in the directory
hierarchy. Each Job or Pipeline will have a directory in
<code>/var/lib/jenkins/jobs/&lt;jobname&gt;</code> which will have subfolders containing files
corresponding to each Run.</p>
<p>In short, Jenkins generates a large number of little files across a broad, and
sometimes deep, directory hierarchy. Combined with the read/write access
patterns Jenkins has, I would consider it a "worst-case scenario" for just
about any commonly used network-based storage solution.</p>
<p>Perhaps some future post will more thoroughly profile the file system
performance of Jenkins, but suffice it to say: it's complicated.</p>
<p><a name="k8s-storage"></a></p>
<h2>Kubernetes Storage</h2>
<p>With a bit of background on Jenkins, here's a cursory overview of storage in
Kubernetes. Kubernetes itself provides a consistent, cross-platform, interface
primarily via three "objects" if you will: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">Persistent
Volumes</a>,
Persistent Volume Claims, and <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/">Storage
Classes</a>. Without
diving too deep into the details, workloads such as Jenkins will typically make
a "Persistent Volume Claim", as in "hey give me something I can mount as a
persistent file system." Kubernetes then takes this and confers with the
configured Storage Classes to determine how to meet that need.</p>
<p>In Azure these claims are handled by one of two provisioners:</p>
<ul>
<li><a href="#azure-disk">Azure Disk</a>: an abstraction on top of Azure's "data disks"
which are attached to a Node within the cluster. These show up on the actual
Node as if a real disk/storage device has been plugged into the machine.</li>
<li><a href="#azure-file">Azure File</a>: an abstraction on top of Azure Files Storage, which
is basically CIFS/SMB-as-a-Service. CIFS mounts are attached to the Node
within the cluster, but rapidly attachable/detachable like any other CIFS/SMB
mount.</li>
</ul>
<p>Both of these can be used simultaneously to provide persistence for stateful
applications in Kubernetes running on Azure, but their performance and
capabilities are not going to be interchangeable.</p>
<p><a name="azure-disk"></a></p>
<h3>Azure Disk</h3>
<p>In AKS, two Storage Classes are pre-configured by default, yet neither one is
configured to <a href="https://github.com/Azure/AKS/issues/48">actually <strong>be</strong> the default Storage
Class</a>:</p>
<ul>
<li><code>default</code>: utilizes the "Standard" storage (as in, hard drive, spinning
magnetic disks) model in Azure.</li>
<li><code>managed-premium</code>: utilizes the "Premium" storage (as in, solid state
drives).</li>
</ul>
<p>The only real distinctions between the two which I have observed are going to be
speed and cost.</p>
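<p>For reference, a Persistent Volume Claim requesting the faster of the two classes looks something like the sketch below; the <code>jenkins-home</code> name and the 50Gi size are arbitrary:</p>
<pre><code class="yaml">---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-home
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 50Gi
</code></pre>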
<h4>Limitations</h4>
<p>Regardless of whether "Standard" or "Premium" storage is used for Azure
Disk-backed Persistent Volumes in Kubernetes (AKS or ACS) the limitations are
the same.</p>
<p>In my testing, the most frustrating limitation is the <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general">fixed number of data disks which can be attached to a Virtual Machine in Azure</a>.</p>
<p>As of this writing, the default Virtual Machine size used when provisioning AKS
is: <code>Standard_D1_v2</code>. One vCPU and 3.5GB of memory and a data disk limit of
<strong>four</strong>. Fortunately the default node count for AKS is currently 3, but this
means that a default AKS cluster cannot currently support more than 12
Persistent Volumes backed by Azure Disk at once.</p>
<p>An easy way to change that is to provision larger Virtual Machine sizes with
AKS, but this <strong>cannot be changed</strong> once the cluster has been provisioned. For
my research clusters I have stuck with a minimum size of <code>Standard_D4_v2</code> which
provides up to 32 data disks per Virtual Machine, e.g.:
<code>az aks create -g my-resource-group -n aks-test-cluster -s Standard_D4_v2</code></p>
<p>The Azure Disk Storage Class in Kubernetes also only supports the
<code>ReadWriteOnce</code> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes">access mode</a>.
In effect a Persistent Volume can only be mounted read/write by a single Node
within the Kubernetes cluster. By understanding how Azure Disk volumes are
provisioned and attached to Virtual Machines in Azure, this makes total sense.
The impact of this means that the only allowable <code>replica</code> setting for any
given workload which might use this Persistent Volume is <strong>1</strong>.</p>
<p>This imposes one further limitation on scheduling and high availability for
workloads running on the cluster. Detaching and attaching disks to these
Virtual Machines is a <strong>slow</strong> operation. In my experimenting this varied from
approximately 1 to 5 minutes.</p>
<p>For a "high availability" stateful workload, this means that a Pod dying or
being killed by a rolling update, may incur a non-trivial outage <strong>if</strong>
Kubernetes schedules that Pod for a different Node in the cluster. While there
is support <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/">specifying node affinity</a>
in Kubernetes, I have not figured out a means of encouraging Kubernetes to keep
a workload scheduled on whichever Node has mounted the Persistent Volume.
Though it would be possible to explicitly pin a Persistent Volume to a specific
Node, and then pin a Pod to that Node, a lot of workload flexibility would be
lost.</p>
<h4>Benefits</h4>
<p>It may be tempting to think at this point "Azure Disk is not good, so
everything should just use Azure File!" But there are benefits to Azure Disk
which should be considered. Azure Disk is, for lack of a better description, a
big dumb block store. In that simplicity are its strengths.</p>
<p>While Persistent Volumes backed by Azure Disk can be slow to detach or reattach
to a Node, once they're present, they're fast. Operations like disk scans,
small reads and writes, all <em>feel</em> like trivially fast operations from the
Jenkins standpoint. In my testing the difference between a Jenkins master
running on local instance storage (the Virtual Machine's "main" disk) and
running a Jenkins master on a partition from a Data Disk, is imperceptible.</p>
<p>Another benefit which I didn't realize until I evaluated <a href="#azure-file">Azure
File</a> backed Persistent Volumes is that, as a big dumb block
store, Azure Disks are essentially whatever file system format you want them to
be. In AKS they default to <code>ext4</code> which is perfectly happy and native to me,
meaning my Linux-based containers will make the correct assumptions about the
underlying file system's capabilities.</p>
<p><a name="azure-file"></a></p>
<h3>Azure File</h3>
<p>AKS does not set up an Azure File Storage Class by default, but the Kubernetes
versions which are available (1.7.7, 1.8.1) have the support for Azure File
backed Persistent Volumes. In order to add the storage class, pass something
like the following to Kubernetes via <code>kubectl create -f azurefile.yaml</code>:</p>
<pre><code class="yaml">---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefile
annotations:
labels:
kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-file
parameters:
storageAccount: 'mygeneralpurposestorageaccount'
reclaimPolicy: 'Retain'
# mountOptions are passed into mount.cifs as `-o` options
mountOptions:
</code></pre>
<p>According to <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file">the Azure File documentation</a>
it's not necessary to specify the <code>storageAccount</code> key, but I had some
difficulty coaxing AKS to provision an Azure Storage Account on its own, so I
manually provisioned one within the "hidden" AKS Resource Group"
(<code>MC_&lt;group&gt;_&lt;aks-name&gt;_&lt;location&gt;</code>) and entered the name into
<code>azurefile.yaml</code>.</p>
<p>Full disclosure: I <strong>hate</strong> Storage Accounts in Azure. Where nearly everything
else in Azure is rather enjoyable to use, neatly tucked into Resource Groups,
and has reasonable naming restrictions, Storage Accounts are crummy and live
in an Azure <em>global namespace</em>, so if somebody else chooses the same name as what
you want, tough luck. The reason this is somewhat relevant to the current
discussion is that Storage Accounts <em>feel old</em> when you use them. Everything
about them <em>feels</em> as if it's from a by-gone era in Azure's development (ASM).</p>
<p>The feature used by the Azure File Storage Class is what I would describe as
"Samba/CIFS-as-a-Service." Kubernetes is basically utilizing the
Microsoft-technology-equivalent of NFS.</p>
<p>But it's not NFS, it's CIFS. And that is <strong>important</strong> to Linuxy container
folks.</p>
<h4>Limitations</h4>
<p>The biggest limitations with Azure File backed Persistent Volumes in Kubernetes
are really limitations of
<a href="https://technet.microsoft.com/en-us/library/cc939973.aspx">CIFS</a>, and frankly,
they are <em>infuriating</em>. An application like Jenkins will make what were, at one
point, reasonable assumptions about the operating system and underlying
file system. "If it looks like a Linux operating system, I am going to assume
the file system supports symbolic links" comes to mind. Jenkins will attempt to
create symbolic links when a Pipeline Run or Build completes, to update a
<code>lastSuccessfulBuild</code> or <code>lastFailedBuild</code> symbolic link, which are useful for
hyperlinks in the Jenkins web interface.</p>
<p>Jenkins should no doubt be more granular and thoughtful about file system
capabilities, but I'm willing to bet that a number of other applications which
you might consider deploying on Kubernetes are also making assumptions along
the lines of "it's a Linux, so it's probably a Linuxey file system" which Azure
File backed Persistent Volumes invalidate.</p>
<p>Volumes which are attached to the Node are attached <a href="https://github.com/kubernetes/kubernetes/issues/2630#issuecomment-344091454">with very strict
permissions</a>.
On a Linux file system level, an Azure File backed volume attached at <code>/mnt/az</code>
would be attached with <code>0700</code> permissions allowing <em>only</em> root access. There
are two options for working around this, as far as I am aware of:</p>
<ol>
<li>Adding a <code>uid=1000</code> to the <code>mountOptions</code> specified for the Storage Class in
the <code>azurefile.yaml</code> referenced above. Unfortunately this would require that
every container attempting to utilize Azure File backed volumes use the same
uid. (A sketch of such <code>mountOptions</code> follows this list.)</li>
<li>Specifying a
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">securityContext</a>
for the container with: <code>runAsUser: 0</code>. This makes me feel exceptionally
uncomfortable, and I would absolutely not recommend running any untrusted
workloads on a Kubernetes cluster with this setting.</li>
</ol>
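<p>For the first option, the empty <code>mountOptions</code> stanza in the <code>azurefile.yaml</code> above might be filled in along these lines, where the uid/gid of 1000 is simply an assumed container user:</p>
<pre><code class="yaml">mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
</code></pre>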
<p>The final, and for me the most important, limitation for Azure File backed
storage is the performance. Presently there is <a href="https://feedback.azure.com/forums/217298-storage/suggestions/8374074-offer-premium-storage-option-for-azure-file-servic">no Premium model offered for
Azure Files Storage</a>,
which I would presume means that Azure File volumes are backed by spinning hard
drives, rather than solid state.</p>
<p>The performance bottleneck for Jenkins is <em>not</em> theoretical however. With a
totally fresh Persistent Volume Claim for a Jenkins application, the
initialization of the application took upwards of <strong>5-15 minutes</strong>, namely:</p>
<ul>
<li>2-3 <em>seconds</em> to create the Persistent Volume and bind it to a Node in the
Kubernetes cluster.</li>
<li>3-4 minutes to "extract [Jenkins] from war file". When <code>jenkins.war</code> runs the
first time, it unpacks the <code>.war</code> file into <code>JENKINS_HOME</code> (usually
<code>/var/lib/jenkins</code>) and populates <code>/var/lib/jenkins/war</code> with a number of small
static files. Basically, unzipping a 100MB archive which contains hundreds of
files.</li>
<li>5-10 minutes from "Starting Initialization" to "Jenkins is ready." In my
observation this tends to be highly variable depending on the size of Jenkins
environment, how many plugins are loaded, and what kind of configuration XML
files must be loaded at initialization time.</li>
</ul>
<p>The performance challenges I have observed with Azure File backed storage are
similar to the challenges the CloudBees Engineering team observed with
<a href="https://aws.amazon.com/efs/">Amazon EFS</a> when it was first announced.
The disk read/write patterns exhibited by Jenkins caused trouble on EFS as well,
but EFS has seen marked improvement over the last 6 months, whereas Azure Files
Storage doesn't appear to have had significant performance improvements in a
number of years.</p>
<h4>Benefits</h4>
<p>Despite performance challenges, Azure File backed Persistent Volumes are not
without their benefits. The most notable benefit, which is what originally
attracted me to the Azure File Storage Class, is the support for the
<code>ReadWriteMany</code> access mode.</p>
<p>For some workloads, of which Jenkins is not one, this would enable a
<code>replicas</code> setting greater than 1 and concurrent Persistent Volume access
between the running containers. Even for single-container workloads, this is a
valuable setting, as it allows for effectively zero-downtime rolling updates and
re-deployments when a new Pod is scheduled on a different underlying Node.</p>
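<p>To make that concrete, here is a minimal sketch of a Persistent Volume Claim
requesting that access mode; the claim name and storage class name are placeholders
rather than anything from my configuration:</p>
<pre><code class="yaml">kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-workspace
spec:
  # Multiple Pods may mount this volume read/write at the same time
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>Contrast that with the <code>ReadWriteOnce</code> claim in the Stateful Set
configuration at the end of this post, which binds the volume to a single Pod at a
time.</p>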
<p>Additionally, Azure File volumes can be simultaneously mounted by other machines in the
resource group, or even across the internet, which can be very useful for
debugging or forensics when something goes wrong (things usually go wrong).
Compare that to an Azure Disk volume, which would require a <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/">container to be successfully
running</a> in the Kubernetes environment before you could dig into the disk.</p>
<p><a name="conclusions"></a></p>
<h2>Conclusions</h2>
<p>Running a highly available Jenkins environment is a non-trivial exercise, one
which requires a substantial understanding of both the nuances of how Jenkins
interacts with the file system and how users expect to interact with the
system. While I was optimistic at the outset of this work that Kubernetes, or
more specifically AKS, might significantly change the equation, it has not.</p>
<p>To the best of my understanding, this work applies evenly to Azure Container
Service (ACS) and Azure Container Service (AKS) (naming is hard), since both
are using the same fundamental Kubernetes support for Azure via the Azure Disk
and Azure File Storage Classes. Unfortunately I don't have time to do a serious
performance analysis of Data Disks using Standard storage, Data Disks using
Premium Storage, and Azure File mounts. I would love to see work in that area
published by the Microsoft team though!</p>
<p>For those seeking to provision Jenkins on ACS or AKS at this point in time, I
strongly recommend using the Azure Disk Storage Class with Premium storage
(sketched below). That will not help with "high availability" of Jenkins, but at
least once Jenkins is running, it will be running swiftly. I also recommend using
<a href="https://jenkins.io/doc/book/pipeline">Jenkins Pipeline</a> for all
Jenkins-based workloads, not just because I fundamentally think it's a better tool
than classic Freestyle Jobs, but because it has built-in <strong>durability</strong>.
Using Jenkins in tandem with the <a href="https://plugins.jenkins.io/azure-vm-agents">Azure VM Agents</a>
plugin, workloads are kicked out to dynamically provisioned Virtual Machines, and
if the master goes down (recovery can take five-ish minutes in the worst case
scenario), the outstanding Pipeline-based workloads will not be interrupted during
that window.</p>
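<p>For reference, "Azure Disk with Premium storage" means a Storage Class roughly
like the following sketch, using the upstream <code>azure-disk</code> provisioner;
the name and parameters here are the commonly documented ones and may need adjusting
for your cluster:</p>
<pre><code class="yaml">kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium
provisioner: kubernetes.io/azure-disk
parameters:
  # Premium (SSD-backed) managed disks rather than Standard (spinning) storage
  storageaccounttype: Premium_LRS
  kind: Managed
</code></pre>
<p>A Persistent Volume Claim, such as the one in the Stateful Set below, can then
reference it with <code>storageClassName: managed-premium</code>.</p>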
<p>I still find myself excited about the potential of AKS, which is currently in
"public preview." My recommendation to Microsoft would be to spend a
significant amount of time investing in both storage and cluster performance to
strongly differentiate AKS from Kubernetes provided on other clouds.
Personally, I would love to have: faster stateful applications, auto-scaled
Nodes based on compute (or even Data Disk limits!), and cross-location
<a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/">Federation</a>
for AKS.</p>
<p>Maybe in 2018!</p>
<hr />
<h3>Configuration</h3>
<p>Below is the configuration for the Service, Namespace, Ingress, and Stateful
Set I used:</p>
<pre><code class="yaml">---
apiVersion: v1
kind: "List"
items:
- apiVersion: v1
kind: Namespace
metadata:
name: "jenkins-codevalet"
- apiVersion: v1
kind: Service
metadata:
name: 'jenkins-codevalet'
namespace: 'jenkins-codevalet'
spec:
ports:
- name: 'http'
port: 80
targetPort: 8080
protocol: TCP
- name: 'jnlp'
port: 50000
targetPort: 50000
protocol: TCP
selector:
app: 'jenkins-codevalet'
- apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: 'http-ingress'
namespace: 'jenkins-codevalet'
annotations:
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: "nginx"
spec:
tls:
- hosts:
- codevalet.io
secretName: ingress-tls
rules:
- host: codevalet.io
http:
paths:
- path: '/u/codevalet'
backend:
serviceName: 'jenkins-codevalet'
servicePort: 80
- apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "jenkins-codevalet"
namespace: "jenkins-codevalet"
labels:
name: "jenkins-codevalet"
spec:
serviceName: 'jenkins-codevalet'
replicas: 1
selector:
matchLabels:
app: 'jenkins-codevalet'
volumeClaimTemplates:
- metadata:
name: "jenkins-codevalet"
namespace: "jenkins-codevalet"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
template:
metadata:
labels:
app: "jenkins-codevalet"
annotations:
spec:
securityContext:
fsGroup: 1000
# https://github.com/kubernetes/kubernetes/issues/2630#issuecomment-344091454
runAsUser: 0
containers:
- name: "jenkins-codevalet"
image: "rtyler/codevalet-master:latest"
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http
- containerPort: 50000
name: jnlp
resources:
requests:
memory: 384M
limits:
memory: 1G
volumeMounts:
- name: "jenkins-codevalet"
mountPath: "/var/jenkins_home"
env:
- name: CPU_REQUEST
valueFrom:
resourceFieldRef:
resource: requests.cpu
- name: CPU_LIMIT
valueFrom:
resourceFieldRef:
resource: limits.cpu
- name: MEM_REQUEST
valueFrom:
resourceFieldRef:
resource: requests.memory
divisor: "1Mi"
- name: MEM_LIMIT
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: "1Mi"
- name: JAVA_OPTS
value: "-Dhudson.DNSMultiCast.disabled=true -Djenkins.CLI.disabled=true -Djenkins.install.runSetupWizard=false -Xmx$(MEM_REQUEST)m -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
</code></pre>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/mNqoOFMKHiI" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/12/01/aks-storage-research.htmlRunning tasks with Docker and Azure Functions2017-11-20T00:00:00-08:00http://unethicalblogger.com/2017/11/20/tasks-with-docker-azure-functions<p>Months ago Microsoft announced <a href="https://docs.microsoft.com/en-us/azure/container-instances/">Azure Container
Instances</a> (ACI), which
allow for rapidly provisioning containers "in the cloud." When they were first
announced, I played around with them for a bit, before realizing that the
pricing for running a container "full-time" was almost 3x what it would cost to
deploy that container on an equivalent Standard A0 virtual machine. Since then
however, Azure has added support for a "Never" restart policy, which opens the
door for using Azure Container Instances for <a href="https://docs.microsoft.com/en-us/azure/container-instances/container-instances-restart-policy">arbitrary task
execution</a>.</p>
<p>The ability to quickly run arbitrary containerized tasks is a really exciting
feature. Any Ruby, Python, or JavaScript script that I can package into a Docker
container, I can kick out to Azure Container Instances in seconds and pay by
the second of runtime. <strong>Very</strong> exciting, but it's not practical for me to
always have the Azure CLI at the ready to execute something akin to:</p>
<pre><code>az container create \
--resource-group myResourceGroup \
--name mycontainer \
--image rtyler/my-silly-container:latest \
--restart-policy Never
</code></pre>
<p>Fortunately, Microsoft publishes a number of client libraries for Azure,
including a Node.js one. This is where introducing <a href="https://docs.microsoft.com/en-us/azure/azure-functions/">Azure
Functions</a> can help
make Azure Container Instances really <em>shine</em>. Similar to AWS Lambda, or
Google Cloud Functions, Azure Functions provide a light-weight computing
environment for running teeny-tiny little bits of code, typically JavaScript,
"in the cloud."</p>
<p>This past weekend I had an arguably good reason for combining the two in a
novel fashion: launching a (containerized) script every ten minutes.</p>
<p>The expensive and old-fashioned way to handle this would be to just deploy a
small VM, add a crontab entry, and spend the money to keep that machine online
for what equates to approximately 6 hours of work throughout the month.</p>
<ul>
<li>Standard A0 virtual machine monthly cost: $14.64</li>
<li>Azure Container Instance, for 6 hours a month, cost: $0.56</li>
</ul>
<p>In this blog post I won't go too deeply into the creation of an Azure Function,
but I will focus on the code which actually provisions an Azure Container
Instance from Node.js.</p>
<h3>Prerequisites</h3>
<p>In order to provision resources in Azure, we must first create the necessary
Azure credential objects. For better or worse, Azure builds on top of
Azure Active Directory which offers an absurd amount of role-based access
controls and options. The downside of that flexibility is that it's supremely
awkward to get simple API tokens set up for what seem like otherwise mundane
tasks.</p>
<p>To provision resources, we will need an "Application", "Service Principal", and
"Secret". The instructions below will use the Azure CLI:</p>
<ul>
<li><code>openssl rand -base64 24</code> will generate a good "client secret" to use.</li>
<li><code>az ad app create --display-name MyAppName --homepage http://example.com/my-app --identifier-uris http://example.com/my-app --password $CLIENT_SECRET</code> creates the Azure Active Directory Application, mind the "App ID" (aka client ID).</li>
<li><code>az ad sp create --id $CLIENT_ID</code> will create a Service Principal.</li>
<li>And finally, I'll assign a role to that Service Principal: <code>az role assignment create --assignee http://example.com/my-app --role Contributor --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/my-apps-resource-group</code>.</li>
</ul>
<p>In these steps, I've isolated the Service Principal to a specific Resource
Group (<code>my-apps-resource-group</code>) to keep it away from other resources, but also
to make it easier to monitor costs.</p>
<p>A number of these variables will be set in the Azure Function "Application
Settings" to enable my JavaScript function to authenticate against the Azure
APIs.</p>
<h3>Accessing Azure from Azure</h3>
<p>Writing the JavaScript to actually launch a container instance was a little
tricky, as I couldn't find a single example in the <a href="https://github.com/Azure/azure-sdk-for-node/tree/master/lib/services/containerinstanceManagement">azure-arm-containerinstance
package</a>.</p>
<p>In the "Codes" section below is the entire Azure Function, but the only major
caveat is that in my example I've "hacked" the <code>apiVersion</code> which is used when
accessing the Azure REST APIs, as the current package hits an API which doesn't
support the "Never" restart policy for the container.</p>
<p>With the Azure SDK for Node authenticating properly, it's feasible to do all
kinds of interesting operations in Azure: creating, updating, or deleting
resources based on specific triggers from Azure Functions.</p>
<h3>Future Possibilities</h3>
<p>The code below is among the most simplistic use-cases imaginable for
combining Azure Functions and Azure Container Instances. Thinking more broadly,
one could conceivably trigger short-lived containers "on-demand" in response to
messages coming from Event Hub, or even inbound HTTP requests from another user
or system. Imagine, for example, if you wanted to provide a quick demo of some
application to new users on your website. One Azure Function provisioning
containers for specific users, and another periodically reaping any containers
which have been running past their timeout, would be both cheap and easily
deployed.</p>
<p>I still wouldn't use Azure Container Instances for any "full-time" workload;
their pricing model is fundamentally flawed for those kinds of tasks. If you
have workloads which are run for only seconds, minutes, or hours at a time,
they make a <em>lot</em> more sense, and with Azure Functions, are cheaply and easily
orchestrated.</p>
<h3>Codes</h3>
<hr />
<p><strong>2017-12-05 update</strong>: corrected the following code to delete any previously
existing container group, to more effectively emulate a "cron."</p>
<hr />
<p><strong>index.js</strong></p>
<pre><code>module.exports = function (context) {
    const ACI = require('azure-arm-containerinstance');
    const AZ = require('ms-rest-azure');

    /* These were left undefined in the original snippet; the Application
     * Setting names below are assumptions for illustration */
    const group = process.env.AZURE_RESOURCE_GROUP;
    const containerGroup = process.env.AZURE_CONTAINER_GROUP;
    const region = process.env.AZURE_REGION;
    const osType = 'Linux';

    context.log('Starting a container');

    AZ.loginWithServicePrincipalSecret(
        process.env.AZURE_CLIENT_ID,
        process.env.AZURE_CLIENT_SECRET,
        process.env.AZURE_TENANT_ID,
        (err, credentials) =&gt; {
            if (err) {
                throw err;
            }
            let client = new ACI(credentials, process.env.AZURE_SUBSCRIPTION_ID);

            /* First delete the previously existing container group if it exists */
            client.containerGroups.deleteMethod(group, containerGroup).then((r) =&gt; {
                context.log('Delete completed', r);

                /* Describe the single container to run inside the group */
                let container = new client.models.Container();
                context.log('Launching a container for client', client);
                container.name = 'twitter-processing';
                container.environmentVariables = [
                    {
                        name: 'SOME_ENV_VAR',
                        value: process.env.SOME_ENV_VAR
                    }
                ];
                container.image = 'my-fancy-image-name:latest';
                container.ports = [{port: 80}];
                container.resources = {
                    requests: {
                        cpu: 1,
                        memoryInGB: 1
                    }
                };

                context.log('Provisioning a container', container);
                client.containerGroups.createOrUpdate(group, containerGroup,
                    {
                        containers: [container],
                        osType: osType,
                        location: region,
                        restartPolicy: 'never'
                    }
                ).then((r) =&gt; {
                    context.log('Launched:', r);
                    context.done();
                }).catch((r) =&gt; {
                    context.log('Finished up with error', r);
                    context.done();
                });
            }).catch((err) =&gt; {
                /* If the delete fails outright, log it and end this invocation */
                context.log('Failed deleting the existing container group', err);
                context.done();
            });
        });
};
</code></pre>
<p><strong>package.json</strong></p>
<pre><code>{
  "name": "foobar-processing",
  "version": "0.0.1",
  "description": "Timer-triggered function for running an Azure Container Instance",
  "main": "index.js",
  "author": "R Tyler Croy",
  "dependencies": {
    "azure-arm-containerinstance": "^1.0.0-preview"
  }
}
</code></pre>
<p><strong>function.json</strong></p>
<pre><code>{
  "disabled": false,
  "bindings": [
    {
      "direction": "in",
      "schedule": "0 */10 * * * *",
      "name": "tenMinuteTimer",
      "type": "timerTrigger"
    }
  ]
}
</code></pre>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/5WhiPnoGjDI" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/11/20/tasks-with-docker-azure-functions.htmlAzure OpenDev Wrap-up2017-11-08T00:00:00-08:00http://unethicalblogger.com/2017/11/08/azure-opendev-wrapup<p>A couple weeks ago I boarded a plane at the always-adorable
<a href="https://sonomacountyairport.org/">Charles M. Schulz Sonoma County Airport</a>
en route to Seattle to participate in a <a href="/2017/10/05/azure-opendev.html">Microsoft Azure OpenDev Event</a>.
Thanks to my pal Ken Thompson, who recently joined Microsoft as a product
marketing manager for their Open Source DevOps team, I was invited to talk
about all things Jenkins with a dash of Azure.</p>
<p>It's been no secret that I have <a href="https://twitter.com/agentdero/status/898957691510374401">become a fan of Azure</a> lately.
Microsoft's investment in open source technologies as a means of driving
innovation for their cloud platform is <em>very</em> exciting for me. While they
<a href="https://twitter.com/agentdero/status/904808509065142272">don't get everything right</a>,
I have seen tremendous month-to-month, and year-to-year, improvements from the
Azure team since I first started using Azure a few years ago.</p>
<p>Setting aside the Azure lovefest and getting back to the matter at hand
however: the Azure OpenDev event. Ken and his team decided to try something
different for this event and invited a number of folks from different
organizations like
<a href="https://www.youtube.com/watch?v=D3C12ojRcp0&amp;list=PLLasX02E8BPBmGz-fYt_TTqAxluLdcXEg&amp;index=3">Ryan from GitHub</a>,
<a href="https://www.youtube.com/watch?v=sNLAECL6wx8&amp;list=PLLasX02E8BPBmGz-fYt_TTqAxluLdcXEg&amp;index=5">Matt from Chef</a>,
<a href="https://www.youtube.com/watch?v=koYCkjYSkQ0&amp;list=PLLasX02E8BPBmGz-fYt_TTqAxluLdcXEg&amp;index=6">Nic from HashiCorp</a>,
and
<a href="https://www.youtube.com/watch?v=tOqWX9JWEYc&amp;list=PLLasX02E8BPBmGz-fYt_TTqAxluLdcXEg&amp;index=7">Christoph from Elastic</a>.
This line-up not only made for a really informative block of video content, but
it also made the whole experience quite fun. From the pre-event speakers
dinner, to the <a href="https://twitter.com/bitwiseman/status/923374447897092096">panel
discussion</a> we had at
the "after-party"/Seattle Jenkins Area Meetup, it was two days of what felt
like non-stop talking and excitement.</p>
<p><img src="/images/post-images/azure-opendev/toon.jpg" alt="Toon version" /></p>
<h2>Things I said</h2>
<p>During my discussion with <a href="https://twitter.com/ashleymcnamara">Ashley</a> I talked
about (at length!) <a href="https://jenkins.io/doc/book/pipeline">Jenkins Pipeline</a>
which, regardless of who my employer presently is, has definitely moved the
needle for Jenkins automation forward in a spectacular way. In addition to
Pipeline, we also discussed and walked through some real-live Jenkins instances
running <a href="https://jenkins.io/projects/blueocean">Blue Ocean</a>.</p>
<p>We also discussed, briefly, some of the <a href="https://github.com/jenkins-infra/">Jenkins project's own infrastructure code</a>,
which is composed of Puppet, Terraform, Jenkins Pipeline, and a schmear of bash
script.</p>
<p>The video below is a bit of a whirlwind tour, dabbling in Jenkins, the
project's infrastructure, and some Azure tools available for Jenkins.</p>
<center>
<strong>Behold! The least flattering still-shot possible</strong>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jOWY6wa38J0" frameborder="0" allowfullscreen></iframe>
</center>
<h2>Things I didn't have time to say</h2>
<p>Unfortunately 30 minutes goes by really fast and I couldn't cover absolutely
everything I wanted to talk about. I did warn Ashley beforehand however, that I
can probably talk about Jenkins things for hours on end.</p>
<p>I wanted to talk more about how the Jenkins project now uses
<a href="https://kubernetes.io">Kubernetes</a> quite heavily, on Azure, to power our
"application tier." Contrasted with our infrastructure tier of virtual
machines, storage accounts, databases, or load balancers, I wanted to explain
that the "application tier" fits perfectly in the Kubernetes world, and enables
different web applications, bots, and services to be rapidly developed and
continuously delivered.</p>
<p>I also wanted to talk about how we use Puppet to <a href="https://github.com/jenkins-infra/jenkins-infra/tree/staging/dist/profile/manifests/kubernetes">manage our
Kubernetes</a>
resources after we tried a number of different approaches for managing our
Kubernetes-based infrastructure. We realized that Puppet already has all the basics
which we needed but had found ourselves reinventing: multiple environments,
secrets management, and state management.</p>
<p>I didn't quite get a chance to talk about some of the Jenkins project's own
Jenkins Pipelines, like <a href="https://github.com/jenkins-infra/jenkins.io/blob/master/Jenkinsfile">this
Jenkinsfile</a>
which is what actually builds the <a href="https://jenkins.io">jenkins.io</a> static site
and uploads assets to Azure Storage. Or <a href="https://github.com/jenkins-infra/jenkins-infra/blob/staging/Jenkinsfile">this
Jenkinsfile</a>
which tests, lints, and ensures our Puppet code is correct. Fortunately, I did
talk a <em>little</em> bit about <a href="https://github.com/jenkins-infra/azure/blob/master/Jenkinsfile">this Jenkinsfile</a>
which manages our Terraform build, test, and deploy pipeline.</p>
<p>I alluded to the <a href="https://ci.jenkins.io">Jenkins project's own Jenkins environment</a>
but there's an entire presentation's worth of content in how I have architected
that Jenkins environment.</p>
<p>I could literally talk for hours about Jenkins and Jenkins-related topics.</p>
<p><strong>Hours</strong>.</p>
<p>I'm not certain if that's a good or a bad thing however; probably best not to
think too much about it.</p>
<hr />
<p>Overall between the speaker dinner, the OpenDev event, the after-party/JAM, and
the after-after party, the entire experience was challenging, informative, and
enjoyable. I do hope the team at Microsoft continues to host these types of
"rougher" open source friendly events in the future</p>
<p>Whether they realize it or not, Microsoft is in a great position to encourage
and facilitate some really interesting cross-project collaboration with more
events like this, so fingers crossed that they will step up to the plate.</p>
<p><img src="/images/post-images/azure-opendev/standing-tall.jpg" alt="Standing Tall" /></p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/-oFUGPSwe38" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/11/08/azure-opendev-wrapup.htmlCall for Proposals: Testing and Automation @ FOSDEM 20182017-10-31T00:00:00-07:00http://unethicalblogger.com/2017/10/31/fosdem-testingautomation<p>2018 will be the sixth year for the Testing/Automation dev room at
<a href="https://fosdem.org/2016">FOSDEM</a>. This room is about creating better
software through a focus on testing and automation at all layers of
the stack, from creating libraries and end-user applications all the
way down to packaging, distribution, and deployment. Testing and
automation are not isolated to a single toolchain, language, or
platform; there is much to learn and share regardless of background!</p>
<h1>What</h1>
<p>Since this is the sixth year we're hosting the Testing and Automation
dev room, here are some ideas of what we would like to see and what has
worked in prior years; they're just ideas though! Check out the
<a href="https://archive.fosdem.org/2013/schedule/track/testing_and_automation/">2013</a>,
<a href="https://archive.fosdem.org/2014/schedule/track/testing_and_automation/">2014</a>,
<a href="https://archive.fosdem.org/2015/schedule/track/testing_and_automation/">2015</a>,
<a href="https://archive.fosdem.org/2016/schedule/track/testing_and_automation/">2016</a>,
<a href="https://archive.fosdem.org/2017/schedule/track/testing_and_automation/">2017</a>
schedules for inspiration.</p>
<h3>Testing in the real, open source world</h3>
<ul>
<li>War stories/strategies for testing large scale or complex projects</li>
<li>Tools that extend the ability to test low-level code</li>
<li>Projects that are introducing new/interesting ways of testing "systems"</li>
</ul>
<h3>Cool Tools (good candidates for lightning talks)</h3>
<ul>
<li>Explain/demo how your open source tool made developing quality software better</li>
<li>Combining projects/plugins/tools to build amazing things: "Not enough
people in the open source community know how to use $X, but here's a
tutorial on how to use $X to make your project better."</li>
</ul>
<h1>Where</h1>
<p>FOSDEM is hosted at <a href="https://fosdem.org/2018/practical/transportation/">Universite libre de Bruxelles in Brussels,
Belgium</a>. The
Testing and Automation dev room is likely slated for Building H, room
2213, which seats ~100.</p>
<h1>When</h1>
<ul>
<li>CFP Submission Deadline: <strong>23:59 UTC, 26 November 2017</strong></li>
<li>Schedule Announced: <strong>15 December 2017</strong></li>
<li>Presentations: <strong>3 February 2018</strong></li>
</ul>
<h1>How</h1>
<p>Please submit one (or more) 30-40 minute talk proposal(s) OR one (or
more) 10 minute lightning talk proposal(s) by <strong>23:59 UTC on November
26th 2017</strong>. We will notify all those submitting proposals about their
acceptance by December 15th 2017.</p>
<p>To submit a talk proposal (you can submit multiple proposals if you'd
like), use <a href="https://penta.fosdem.org/submission/FOSDEM18/">Pentabarf</a>,
the FOSDEM paper submission system. Be sure to select <code>Testing and
Automation</code>, otherwise we won't see it!</p>
<p>You can create an account, or use an existing account if you already have one.</p>
<p>Please note: FOSDEM is a
<a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FLOSS</a>
community event, by and for the community, so please ensure your topic is
appropriate (i.e. this isn't the right forum for commercial product
presentations).</p>
<h1>Who</h1>
<ul>
<li><a href="https://github.com/rtyler">R. Tyler Croy</a> - Jenkins hacker</li>
<li><a href="https://github.com/markewaite">Mark Waite</a> - Jenkins/Git hacker</li>
</ul>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/Ye8_BdiJkOk" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/10/31/fosdem-testingautomation.htmlThis is your reality now2017-10-23T00:00:00-07:00http://unethicalblogger.com/2017/10/23/this-is-reality<p>The traffic on the Bay Bridge connecting San Francisco to Oakland is one of the
most congested routes in all of Northern California. Somehow it gets
even worse on Saturday and Sunday. One weekend, a few years ago, I was driving my wife
and some of the women from her soccer team back to Berkeley, from a game in
San Francisco's Golden Gate Park. On the east side of the bridge, before
inching onto I-580N, I was pretty pissed off and, half-joking, half-frustrated,
shook the steering wheel back and forth: "GAHHHHHHHHHHHH." The woman sitting
behind me, who was certainly the "funny one" of the group, put her hand on my
arm and gently said "Tyler, this is your reality now."</p>
<p>Certainly a well-delivered line, perfect timing, received with laughter all around, but
the phrase has stuck in my memory longer than the woman's name.</p>
<p>My <a href="/2017/10/09/fire-coming-down-the-mountain.html">last post</a> I wrote as a way
to process and capture the trauma of watching fire rip into northern Santa
Rosa. A town I have adopted and which is the subject of a number of picturesque
photos I have posted over the past three years, always titled with my
unofficial city motto: "Santa Rosa: It's nice."</p>
<p>The day after I wrote that post, I ended up at the <a href="http://chimeraarts.org">Chimera Arts and
Makerspace</a> in Sebastopol, the little hippie town west
of Santa Rosa, where I joined a fledgling effort called <a href="http://sonomafireinfo.org">Sonoma Fire
Info</a>. I took the remainder of the week off from
work, and our little volunteer organization rapidly became a clearinghouse for
verified information across the county in its time of need. Soaking up the
efforts of over 60 volunteers who made thousands of phone calls, scoured social
media, and captured truth amid the chaos. In a two week period, the website had
been viewed by over 100k people.</p>
<p>I think we did a great job of informing Sonoma County. The rest of the country,
and world, remains frustratingly less informed about an event from which my adorable
little city is going to take <em>years</em> to recover.</p>
<p>The fire that I watched whip down the hillside is known as the "Tubbs
Fire". The fire that I could see from miles away on Llano Rd during our
voluntary evacuation to Sebastopol at 3:45 that morning is known as the "Nuns
Fire." While I saw both of these with my own eyes, there were <strong>four other
fires</strong>, of various sizes, engorged by 50-70mph winds, raging in Northern
California:</p>
<ul>
<li>The "Sulphur Fire" burned in Lake County to our northeast.</li>
<li>The "Pocket Fire" destroyed parts of northern Sonoma county.</li>
<li>The "Redwood Valley Fire" incinerated Mendocino County further to the north.</li>
<li>The "Atlas Fire" tore through Napa County to our east.</li>
</ul>
<p>At one time there were <strong>six active fires</strong> in the part of Northern California north of
San Francisco and west of Sacramento. To put this into a historic context,
<strong>four</strong> of those six fires rank in the 20 most destructive (structures destroyed)
wildfires ever recorded in California history:</p>
<p><img src="/images/post-images/your-reality-now/destructive-fires.jpg" alt="The 20 most destructive fires" />
(posted by <a href="https://twitter.com/CAL_FIRE/status/921441414981885952/photo/1">@CALFIRE</a> on October 20th)</p>
<p>The most destructive (Tubbs), and sixth most destructive (Nuns), wildfires in
the Bear Republic's history scarred Sonoma county on a scope that is difficult to
understand and difficult to process.</p>
<p>The impact on Santa Rosa, in particular, from this <a href="https://twitter.com/agentdero/status/921609069810532353">unfathomably big fire</a>
cannot be overstated. Considered the fifth most populous city in the "Bay
Area," with just over 170k residents, it lost <strong>5%</strong> of its housing in less than
twelve hours. The gale-force winds which woke me up at 12:30am on October 9th
pushed the fire through neighborhoods, across 4-6 lanes of Highway 101, and
through hundreds more homes before it could be stopped, all in a matter of
about 8 hours.</p>
<hr />
<p>We returned to our house the Thursday night after the fires started, exhausted.
After a full day working at Chimera on Sonoma Fire Info, and some dinner that
Friday, I holed up in my office and continued scouring the internet for news
and updates when I startled at the sound of water falling on the tin patio roof.</p>
<p>My first thought: "did a water-tanker helicopter just fly over?" Followed
quickly by "no fucking way, did it start raining!?" Bolting out the front door,
I was disappointed to learn it had not started raining, but then was bemused to
find my neighbor, watering my house.</p>
<p>I can understand the compulsion to water down the house "just in case" in areas
near wildfires, but this wasn't a "just in case"; rather, my neighbor had caught an
ember burning on my roof earlier in the week. He had since taken to watering both our
houses a couple times a day.</p>
<p>I also learned from my night-owl of a neighbor that he had been sitting on my
corner-lot house's porch, and brandished his pistol a few times at some cars
which took an especially slow roll through our neighborhood, not about to let
any thieves take advantage of the situation.</p>
<p>The CALFIRE maps show that we are almost exactly one mile south of the last
structures completely destroyed by the Tubbs Fire.</p>
<p>This was close, terrifyingly close.</p>
<hr />
<p>The next Monday, a week after the fires broke out, I return to work, to
questions of "are things okay?"</p>
<p>I lie.</p>
<p>Everybody in Sonoma county who didn't lose a house, knows somebody who did.
Thousands of people will have to wait until early 2018 for the EPA to remove
thousands of tons of toxic ash and debris, requiring a clean-up operation of
unprecedented size, before they can begin to rebuild. Large portions of
Sugarloaf Ridge State Park are burned, the majority of Annadel State Park is
destroyed. Most of the little Sonoma Valley towns I drive through on my way to
Napa have suffered severe damage.</p>
<p>This region, this adopted home of mine, is scarred in places beyond appreciation
for many Americans, including some who live here.</p>
<p>Much as I would like to wallow in that frustration and despair, there is no
direction to go but forward. There is nothing that will undo what has been
done, nothing will make this "okay."</p>
<p>There is no option for Sonoma county, and Santa Rosa, but to enjoy the warmth
of the autumn sun, pick up the pieces, and to rebuild.</p>
<p>"This is your reality now."</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/js4N2uarncI" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/10/23/this-is-reality.htmlWatching fire come down the mountain2017-10-09T00:00:00-07:00http://unethicalblogger.com/2017/10/09/fire-coming-down-the-mountain<p>The insanely strong gusts of wind would not stop clattering the tin roof panels
over the back patio. Begrudgingly, I awoke, dressed, and tried to secure the
roof panels before the neighbors got too ornery. Stepping up the ladder, I
noticed an orange glow north of the house. It was just after midnight and I had not
heard any sirens, so I jumped into the car on the assumption that one of those houses
by the park was burning and had not yet been reported.</p>
<p>Wearing a flannel, jeans, and my flip-flops, I speed off into the night. Not
entirely sure what aid I could render, as a mostly-useless person wearing
inappropriate fire-fighting footwear.</p>
<p>Passing the park, seeing nothing, I figure it's the neighborhood behind, and
continue driving. The next neighborhood doesn't show any fire, but I smell
smoke, so I continue on towards Fountaingrove Parkway which crosses one of the
highest ridges in Santa Rosa.</p>
<p>Atop Fountaingrove Parkway, I see the hills to the north, an area I later learn
is "Shiloh ridge", are glowing.</p>
<p>I do not see flames, but they're glowing. I turn my hat backwards so the gusts
of wind don't blow my hat from my head. Not more than two minutes pass and
flames crest the ridge.</p>
<p>"Oh shit" I exclaim to nobody in particular.</p>
<p>Walking back to the car, I stand on the bumper for a better view and see the
flames already pushing more than halfway down Shiloh Ridge. In a matter of
minutes, the ridge glowing against the smokey night sky had erupted in flames.</p>
<p>"Oh fuck this!" and I scurry into the car and speed off.</p>
<hr />
<p>Driving back to the house, I call my wife, who is rather surprised to learn I'm not
sleeping beside her. She puts a kettle on, and starts preparing the go-bag. I
arrive home around 1:00, half the sky is clear with a full moon, the other half
smoke filled with an orange backlight.</p>
<p>While preparing some stuff to go, we start listening to the scanner, and begin
to watch Twitter.</p>
<p>Within 30 minutes the evacuation notices are rolling out.</p>
<p>Within 60 minutes the fire jumps over US Highway 101.</p>
<hr />
<p>We voluntarily evacuated to Sebastopol at 3:00.</p>
<hr />
<p>Between Santa Rosa and Sebastopol, the air foggy with smoke and ash, we are
able to see fires raging on the hills to the southeast of Santa Rosa. Arriving
in Sebastopol at 3:45, everybody had already been awoken by the smell of smoke.</p>
<p>By 10:00, significant chunks of northern Santa Rosa have burnt to the ground.
The neighborhood from that glowing ridge, which I saw around midnight: gone.
The valley below, where I watched the flames flicker down the hill: gone. The
ridge I stood atop for all of five minutes, is now also on fire.</p>
<p>It is still uncertain how the fire will develop throughout the day, how long
the fire will burn, and how scarred the beautiful Sonoma and Napa Valleys will
be when it's all over.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/JUk5pJgANCU" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/10/09/fire-coming-down-the-mountain.htmlJoin the Azure OpenDev Event2017-10-05T00:00:00-07:00http://unethicalblogger.com/2017/10/05/azure-opendev<p>Quite possibly my favorite part about working on open source infrastructure is
that I can <strong>share</strong> as much as I want! Contrary to corporate infrastructures,
where most of it is closed source and hidden away, open source project
infrastructure is by its very nature open. From a pedagogic standpoint, this
allows me to teach others without needing to create contrived demonstrations or
examples, but we can instead refer to the <a href="https://github.com/jenkins-infra">real
code</a> being used to deploy the Jenkins
project.</p>
<p>On <strong>October 25th</strong> at <strong>9am PST</strong> I will be at Microsoft's Channel9 studios
with a <a href="https://azure.microsoft.com/en-us/opendev/">number of other smart
people</a> to talk about open source tools
and technologies on Microsoft's Azure cloud platform.</p>
<center><img src="/images/post-images/azure-opendev/opendev.png" title='Azure
OpenDev, Oct 25 2017'/></center>
<p>My session will begin at 9:45am, and will focus on <strong>Continuous delivery of infra to Azure</strong>:</p>
<blockquote><p>The Jenkins project hosts most of its infrastructure—a combination of
Terraform, Kubernetes, and Puppet—in Azure. As an open source project, it
automates the delivery of their own infrastructure-as-code, all of which is, of
course, open source.</p>
<p>In this session, Tyler will show some live examples of infrastructure
continuous delivery with Jenkins and Azure.</p></blockquote>
<p>Based on the previous Azure OpenDev events that I have seen, this should be a
lot of fun, I hope you're able to tune in!</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/1cOkA7tOFro" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/10/05/azure-opendev.htmlThey will blame you2017-10-04T00:00:00-07:00http://unethicalblogger.com/2017/10/04/they-will-blame-you<p>Over the past decade two things have become increasingly clear: practically
every modern industry is part of "the software industry," in one way or
another, and "the software industry" is rife with shortcuts and technical debt.
Working in an Operations or Systems Administration capacity provides a
front-row seat to many of these dysfunctional behaviors. But it's not just
sysadmins; many developers are also called to engage in or allow half-baked
product launches, poor-quality code deployments, or subpar patch lifecycle
management.</p>
<p>Make no mistake, if something goes wrong, <strong>they will blame you.</strong></p>
<p>Just yesterday, I was working on my truck in the driveway and a neighbor struck
up a conversation about diesel engines. The conversation naturally led to a
discussion about Volkswagen's massive diesel emissions scandal. I mentioned to
my neighbor how infuriated I was that <a href="http://www.latimes.com/business/autos/la-fi-hy-vw-hearing-20151009-story.html">Volkswagen executives blamed developers</a>
for the scandal. Prior to that news story, I naively assumed that executives
took ultimate responsibility for the successes, and failures, of their
organizations.</p>
<p>As the sun set, I wrapped up my work and came back inside to see <a href="https://www.engadget.com/2017/10/03/former-equifax-ceo-blames-breach-on-one-it-employee/">this story from Engadget</a>
wherein the former Equifax CEO blamed IT staff for the failure. The Equifax breach
was made possible because of an out-of-date Apache Struts dependency.</p>
<p>Setting aside for a moment that personally-identifiable information should <em>never</em>
be a single vulnerability away from exposure. Setting aside for a moment that
the majority of the Equifax business relies on <strong>trust</strong>, and should have
therefore been subject to vigorous and regular third-party security audits.
Setting aside for a moment that information security relies on defense in
depth, which is an organization-wide practice. The former CEO blamed
underlings, rather than leadership for the systemic failures of Equifax to
secure highly sensitive personal information.</p>
<p>Make no mistake, if something goes wrong, <strong>they will blame you.</strong></p>
<hr />
<p>Before I dropped out of college, while I was still pretending to study
Computer Engineering, I took an Engineering Ethics course. We discussed Space
Shuttle disasters, bridge failures, and other calamities, at length. One
recurring theme from many of the incidents was management ignoring or covering
up expert advice, or concerns, by engineering staff. The conclusion drawn, for
the auditorium of young engineering students, was that it was our
responsibility as "Professional Engineers" to ensure the safety and quality of
our work, and make sure that we had solid documentation for any safety concerns
we raise, otherwise we could be held liable.</p>
<p>I am starting to believe that, before the decade is over, we will start to see
developers and systems administrators held civilly liable for failures in
systems we create and for which we are responsible.</p>
<p>It is up to you to advocate for good patch lifecycle management practices. It
is up to you to build systems which prevent poor-quality code deployments. It
is up to you to advocate for well-designed products which defend user privacy
and personally-identifiable information. Because make no mistake, if something
goes catastrophically wrong, they will blame you.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/iDAb0SwA2VE" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/10/04/they-will-blame-you.htmlDon't get water on the leaves2017-10-01T00:00:00-07:00http://unethicalblogger.com/2017/10/01/dont-water-the-leaves<p>"For vegetables, your best bet is to get some drip lines 'cause you don't want
to get water on the leaves" said the helpful employee at a local farm supply
store. I have heard this "advice" <em>numerous</em> times over the past few years, and
it gets a little deeper under my skin each time I hear it. Like most advice
handed out in this fashion, there's a kernel of truth hiding somewhere behind
layers of indirection associated with such old wives' tales.</p>
<p>As I mentioned in my <a href="/2017/09/13/growing-tomatoes.html">last gardening related
post</a>, I am certainly not an expert, but I'm
also not a novice. Therefore take what I'm about to tell you as nothing more
than a pile of supposition from lots of reading and years of experimentation.</p>
<p>The nugget of fact behind "don't get water on the leaves!" comes down to, at
its most basic level, avoiding scenarios which might promote fungal or mildew
growth on plant leaves. That said, <strong>leaves are meant to get wet</strong>, in fact,
most leaves helpfully channel water to the root system of the plant. This past
season, I marveled at how perfectly the ridges in the okra leaves bowed and
dripped water directly into the root zone. However, if leaves <em>remain</em> wet,
that can promote the growth of crop-destroying fungi and mildews.</p>
<p>The most common affliction most home-gardeners will likely recognize would be the
ever-spiteful <strong>powdery mildew</strong>. Powdery mildew can spread from leaf to leaf
and demolish an entire crop. At the end of the summer season, my friend's
pumpkin patch has been nearly entirely obliterated by powdery mildew. The
blight has destroyed leaves up and down the vines and even spread to some
pumpkins in the patch. Words cannot quite describe the nauseating sight of a
20lb pumpkin which has been engulfed by the chalk-like green of the mildew.</p>
<p>I can state with confidence that my friend definitely was spraying water all
over the leaves of that pumpkin patch. I can also state with confidence that
simply spraying "water on the leaves" was <em>not</em> the cause of his mildew
problem.</p>
<p>Some key contributing factors to fungal and mildew growth can be:</p>
<ul>
<li><strong>Splash-back</strong>: spores can remain dormant in the soil for <em>years</em>, and using
a high-pressure hose which splashes water and soil <em>up</em> onto the plant, can
be a big contributing factor to growth. In my experience, this usually takes
the form of mud splashing back onto the bottom of leaves, giving the spores a
nice hiding spot to germinate and start ruining things.</li>
<li><strong>Tainted soil</strong>: if a patch, or an adjacent patch, becomes contaminated with
spores, the next season you simply cannot plant the same plants there.
Plantings should be rotated anyways, but if an area with squash/gourds becomes
contaminated with any fungus, I wouldn't plant squash/gourds anywhere near it
for at least a few years.</li>
<li><strong>Low-wind/stagnant air</strong>: areas where the soil stays moist, with stagnant
air, can also foster ideal growing conditions for mildews. Anecdotally
speaking, I have <em>only</em> ever seen mildew in garden plots which have
little-to-no cross-wind, plots whose air is especially stagnant during the hot,
low-wind summer months. The stagnant air means the soil
is going to dry out more slowly and the air above the soil will remain more humid;
a perfect environment for mildew.</li>
<li><strong>Keeping leaves wet overnight</strong>: as the air cools, its ability to hold
moisture decreases. In essence, it takes much longer for water to evaporate at
night than during the day. Generally this is why many people will water their
plants at night, but allowing leaves to remain wet for long periods of time can
also be risky. In west Sonoma county, due to on-shore flow, it's typically
more humid at night which can make evaporation that much slower. The longer
the leaves remain wet, the more vulnerable they can be to fungal growth.</li>
<li><strong>Specific plants</strong>: as alluded to before, squashes/gourds (summer squash,
zucchini, cucumber, pumpkin, and other gourds) can be particularly
susceptible to powdery mildew. Tomatoes can also suffer from a number of
leaf-curling blights. Depending on the conditions of your garden, some plants
might not have what it takes to survive in a specific spot, or the location
in general. This doesn't just come down to
likelihood of blights, fungi, and mildews, but also pollinators, soil quality,
wind, and sun.</li>
</ul>
<p>Much of gardening is simply providing an environment in which the plant you're
growing will have its best chance of success. Unsurprisingly, most plants want to live.
Your job as a gardener is to ensure the most suitable conditions for the plant
to succeed, without giving other naturally occurring organisms (fungus,
mildew, weeds, etc.) an opportunity to succeed themselves.</p>
<p>It's not just as simple as "don't get water on the leaves!" Which, said alone,
is such simplistic advice you might as well treat it as a pleasantry like "have
a nice day!"</p>
<p>Smile, nod, and on your way out the door respond with a hearty "you too!"</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/4Wc-25LO2EE" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/10/01/dont-water-the-leaves.htmlReplacing Coastguard2017-09-22T00:00:00-07:00http://unethicalblogger.com/2017/09/22/replacing-coastguard<p>I have tremendous difficulty with decommissioning electronics. I only recently
stopped using my <a href="https://en.wikipedia.org/wiki/Galaxy_nexus">Galaxy Nexus</a>, an
almost five year old cell phone. Earlier this year I recycled a 32-bit
x86-based <a href="http://www.thinkwiki.org/wiki/Category:T41">Thinkpad T41</a>, only
because its overheating issues made it impractical to continue running
workloads. And up until today, the lowest powered device actively running a
Unix in my office, was a 266Mhz AMD Geode-based <a href="http://soekris.com/">Soekris</a>.</p>
<p>The little Soekris, <code>coastguard</code>, was given to me by my friend Dave who had
himself decommissioned it <em>years</em> ago. I cannot exactly remember when I started
using <code>coastguard</code> to act as a FreeBSD-based
(<a href="https://www.pfsense.org/">pfSense</a>) router, but it was easily over five or
six years ago.</p>
<p>Unfortunately my traffic requirements have since exceeded the capabilities of
the little device. Between my inability to discard computers, and more
electronics sprouting network capabilities, a total of ten devices may be using
the network at any given time. If that wasn't troubling enough for the little
tin can, streaming video has become very important. In aggregate those
ten devices are more frequently maxing out the uplink connection, and fighting
for traffic priority.</p>
<p>In its stead, I have installed <code>strawberry</code>, a <strong>much</strong> more powerful
<a href="https://freebsd.org">FreeBSD</a> 11.1 machine which is running a very simple
gateway and <a href="https://www.freebsd.org/doc/handbook/firewalls-pf.html">packet filter</a>
configuration. All said and done, it probably took me about 30 minutes to copy
and paste the right configurations into place. What makes the "replacement"
comical to me is that I mentally procrastinated on replacing <code>coastguard</code>
because "pfSense is so easy" and I didn't want to sink a bunch of time
fiddling with FreeBSD to make it work for my needs.</p>
<p>Either FreeBSD has made things much easier, or I have gotten smarter.
Regardless, I'm sad to see <code>coastguard</code> make its way into the bin which
eventually will go to the e-waste recycler.</p>
<p>Based on my performance recently, it is probably going to be a few years before
I can part with my first generation Raspberry Pis, which now will occupy the
"slowest computer in use" slot in my home office.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/dcoPWCWYRxs" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/09/22/replacing-coastguard.htmlI am working with a management coach2017-09-19T00:00:00-07:00http://unethicalblogger.com/2017/09/19/management-coaching<p>Practically every professional developer can name a great, and a terrible,
manager they have worked with in the past. Good Engineering Managers are kind
of like the bass line in a song, you might not notice them when they're there,
but something will definitely sound wrong if they're absent. For one reason or
another, I have somehow ended up leading a team or acting as an Engineering
Manager at each of the four companies I have worked for over the past decade.
As time has progressed, I have become increasingly aware of "management" as a
skill, rather than some intrinsic talent. A skill which can be practiced,
honed, and improved upon.</p>
<p>Most "new" Engineering Managers in startups seem to be individual developers
who are promoted (or demoted depending on how you feel about it) into the
position because of their technical acumen. I think this is how I have ended
up, against better judgement, in the position of Team Lead or Engineering
Manager in the past.</p>
<p>Being technically skilled doesn't mean <strong>jack shit</strong> with regards to becoming a
good Manager. <em>However</em>, being a good developer <em>does</em> hold some correlation
to management potential. In case you've missed the memo, a "good developer" is
someone who:</p>
<ul>
<li>Understands, and can internalize, the problems which need to be solved.</li>
<li>Is proficient with the technologies being used to solve those problems.</li>
<li>Capable of brokering consensus among a team of other developers to solve
aforementioned problem.</li>
<li>Able to communicate and document the problem and architecture for the
solution.</li>
<li>Can collaborate outside of the team with others in the organization to ensure
the solution(s) are in line with other projects in development.</li>
</ul>
<p>I think the list could probably be longer, but you get the point. Skilled
development is less about writing code, and more about collaboration with
other people.</p>
<p>The problem with that promotion from Developer to Manager is, unfortunately,
that being a good Manager is all of those qualities cranked up to eleven, with
additional responsibilities like: career growth, budgets, and other boring
nonsense thrown in. As if that wasn't bad enough, nobody talks about how to be, or
trains you to be, a good Manager. Most of the good Managers I have worked with were
fortunate enough to work with "good Managers" in the past.
<p>Most of the "bad" Managers I have worked with fit into one of two categories:</p>
<ol>
<li>Developers unwilling to take on the Management duties either because of
disinterest or a complete lack of understanding of the role and
responsibility.</li>
<li>Incompetent, and unable to learn/improve/etc.</li>
</ol>
<p>Personally, I don't know if I am a good Manager or not. But I do know that I
have been <em>working</em> at it, and I believe that I am getting better.</p>
<p>The fundamental shift in thinking which I experienced was to understand that:
<strong>Management is a skill just like building software.</strong> If you don't work at it,
you'll never get better at it.</p>
<hr />
<p>One of the hardest parts about being a Manager, or a leader, in
any hierarchical organization is that it can feel "lonely." You become the
"One" in the "One-to-Many" relationship. This can not only be personally
isolating, it also means you might not be learning from your peers in the
organization.</p>
<p>My mind sometimes goes to this famous photo of John F. Kennedy during the Cuban
Missile Crisis:</p>
<center><img src="/images/post-images/management-coaching/loneliest-job.jpg"
alt="The Loneliest Job in the World"/><br/><strong>"The Loneliest Job in the
World"</strong></center>
<p>Spending your day between hearing, mostly justified, complaints from your
direct reports and arguing with Product Managers who want to squeeze every last
bit of Feature out of your developers, consequences be damned, can be extremely
isolating and exhausting. In the past I have felt, at times, besieged by
everybody around me.</p>
<p>The upside of being a Developer is that I can dump problems on my Manager;
downside of being a Manager...</p>
<hr />
<p>It took me a while to fully understand how ill-equipped many of us are
for the role of Engineering Manager. Fortunately over the past 7-8 months, my
employer has been paying for me to work with a Management Coach. My Coach has been
extremely helpful to bounce ideas off of, to discuss situations which I need to
address, and to provide mentorship, all to help me become a better Manager.</p>
<p>Working with a Management Coach alone hasn't been sufficient, but it's
definitely helped frame my thinking. I have also read more books about
organization psychology and structure over the past 8 months than at any point
before. Additionally I have spent probably more time than at any time in my
career proof-reading emails, mentally playing out scenarios, or having
arguments with myself, attempting to provide both perspectives.</p>
<p>Unintentionally I have also started <em>talking</em> with my fellow managers more than ever
before. Rather than only talking with my peers about projects, deliverables, or
misbehaving reports, we're talking about Management itself.</p>
<p>Setting aside whether I'm actually a good Manager or not, all of a sudden, I'm
able to employ many of the same techniques that made me a good developer. A
little bit of research, mentorship, and experimentation.</p>
<hr />
<p>Management is difficult. Management is having uncomfortable discussions.
Management is bridging the gap between policy and understanding. Management is
helping people succeed. Management is growing others. Management is recognizing
each instrument must collaborate for the orchestra to work.</p>
<p>For me, the catalyst for my mindset shift was internalizing that: <strong>Management
is a skill</strong>.</p>
<p>Find a mentor, find literature, find peers to bounce ideas off of.</p>
<p>Nobody deserves a bad Manager.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/QkTgI2zHoM0" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/09/19/management-coaching.htmlWhat I have learned about growing tomatoes2017-09-13T00:00:00-07:00http://unethicalblogger.com/2017/09/13/growing-tomatoes<p>To say that I'm an expert gardener would be an extraordinary stretching of the
truth; capable, yes, expert, not even close. While I tend to focus on what
crops fail outright, or produce lower-than-desired yields, my neighbors and
some of the folks I know online seem to be impressed with my results.</p>
<p>One of the crops I have grown each season since I started gardening has been
tomatoes. As fickle as tomatoes can be, I seem to have consistently produced
decent-or-better yields. In this post I would like to share what I have found
to work, and not work, with growing tomatoes. The following is based on my own
experimentation, and observations made of neighbors and friends.</p>
<h2>Location</h2>
<p>This is the number one thing I notice other new gardeners mess up. It's common
enough that I have vocally lamented the placement of tomato plants in a
neighbor's yard while walking the dog. I have also heard friends living in San
Francisco, or somewhere equally overcast and foggy, complain about their weak
tomato plants.</p>
<p>Tomatoes need an absurd amount of sunlight. I genuinely don't think there is
such a thing as "too much sun" for tomatoes.</p>
<p>The location I have chosen for my tomatoes receives sunlight from dawn until
dusk, which means at the high point of the summer the plants will be receiving
more than 14 hours of sunshine.</p>
<p>Delicious sunshine which they turn into delicious tomato.</p>
<p>I think, without any scientific data to back this assertion up, that if your
location doesn't guarantee 10+ hours of direct sunlight a day, tomatoes
probably aren't suitable for that location.</p>
<p>Assuming you do get 10+ hours, the next subject becomes very important.</p>
<h2>Soil</h2>
<p>When I first started gardening, I was fortunate enough to have absolutely
<strong>garbage soil</strong> in my backyard. This forced me to "build it from scratch", so
I purchased many cubic feet of planting soil and manure. I still vividly recall
that first season, the big, obviously a little slow, guy who worked at the
garden supply store giggling as he loaded bags of manure into the back of my
nice VW.</p>
<p>As time has gone on, I have learned <strong>so much more</strong> about soil and soil
health. I fancy myself an organic farmer, not because I have an aversion to
chemicals and chemistry, but because I have a fondness for bugs and biology.</p>
<p>Removing the tomato plants at the end of this 2017 season, I was absolutely
astonished with how <em>strong</em> and <em>dense</em> the root systems were for the plants.
Most certainly the strongest plants I have grown to date, with the most
biologically active soil I have worked with to date.</p>
<p>Soil health, primitively speaking, boils down to three areas:</p>
<ol>
<li>Nutrition</li>
<li>Moisture</li>
<li>pH</li>
</ol>
<p><strong>Nutrition</strong></p>
<p>Planting soil by itself is not sufficient. One mistake I have noticed some
folks make is buying some of that potting soil mix, plopping some tomato
starts into it, and hoping for the best. Setting aside what I think of potting
soil mixes (they suck) for a moment, planting soils provide some of the basic
ingredients for success but are <strong>inert</strong>.</p>
<p>Good soil must be <strong>alive</strong>.</p>
<p>One of the mistakes I made early this season was just mixing manure into the
soil. Putting manure into the soil isn't enough; good compost is necessary.</p>
<p><strong>Compost provides the biology your plants need to be successful</strong>.</p>
<p>What I tried in 2017 was layering roughly 2-3 inches of compost over the
top of the soil partway through the season. The plants had been looking undernourished,
which is subjective to say the least, but they simply looked weak, with
small branches and sparse leaves. I demand bushiness from my tomatoes!</p>
<p>The compost seemed to really help kickstart the ecosystem in the garden box,
and the tomato plants subsequently started to grow stronger more rapidly than
they had previously.</p>
<p>I would estimate that for my tomato bed there is probably 1/2 cubic yard of
"planting soil", with 3-4 cubic feet of chicken manure, and approximately 5-6
cubic feet of compost layered over the top mid-season.</p>
<center>
<a data-flickr-embed="true"
href="https://www.flickr.com/photos/agentdero/34128361464/in/album-72157683158804366/"
title="North crop"><img
src="https://farm5.staticflickr.com/4225/34128361464_22c3c1561d.jpg"
width="500" height="375" alt="North crop"></a>
</center>
<p><strong>Moisture</strong></p>
<p>A lesson learned from the 2016 season in Santa Rosa was that the top of the
soil will <em>cook</em> under the harsh mid-day summer sun. While those 14 hours of
sun were helping the plants, the top of the soil crusted over and dried out the
soil very rapidly.</p>
<p>One of the things I learned recently about healthy and alive soil is that it
retains water much better than "planting mix" does by itself. Inert soil acts
like a dirty sieve for water to pass through, whereas <em>alive</em> soil behaves
much more like a sponge, soaking up the water. This in turn makes it much more
available for the plants.</p>
<p>What I did this season was layer straw mulch over <strong>every inch</strong> of exposed
soil for every single bed, not just tomatoes. This practice combined with the
addition of plenty of compost made for beds which provided adequate moisture to
the tomatoes as they grew.</p>
<p>Last season I lost a number of tomatoes to blossom-end rot which can be caused
by poor soil nutrition and/or the water demands of the plant not being met.</p>
<p>This year I didn't see a single tomato with blossom-end rot.</p>
<p>One side benefit I noticed about the straw mulch is that it allowed a very
active insect ecosystem to develop. I recall a number of times when I caught
birds hopping through my beds, grabbing a delicious cricket or pill bug to eat.
If you cannot <strong>see</strong> the life in your soil, it's probably not "alive" enough,
and needs some help!</p>
<p><strong>pH</strong></p>
<p>A lesson I learned mid-way through this season is that soil pH is important.
It's not necessarily the be-all-end-all for soil health, but it is a good
approximate measure of whether the soil's acidity is in the right range for the
plants to properly access the nutrients they need, like Calcium, Magnesium,
Phosphorus, and Nitrogen.</p>
<p>This year, I purchased a meter (~$40) which tells me the soil pH, moisture
level, and temperature, and immediately started testing. I was shocked to find
out how far off almost every single one of my beds was from a "good" pH level.
I spread some agricultural dolomite (basically limestone) along with my
application of compost, which brought the pH back into the "good" range within a
week or so.</p>
<hr />
<p>Setting aside any of the fancy tools you can buy, the number one thing I
recommend looking for in your soil is whether there are insects. If there
aren't bugs in your soil, then there probably aren't the little critters that
bugs eat, and there aren't the bacteria which the little critters eat. And if
those bacteria aren't there, there's nothing working symbiotically with your
tomatoes to help them grow.</p>
<p>Good soil is <strong>alive</strong>.</p>
<h2>Trellising</h2>
<p>I grow my tomatoes in the classic tomato cages, which works fairly well. I do
not, however, pay much attention to <em>how</em> they grow in those cages. During the
growth phase I will guide them up-and-out as necessary, but not much past that.</p>
<p>I have never bothered pruning "suckers" from plants, but have read that some
people see good results with it. I largely try to ensure that the vines are
always supported and that any dead leaves are pruned immediately to allow other
leaves to receive sunlight.</p>
<p>My tomato plants look messy, which frankly, I'm okay with. I want them getting
as much of that delicious sunlight as they can get!</p>
<center><a data-flickr-embed="true"
href="https://www.flickr.com/photos/agentdero/35500651481/in/album-72157683158804366/"
title="IMG_20170630_081658"><img
src="https://farm5.staticflickr.com/4278/35500651481_5aecf89a21.jpg"
width="500" height="375" alt="IMG_20170630_081658"></a>
</center>
<h2>Weather</h2>
<p>One thing I learned in the 2017 season is how much the weather can affect the
productivity of tomatoes. The biggest challenge this year has been dealing with
the heat and aggressive sun.</p>
<p>Due to some exceptionally hot days this summer, I had a number of tomatoes develop
thicker skins than I would like them to have. From my research, the best way I
can defend against this in the future is with the use of shade cloth during the
high points of the day to help reduce the temperature of the tomatoes
themselves.</p>
<p>We'll see how this goes next year, but it's worth keeping in mind that tomatoes
won't "automatically" be delicious; there may be some week-to-week changes and
management you need to perform to address the changing weather in your area.</p>
<hr />
<p>The tips above are anecdotal at best. Take them with a grain of salt. For best
success in your location, I strongly recommend keeping a log in a notebook of
conditions, changes you make, and harvest over time. Referring back to this log
at the beginning of the following season will help your plants improve with
each successive year.</p>
<p>If there's one thing you take to heart however, let it be this: <strong>good soil is
alive</strong>.</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/kqCdanpilIU" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/09/13/growing-tomatoes.htmlWhy bother with Docker (on FreeBSD)?2017-09-06T00:00:00-07:00http://unethicalblogger.com/2017/09/06/why-bother-with-docker<p>Yesterday I participated in a very fun and productive <a href="https://wiki.freebsd.org/DockerHackDay2017">Docker Hack
Day</a>, wherein a few folks (myself
included) spent the day hacking on porting <a href="https://github.com/freebsd-docker">Docker to
FreeBSD</a>. After which, I had a nice relaxing
beer (or two) on my boat/train rides home and enjoyed one of my favorite
pastimes: shit-posting on Twitter.</p>
<p>As usually happens when I get sassy on Twitter, real productive discussions
somehow started to occur, much to my chagrin.</p>
<p>In one round of discussion, Kamil asked the following:</p>
<blockquote><p>I'm trying to get my head around the value prop here. What am I getting that
jails don't already get me on FreeBSD?</p>
<p><a href="https://twitter.com/kchoudhu/status/905304696411348992">@kchoudhu</a></p></blockquote>
<p>A very good question! One thing that Docker, as a technology, has done very
well at is becoming more valuable than the sum of its parts. Namely, one can
accomplish much of what Docker does with iptables, LXC, chroots, and some of
the clever union filesystem patterns Docker utilizes under the hood.</p>
<p>Similarly, to support Docker on FreeBSD, we will need ZFS, Pf, and some clever
use of Jails. But would FreeBSD/Docker really be more valuable than the sum of
its parts?</p>
<p><strong>Most definitely</strong>.</p>
<p>To explain the benefits of Docker for the FreeBSD environment, I'm going to
muddy the waters between what Docker can do today, and what FreeBSD/Docker
<em>could</em> do tomorrow. Underneath the covers, the overarching theme for me is
<strong>portability</strong>. FreeBSD/Docker would be able to take advantage of FreeBSD's
"Linuxulator" support and run Linux binaries, alongside FreeBSD binaries,
meaning every container already in existence would be usable.</p>
<p>That said, there are some other benefits!</p>
<h3>Packaging</h3>
<p>The packaging format for an image alone is a really useful feature. The
"container" metaphor (shipping container) as it's typically understood refers
to the <strong>image</strong> which could be published to, and downloaded from, <a href="https://hub.docker.com">Docker
Hub</a>. This standard format for schlepping an application
around is really helpful. Packaging an application into a container means it
can be immediately deployed into Kubernetes, or one of a half dozen other
container runtimes, without modification.</p>
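<p>As a rough sketch of that portability (and nothing more than a sketch), the
<code>jenkins/jenkins:lts-alpine</code> image used later in this post could be handed
to Kubernetes with a minimal pod definition like the following; the names and
port are illustrative, not taken from a real deployment:</p>
<pre><code class="yaml"># Hypothetical pod definition running an unmodified image from Docker Hub
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts-alpine
      ports:
        - containerPort: 8080
</code></pre>
<p>The image itself needs no changes; the same artifact that runs locally via
<code>docker run</code> is what the cluster pulls and runs.</p>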
<p>From a service delivery standpoint, infrastructure I am responsible for must
fit into a container, or I'm not going to waste my time on it.</p>
<p>(Unfortunately Docker suffers from nomenclature overload and stupidity here; the
"image" and "container" terms seem like they're reversed, but that's another
ranty blog post.)</p>
<h3>Networking</h3>
<p>One of the things I find most useful with Docker is the support for
container-based networks. I will typically make use of this with
<code>docker-compose</code>, adding a <code>docker-compose.yml</code> to a project which describes a
series of services necessary to run a specific application. For example, if my
web application requires LDAP and Redis to be running, <code>docker-compose up</code> will
stand up a container for: my webapp, ldap, and redis, placing them all on the
same network, allowing me to link them together.</p>
<p>Rather than running these all locally, I can have <code>docker-compose</code>
automatically create a specific network for those services to talk to each
other. Meaning I could simultaneously have multiple Redis containers running on
my machine, for all kinds of different web application projects, without any
problems.</p>
<p>Whilst I <em>could</em> do this myself with iptables or pf, it would be such a huge
pain that nobody would ever do it.</p>
<p>Additionally, since I have this topology defined in <code>docker-compose.yml</code>, other
developers working on the project from their Mac OS X, Linux, or FreeBSD
workstations would be able to stand up the exact same stack of containers.</p>
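<p>To make that concrete, here is a minimal sketch of what such a
<code>docker-compose.yml</code> might look like; the service names, images, and port
mapping are assumptions for the sake of illustration, not from a real project:</p>
<pre><code class="yaml"># Hypothetical compose file for a webapp that needs LDAP and Redis
version: '3'
services:
  webapp:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - ldap
      - redis
  ldap:
    image: osixia/openldap
  redis:
    image: redis:alpine
</code></pre>
<p>With something like this checked into the repository, <code>docker-compose up</code>
creates a project-specific network, and each container can reach the others by
its service name, e.g. the webapp simply connects to the hosts <code>ldap</code> and
<code>redis</code>.</p>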
<h3>Isolation</h3>
<p>For me the "isolation" of the processes running in a container is interesting
but not actually a requirement for my work. I have never really trusted Docker
"isolation" for anything security related.</p>
<p>The isolation does become interesting from a filesystem perspective, however.
For example, while I technically can create chroots and jails myself, having
those "enabled by default" for a Docker container, with optional volume mounts
for specific directories, is <em>immensely</em> useful for local development.</p>
<p>For example, sometimes I need a quick Jenkins instance to test/verify something,
typically reproducing a bug:</p>
<pre><code>docker run --rm -ti -p 8080:8080 jenkins/jenkins:lts-alpine
</code></pre>
<p>When that container executes, it receives its own little isolated file system,
which disappears entirely when I stop the container (<code>--rm</code>). There are other
times however, when I need to inspect the contents of the <code>JENKINS_HOME</code> for
bad data, in which case, I would volume mount a specific directory through:</p>
<pre><code>docker run --rm -ti -p 8080:8080 -v $PWD/jenkins_home:/var/jenkins_home jenkins/jenkins:lts-alpine
</code></pre>
<p>From the perspective of the running process (<code>java</code>), the file system looks
completely normal, as it would expect. From my perspective as the user however,
anything the process writes to <code>/var/jenkins_home</code> is available on my host
machine under <code>$PWD/jenkins_home</code>, ready for inspection.</p>
<p>For development, or testing, this is immensely useful!</p>
<hr />
<p>Many of these patterns are already very useful to me via Docker on Linux.
However, I fundamentally believe that FreeBSD is a superior development OS,
thanks to DTrace, ZFS, Ports (via pkgng), and the numerous other helpful
features that come from a holistically packaged and distributed operating system.</p>
<p>Personally, the lack of Docker on FreeBSD has been <strong>the</strong> major impediment to
my own usage of FreeBSD as a daily development system. By my own choice, I
<strong>must</strong> have Docker to do my work, and will not work without it.</p>
<p>Bringing Docker to FreeBSD is, to that end, a selfish endeavour; wish me luck!</p>
<img src="http://feeds.feedburner.com/~r/UnethicalBlogger/~4/Wloda6KGsn8" height="1" width="1" alt=""/>http://unethicalblogger.com/2017/09/06/why-bother-with-docker.html