Code-sparks: Technically yet another blog regarding code, technology, all new and old.
http://darienmt.com/

<h1>Local Kubernetes</h1>
<p>If you have been wondering for a while what <a href="https://kubernetes.io/">Kubernetes</a> is and how to get it running locally, this could be a good place to start.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Kubernetes_logo.svg/798px-Kubernetes_logo.svg.png" alt="Kubernetes logo" /></p>
<p>Well, not really. If you want to start with Kubernetes, I would recommend reading a bit about it before getting to your local single-node cluster. Here are some free resources to get you started:</p>
<ul>
<li><a href="https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x">Introduction to Kubernetes</a>: Great introductory course is walking through all basic concepts. <a href="https://www.edx.org/">EdX</a> hosts it with chapter’s knowledge checks and a final exam. Nothing to be scared of it, but it is nice to check if you understand everything correctly. You can finish the course in around 10 hours.</li>
<li><a href="https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615">Scalable Microservices with Kubernetes</a>: Hosted by <a href="https://www.udacity.com/">Udacity</a>, it is tough in part by <a href="https://github.com/kelseyhightower">Kelsey Hightower</a> from Google. This course is a bit fast talking about more general concepts than just Kubernetes. It is excellent, but it will leave you with a hunger for more information.</li>
<li><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way">Kubernetes the hard way</a>: Written by Kelsey Hightower, It a GitHub-repo tutorial very oriented to learn and understand everything about Kubernetes in details. It has a strong dependency on <a href="https://cloud.google.com/">Google Compute Cloud</a>, but it worth trying if you want to understand more than the basics.</li>
</ul>
<p>After reviewing some material, you are ready to start creating your local cluster, and, basically, you don’t need the rest of this page, but there are always some tweaks here and there that could be useful. We have at least two options to install a local single-node Kubernetes cluster:</p>
<ul>
<li><a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a>: It is easy to use and widely supported way to install it. There is a lot of documentation just by google-ing it. It will create a VM and install Kubernetes on it. It can be used with VirtualBox as the hypervisor; so, we are all happy. It has the complexity of a VM, meaning it is not localhost, but I would recommend it.</li>
<li><a href="https://www.docker.com/docker-mac">Docker for Mac(Edge)</a>: The Edge version of Docker for Mac can enable a Kubernetes cluster. If you have it, it is straightforward to activate, and it will be accessible from localhost. The only problem is that it is more involved in the sense that it is not as easy to use as Minikube. For example, the Kubernetes dashboard needs to be installed manually. <a href="https://rominirani.com/tutorial-getting-started-with-kubernetes-with-docker-on-mac-7f58467203fd">Here</a> is a good tutorial on how to set it up.</li>
</ul>
<p>As an example, I will be describing here the steps to set up a <a href="https://www.influxdata.com/time-series-platform/">TICK stack</a> on your local single-node Kubernetes cluster on a Mac with <a href="https://brew.sh/">Homebrew</a>.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>You need to have installed on your Mac:</p>
<ul>
<li><a href="https://www.virtualbox.org/">VirtualBox</a></li>
<li><a href="https://www.docker.com/docker-mac">Docker for Mac</a>(Stable or Edge)</li>
</ul>
<h2 id="step-1-install-minikube-and-kubectl">Step 1: Install minikube and kubectl</h2>
<p>In addition to minikube, you need <a href="https://kubernetes.io/docs/reference/kubectl/overview/">kubectl</a> to send commands to the cluster. This is how you brew it:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ brew update
$ brew install kubectl
$ brew cask install minikube
</code></pre></div></div>
<p>To check the installation:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ minikube version
minikube version: v0.25.0
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b",
GitTreeState:"clean", BuildDate:"2018-02-09T21:51:54Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
</code></pre></div></div>
<p>Start minikube with the following command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ minikube start --vm-driver=virtualbox
</code></pre></div></div>
<p>You need to be connected to the Internet as it will download the ISO needed to create the VM. It will take a while, especially if you are at the office. When it is done, you can check the cluster status with:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
</code></pre></div></div>
<p>The cluster is OK; now run the dashboard for the first time:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>minikube dashboard
</code></pre></div></div>
<h2 id="step-2-install-helm-and-some-charts">Step 2: Install helm and some charts</h2>
<p>Why install this when we can configure everything with kubectl? Sure, we can do that, but it is a long road. <a href="https://helm.sh/">Helm</a> is a package manager for Kubernetes, and it will make our life easier. We just brew it like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>brew install kubernetes-helm
</code></pre></div></div>
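<p>If your <code class="highlighter-rouge">helm install</code> commands later complain about a missing Tiller (the server-side component used by Helm v2, which is what brew installed for me), initialize Helm in the cluster first and check that the Tiller pod comes up:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ helm init
$ kubectl get pods -n kube-system | grep tiller
</code></pre></div></div>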
<p>Helm’s “packages”, or applications, are called charts. We can get some charts from a GitHub repo until we have our own:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/kubernetes/charts.git
</code></pre></div></div>
<p>Change directory to charts/stable. Each directory there is a chart for us to use.</p>
<h2 id="step-3-installing-the-tick">Step 3: Installing the TICK</h2>
<p>We need to install four pods to have the TICK stack running: InfluxDB, Telegraf, Chronograf, and Kapacitor.</p>
<p><img src="https://2bjee8bvp8y263sjpl3xui1a-wpengine.netdna-ssl.com/wp-content/uploads/Tick-Stack-Complete.png" alt="TICK" />
[Source: https://www.influxdata.com/time-series-platform/]</p>
<p>We have the charts in the repo we just cloned, but we need to make some adjustments to them first:</p>
<ul>
<li>In the file stable/kapacitor/values.yaml, find the line starting with influxURL, uncomment it, and point it at the <code class="highlighter-rouge">data-influxdb</code> service (named after the <code class="highlighter-rouge">data</code> release we install below).</li>
</ul>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-# influxURL: http://influxdb-influxdb.tick:8086
+influxURL: http://data-influxdb.tick:8086
</code></pre></div></div>
<ul>
<li>In the file stable/telegraf/values.yaml, make the following modification:</li>
</ul>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- urls: []
- # - "http://data-influxdb.tick:8086"
+ urls:
+ - "http://data-influxdb.tick:8086"
</code></pre></div></div>
<p>Now we are ready to create the pods with helm.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pwd
&lt;WHERE_YOU_PUT_IT&gt;/charts/stable
$ helm install --name data --namespace tick ./influxdb/
$ helm install --name polling --namespace tick ./telegraf
$ helm install --name alerts --namespace tick ./kapacitor/
$ helm install --name dash --namespace tick ./chronograf/
</code></pre></div></div>
<p>Check the pods are running:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl get pods -n tick -w
NAME READY STATUS RESTARTS AGE
alerts-kapacitor-5d948b499f-pdcd5 1/1 Running 0 26s
dash-chronograf-65bff774dd-lhpzn 1/1 Running 0 20s
data-influxdb-5596c9b8b4-tghr5 1/1 Running 0 1m
polling-telegraf-ds-zg7zs 1/1 Running 0 36s
</code></pre></div></div>
<p>If something goes wrong with any of them, you can delete the releases with the following commands:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ helm del --purge polling
$ helm del --purge dash
$ helm del --purge alerts
$ helm del --purge data
</code></pre></div></div>
<p>All the pods are created, but they are not accessible from your host because there is no ingress or port forwarding. The easiest option at this point is to try port forwarding with this command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl port-forward dash-chronograf-65bff774dd-lhpzn 8888:8888 -n tick
</code></pre></div></div>
<p>The exact name of the pod won’t be the same on your system. Look up the real name of your <code class="highlighter-rouge">dash-chronograf-****</code> pod.</p>
<p>You can get the IP address of your minikube with:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>minikube ip
</code></pre></div></div>
<p>Open a browser to that IP and the port 8888 and you should see the dashboard.</p>
<p>Congratulations! You have your very own Kubernetes running…. ok… sort of… (wink)</p>
<h1>Udacity's Self-Driving Car Nanodegree - Term 3</h1>
<p>This is it! <a href="https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013">Udacity’s Self-Driving Car Nanodegree</a> is done after nine months.</p>
<p><img src="/images/2018-01-08/Graduation_Certificate.png" alt="Verified Certificate of Completion" /></p>
<p>For the last nine months, I have been staying up late or waking up early in the morning to study. Finally, term three is done, and I graduated from the nanodegree. This last term was more complicated than the rest, but it closes the loop on how a self-driving car system is organized.</p>
<h1 id="lessons-and-projects">Lessons and projects</h1>
<p>The term can be organized into four sections. The first and last sections are mandatory for graduation, and only one of the two electives is required. As I was here to learn as much as possible, I did all of them.</p>
<h2 id="path-planning">Path planning</h2>
<p>This part consists of four lessons and one path planning project. The lessons start by explaining algorithms to search for the shortest path between two points on a map. Algorithms like <a href="https://en.wikipedia.org/wiki/A*_search_algorithm">A*</a> and <a href="https://en.wikipedia.org/wiki/Dynamic_programming">dynamic programming</a> are explained. The second lecture covers different approaches to behavioral prediction and the difference between data-driven and model-driven approaches. The third lecture is about behavioral planning, using <a href="https://en.wikipedia.org/wiki/Finite-state_machine">finite state machines</a> and cost functions to plan what the car should do. This lecture contains a couple of slides showing the interactions between all the modules of a self-driving car; this is when the big picture is presented: behavioral planning is the slowest, top-most module, and the actuator module is the fastest, lowest one. It was fascinating. The final lecture shows different strategies for trajectory generation, for example <a href="http://blog.habrador.com/2015/11/explaining-hybrid-star-pathfinding.html">hybrid A*</a> and polynomial trajectory generation. Usually, self-driving cars have more than one algorithm to generate trajectories, depending on the situation the car is in.</p>
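<p>To make the search idea concrete, here is a minimal A* sketch on a small 2D grid in Python. This is my own illustration, not course code, and it assumes a grid of 0/1 cells, unit step costs, and a Manhattan-distance heuristic:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import heapq

def a_star(grid, start, goal):
    # Minimal A* over a grid of 0 (free) / 1 (blocked) cells.
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(heuristic(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue  # already expanded with an equal or better cost
        came_from[cell] = parent
        if cell == goal:  # rebuild the path by walking the parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            inside = 0 &lt;= nxt[0] &lt; rows and 0 &lt;= nxt[1] &lt; cols
            if inside and grid[nxt[0]][nxt[1]] == 0 and g + 1 &lt; best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + heuristic(nxt), g + 1, nxt, cell))
    return None  # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
</code></pre></div></div>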
<h3 id="project-1---path-planning">Project 1 - Path planning</h3>
<p>In this project, we need to implement a path planning algorithm to drive a car on a highway in a simulator provided by Udacity. The simulator sends car telemetry information (the car’s position and velocity) and sensor fusion information about the rest of the vehicles on the highway (e.g., car id, velocity, location). It expects a set of points spaced 0.02 seconds apart in time representing the car’s trajectory. The communication between the simulator and the path planner is done using <a href="https://en.wikipedia.org/wiki/WebSocket">WebSocket</a>. The path planner uses the <a href="https://github.com/uNetworking/uWebSockets">uWebSockets</a> WebSocket implementation to handle this communication. Here is how the car looks in the simulator.</p>
<p><img src="/images/2018-01-08/path_planning_simulator.png" alt="Path planning simulator" /></p>
<p>This project is implemented in C++. My solution can be found <a href="https://github.com/darienmt/CarND-Path-Planning-Project-P1">here</a>.</p>
<h2 id="advanced-deep-learning">Advanced deep learning</h2>
<p>This section is one of the electives, but it is tough to opt out of it. It consists of four lessons and one project. Most of the lessons are dedicated to <a href="https://www.quora.com/How-is-Fully-Convolutional-Network-FCN-different-from-the-original-Convolutional-Neural-Network-CNN">fully convolutional neural networks</a> applied to semantic <a href="https://en.wikipedia.org/wiki/Image_segmentation">segmentation</a>. In particular, the article <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf">Fully Convolutional Networks for Semantic Segmentation</a> is examined in detail. There is also an excellent lecture about inference optimization with <a href="https://www.tensorflow.org/">TensorFlow</a>. Some of the techniques explained are fusion, quantization, and reduced precision. There is a lot to learn about how to run a neural network for inference after the training is done.</p>
<h3 id="project-2---semantic-segmentation">Project 2 - Semantic Segmentation</h3>
<p>The objective of this project is to label the pixels of a road image using the Fully Convolutional Network (FCN) described in <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf">Fully Convolutional Networks for Semantic Segmentation</a> by Jonathan Long, Evan Shelhamer, and Trevor Darrell. More or less, the idea is to reproduce the results obtained by the authors. Here is an image showing the result of the segmentation.</p>
<p><img src="/images/2018-01-08/segmentation_sample.png" alt="Semantic Segmentation sample" /></p>
<p>This was a very interesting project where a <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPU</a> is a must. I tried to use the GPU on my Mac, and the model could not fit into its memory. There are instructions in the materials on how to create an <a href="https://aws.amazon.com/ec2/spot/">AWS EC2 Spot Instance</a> to run the project without breaking your wallet. The following is another interesting image I got from the project: a <a href="https://en.wikipedia.org/wiki/Violin_plot">violin plot</a> showing the distribution of the network loss:</p>
<p><img src="/images/2018-01-08/loss_epoch_12.png" alt="Violin plot of network loss" /></p>
<p>This project is written in Python with <a href="https://www.tensorflow.org/">TensorFlow</a>. Take a look at my solution <a href="https://github.com/darienmt/CarND-Semantic-Segmentation-P2">repo</a>. The repo has more pictures!</p>
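<p>As a side note, the core trick of the FCN paper is small enough to sketch: replace the fully connected layers with 1x1 convolutions and upsample back to the input resolution with transposed convolutions, adding skip connections from earlier layers. Here is a rough TensorFlow 1.x illustration of that decoder; it is my own simplification, not the project code, and <code class="highlighter-rouge">pool3</code>, <code class="highlighter-rouge">pool4</code> and <code class="highlighter-rouge">conv7</code> stand in for the corresponding VGG tensors:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import tensorflow as tf

def fcn8_decoder(pool3, pool4, conv7, num_classes):
    # 1x1 convolutions squeeze each feature map down to num_classes channels.
    score7 = tf.layers.conv2d(conv7, num_classes, 1, padding='same')
    score4 = tf.layers.conv2d(pool4, num_classes, 1, padding='same')
    score3 = tf.layers.conv2d(pool3, num_classes, 1, padding='same')
    # Upsample 2x and add the skip connection from pool4.
    up1 = tf.layers.conv2d_transpose(score7, num_classes, 4, strides=2, padding='same')
    up1 = tf.add(up1, score4)
    # Upsample 2x again and add the skip connection from pool3.
    up2 = tf.layers.conv2d_transpose(up1, num_classes, 4, strides=2, padding='same')
    up2 = tf.add(up2, score3)
    # Final 8x upsampling back to the original image resolution.
    return tf.layers.conv2d_transpose(up2, num_classes, 16, strides=8, padding='same')
</code></pre></div></div>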
<h2 id="functional-safety">Functional safety</h2>
<p>This section is one of the more complicated parts of the nanodegree. It is an introduction to <a href="https://en.wikipedia.org/wiki/Functional_safety">Functional Safety</a>. I think it was complicated because of the high level of detail you need to take into account to understand functional safety under the <a href="https://en.wikipedia.org/wiki/ISO_26262">ISO 26262 standard</a>. Even though it is an elective, I would recommend it to any software professional. There are many aspects of software we never take into account on most common projects, because the most common projects cannot injure a person or cause more tragic consequences. This section covers the creation of the following documents: safety plan, hazard analysis and risk assessment, functional safety concept, technical safety concept, and software requirements. It was great, but I was happy when it was finished.</p>
<h3 id="project-3---functional-safety">Project 3 - Functional Safety.</h3>
<p>This project consists of creating the documentation for the functional safety of a Lane Assistance system under the umbrella of the <a href="https://en.wikipedia.org/wiki/ISO_26262">Road vehicles - Functional safety: ISO 26262</a>. ISO 26262 is an international standard for the functional safety of electrical and electronic systems in production automobiles, defined by the International Organization for Standardization (ISO). The Lane Assistance system is part of an <a href="https://en.wikipedia.org/wiki/Advanced_driver-assistance_systems">Advanced Driver Assistance System (ADAS)</a> with the following functionalities:</p>
<ul>
<li>Lane departure warning: When the driver drifts toward the edge of the lane, the steering wheel vibrates to alert the driver.</li>
<li>Lane keeping assistance: When the driver drifts toward the edge of the lane, the steering wheel is turned toward the center of the lane to keep the car in its current lane.</li>
</ul>
<p>The documents I created can be found <a href="https://github.com/darienmt/CarND-Functional-Safety-P3">here</a>.</p>
<h2 id="robot-operating-system-ros">Robot Operating System (ROS)</h2>
<p>In this section, David Silver provides an overview of the autonomous vehicle architecture, and the last part of the term is an excellent introduction to <a href="http://www.ros.org/">Robot Operating System (ROS)</a>. It covers its command-line interface, packages, catkin, and the different types of nodes and services provided by ROS. Even though the installation is not explained, a virtual machine disk image is provided, and all the examples are hands-on. You will create some nodes with async and sync interactions to be prepared for the final project.</p>
<p><img src="/images/2018-01-08/ros_kinetic.png" alt="ROS Kinetic Kame" /></p>
<h2 id="project-4---system-integration-project---putting-everything-together">Project 4 - System integration project - Putting everything together</h2>
<p>Let’s drive! This project consists of creating the set of nodes needed to control a self-driving car. The nodes are implemented in Python, but C++ is available as well. The modules are:</p>
<ul>
<li>
<p><em>Waypoint updater</em>: It receives the current position and produces a set of waypoints the car has to follow, along with the velocity needed at each waypoint. Some waypoints need to have zero speed due to a red light, and this module should suggest the “stop trajectory.” The module receives all the possible waypoints once at the beginning of the execution, and the waypoint where it needs to stop.</p>
</li>
<li>
<p><em>Drive-by-Wire (DBW)</em>: It is responsible for sending throttle, brake and steering messages to the DBW logic to control the car. It is suggested to use a <a href="https://en.wikipedia.org/wiki/PID_controller">PID</a> controller to execute the control and a <a href="https://en.wikipedia.org/wiki/Low-pass_filter">low-pass filter</a> to smooth the control actions (a minimal sketch of such a controller is shown after this list).</p>
</li>
<li>
<p><em>Traffic Light Detection</em>: This module receives the image from the camera and needs to detect the state of the traffic light shown in order to inform the <em>Waypoint updater</em> node where to stop the car.</p>
</li>
</ul>
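<p>As a rough illustration of the kind of controller the <em>DBW</em> node needs (my own sketch, not our submitted code), a PID with a simple first-order low-pass filter on the derivative term could look like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class PID(object):
    def __init__(self, kp, ki, kd, tau=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.tau = tau  # low-pass filter time constant for the derivative
        self.integral = 0.0
        self.last_error = 0.0
        self.filtered_derivative = 0.0

    def step(self, error, dt):
        # Accumulate the integral term and compute a raw derivative.
        self.integral += error * dt
        raw_derivative = (error - self.last_error) / dt
        # First-order low-pass filter to smooth the derivative.
        alpha = dt / (self.tau + dt)
        self.filtered_derivative += alpha * (raw_derivative - self.filtered_derivative)
        self.last_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * self.filtered_derivative)

# Example: correct a 0.4 m cross-track error in a 50 Hz control loop.
controller = PID(kp=0.8, ki=0.001, kd=0.2)
print(controller.step(0.4, dt=0.02))
</code></pre></div></div>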
<p>Needless to say, this project is tough. It is done in a team of up to five students. Each team has a team leader, and the project is submitted to Udacity by the team leader. It is not just that you are working with ROS; there are a few places where things can go wrong, and I, at least, got lost and frustrated multiple times. The solution was to go back and retry with smaller steps. My part was to work on the <em>DBW</em> and <em>Waypoint updater</em> nodes. It was an opportunity to work on the control part, which I think I won’t see too much of after I finish the nanodegree. It was hard, but we got the project submitted, and it drove Carla, Udacity’s test self-driving car. Take a look at the following picture to see Carla in motion:</p>
<p><img src="/images/2018-01-08/carla_driving.gif" alt="Driving Carla" /></p>
<p>You can take a look at our team’s solution in <a href="https://github.com/joejanuszk/CarND-Capstone">Joe’s repo - our team lead’s</a> (where all the fun happened) or in my fork <a href="https://github.com/darienmt/CarND-Capstone">here</a>.</p>
<h1 id="conclusions">Conclusions</h1>
<p>The last nine months were full of excitement, new ideas, new toys, and some long nights. Term three was vital to my understanding of the whole architecture of an autonomous vehicle. I know this is just the beginning and we could not even approach all the complexities of a self-driving car, but when I started, a self-driving car was part of science fiction: now it is a near-future reality. Thank you very much to Udacity for bringing these great courses to us. Keep it up!</p>
<h1>Udacity's Self-Driving Car Nanodegree - Term 2</h1>
<p>The second term of <a href="https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013">Udacity’s Self-Driving Car Nanodegree</a> is over. It was an open window to a new set of ideas, theories and capabilities: Sensor fusion, Localization, Control and C++. If you have some years of experience developing software, you might be surprised to see C++ in the same sentence as new ideas. It was a big surprise to me as well.</p>
<p>I have met a few people with opinions about C++, regarding it as an old and difficult language. For me, the biggest win of this term was my rediscovery of C++ and how much it has evolved since I used it for the first time 20 years ago. At that time, I used C++ only to speed up my <a href="https://www.mathworks.com/products/matlab.html">Matlab</a> computations on a Pentium machine, and for some <a href="https://en.wikipedia.org/wiki/Microsoft_Foundation_Class_Library">MFC</a> projects later on. It was not a great experience, and it was not by accident that I found myself moving away from those projects and jumping into other languages like Java, C# and, more recently, Scala and Python. When I started this term, there was a voice deep inside my mind saying: the nightmare is back. And here it comes, C++ is the new red! Its evolution over these years is incredible. While <a href="https://en.wikipedia.org/wiki/C%2B%2B11">C++11</a> introduced significant features, its evolution continued with <a href="https://en.wikipedia.org/wiki/C%2B%2B17">C++17</a> and beyond, and each standard adds new ideas. Here is a nice article regarding C++17 <a href="https://www.oreilly.com/ideas/c++17-upgrades-you-should-be-using-in-your-code">features</a>. Is <a href="https://en.wikipedia.org/wiki/Type_inference">type inference</a> a dream in some languages (e.g., Java)? Not in C++11. There, it is a reality.</p>
<p><img src="/images/2017-08-30/cpp_logo_small.png" alt="C++" /></p>
<h1 id="lessons-and-projects">Lessons and projects</h1>
<p>Let’s go back to Udacity’s Self-Driving Car Nanodegree - Term 2: it consists of fourteen lectures, five projects and a C++ “checkpoint.”</p>
<h2 id="sensor-fusion">Sensor fusion</h2>
<p>The first few lessons are on <a href="https://en.wikipedia.org/wiki/Sensor_fusion">sensor fusion</a>. The starting point is to understand the problem of estimating the system state (e.g., the car’s position and velocity) based on a set of noisy measurements (e.g., <a href="https://en.wikipedia.org/wiki/Radar">Radar</a> and <a href="https://en.wikipedia.org/wiki/Lidar">Lidar</a> measurements). By “fusing” the measurements, the resulting state has less uncertainty than any single measurement. The techniques introduced were the <a href="https://en.wikipedia.org/wiki/Kalman_filter">Kalman filter</a>, the Extended Kalman filter and the Unscented Kalman filter. These lessons are also the transition from Python to C++. In the middle of them, there is a C++ “checkpoint” to make sure you are comfortable with C++. There is also a link to <a href="https://www.udacity.com/course/c-for-programmers--ud210">Udacity’s C++ For Programmers</a> course. It is a good idea to refresh/acquire your C++ before facing the projects.</p>
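<p>The plain (linear) Kalman filter is simple enough to sketch in a few lines. Here is a 1D predict/update cycle in Python, just to illustrate the idea; the actual projects use the multidimensional C++ version with Eigen:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def kalman_1d(measurements, process_var=1.0, meas_var=4.0):
    # State: a position estimate x with variance p (a constant-state model).
    x, p = 0.0, 1000.0  # start with a very uncertain estimate
    estimates = []
    for z in measurements:
        # Predict: the state stays the same, but uncertainty grows.
        p += process_var
        # Update: blend prediction and measurement using the Kalman gain.
        k = p / (p + meas_var)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

print(kalman_1d([5.1, 4.9, 5.2, 5.0, 4.8]))
</code></pre></div></div>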
<h3 id="project-1---extended-kalman-filter">Project 1 - Extended Kalman Filter</h3>
<p>This is the first C++ project of the course, and it can be intimidating, but it doesn’t have to be. There is a complete lesson explaining what needs to be done. The project consists of implementing an Extended Kalman Filter (EKF) to estimate a car’s position based on noisy Radar and Lidar measurements provided by Udacity’s simulator. This is how the simulator looks:</p>
<p><img src="/images/2017-08-30/ekf_simulator.png" alt="Extended Kalman filter simulator view" /></p>
<p>The simulator and your EKF communicate over <a href="https://en.wikipedia.org/wiki/WebSocket">WebSocket</a>, and the EKF uses the <a href="https://github.com/uNetworking/uWebSockets">uWebSockets</a> implementation. That part is already done for you in the seed project provided by Udacity, but it is a cool framework to see working. Take a look at my solution <a href="https://github.com/darienmt/CarND-Extended-Kalman-Filter-P1">here</a>.</p>
<h3 id="project-2---unscented-kalman-filter">Project 2 - Unscented Kalman Filter</h3>
<p>This project has pretty much the same setup as the first project, but implements an Unscented Kalman Filter (UKF). Why is it called unscented? <a href="https://www.linkedin.com/in/sebastian-thrun-59a0b273/">Sebastian</a> explained that the creators of this filter believed the EKF stinks because of the linearization applied there. A sense of humor is always good! <a href="https://github.com/darienmt/CarND-Unscented-Kalman-Filter-P2">Here</a> is a link to my solution.</p>
<h2 id="localization">Localization</h2>
<p>There are three lessons on localization, one on vehicle models (the Bicycle Model), and one on the Particle filter. Before these lessons, I was sure GPS was enough to know where I was. Now, not so much. The techniques shown are the <a href="https://en.wikipedia.org/wiki/Recursive_Bayesian_estimation">Bayesian filter</a> for 1D localization in C++, the <a href="http://code.eng.buffalo.edu/dat/sites/model/bicycle.html">Bicycle Motion Model</a>, and the <a href="https://en.wikipedia.org/wiki/Particle_filter">Particle filter</a> for 2D position estimation. This part of the course was really interesting to me, and I felt I just scratched the surface with half a nail. Hopefully, I will have time to learn more about it in the future.</p>
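<p>The bicycle motion model itself is only a handful of equations. Here is a small Python sketch of one prediction step, a simplified kinematic version with a made-up wheelbase value:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from math import cos, sin, tan

def bicycle_step(x, y, theta, v, delta, dt, wheelbase=2.7):
    # Kinematic bicycle model: the heading changes with speed, steering angle
    # and wheelbase, and the position advances along the current heading.
    theta_next = theta + (v / wheelbase) * tan(delta) * dt
    x_next = x + v * cos(theta) * dt
    y_next = y + v * sin(theta) * dt
    return x_next, y_next, theta_next

# One 0.1 s step at 10 m/s with a slight left steer of 0.05 rad.
print(bicycle_step(0.0, 0.0, 0.0, 10.0, 0.05, 0.1))
</code></pre></div></div>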
<h3 id="project-3---kidnapped-vehicle">Project 3 - Kidnapped Vehicle</h3>
<p>In this project, you need to implement a <a href="https://en.wikipedia.org/wiki/Particle_filter">Particle Filter</a> applied to a <a href="https://en.wikipedia.org/wiki/Kidnapped_robot_problem">Kidnapped robot(car) problem</a>. Udacity’s simulator will send you noisy landmark observation from a car, and you need to estimate car position. This is how the simulator looks on a successful execution:</p>
<p><img src="/images/2017-08-30/particle_filter.png" alt="Particle Filter simulator view" /></p>
<p><a href="https://github.com/darienmt/CarND-Kidnapped-Vehicle-P3">Here</a> is the link to my solution.</p>
<h2 id="control">Control</h2>
<p><a href="https://en.wikipedia.org/wiki/Control_theory">Control theory</a> is a broad subject. I remember a lot of <a href="https://en.wikipedia.org/wiki/Laplace_transform">Laplace transform</a>, <a href="https://en.wikipedia.org/wiki/Nyquist_stability_criterion">Nyquist stability</a>, and other advance math at university. It was surprised again with these part of the lessons where the basic control concepts could be express with simpler expressions to make them more understandable. There is one lesson for <a href="https://en.wikipedia.org/wiki/PID_controller">PID controller</a>, one for vehicle kinematic and dynamic models(more advanced than the one on the localization lessons), and one lesson <a href="https://en.wikipedia.org/wiki/Model_predictive_control">Model Prediction Control</a>.</p>
<h3 id="project-4---pid-control">Project 4 - PID Control</h3>
<p>This time we need to implement a <a href="https://en.wikipedia.org/wiki/PID_controller">PID controller</a> that receives a cross-track error from the simulator and responds with a steering angle and throttle to control the car. The challenging part here is not to go too fast, in the sense that the car can end up in the lake in the simulator very easily. It is better to go step by step until the car makes it all the way around, modifying the proportional, integral and derivative parameters by hand. Here is an image of the simulator:</p>
<p><img src="/images/2017-08-30/pid_simulator.png" alt="PID Controller simulator" /></p>
<p>The fun part is to see the car running. There are a few videos inside my <a href="https://github.com/darienmt/CarND-PID-Control-P4">solution repo</a>. The videos are on the … <a href="https://github.com/darienmt/CarND-PID-Control-P4/tree/master/videos"><code class="highlighter-rouge">/videos</code></a> path. It is interesting to see how the proportional, integral and derivative parts of the PID influence the car’s behavior.</p>
<h3 id="project-5---model-prediction-control">Project 5 - Model Prediction Control</h3>
<p>In this project, we need to apply the car’s kinematic model to predict the car’s trajectory and, based on a set of waypoints sent to us by the simulator, apply Model Predictive Control to drive the car along the optimal trajectory. Model Predictive Control converts the control problem into an optimization problem with a set of objective functions, not just one. To solve this non-linear optimization problem, the <a href="https://projects.coin-or.org/Ipopt">Ipopt</a> package is suggested. Here is a view of the simulator for this project:</p>
<p><img src="/images/2017-08-30/mpc_simulator.png" alt="MPC simulator" /></p>
<p>As in the PID project, the fun part is to see the car running. The final video of the car on the simulator is in my <a href="https://github.com/darienmt/CarND-MPC-Project-P5">solution repo</a> in the <code class="highlighter-rouge">/videos</code> directory. This is a small animated gif of that video:</p>
<p><img src="/images/2017-08-30/mpc_video.gif" alt="MPC driving car" /></p>
<h1 id="conclusions">Conclusions</h1>
<p>It was a great term! I enjoyed every second of it. I am looking forward to the final term, where everything will come together and the code will be driving Carla (Udacity’s real, physical car). I can’t wait!</p>
<h1>Functions - FaaS</h1>
<p>It was a sunny morning last Friday in Toronto. Instead of enjoying the sun doing some outdoor activities, I chose to join a group of people to talk about functions as a service at the Functions17 event:</p>
<p><img src="/images/2017-08-28/logo.png" alt="Functions17 event" /></p>
<p>Back in 2014, I noticed a new offering from AWS: <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. At the time, I was playing with <a href="https://www.docker.com/">Docker</a>. The idea of not having a VM but just the important parts I need to run my application was very appealing to me, but what if I didn’t even need a container? In the end, what I need is something to run my code in response to an event, which could be an HTTP request or something else, like a message on a queue. This code will have some side effects for sure; so, we are not talking about “pure” functions here. We are talking about how to organize your application with enough flexibility to scale at a function level.
Time went by, and this serverless / Function-as-a-Service (FaaS) space started growing. I had the pleasure to learn more about how things are moving in the direction of having less infrastructure. There is a fundamental concept, and a common misunderstanding, regarding Function-as-a-Service: it is not serverless. The servers will continue to be there, but we are going to use another level of abstraction on top of them.</p>
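<p>The unit of deployment in this model really is just a function. With AWS Lambda in Python, for instance, the whole “application” can be as small as a handler that receives the event and returns a response (a toy example of mine, not something from the talks):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import json

def handler(event, context):
    # 'event' carries the trigger payload: an HTTP request, a queue message, etc.
    name = event.get('name', 'world')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello, ' + name + '!'})
    }
</code></pre></div></div>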
<p>In the beginning, we had a physical server, and we needed to deal with everything there. Even when developers sometimes didn’t need to work directly with those servers, there were/are teams dedicated just to maintaining them. Things don’t run by themselves (or they do, until they break). The second step was virtualization. We continue to have a physical server on-premise or at a cloud provider, but we don’t work directly with the hardware. There is an abstraction layer, the hypervisor, but we continue to handle the machines more or less as physical servers. The third step was/is containers. We can optimize the hardware by using just the amount of OS we need to run our app. There is more density to gain with this approach and fewer old headaches (and new problems, as usual). People realized a single container is not your system and the interaction between all these containers can be tricky. You still need to manage them. Orchestration needs to happen, and platforms like <a href="https://kubernetes.io/">Kubernetes</a> and <a href="https://docs.docker.com/engine/swarm/">Docker Swarm</a> emerged. Nevertheless, it is an improvement over a bare-bones server. The fourth step could be Function-as-a-Service. With this approach, developers can focus even more on the task at hand. It can be broken down into smaller tasks that are deployed independently and scale on their own. If this sounds interesting to you, here is the link to the conference playlist <a href="https://www.youtube.com/watch?v=0MaAnQGj5u8&amp;list=PLNoTOsTRYfvjgYXgrqHwu7w7kUVC4s4tu">#Functions17</a>.</p>
<p>We had seven live presentations. Here are a few comments on each of them:</p>
<ul>
<li>
<p><a href="https://www.youtube.com/watch?v=0MaAnQGj5u8">FaaS tooling - Where we’re at &amp; signs of change</a> by <a href="https://www.linkedin.com/in/nodebotanist/">Kassandra Perch</a>. She presents an evolution on the tooling around FaaS with <a href="https://www.iopipe.com/">IO Pipe</a>, an application performance monitoring for <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>, highlighting some of the challenges found when you have such a loosely coupled architecture.</p>
</li>
<li>
<p><a href="https://www.youtube.com/watch?v=wpwqkuyAPFY">Developer Velocity - Introduction to Function-First Development</a> by <a href="https://www.linkedin.com/in/keith-horwood-92b76062/">Keith Horwood</a>. This presentation put a lot of emphasis on productivity and how it is possible to iterate faster using a FaaS approach featuring <a href="https://stdlib.com/">StdLib</a>, a FaaS library on <a href="https://nodejs.org/en/">Node.js</a>.</p>
</li>
<li>
<p><a href="https://www.youtube.com/watch?v=OmhNwSz_V00">Building Serverless Applications on Azure</a> by <a href="https://www.linkedin.com/in/joeraio/">Joe Raio</a>. Presenting <a href="https://azure.microsoft.com/en-ca/services/functions/">Microsoft Azure Functions</a> in a clear and concise matter, Joe’s presentation shown how Microsoft had changed in the last few years to be able to adapt to new technology trends like FaaS. It was apparent the tooling and integration there will lead to a good productivity as long as you stay within Azure at least.</p>
</li>
<li>
<p><a href="https://www.youtube.com/watch?v=19SCqWGqtto">Twelve Factor Serverless Applications</a> by <a href="https://www.linkedin.com/in/chrismunns/">Chris Munns</a>. Chris’s presentation explains how the [Twelve-Factor App] principles are mapped to a FaaS implementation. It was amazing to see how well they are aligned but not too surprising as these are emerging ideas to overcome the problems we all have when those principles are not applied.</p>
</li>
<li>
<p><a href="https://www.youtube.com/watch?v=HwqJC0U0gD0">Spring Cloud Function &amp; Infrastructure models</a> by <a href="https://www.linkedin.com/in/adibsaikali/">Adib Saikali</a> and <a href="https://www.linkedin.com/in/stuart-charlton-b6a5a2/">Stuart Charlton</a>. <a href="https://pivotal.io/">Pivotal</a> and <a href="http://projects.spring.io/spring-cloud/">Spring Cloud</a> could not be out of ideas in this field. This presentation proposes different models of virtualization and FaaS as one of them. This time with support to multiple cloud providers and Java as the language of choice.</p>
</li>
<li>
<p><a href="https://www.youtube.com/watch?v=1SQ5KUQEZVA">Building serverless applications with Apache OpenWhisk</a> by <a href="https://www.linkedin.com/in/krook/">Daniel Krook</a>. <a href="https://www.ibm.com/cloud-computing/bluemix/">IBM Bluemix</a> is a cloud provider some people would oversee, but in this presentation, it is evident they are moving fast in the cloud arena. <a href="https://openwhisk.incubator.apache.org/">Apache OpenWhisk</a> is an interesting open source FaaS cloud platform where the event driven approach is your programming model.</p>
</li>
<li>
<p><a href="https://www.youtube.com/watch?v=vM8M0ikfXRY">Serverless Peanut Butter and Jelly - GCP and Firebase</a> by <a href="https://www.linkedin.com/in/dineshsandeep/">Sandeep Dinesh</a>. Even when <a href="https://cloud.google.com/functions/">Google’s Cloud Functions</a> is still in beta, this could be a beta as Gmail was in beta for years but working without problems. It was an excellent presentation on how that platform is working right now in combination with other parts of Google Cloud. In this case: <a href="https://firebase.google.com/">Firebase</a> and <a href="https://cloud.google.com/dlp/">Data Lost Prevention API</a>.</p>
</li>
</ul>
<p>I enjoyed this conference. It was very well organized and I appreciate the time and effort dedicated to it by its organizers:</p>
<ul>
<li><a href="https://techmasters.chat/">Tech Masters</a></li>
<li><a href="https://www.meetup.com/full-stack-to/">Full Stack Toronto Meetup</a></li>
<li><a href="http://www.devto.ca/">DevTO</a></li>
<li><a href="https://www.meetup.com/torontojs/">Toronto JavaScript</a></li>
<li><a href="https://www.meetup.com/FunctionalTO-meetup/">FunctionalTO</a></li>
</ul>
<p>I hope they continue to put together conferences like this one. This is just the beginning of FaaS. It is a new approach to what we do every day. Hopefully, the challenges of today will be part of the past soon.</p>
<p>Happy Journey!</p>
<h1>TensorFlow with GPU on your Mac</h1>
<p>As part of the <a href="https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013">Udacity’s Self-Driving Car Nanodegree</a>, I had the opportunity to try a GPU-powered server for the <a href="https://github.com/darienmt/CarND-TrafficSignClassifier-P2">Traffic Sign Classifier</a> and the <a href="https://github.com/darienmt/CarND-Behavioral-Cloning-P3">Behavioral Cloning</a> projects in Term 1. It was not a painful experience (as I was expecting) to use this hardware, because Udacity provided an AMI with the necessary software already installed, and I didn’t need to install anything else. The only problem I encountered was updating the NVIDIA driver, and that was easily done. During that process, I read a bit about <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPUs</a>, <a href="https://en.wikipedia.org/wiki/CUDA">CUDA</a> and <a href="https://developer.nvidia.com/cudnn">cuDNN</a>. It was awesome to see this development and the application of these platforms to Deep Learning. My Mac has an NVIDIA video card; so, I was up for local adventures too!</p>
<p><img src="/images/2017-06-08/CUDA_Preferences.png" alt="CUDA Preference" /></p>
<p>To use GPU-powered TensorFlow on your Mac, <a href="https://www.tensorflow.org/install/install_mac">there are multiple system requirements and libraries to install</a>. Here is a summary of those system requirements and steps:</p>
<ul>
<li><a href="https://developer.nvidia.com/cuda-toolkit">CUDA Toolkit 8.0</a></li>
<li>NVIDIA driver associated with CUDA Toolkit 8.0</li>
<li><a href="https://developer.nvidia.com/cudnn">cuDNN v5.1</a></li>
<li>GPU card with CUDA Compute Capability 3.0 or higher. (The <a href="http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capability">Compute Capability version</a> identifies the <a href="http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities">features</a> supported by the GPU.)</li>
</ul>
<p>When all of that is installed and checked, TensorFlow with GPU support can be installed. I don’t know about you, but this is a long list to me. Nevertheless, I saw great performance improvements by using GPUs in my experiments, so it is worth setting up locally if you already have the hardware. This article describes the process of setting up <a href="https://en.wikipedia.org/wiki/CUDA">CUDA</a> and <a href="https://www.tensorflow.org/">TensorFlow</a> with GPU support in a <a href="https://conda.io/">Conda</a> environment. It doesn’t mean this is the only way to do it; I just want to let it rest somewhere I can find it if I need it in the future, and also share it to help anybody else with the same objective. And the journey begins!</p>
<h1 id="check-you-have-a-cuda-gpu-card-with-cuda-compute-capability-30-or-higher">Check you have a CUDA GPU card with CUDA Compute Capability 3.0 or higher.</h1>
<p>First, you need to know your video card. Go to “About This Mac,” and get the card model from there:</p>
<p><img src="/images/2017-06-08/About_This_Mac.png" alt="About This Mac" /></p>
<p>In my case, it is an NVIDIA GeForce GT 750M. Then you need to see if the card is supported by CUDA by finding your card <a href="https://developer.nvidia.com/cuda-gpus">here</a>:</p>
<p><img src="/images/2017-06-08/card_supported.png" alt="Card supported" /></p>
<p>Now that you have hardware support confirmed, let us move forward and install the driver.</p>
<h1 id="install-the-cuda-driver">Install the CUDA Driver.</h1>
<p>There are options to install the driver when you install the <a href="https://developer.nvidia.com/cuda-toolkit">CUDA Toolkit 8.0</a>, but I preferred to install the driver first, to make sure I have the latest version. Go to <a href="http://www.nvidia.com/object/mac-driver-archive.html">this URL</a> and download the latest version. At this time, it is 8.0.83:</p>
<p><img src="/images/2017-06-08/cuda_driver.png" alt="CUDA driver" /></p>
<h1 id="install-cuda-toolkit-80">Install CUDA Toolkit 8.0</h1>
<p>You can find the installation steps for Mac OS X <a href="http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x">here</a>. There are some system requirements:</p>
<ul>
<li>a CUDA-capable GPU (you made sure you have it in the previous section).</li>
<li>Mac OS X 10.11 or later (In my case, I have v10.12.5)</li>
<li>the Clang compiler and toolchain installed using Xcode.</li>
<li>the NVIDIA CUDA Toolkit.</li>
</ul>
<p>The first two requirements are met at this point; let’s get to the last two.</p>
<h2 id="install-xcode-and-native-command-line-tools">Install Xcode and native command line tools</h2>
<p>I didn’t have to install Xcode because I already had it installed, but <a href="https://www.moncefbelyamani.com/how-to-install-xcode-homebrew-git-rvm-ruby-on-mac/">here</a> is a tutorial on how to do it. The tutorial also covers the installation of the command-line tools. In my case, I just needed to install them with <code class="highlighter-rouge">xcode-select --install</code>. It is always good to verify the installation with <code class="highlighter-rouge">/usr/bin/cc --version</code>. You should see something similar to this:</p>
<script src="https://gist.github.com/2120a7aeddc1cf6f33032a8da2ee9653.js"> </script>
<h2 id="download-cuda-toolkit-install">Download CUDA Toolkit install</h2>
<p>Go to <a href="https://developer.nvidia.com/cuda-downloads">this URL</a> to download the toolkit for the appropriate OS, architecture, and version:</p>
<p><img src="/images/2017-06-08/cuda_download.png" alt="CUDA Toolkit download" /></p>
<p>Optionally, verify the download was correct with md5 checksum: <code class="highlighter-rouge">openssl md5 &lt;THE_FILE_YOU_DOWNLOAD&gt;</code>.</p>
<p>Double-click the file, and follow the installation wizard. On the package selection, uncheck the CUDA Drivers because they were installed before. When the installation is finished, add the following to your .bash_profile:</p>
<script src="https://gist.github.com/127858ed4ca9c585dec7c5679bf8afc2.js"> </script>
<p>It is always good to verify the driver is running:</p>
<ul>
<li>Open a new Terminal.</li>
<li>Check the driver is correctly installed by checking the CUDA kernel extension (/System/Library/Extensions/CUDA.kext) with the command: <code class="highlighter-rouge">kextstat | grep -i cuda</code>.</li>
</ul>
<p>You should see something similar to this:</p>
<script src="https://gist.github.com/1a04a9fce3bf78c69568e0d5de08ffc0.js"> </script>
<h2 id="compile-samples">Compile samples</h2>
<p>Now everything CUDA-related should be installed correctly, but we can have some fun compiling and running the CUDA samples to further verify that everything is indeed installed properly:</p>
<ul>
<li>Open a new Terminal.</li>
<li>Move to where the samples are: <code class="highlighter-rouge">cd /Developer/NVIDIA/CUDA-8.0/samples/</code></li>
<li>Try to make one of them: <code class="highlighter-rouge">make -C 0_Simple/vectorAdd</code></li>
</ul>
<p>And the following error happens!!!</p>
<script src="https://gist.github.com/35f6933d5d2e2591f81b3507abf16eda.js"> </script>
<p>After googling it, I found this is an issue described <a href="https://github.com/arrayfire/arrayfire/issues/1384">here</a>. Following the steps suggested by mlloreda, downgrading to CLT 8.2 should work:</p>
<ul>
<li>Log in to https://developer.apple.com/downloads/ (the version here is always the latest; we want a previous version)</li>
<li>Go to https://developer.apple.com/download/more/ and find “Command Line Tools (macOS 10.12) for Xcode 8.2”</li>
<li>Install the CLT</li>
<li>Run <code class="highlighter-rouge">sudo xcode-select --switch /Library/Developer/CommandLineTools</code></li>
<li>Verify that clang has been downgraded via <code class="highlighter-rouge">clang --version</code></li>
</ul>
<p>With all that done, the ‘80100’ error is gone, but a new error arrived:</p>
<script src="https://gist.github.com/7972f09a5c619607c2685efec32775ad.js"> </script>
<p>It turns out that there is no write permission where the samples live. You need to make a writable copy of the samples:</p>
<ul>
<li><code class="highlighter-rouge">cd /Developer/NVIDIA/CUDA-8.0/bin</code></li>
<li><code class="highlighter-rouge">sh ./cuda-install-samples-8.0.sh ~/Documents/Projects/</code> (this is a script that is part of the installation just to do the sample copy. I was not the first person with this problem I guess.)</li>
<li><code class="highlighter-rouge">cd ~/Documents/Projects/NVIDIA_CUDA-8.0_Samples</code></li>
<li><code class="highlighter-rouge">make -C 0_Simple/vectorAdd</code></li>
</ul>
<p>And this time, it works!!!</p>
<script src="https://gist.github.com/abc5c254f366f212c39b642dca1fd9a2.js"> </script>
<p>While we are at it, why not compile a few more:</p>
<ul>
<li><code class="highlighter-rouge">make -C 0_Simple/vectorAddDrv</code></li>
<li><code class="highlighter-rouge">make -C 1_Utilities/deviceQuery</code></li>
</ul>
<p>Let’s run <code class="highlighter-rouge">deviceQuery</code> and see what happens:</p>
<script src="https://gist.github.com/16dafebdee7b8432ea96b196251aabfa.js"> </script>
<p>All good so far. Let us go and compile all the samples with <code class="highlighter-rouge">make</code>. This takes a while to finish; there are a lot of samples there. Very interesting stuff. I just ran one more: bandwidthTest.</p>
<script src="https://gist.github.com/b74fad37920fbad2838716c9b754bae9.js"> </script>
<p>It is good to play with this, but we need to keep going to get to the TensorFlow part. CUDA is done, next cuDNN.</p>
<h1 id="install-cudnn-v51">Install cuDNN v5.1</h1>
<p>To download cuDNN, you need to create a developer account <a href="https://developer.nvidia.com/cudnn">here</a>, and then proceed to the download part:</p>
<p><img src="/images/2017-06-08/cuDNN_download.png" alt="cuDNN download" /></p>
<p>I created a directory <code class="highlighter-rouge">~/cudnn</code> and untarred the downloaded files there. After that is done, add the following to your .bash_profile:</p>
<script src="https://gist.github.com/a675bdbfe2d06980ae1dc43b07ebe682.js"> </script>
<p>This was an easy step!</p>
<h1 id="creating-a-conda-environment-and-installing-tensorflow">Creating a Conda environment and installing TensorFlow</h1>
<p>Even though Anaconda is not officially supported, the installation worked quite well:</p>
<ul>
<li>Create a new environment: <code class="highlighter-rouge">conda create --name=IntroToTensorFlowGPU python=3 anaconda</code></li>
<li>Activate it: <code class="highlighter-rouge">source activate IntroToTensorFlowGPU</code></li>
<li>Install TensorFlow: <code class="highlighter-rouge">pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.1.0-py3-none-any.whl</code></li>
</ul>
<script src="https://gist.github.com/631ac91c0b82f5d146e8bf0dc720c4d3.js"> </script>
<p>Everything is set. Let’s verify the installation by running the TensorFlow code suggested in the <a href="https://www.tensorflow.org/install/install_mac">Validate your installation</a> section:</p>
<script src="https://gist.github.com/c72c516019620ea8dfaf3d5416652b05.js"> </script>
<p>Great! Everything looks like it is working. It was a long journey, but it was fun! There are a lot of things to learn and a lot of different weird messages in these scripts. It is just the beginning. There is a free Udacity course that looks good: <a href="https://www.udacity.com/course/intro-to-parallel-programming--cs344">Intro to Parallel Programming</a>. It could be interesting to see the difference between this “lower” level and other platforms based on CPUs. I certainly like chickens more than oxen (a reference to the course trailer video)!</p>
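<p>One extra check I find handy: ask TensorFlow which devices it can see from Python (this uses an internal-ish API that works on TensorFlow 1.x):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from tensorflow.python.client import device_lib

# Lists the CPU and GPU devices TensorFlow detected;
# the GPU entry should mention your card.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
</code></pre></div></div>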
<p>Enjoy!</p>
<h1>Udacity's Self-Driving Car Nanodegree - Term 1</h1>
<p>Sometime at the beginning of the year, I ran into a link to Udacity’s Self-Driving Car Nanodegree. I didn’t know too much about this subject. It looks like a mix of Computer Science, Mechanical, Electrical and Electronic Engineering, with a lot of different magical areas. The subject was interesting enough to make me apply to enter the course. It was not cheap. There are three terms at $800 USD each, but knowledge in a new field seldom comes cheap. My application was accepted, and I finished the first term last week. It was a great experience.</p>
<p>The course consists of eighteen lectures and five projects. The first term is dedicated to machine learning with an emphasis on deep learning and computer vision. Even though I completed an MSc in signal and image processing more than fifteen years ago, the course brought me back to that world and showed me how much the field has changed over the years, with libraries like <a href="http://opencv.org/">OpenCV</a>. Machine learning was new to me, and even though I studied neural networks during my Master’s degree, at that time they were a “new” subject only for the few people crazy enough to try them. I am very pleased to see how much development and research is done in the field and the appearance of high-level frameworks like <a href="https://www.tensorflow.org/">TensorFlow</a> and <a href="https://keras.io/">Keras</a>. Even though I am not a gamer, I thank all the gamers for driving the demand for more accessible GPUs, which other fields like deep learning can now use without having to spend tons of money on super expensive computers.</p>
<h2 id="lectures-and-projects">Lectures and projects</h2>
<h2 id="project-1---finding-lane-lines-on-the-road">Project 1 - Finding Lane Lines on the Road</h2>
<p>The lectures were extremely interesting and very practical. Almost as soon as you open the course, you already have an assignment. The first project is to create a pipeline able to identify lane lines in a video stream. Here is an example of the detected lane lines:</p>
<p><img src="/images/2017-05-30/LaneLineFinder.jpg" alt="Lane lines finder" /></p>
<p>In the introductory lessons, you learn how to set up a <a href="https://www.python.org/">Python</a> development environment with <a href="https://conda.io/miniconda.html">Miniconda</a> and <a href="http://jupyter.org/">Jupyter notebook</a>. I had never used Python this way, but it was easy enough to have the environment set up and running in no time. The project was a bit challenging: you need to understand a couple of image processing techniques, like the <a href="https://en.wikipedia.org/wiki/Canny_edge_detector">Canny edge detector</a> and the <a href="https://en.wikipedia.org/wiki/Hough_transform">Hough transform</a>, and you also get an introduction to <a href="http://www.numpy.org/">Numpy</a> and <a href="http://opencv.org/">OpenCV</a>. Having to digest a few things at once can be challenging and fun, and that is what this is all about: having fun! Take a look at my solution for the first project in <a href="https://github.com/darienmt/CarND-LaneLines-P1">this repo</a>.</p>
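<p>The heart of that pipeline fits in a few OpenCV calls. Here is a stripped-down sketch of my own; the real project adds a region-of-interest mask and averages/extrapolates the segments into two lane lines:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import cv2
import numpy as np

def detect_line_segments(image):
    # Grayscale + blur + Canny edges, then the probabilistic Hough transform.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                            minLineLength=20, maxLineGap=300)
    # Draw every detected segment on a copy of the original image.
    output = image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(output, (x1, y1), (x2, y2), (0, 0, 255), 3)
    return output
</code></pre></div></div>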
<h2 id="project-2---traffic-sign-classifier">Project 2 - Traffic Sign Classifier</h2>
<p>The next few lectures are an introduction to <a href="http://neuralnetworksanddeeplearning.com/">neural networks</a>, <a href="https://www.tensorflow.org/">TensorFlow</a>, <a href="https://en.wikipedia.org/wiki/Deep_learning">Deep learning</a>, and a particular type of neural network called <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">convolutional neural networks</a>. For me, this was the most interesting part of the term. In the beginning, you feel like you are going deeper into dark waters with TensorFlow: too many parameters and new concepts. In the end, you realize you learned a few things, and it is OK to swim in those waters, but you also realize how deep the water can be. <a href="https://github.com/darienmt/intro-to-tensorflow">Here</a> is my TensorFlow playground. In the notebooks there, you can find examples of TensorFlow concepts that are useful to have on hand just in case you forget about them.</p>
<p>After four lessons, you are ready for the second project. It consists of classifying traffic sign images. You receive a dataset of sample images to train and test your neural network, and then you need to find five new traffic sign images on the web and classify them. Here are some samples from that dataset:</p>
<p><img src="/images/2017-05-30/traffic_signs.png" alt="Traffic signs" /></p>
<p>In the lectures, the suggested neural network is <a href="http://yann.lecun.com/exdb/lenet/">LeNet</a>, but you can make some modifications to that architecture to improve your results. An important part of this project is realizing how sensitive the classifier is to its parameters and to the amount of data you use for training. Techniques like <a href="https://www.techopedia.com/definition/28033/data-augmentation">data augmentation</a> and image pre-processing are needed to get good results on this project.</p>
<p>Another must-have is a GPU-enabled machine. GPUs are expensive, and certainly you don’t want to buy one for this course if you don’t need to. Depending on the size of your training dataset and the neural network architecture, you could finish your project on CPUs, but it is slow. As part of the term, you receive a 50-dollar AWS credit and information on how to use a g2.xlarge EC2 instance to run your project there. There is a tutorial in the lectures on how to ask AWS for this type of machine, and there is an AMI created by Udacity to make your journey a walk in the park. With that type of hardware, you can run a nice number of training epochs. That is not necessarily a good thing, because you can overfit your model, but it is good to see how the system behaves without having to wait too long. Here is an example of the classifier accuracy per epoch:</p>
<p><img src="/images/2017-05-30/p2_training.png" alt="Traffic signs classifier accuracy per epoch" /></p>
<p>Here are the five images I found on the web and their classifications:</p>
<p><img src="/images/2017-05-30/p2_webimages.png" alt="Web image classification" /></p>
<p>The classifier made a “mistake” on the center image. That is OK for the project; it doesn’t have to be perfect. My solution for this project can be found in <a href="https://github.com/darienmt/CarND-TrafficSignClassifier-P2">this repo</a>.</p>
<h2 id="project-3---behavioral-cloning">Project 3 - Behavioral Cloning</h2>
<p>After having so much fun with TensorFlow, the lectures move us a bit away from it to enter the <a href="https://keras.io/">Keras</a> world. Keras provides a higher level of abstraction for building deep neural networks. With a few lines of code, you can define your network. Here is LeNet with Keras:</p>
<script src="https://gist.github.com/c7c6e90d9236b0cdb014542edc84a376.js"> </script>
<p>This time, the project is not related to image processing… not really. Udacity provides a car simulator, more or less like a racing game. In the game, the car has three cameras. You need to drive the car and record the images from those cameras and, together with a dataset available in the lectures, train your neural network to drive the car. It is a cool project, and all the plumbing is done for you. You need to create the model and save it to a file, and there are a couple of other files you use to make that model available to the game. In this case, the lectures introduce another neural network architecture, created by the <a href="https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/">NVIDIA autonomous driving group</a>. Here is a visual representation of what that network looks like:</p>
<p><img src="/images/2017-05-30/p3_nVidia.png" alt="NVIDIA model" /></p>
<p>This type of end-to-end experience is rewarding because you can see your car driving, and it is like cheering for your team: Go, go, go… no!!! Not the lake again!!! And yes, the car goes into the lake at the beginning, but no fear: that just means you have to train your network better.</p>
<p>For this project, you need as many training images as you can get. I spent some time driving the car and getting used to the simulator (as I am not a gamer, it was challenging for me). There are two different tracks: the first one is the one the project is evaluated on, and the second one is more challenging, but it can also be used. All these training images need to be <code class="highlighter-rouge">scp</code>-ed to your GPU-enabled EC2 instance because training on a CPU is extremely slow. I tried to train locally, and the estimated time for a single epoch was more than 1000 seconds. On AWS, the same epoch finished in 170 seconds. I decided to try a different approach to fight overfitting this time: instead of modifying the NVIDIA team’s network architecture with <a href="https://en.wikipedia.org/wiki/Dropout_(neural_networks)">dropout regularization</a> or other techniques, I kept my training epochs really small, only three of them. With that approach, and data from both tracks driven in both directions using the car’s three cameras, the model was able to drive the car all the way around both tracks without problems. Here is the mean squared error loss graph for the training:</p>
<p><img src="/images/2017-05-30/p3_error_loss.png" alt="NVIDIA model mean squared error loss graph" /></p>
<p>My solution for this project can be found in <a href="https://github.com/darienmt/CarND-Behavioral-Cloning-P3">this repo</a>. If you want to see the car driving, you can take a look at these two videos: <a href="https://github.com/darienmt/CarND-Behavioral-Cloning-P3/blob/master/video.mp4">first track</a> and <a href="https://github.com/darienmt/CarND-Behavioral-Cloning-P3/blob/master/video_second_track.mp4">second track</a>.</p>
<h2 id="project-4---advanced-lane-lines-finder">Project 4 - Advanced Lane Lines Finder</h2>
<p>After all those neural networks, it is time to take a machine learning break. Without any intermediate lectures, you go back to the first project’s idea of finding the lane lines in a video stream. This time, we learn more about computer vision. The project consists of improving the first lane line finder so it can handle curved roads. There is a lot of work to be done here:</p>
<ul>
<li>How to calibrate your camera to eliminate the distortion introduced by the lens and other parts (a minimal sketch follows the image).</li>
</ul>
<p><img src="/images/2017-05-30/p4_camera_calibration.png" alt="Camera calibration" /></p>
<ul>
<li>How to do a <a href="https://en.wikipedia.org/wiki/3D_projection">perspective transformation</a> to see the road from above and be able to fit a polynomial representation of the lane lines (see the sketch after the image).</li>
</ul>
<p><img src="/images/2017-05-30/p4_perspective.png" alt="Perspective transformation" /></p>
<ul>
<li>Understand better which transformations are inside the <a href="https://en.wikipedia.org/wiki/Canny_edge_detector">Canny edge detector</a>, so those derivatives can be used in a better way to find the lane lines. The <a href="https://en.wikipedia.org/wiki/Sobel_operator">Sobel</a> gradient is introduced (a sketch follows the image).</li>
</ul>
<p><img src="/images/2017-05-30/p4_gradients.png" alt="Gradients" /></p>
<ul>
<li>Different <a href="https://en.wikipedia.org/wiki/Color_space">color spaces</a> are introduced to find a better way to extract the lane lines (see the sketch after the image).</li>
</ul>
<p><img src="/images/2017-05-30/p4_color_space.png" alt="Color space" /></p>
<ul>
<li>A histogram approach is introduced to find the points belonging to each lane line, so a polynomial can be fitted to each line (a sketch follows the image).</li>
</ul>
<p><img src="/images/2017-05-30/p4_histogram.png" alt="Histogram" /></p>
<ul>
<li>Everything is put together to project the lane area onto a single image at first, but later the same technique is applied frame by frame to a video stream. In this case, we have more information, as we have the history of previous frames, and that information is used to improve the line finding.</li>
</ul>
<p><img src="/images/2017-05-30/p4_lanes.png" alt="Lanes" /></p>
<p>For me, this project was the most challenging. Even though the individual operations were not that complex, there were many of them. My solution for this project can be found in <a href="https://github.com/darienmt/CarND-Advanced-Lane-Lines-P4">this repo</a>. The pipeline applied to the project video is <a href="https://github.com/darienmt/CarND-Advanced-Lane-Lines-P4/blob/master/video_output/project_video.mp4">here</a>.</p>
<h2 id="project-5---vehicle-detection">Project 5 - Vehicle Detection</h2>
<p>We go back to machine learning at this point, with a more classic approach. Deep learning is a buzzword these days, but there are other, more classical techniques that were used before it became popular. There are three lectures on machine learning in the term:</p>
<ul>
<li>
<p>The first one is an introduction to machine learning, where <a href="https://en.wikipedia.org/wiki/Scatter_plot">scatter plots</a>, <a href="https://en.wikipedia.org/wiki/Decision_boundary">decision boundaries</a>, and the <a href="https://en.wikipedia.org/wiki/Naive_Bayes_classifier">Naive Bayes and Gaussian Naive Bayes classifiers</a> are introduced.</p>
</li>
<li>
<p>The second lecture introduces <a href="https://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machine</a> supervised learning models.</p>
</li>
<li>
<p>The third lesson introduces <a href="https://en.wikipedia.org/wiki/Decision_tree_learning">Decision tree</a> classifiers.</p>
</li>
</ul>
<p>It is great to see these techniques with practical examples using <a href="http://scikit-learn.org/stable/">Scikit-learn</a>.</p>
<p>These lectures lead to the last project. The objective is to create a pipeline able to detect cars in a video stream. It can use any classifier, but the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html">Linear Support Vector Machine</a> was suggested in the lectures, and it is good to try new things in addition to neural networks. The classifier is fitted on a dataset provided by Udacity. For each image, you need to extract features to train the classifier on. There are different features to use; I used spatial binning, color histograms, and <a href="https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients">Histogram of Oriented Gradients</a> features. The techniques to do that are explained in the lectures with enough examples to be understandable.</p>
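<p>Here is a sketch of the HOG-plus-linear-SVM part using scikit-image and scikit-learn. The HOG parameters and the assumption of 64x64 grayscale patches are illustrative choices, and the spatial and color-histogram features are left out to keep it short.</p>
<pre><code class="language-python">
import numpy as np
from skimage.feature import hog
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def hog_features(gray_patch):
    """HOG descriptor of a single 64x64 grayscale patch."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_classifier(car_patches, notcar_patches):
    """Fit a scaler and a linear SVM on HOG features of car / not-car patch lists."""
    features = np.array([hog_features(p) for p in car_patches + notcar_patches])
    labels = np.hstack([np.ones(len(car_patches)), np.zeros(len(notcar_patches))])
    scaler = StandardScaler().fit(features)
    classifier = LinearSVC()
    classifier.fit(scaler.transform(features), labels)
    return scaler, classifier
</code></pre>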
<p>After the classifier is trained, you need to apply it to small portions of the image to see if a car is in that portion, or window. Then you draw a box on that window to show that a car is there:</p>
<p><img src="/images/2017-05-30/p5_car_on_box.png" alt="Car on box" /></p>
<p>There are multiple boxes over the car. A technique using heat maps is recommended to combine them and to eliminate false positives by applying a threshold (see the sketch after the image).</p>
<p><img src="/images/2017-05-30/p5_car_heatmap.png" alt="Car on box" /></p>
<p>Even with that technique, some false positives are found in the video stream. Another technique, averaging heat maps over consecutive frames, was suggested in the lectures, with even better results. My solution for this project can be found in <a href="https://github.com/darienmt/CarND-Vehicle-Detection-P5">this repo</a>. The pipeline applied to the project video is <a href="https://github.com/darienmt/CarND-Vehicle-Detection-P5/blob/master/video_output/project_video.mp4">here</a>.</p>
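<p>For completeness, here is a sketch of that frame-averaging idea, keeping the heat maps of the last few frames and thresholding their sum; the history length and threshold are illustrative.</p>
<pre><code class="language-python">
from collections import deque

import numpy as np

class HeatHistory:
    """Keep the heat maps of the last few frames and threshold their combined sum."""
    def __init__(self, frames=8, threshold=12):
        self.history = deque(maxlen=frames)
        self.threshold = threshold

    def update(self, heat):
        self.history.append(heat)
        combined = np.sum(list(self.history), axis=0)
        combined[combined &lt;= self.threshold] = 0
        return combined
</code></pre>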
<h1 id="conclusions">Conclusions</h1>
<p>This course was a lot of fun. Multiple new techniques were explained and understood… to some extent, at least for me. This is just the tip of the iceberg in this field. It was a great experience, and I am looking forward to the next term, starting next week. It was a lot of work (more than the 10 hours per week forecast by Udacity), but it was worth every cent.</p>
Tue, 30 May 2017 20:21:10 +0000http://darienmt.com/self-driving-cars/2017/05/30/self-driving-car-nanodegree-term-1.html
http://darienmt.com/self-driving-cars/2017/05/30/self-driving-car-nanodegree-term-1.htmlself-drivingcars,AI,machinelearning,neuralnetworks,deeplearning,computervision,cv2,tensorflow,kerasself-driving-carsLocal Docker with CentOS 7<p>We all like computers, and sometimes we have computers at home we don’t use. A good way to reuse those machines is to install Docker on them and use it to run some personal project we might have. The instructions to set this up are available on the Internet, but they are spread across different URLs, and it is complicated to go back and forth to get the right script to run or the correct command to execute. In this blog post, I will describe how I was able to reuse one of these not-so-used machines as a Docker host. I will try to summarize here the following:</p>
<ul>
<li>Procedure to install Docker in CentOS.</li>
<li>Configure the Docker daemon to enable the Docker Remote API, so it can be accessed from outside, in an insecure but easy way.</li>
</ul>
<p>I am assuming you are using a Mac as your local station and that you have installed Docker locally <a href="https://docs.docker.com/engine/installation/mac/">already</a>. It is possible to do it from a <a href="https://docs.docker.com/engine/installation/windows/">Windows machine</a> as well, but I haven’t tried that yet.</p>
<h1 id="installing-centos-7">Installing CentOS 7</h1>
<p>There are multiple ways to install CentOS, but I downloaded the DVD from <a href="https://www.centos.org/download/">here</a>. In particular, I used the image <a href="http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso">CentOS-7-x86_64-DVD-1511.iso</a>. During the installation, I chose “Minimal” to save some space; one thing to remember is to enable the network and write down the IP of the server. Write down the root password too (at home, we don’t need to create another user, despite the security problems we might have by using root all the time; so, we will be using root to access the server).</p>
<p>After the installation is completed, you can use the IP and the root password to access the server with ssh, and the journey begins:</p>
<script src="https://gist.github.com/7400497caae9a85d823a8db151da0e83.js"> </script>
<p>(the server should be accessible with ssh without any changes)</p>
<h1 id="installing-docker">Installing Docker</h1>
<p>Now, we start following instructions on different pages. The first one is how to install Docker on CentOS: <a href="https://docs.docker.com/engine/installation/linux/centos/">https://docs.docker.com/engine/installation/linux/centos/</a>. Here is the command summary:</p>
<script src="https://gist.github.com/6c7d2efc3942ebf6f96d3556290e6b57.js"> </script>
<p>With that, we update all our packages and add the Docker Yum repo to the server. Next, we install Docker:</p>
<script src="https://gist.github.com/a22ff98f726be326a1f9bbf7f0006de5.js"> </script>
<p>With Docker installed and the Docker daemon running, we are ready to test that it is working:</p>
<script src="https://gist.github.com/21f63a2298daf93d6059337e3f585628.js"> </script>
<p>Great! We have Docker installed and running, but we cannot access it from our local station yet.</p>
<h1 id="configuring-docker-daemon">Configuring Docker daemon</h1>
<p>By default, the Docker daemon listens on <a href="https://docs.docker.com/engine/reference/api/docker_remote_api/">unix:///var/run/docker.sock</a>; we need to configure it to listen on a TCP port we can access from outside the server. We need to follow the instructions on another page: <a href="https://docs.docker.com/engine/admin/#configuring-docker-1">https://docs.docker.com/engine/admin/#configuring-docker-1</a></p>
<p>Here is the command summary:</p>
<script src="https://gist.github.com/87f1bc86ba85cc5094fe40f086e6ef66.js"> </script>
<p>Add the following to that file:</p>
<script src="https://gist.github.com/1d0ffec383911c3bd7e1800be36a28ff.js"> </script>
<p>Now, we need to restart the daemon:</p>
<script src="https://gist.github.com/9d329e3836a3d7ef9db9d6242f00a794.js"> </script>
<p>(If there are any errors, you can see them in detail with <code class="highlighter-rouge">journalctl -u docker | tail -100</code>.)</p>
<p>What we did here is certainly not good for any place other than our local network at home. There is no security in place: everybody on the network will have root access to our server, but it is just to play with whales at home, right? In other environments, we need to secure this as recommended <a href="https://docs.docker.com/engine/admin/#configuring-docker-1">here</a>.</p>
<p>So far, if we try to access the Docker Remote API, we cannot, because the local firewall is not allowing connections to port 2376, where we configured Docker to listen. We need another page to fix that: <a href="http://ask.xmodulo.com/open-port-firewall-centos-rhel.html">http://ask.xmodulo.com/open-port-firewall-centos-rhel.html</a>
Here is the command summary:</p>
<script src="https://gist.github.com/8fda7ff927d409712671d0de046d198e.js"> </script>
<h1 id="accessing-docker-from-local-station">Accessing Docker from local station</h1>
<p>Finally, everything is set! On our local station, we need to tell the Docker client where the host is:</p>
<script src="https://gist.github.com/942b5e4e6b9ef7622f0f58c7589b7780.js"> </script>
<p>You should see CentOS Linux 7 as the operating system in the “docker info” output.</p>
<p>It was a long journey to get here, but I was able to convert a not-much-used machine into a Docker host and start deploying to it. I hope this will be useful to somebody in the same situation.</p>
<p>Enjoy!</p>
Sat, 17 Dec 2016 20:21:10 +0000http://darienmt.com/docker/2016/12/17/local-docker-with-centos-7.html
http://darienmt.com/docker/2016/12/17/local-docker-with-centos-7.htmldocker,centosdockerAdvent of Code - 2016: Day 1<h1 id="first-puzzle">First puzzle</h1>
<p>December 2016 is here, and we need to start helping Santa again. We are in a city, and we need to follow directions to get to a point. The directions are in the form “R|L#”. For example, R2 means turn Right and walk two blocks; L10 means turn Left and walk ten blocks. We are walking on the nodes of a mesh, and the solution to the puzzle is the distance from the starting point (0, 0) to the last node we reach after following all the instructions. As you might expect, the instructions are the puzzle’s input. The instructions also specify that we start facing North, to have a starting direction for the Rights and Lefts.</p>
<p>Here is the minimal input parsing:</p>
<script src="https://gist.github.com/da4e7e8abc1c9ccbb18d7b8e0bfaf470.js"> </script>
<p>The current position on the mesh can be represented as a tuple with the x, y coordinates of the node and the direction we are facing on each axis. I chose the following encoding:</p>
<ul>
<li>(_, _, 0, 1) =&gt; North</li>
<li>(_, _, 0, -1) =&gt; South</li>
<li>(_, _, 1, 0) =&gt; East</li>
<li>(_, _, -1, 0) =&gt; West</li>
</ul>
<p>To calculate the next node position, we need to pattern match on the current direction (North, South, East, West), and the movement direction (R or L):</p>
<script src="https://gist.github.com/11a26ee41600caaeed1c83ab821aec39.js"> </script>
<p>To calculate the final position, we fold over the directions starting from (0,0,0,1), and then calculate the distance to the origin. In this case, we use <a href="https://en.wikipedia.org/wiki/Taxicab_geometry">Taxicab geometry</a>:</p>
<script src="https://gist.github.com/2b474783134ef02178acee0070248634.js"> </script>
<h1 id="second-puzzle">Second puzzle</h1>
<p>The second puzzle of the day is to find the first node visited twice. This changes the approach, because you don’t need to calculate the final position anymore. A recursive solution is preferred, instead of folding over the directions. The only tricky part is that “all” the nodes need to be calculated and analyzed, not just the nodes where the movement for each instruction ends. For example, if we are at (0,0,0,1) and the instruction is “R2”, the nodes on the path from (0,0) to (0,2) need to be analyzed:</p>
<script src="https://gist.github.com/60e83633ef80197df58963ec8f9b8563.js"> </script>
<p>You can find this code, along with my input and puzzle answers, <a href="https://github.com/darienmt/advent-of-code-2016/blob/master/src/main/scala/Day01.sc">here</a>.</p>
Sat, 03 Dec 2016 20:21:10 +0000http://darienmt.com/advent-of-code-2016/scala/2016/12/03/advent-of-code-day-01.html
http://darienmt.com/advent-of-code-2016/scala/2016/12/03/advent-of-code-day-01.htmladvent-of-code-2016scalaAirplane adventures<h1 id="introduction">Introduction</h1>
<p>From time to time, family and relatives fly over on vacation to come and visit us. Most of the time, I used to check their flight status on the airport website, and I was happy enough to see their arrival time. One day, my wife showed me a place where I could see the plane’s trajectory and a lot of details regarding a flight: <a href="http://flightaware.com/">FlightAware.com</a>. You can go there with the flight number, and it shows a lot of details:
<img src="/images/2016-11-04/FlightAwareWebSite.png" alt="FlightAwareWebSite" /></p>
<p>This company aggregates a lot of information regarding flights around the world into a friendly user interface, but what was most interesting to me is that you can build hardware to feed data to them. They have a lot of these “feeders,” and they aggregate their information as well. The information on how to do that is <a href="http://flightaware.com/adsb/piaware/build">here</a>.</p>
<p>My university background in Telecommunications Engineering has always drawn me to electronics, and especially to communications and radio-wave propagation. As a Software Engineer, I don’t get to touch physical things anymore. One day, I saw a deal on a Software Defined Radio (SDR) USB stick (<a href="https://www.amazon.ca/gp/product/B00PAGS0HO">this one</a>), and that was the beginning of this journey to create a FlightAware feeder with a Raspberry Pi I was not using.</p>
<h1 id="first-iteration---the-discovery">First Iteration - The discovery</h1>
<p>The SDR USB stick was an interesting way to receive radio transmissions. Using simple software like <a href="http://www.rtl-sdr.com/rtl-sdr-quick-start-guide/">SDR#</a>, I was able to receive FM stations, scan the radio spectrum, and see beautiful waterfall and frequency graphics. It was cool, but I was aiming to see what could be done with the Raspberry Pi and the FlightAware software. Following the <a href="http://flightaware.com/adsb/piaware/install">instructions</a>, I was able to get it running and start receiving airplane positions near my area. The software they provide has two parts: PiAware (used to send information to their systems) and <a href="https://github.com/antirez/dump1090">dump1090</a>. I used FlightAware’s dump1090 version to have fewer moving pieces. As part of this setup, there is a web application you can access on the Raspberry Pi that shows the airplanes around your area:
<img src="/images/2016-11-04/FlightAware_dump1090.png" alt="Local dump1090" />
To set up the feeder, you need a free FlightAware account, and they upgrade you to an “Enterprise” account that has some <a href="http://flightaware.com/commercial/premium/">benefits</a>. They provide a page for your feeder as well (<a href="http://flightaware.com/adsb/stats/user/darienmt">here is mine</a>).</p>
<h1 id="second-iteration---going-a-bit-further">Second Iteration - Going a bit further</h1>
<p>After playing with my simple setup for a while, I got even more interested as I read a few <a href="https://discussions.flightaware.com/ads-b-flight-tracking-f21/">forum discussions</a>, and I decided to buy a better antenna tuned to 1090 MHz and a better coaxial cable, and to put everything outside. Here is the list of items:</p>
<ul>
<li>Antenna: 1090MHz ADS-B Antenna - 66cm / 26in (<a href="https://www.amazon.ca/dp/B00WZL6WPO">here</a>)</li>
<li>Transmission line (coaxial cable): TRENDnet TEW-L406 LMR400 N-Type Male to N-Type Female Weatherproof Cable, 6M, 19.6-Feet (<a href="https://www.amazon.ca/gp/product/B000ERCO0I">here</a>)</li>
<li>Band pass filter: ADS-B 1090MHz Band-pass SMA Filter (<a href="https://www.amazon.ca/gp/product/B010GBQXK8">here</a>)</li>
<li>MCX Male to SMA Female RG316 Low Loss Pigtail Adapter Cable 21cm/8.3in (<a href="https://www.amazon.ca/gp/product/B00K85HFR8">here</a>)</li>
<li>A connector from SMA Female to N-Type Female. This is the trickiest part, because there are different types of connectors and they all look similar. If you are interested in buying one, go to an electronics, radio, or hobby store with the parts and find help there. I went <a href="http://sayal.com/zinc/index.asp">here</a>.</li>
<li>Weatherproof box (this is Canada): <a href="https://www.amazon.ca/gp/product/B006EUHRK6">here</a></li>
</ul>
<p>Here are some pictures:</p>
<ul>
<li>Antenna:
<img src="/images/2016-11-04/Antenna.png" alt="Antenna" /></li>
<li>The Box from outside:
<img src="/images/2016-11-04/BoxClose.png" alt="The box close" /></li>
<li>The Box open:
<img src="/images/2016-11-04/BoxOpen.png" alt="The box close" /></li>
</ul>
<p>With this setup, I was able to receive transmissions from airplanes 150 miles away.</p>
<h1 id="third-iteration---the-beginning">Third Iteration - The Beginning</h1>
<p>More time passed, and my old Raspberry Pi died due to the “extreme” heat in the summer and the lack of ventilation in the box. I bought another one, this time a Raspberry Pi 3, and continued to receive airplane positions.
Reading more forums, and being able to see the airplanes only when they are over my area, I started searching for ways to use the information provided by the feeder to do other analysis or presentations of that data (<a href="http://flightaware.com/adsb/piaware/about">feeder diagram</a>). There are a few data streams we could use to do that, but that is another post; this one is too long already.</p>
Fri, 04 Nov 2016 20:21:10 +0000http://darienmt.com/2016/11/04/airplane-adventures.html
http://darienmt.com/2016/11/04/airplane-adventures.htmlFlightAware,raspberrypi,dump1090Downloading Jenkins Logs<p>Recently, I encountered a problem with one of the integration tests run by Jenkins. This particular test was failing “sometimes”. The problem was that, sometimes, the Selenium integration was timing out because a page was too slow, but it was hard to find which part of the test was failing. I needed some statistical information on how the test was running, but the setup we have on Jenkins didn’t expose that information. As an alternative to changing the Jenkins configuration, I could analyze the Jenkins test logs. Jenkins provides an RSS feed with all the run information, including a URL to a gzip-ed file containing the logs I need. In this article, I will describe the code I created to download these log files for further analysis locally.</p>
<p>First, I needed the simplest framework I could find just to make a GET request for the RSS feed and then to get the gzip-ed log file. In the past, I have used <a href="https://github.com/scalaj/scalaj-http">scalaj-http</a> for simple RESTful service consumption in scripts. It is a simple, blocking wrapper around the good old Java HttpUrlConnection. That will do for this problem. The following is the build.sbt I created for the project:</p>
<script src="https://gist.github.com/f59b045d265c7ac40f35d7cb58a6b176.js"> </script>
<p>The last line in the build.sbt uses the assembly plugin. This plugin needs to be configured by adding the file /project/assembly.sbt, as described <a href="https://github.com/sbt/sbt-assembly">here</a>.</p>
<p>To facilitate the HTTP communication, an http.Util object was created containing functions for Basic authentication and GET operations:</p>
<script src="https://gist.github.com/ce169cf87479758f7ea8f46a4e3621e0.js"> </script>
<p>Using these functions, the main object JenkinsReader is created:</p>
<script src="https://gist.github.com/4f389c02d43513a738f2b79f68b0cea9.js"> </script>
<p>The code expects to be executed with the Jenkins username/password and the output directory passed as parameters (lines 17 - 19). This code uses Lightbend (Typesafe) Config to configure the following data:</p>
<ul>
<li>Jenkins RSS URL (line 14).</li>
<li>The file URI pattern to find the gzip-ed log file from the URL in the RSS feed. Ex. artifact/theFile.gz. (line 15)</li>
<li>A regular expression to find the Jenkins run identifier. Ex. “SomeJob #([0-9]+) (.*)”. (line 16)</li>
</ul>
<p>The code is very straightforward; here is a short description of it:</p>
<ul>
<li>Function currying to get a function with authentication already configured. (lines 27 - 28)</li>
<li>Get RSS feed as a string. (line 30)</li>
<li>Map the string to a Seq of string pairs with the first element as the text on the run and the second one the URL to the run. (lines 31 - 36)</li>
<li>Map the pairs to a tuple, using the regular expression to extract the id and whether it is a failure or not. (lines 37 - 42)</li>
<li>Filter the failures only. (line 43)</li>
<li>Iterate over the URLs. (line 51)</li>
<li>Get the gzip-ed logs. (line 53)</li>
<li>Unzip the files and store them in the desired location. (lines 54 - 63)</li>
</ul>
<p>This code could be better, as some errors could be handled (e.g., when the code cannot write to a particular path), but I hope it will be useful to somebody, as it was for me.</p>
<p>Happy coding!</p>
Sun, 03 Apr 2016 20:21:10 +0000http://darienmt.com/scala/2016/04/03/downloading-jenkins-logs.html
http://darienmt.com/scala/2016/04/03/downloading-jenkins-logs.htmlscala,jenkins,httpscala