ValCo Labs (http://www.valcolabs.com)

Nomenclature Matters (21 Oct 2016)
http://www.valcolabs.com/2016/10/21/nomenclature-matters/
The word nomenclature comes from the Latin word nomenclatura and is defined as the devising or choosing of names for things, especially in a science or other discipline; it also refers to a body or system of names in a particular field. You're probably wondering what the heck I'm talking about, but bear with me.

In the last ten years the IT industry has changed drastically, and the pace of innovation is only increasing. As professionals, we've always had to evolve to stay relevant in our field, though in the past the gaps between each necessary evolution were much larger. I've tried to stay slightly ahead of the curve, or at the very least keep pace with it. If there is any one thing I could point to that helped me through each evolution, it would be the nomenclature. Learning and understanding new terms has been the single most helpful thing when learning something new.

Think back to when you first heard about virtualization. There were a ton of new terms you had probably never heard before; I certainly hadn't.

hypervisor

virtual switch

vmkernel

HA

DRS

Virtualization has been around for a while now, and most people are very familiar with those terms and how the underlying technology works. The pace of innovation has only increased since the rise of virtualization, bringing us cloud, containers, software-defined networking, DevOps, IaaS, PaaS, OpenStack, hyperconverged...to name a few. New concepts to understand, and with them, more nomenclature:

endpoint groups

virtual private cloud

application centric infrastructure

microservices

service catalog

dockerfile

CI/CD

containers

security groups

redshift

distributed logical router

images

merge, pull, push

auto-scaling groups

multi-tenancy

Of course there are more; I could write pages full of new terms that have emerged in the past few years. OK, so you have some new terms, new nomenclature, to learn and understand. On top of all that, companies decide they want to use overlapping terms that mean different things. The term container is a great example; container means five different things to five different companies.

I know this entire post is pretty obvious to most people; words are important. What I hope you will take away from this is that while things may be changing at a vigorous pace, you should try not to be completely overwhelmed. When you sit down to read about a new technology or concept that is foreign to you, start with the nomenclature. In my experience it will give you the best chance of understanding and applying whatever it is you're trying to learn.

Random thoughts – VMware Cloud on AWS (14 Oct 2016)
http://www.valcolabs.com/2016/10/14/random-thoughts-vmware-cloud-on-aws/
As you may or may not be aware, today VMware unveiled their newly minted partnership with AWS. There's been one article by Frank Denneman that takes a closer look at this partnership, and there has also been an op-ed piece published by our favorite cloud parody account, @cloud_opinion.

Señor _Opinion states in his article that AWS has blinked, that they are feeling pressure from Azure, and that they have basically had a knee-jerk reaction, partnering with VMware to tactically address the enterprise arena. I agree with some of his/her opinions, especially about Azure having a much better story in the enterprise than AWS, but I don't agree with all of it. Case in point:

“Customers that buy into this partnership will end up wasting money and time and will not be moving to Cloud, while paying the Cloud premium.”

I don't agree. We don't have nearly enough information about what this looks like to make this sort of statement. Could this happen? Sure. If this turns out to be vCloud Air, but in AWS datacenters, then it will absolutely fail and be a wasted effort for customers. And if you read Frank's article, it looks like it may be just that, vCloud Air in AWS datacenters; more on that in a minute.

Again, I disagree. AWS is still focused on customer pain, and is directing some of that focus toward a particular market segment: the enterprise. Many enterprises are already using public cloud to varying degrees, but it's almost all new development. New development is great, but what about everything else? In a magic land where all the Fortune xxx companies rewrite all their apps to take full advantage of public cloud, there isn't any need for VMware Cloud on AWS, but most of us don't live in said land. Enterprises are still looking for new ways to scale the applications they've been using for years without always having to buy new hardware and go through 3-5 year refresh cycles with <insert your favorite vendor here>.

Now, Dhr. Denneman's article outlines, at a high level, what VMware Cloud on AWS looks like. Leveraging bare-metal servers inside AWS datacenters, VMware will deploy a full SDDC stack (Cloud Foundation) on top of them, allowing workloads to move private -> public, public -> private, and public -> public. Outside of the fact that this is sitting in an AWS facility, it's basically vCloud Air. We all know how well that worked (it didn't). Hold up, though; here's what is intriguing to me:

“Another strength it the ability to pair current workloads with the advanced feature set of AWS. As a result, IT teams will be able to extend their skill set discovering the vast catalog of services AWS has to offer.”

That's from Denneman's article. That statement tells me they are going to offer a way to use the vast breadth of AWS services with "traditional" workloads that are being migrated to AWS; think ELB, S3, etc. It also suggests the ability to access them programmatically, not only through the vSphere API but through the AWS API as well. Denneman has more articles on VMware Cloud on AWS; I am hopeful those will have more detail on what else this partnership will mean for customers.

There are two key things that VMware needs to get right for this to work, beyond not making it a duplicate of vCloud Air: pricing and extensibility. No one, and I do mean NO ONE, is going to pay for VMware licensing twice to run on-prem and in the public cloud; they just won't. I'm glad to see that they are packaging everything up and selling it together, as opposed to making customers pay for everything separately; customers hate complex licensing and invoicing. There also needs to be good extensibility to existing and future AWS services, so enterprises can start using what AWS has to offer and explore refactoring pieces of their business to take advantage of all the benefits cloud can offer, while still getting on-demand consumption for their "traditional" workloads.

Cloud Field Day 1 – Druva (4 Oct 2016)
http://www.valcolabs.com/2016/10/03/druva/
Cloud Field Day 1, or CFD1 as it's called, was my very first field day event. It was awesome.

One of the companies that intrigued me was Druva. This is actually Druva's second field day event; they burst onto the Tech Field Day scene in 2011 at Tech Field Day 5. Now they have returned for CFD1; let's dig into what they've been up to over the last five years.

Druva’s mantra is Unified Data Protection, Born in the Cloud.

Druva's flagship product is INSYNC (not to be confused with NSYNC), which has been around since they launched. In addition to INSYNC, they have a newish product called Phoenix.

This post will only focus on INSYNC.

Both of these products are hosted in the cloud, which means no physical or virtual appliances are required to host them, and the packaging and pricing for the software includes any platform costs (AWS, Azure, etc.), so you get one bill and don't have to worry about paying multiple invoices. There is an option to get a caching appliance that will keep some data on-premises, but it is not required.

INSYNC

INSYNC focuses on three areas of data: Protect, Preserve, Discover. Those areas mean exactly what you think: protect the data, preserve the data (think legal hold), and discover the data via indexed searching, visualizations, and more.

Protect

There are two main pillars of protection: endpoint backup, and cloud application backup and archive.

Endpoint backup has broad cross-platform support, including Windows, Mac, iOS, Android, and Linux, and offers a centralized point of management no matter the platform. One of the mantras for Druva's endpoint backup is ease of use, which not only makes it easy to deploy, but also allows for self-service, automated restores.

Cloud application backup and archive is one of the things I really like about the INSYNC platform. This is the ability to back up and archive data that resides in a cloud service (think Box, Office 365) to a cloud provider such as AWS or Azure. A lot of people think that because they are paying SaaS providers for a service, their data is safe and protected; it's not. SaaS providers don't protect you from data corruption, departing employees, overwritten documents, etc.

Preserve

The preservation capabilities within the INSYNC product are fantastic:

Provides direct integration with certain ediscovery platforms without having to go through a middleman

Legal Hold Management API

Gives partners a way to perform legal hold automation; Exterro and Approved are two examples.

REST API

Pre-ingestion Culling

Provides the ability to narrow down what data is being brought over to the ediscovery platform

This list of features is very impressive, and represents a differentiator in the backup/archive space.

Discover

Having the ability to see where your data is, control what data goes where, and ensure industry-specific compliance (PCI, PII, etc.) is becoming ever more important in a world of data breaches, ransomware, and general buffoonery. INSYNC offers more than a few ways to protect against this:

Full-text indexing. Not only on Endpoints, but also any data in the cloud

Data regionalization

Ability to store different types of data in different locations. Say you have certain information that must remain in Europe; you can create a policy to ensure that data stays within Europe.

Remediation actions also exist within the platform. This means that you can quarantine data to prevent users from downloading or restoring a particular type of data.

To wrap up, I really hadn't heard of Druva prior to CFD1, but to say that I was impressed is an understatement. Druva's INSYNC product has a lot of unique capabilities, and when you combine that with what they are cooking up in their other product, Phoenix, you get a very compelling offering. I'll cover the Phoenix platform in another post.

The Inauguration of Cloud Field Day (12 Sep 2016)
http://www.valcolabs.com/2016/09/11/the-inauguration-of-cloud-field-day/
I'm excited (and honored) to have been invited to participate in the very first Cloud Field Day, referred to as CFD1 for the remainder of this post, taking place September 14th-15th. If you're familiar with the other * Field Days then you know the format; if not, check out the Tech Field Day site. This will be my first appearance at any field day event. I expect a high level of snark, a ton of new information, and a low-to-medium level of hazing; I'm hoping for low.

The Delegates

There are a total of 13 delegates at CFD1, including myself, coming from all parts of the globe. The list includes Tech Field Day alums such as Nigel "let's get crackin' with Docker" Poulton and Justin "JP" Warren. A full list of delegates is on the aforementioned CFD1 website. This is definitely a lively crew of people, so it should be very dynamic, and fun.

The Presenters

Five companies are presenting at CFD1 and are broken up into a data day and a cloud day.

Data Day:

SanDisk

Druva

Cisco

Cloud Day:

Scality

Docker

I'm familiar with a few of these companies and their products, but CFD1 is going to be a great opportunity to learn even more and, hopefully, give you all the chance to learn as well.

The CFD1 presentations are streamed live and will be available on the Tech Field Day website afterwards. If you're watching the live stream and have a question for one of the presenters, go ahead and tweet the question with the hashtag #CFD1, and I, or one of the other delegates, will see it and gladly ask it. I don't plan on pitching softballs; bring the hard questions.

Blueprints as Code in vRealize Automation 7 Part 2 – Adding Blueprints to Github and Setting up Jenkins (24 Jun 2016)
http://www.valcolabs.com/2016/06/24/blueprints-as-code-in-vrealize-automation-7-part-2-adding-blueprints-to-github-and-setting-up-jenkins/
Welcome to part 2 of this 3-part series on blueprints as code in vRealize Automation 7 (vRA 7). In part 1, we walked through how to export blueprints from vRA 7. In this post we'll discuss putting those exported blueprints into a source control system (Github) and setting up our Jenkins server. Here are links to the full series:

Blueprints as Code in vRealize Automation 7 Part 3 – Creating a New Jenkins Job

The first thing we'll need to do is set up a server for Jenkins to run on. In this example I'm using a CentOS 6.6 machine; however, Jenkins can be installed on Unix/Linux or Windows. Jenkins does require Java 7 or above and Servlet 3.1. I will detail the instructions for Red Hat distributions, but if you're running something other than Red Hat, check out the Jenkins download page.

Installing Jenkins

To begin, let's add the Jenkins repo and install Jenkins. Type the following commands:
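On a CentOS/RHEL system the sequence looks roughly like the following; the repo and key URLs are the ones Jenkins published around this time and may have moved since, so treat this as a sketch rather than gospel.

    sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
    sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
    sudo yum install -y java-1.8.0-openjdk jenkins
    sudo service jenkins start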

Let's ensure Jenkins is up and running. Navigate to http://<ip or hostname>:8080

You should now see a screen that asks you for a password. Copy the password from the Jenkins machine located at /var/lib/jenkins/secrets/initialAdminPassword

cat /var/lib/jenkins/secrets/initialAdminPassword

Copy and paste the password into the Jenkins login screen in your browser. A 'Getting Started' screen should now appear. Go ahead and click the Install suggested plugins button. Of course you don't HAVE to do this, but we will for the purposes of this walk-through. Once it completes, create the first admin user and click the Save and Finish button. Alternatively, you can just choose to continue as admin, which is what I've done. Click the Start using Jenkins button.

[This part of the article assumes a basic knowledge of Github]

Adding the Blueprint to Source Control

Now that we've got Jenkins installed, let's get the blueprint we exported in part 1 of the series into our source control system. I've created a repo called blueprints in my organization. The first thing we need to do is initialize our local repo; I'm using a Windows 7 system for this.

Unpackage the .zip file you generated in part 1 of the series to a folder. In this example I've unpackaged the files to blueprints/Win2K8R2. I've also already initialized the blueprints folder by running the git init command. Once you've unpackaged the blueprint, let's add it to the local repo and push it to our remote:
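From a prompt inside the blueprints folder, the sequence is roughly the following; the remote URL is a placeholder for your own organization's repo.

    git add Win2K8R2
    git commit -m "Add Win2K8R2 blueprint"
    git remote add origin https://github.com/<your-org>/blueprints.git
    git push -u origin master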

That's it! You've set up a Jenkins server and pushed the exported blueprint to Github. In the third and final part of the series we will set up a new Jenkins job which will take any changes made to the blueprint and automatically update the existing blueprint within vRealize Automation.

The company I work for uses Resource Guru as a central resource scheduling tool for our engineers. Project managers love it because they have a consolidated view of all of the engineers' schedules, but for the engineers, it's ANOTHER tool they have to log into. The main complaint I hear is: "Why should I have to manage another schedule when I already have an Outlook calendar?" I agree.

Resource Guru doesn't currently have an integration to push schedule additions/updates/deletes to Outlook. There are a few options that get us pretty close, but each was lacking in some way.

Zap with Zapier!

Zapier is a really cool task automation service that allows you to create 'Zaps' to connect various online apps and services. There are hundreds of things you can do, like create a Google Calendar entry from an Evernote task list, or share new tweets from Twitter into Slack. The list goes on, but one of the coolest features is the ability to create an action that executes an AWS Lambda function.

Webhooks?

Resource Guru allows you to create webhooks. These webhooks can POST the data generated by Resource Guru to an external URL. Zapier lets you create a 'receiving URL' that accepts the JSON data being sent from the webhook. Once we get that data into Zapier, we can take action on it.

Why not use the existing Calendar Zap actions?

Zapier can already create calendar events though, can't it? Believe me, I really wanted to use the built-in Gmail and Zapier SMTP actions, not only because it would have kept everything within Zapier, but also because using Lambda means custom code. It's not terrible, but supportability becomes a problem.

I ran into two main issues trying to get this to work natively. The problem wasn't getting the data into Zapier, it was getting it into the calendar invite. There are (3) integrations I tried.

Gmail Calendar: Even the detailed event wouldn’t allow me to use values passed from the Resource Guru Webhook. I wanted to customize the notification and details the engineer would see in both the email and invite.

Gmail & Zapier SMTP: These looked promising, but you aren't able to create the raw email necessary to embed the attachment. Also, I wasn't able to create a file object on the fly within the code action that I could pass in as the attachment. This is important because we want to control the calendar UID, which lets us automatically update calendar invites when an update happens in Resource Guru.

Getting it done with AWS Lambda

There are a few reasons I wanted more control over the actual invite. First, it meant I could add any of the information sent from Resource Guru to the email/calendar invite; engineers can see the details of customer, project, notes, etc., at a glance. Second, it meant I could control the calendar UID. By using the value of the schedule entry ID (payload_Id), I could make all updates/deletes apply to the existing calendar entry with the same UID. The result is that as project managers update schedules, the changes are automatically reflected on the engineers' calendars.
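To make the UID idea concrete, here is a minimal sketch of a Lambda handler along these lines. It is not the code from resourceGuruCalendarUpdate_clean.js; the event field names (payloadId, bookerEmail, startDate, endDate, subject) are hypothetical stand-ins for whatever you map in the Zapier Lambda action, and the sender address and region must match your own SES setup.

    // Sketch: build an iCalendar invite whose UID is the Resource Guru
    // schedule entry ID, then send it as a raw MIME message through SES.
    var AWS = require('aws-sdk');
    var ses = new AWS.SES({ region: 'us-east-1' }); // assumed region

    exports.handler = function (event, context) {
      // Stable UID = the same Outlook entry gets updated on every change.
      var uid = event.payloadId + '@resource-guru-sync';
      var ics = [
        'BEGIN:VCALENDAR',
        'VERSION:2.0',
        'METHOD:REQUEST',
        'BEGIN:VEVENT',
        'UID:' + uid,
        // DTEND is exclusive for all-day events; a real version would add a day.
        'DTSTART;VALUE=DATE:' + event.startDate.replace(/-/g, ''),
        'DTEND;VALUE=DATE:' + event.endDate.replace(/-/g, ''),
        'SUMMARY:' + event.subject,
        'END:VEVENT',
        'END:VCALENDAR'
      ].join('\r\n');

      // Raw MIME message with the invite as a text/calendar body.
      var raw = [
        'From: scheduler@example.com', // must be verified in SES
        'To: ' + event.bookerEmail,
        'Subject: ' + event.subject,
        'MIME-Version: 1.0',
        'Content-Type: text/calendar; method=REQUEST; charset=UTF-8',
        '',
        ics
      ].join('\r\n');

      ses.sendRawEmail({ RawMessage: { Data: raw } }, function (err, data) {
        if (err) { context.fail(err); } else { context.succeed(data); }
      });
    };

A production version would also bump the iCalendar SEQUENCE property on updates and send METHOD:CANCEL on deletes, so Outlook treats them as changes to the existing entry rather than new invites.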

The basic workflow looks like this:

Project Manager makes/updates schedule entry in Resource Guru

Resource Guru sends update to Zapier via Webhook

Zapier sends the update to AWS Lambda

Lambda generates the custom message and calendar invite

AWS Simple Email Service sends the invite to the engineer.

So what did it take to get this to work?

Create a Zapier Webhook Trigger and set it up to ‘Catch a hook’. Then copy down the URL on the ‘View Webhook’ dialog.

Set up the webhook in Resource Guru. If you haven't done this, check out the following: Resource Guru Webhooks. Here is an example of my webhook; we are sending all 'Booking' changes to the 'Receiving URL'.

Set up AWS Lambda and Simple Email Service (SES) access in the AWS Identity and Access Management console. The key thing here is that you will need to create an account that has access to AWS Lambda and SES. I have given my account more rights than it probably needs.

Create a Lambda function called 'resourceGuruCalendarUpdate'. The code can be found on Github: resourceGuruCalendarUpdate_clean.js. Note that you will have to update a few of the values (email addresses and access ID) to get it to work in your environment.

Create the Lambda action in Zapier. During the setup we configure Zapier to use the AWS account we created previously.

Now we need to tell the Lambda action what values we want to pass to AWS. Set up the 'Edit Template' option in the Lambda action by adding the values the function needs:
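As an illustration only, the template might map fields from the caught webhook to keys the function expects, so that the Lambda event ends up looking something like this (these key names are hypothetical and simply need to match what your function reads):

    {
      "payloadId": "123456",
      "bookerEmail": "engineer@example.com",
      "startDate": "2016-06-01",
      "endDate": "2016-06-03",
      "subject": "Customer X - vRA implementation"
    }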

Setting up email with AWS Simple Email Service

The final step is to enable SES to send email messages to the engineers. The Lambda function imports the SES methods from the aws-sdk and should take care of all the heavy lifting. We do however, have to ‘authorize’ the email address OR the domain in SES.

NOTE: If you own the domain and can edit the DNS entries, you should be able to verify the whole @yourdomain.com domain you are sending to in one step. In my case, I don't, so I have to have each individual approve receiving the email.

Navigate to the AWS SES Admin console and select ‘Email Addresses’, then click ‘Verify a New Email Address’

Each user will have to click the AWS approval link that gets sent to their email.

Common Issue:

The most common issue I have found is that users have not clicked the verification/approval message that gets sent to them. An easy place to look is the CloudWatch logs for the Lambda function within AWS; you will see a very clear message indicating that SES is not authorized to send to the recipient.

Test It!

Now you should be able to test. You can simply create a Resource Guru schedule entry that should activate the process.

Simplify Testing in your own Resource Guru Environment

I didn't want to mess with my organization's Resource Guru account, so I just created a temporary account. You can use the same receiving webhook URL for your Zapier Zap. Now you can make all the changes you need without affecting the production calendar.

Future thoughts:

Duration Logic: Resource Guru provides a child element in the JSON response called Payload.Durations. This is an array of hash entries. Here is a sample:
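An illustrative entry looks roughly like this, using the fields mentioned below; representing the times and duration as minutes is an assumption, and the real payload may contain additional fields:

    [
      { "date": "2016-06-01", "start_time": 540, "end_time": 1020, "duration": 480 },
      { "date": "2016-06-02", "start_time": 540, "end_time": 780, "duration": 240 }
    ]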

What I haven't been able to figure out is how to get the child items (date, duration, start_time, etc.) into Lambda. When Zapier passes them, they come in as some funky CSV-type value. Anyway, if you can get that value in, you can update the logic so it will actually do partial-day calendar invites.

Crude Calendar Creation: I mention it in the script. I am building the calendar attachment in the script, but it isn’t very clean. There is a ics.js module you may be able to use to make it better.

vRealize 7 Application Services – Windows Agent Install (26 May 2016)
http://www.valcolabs.com/2016/05/26/vrealize-7-application-services-windows-agent-install/

I am new to Application Services and had some trouble successfully preparing the template. To prepare the template, a few pieces of software (including an agent) have to be installed in the template for vRA to be able to interact with the VM once it is deployed.

Here are a few resources I found that helped me successfully create/deploy a blueprint with a simple software component.

Preparing the Template

NOTE: There is a note in the article that threw me off. I read it as saying I didn't need the guest agent install because I am using a certain license. I was wrong, and without pulling down the 'GuestAgentInstall_64.exe' file from the vRA server, I wasn't able to execute any of the PowerShell actions in the Software Component.

What to do if it is still not working:

Troubleshooting this can be extremely time-consuming and frustrating because it has to go through the entire provisioning process every time you test. Furthermore, the software component magic only begins after the machine is provisioned, and if there is a problem, vRA waits 30 minutes before it errors out and disposes of the VM.

Below are a few places to look to see if the agent is working. If it's not, I don't wait; I go on troubleshooting and let the request (eventually) fail.

Check out the 'c:\opt\vmware-appdirector\agent\logs\agent_bootstrap.txt' log file to see what's going on. Because I didn't have the guest agent installed, I saw the following repeated in the file.

I also noticed, ‘logs’ was the only directory that existed in the ‘c:\opt\vmware-appdirector\agent’ folder until the Agent was working correctly.

Reduce the complexity of what you are trying to run (PowerShell in my case), and validate that the agent is working to begin with. I used a simple PowerShell script that would simply write the value of a property to 'c:\temp\'.

Create a property in the software component called 'test_property1'.

Create an action to log the value of the property (a sketch of what such an action might look like follows this list).

Once the machine is deployed, verify the vraAgentTest.log we created in the action exists.

.NET 3.5 install. I have seen a few articles that show .NET is required on the template to get this to work.
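As for the PowerShell test action referenced above, a minimal sketch might look like the following. It assumes the software component exposes its properties to the script as same-named variables; the property name test_property1 and the log file name vraAgentTest.log come from the steps above.

    # Write the value of the software component property 'test_property1' to
    # c:\temp\vraAgentTest.log so we can confirm the guest agent ran the script.
    New-Item -ItemType Directory -Path 'c:\temp' -Force | Out-Null
    "test_property1 = $test_property1" | Out-File -FilePath 'c:\temp\vraAgentTest.log' -Append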

Thanks!

vRealize Automation 7 – Custom Email Template XaaS Cleanup (25 May 2016)
http://www.valcolabs.com/2016/05/25/vrealize-automation-7-custom-email-template-xaas-cleanup/
A customer recently contacted me to help resolve a few issues with vRA 7's custom email templates. Here are some of the changes I had to make.

Multiple XaaS (Anything as a Service) workflows are displayed in the approval notice. This may be the intent, but in this case the customer just wanted to see the 'Virtual Machine' properties of the request.

So once you follow the KB above and install the current templates for v7, you will end up with the following default directory on the vRA server:

/vcac/templates/email/html/extensions/defaults/

Today I learned that yet another language exists: the templates are written in Apache Velocity. For my purposes it wasn't too bad syntactically.

Only Looping over the ‘Virtual Machine’ component

This is a screenshot of the vRA 7 design view. Notice in addition to the VM blueprint, we also have a XaaS workflow attached.

To handle the XaaS issue where the customer only wanted to see the 'Virtual Machine' properties, I modified the 'componentInfo.vm' template. The template grabs a handful of properties (Machine Name, Machine Type, Lease time, etc.) and then generates the associated HTML to display them. Because each XaaS workflow is its own component, a loop in the template iterates over all of them.

This screenshot shows the default template displaying data for all components (XaaS and VM)

To fix this:

Add a condition to filter for the 'Virtual Machine' component type. This should be added right below the last '#set' statement. Of course this could be any condition you want (see the sketch after these steps).

Add the matching '#end' statement just above the '##For loop' comment at the end of the template.
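A rough sketch of the shape of that change is below. The condition shown is a placeholder; the actual variable and value to test depend on what componentInfo.vm exposes for each component in your environment.

    ## Placeholder condition: only render this component's details when it is
    ## the 'Virtual Machine' component (use whatever variable the template
    ## actually exposes for the component type).
    #if ($component.type == "Virtual Machine")
        ## ... existing property/HTML generation for the component ...
    #end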

Here are a couple of videos from this year's Sirius Madness event where @virtualtacit and I demonstrated using Amazon Web Services and Alexa (Amazon Echo) to flip a Raspberry Pi-controlled drone. Below you can find some of the background on how we built it.

Oh, and aside from the fact that voice-controlled drones are awesome, our goal was to show people how easy and inexpensive it can be to accomplish some very complex tasks, with very little infrastructure...FREE, in fact. Amazon provides access to AWS free for the first year.

This video shows a little bit of the Heads Up Display (HUD) interaction. A node.js web server on the Raspberry Pi is used to process IoT changes and show the corresponding HUD page (Dashboard, HUD, System Status)

We were a little concerned that there wasn't enough headroom to carry out the flip maneuver. You can see the drone clip the ceiling slightly; fortunately, it recovers.

Inspiration: AWS re:Invent 2015

We saw some really cool hacks in the makerspace at the AWS re:Invent conference. A few of the projects included:

The Goal – What do we want/need?

What services/hardware do we need

We decided to build a voice-activated control system called uberJARVIS. You can see a breakdown of the services and devices we used below.

The flow of the control system can be seen below.

The Amazon Echo accepts the commands and passes them to the Alexa Skills Kit (ASK)

ASK breaks down the commands into simple JSON and passes the output to Amazon's Lambda service. Check out the ASK/JSON sample below.

Lambda is running a node.js script to process the JSON output, and update a ‘thing’ we created in AWS IoT.

The thing, which is a Raspberry Pi, is subscribed to the ‘thing’ shadow in the AWS IoT repository, and processes changes like:

Deploy mark 42 – Deploys the drone

Power On – Powers the system on (fog light and relays)

Move Up|Down|Left|Right – Moves the drone

When the Raspberry Pi detects a change, it will execute the necessary response. In the case of 'deploy mark 42', it uses the ar-drone npm library to wirelessly launch the drone. Big shoutout to the developers of this module: ar-drone npm module
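For a sense of what the Pi-side script could look like, here is a stripped-down sketch using the aws-iot-device-sdk and ar-drone npm modules; the thing name, certificate paths, and the 'command' shadow field are placeholders rather than the actual uberJARVIS implementation.

    // Sketch: react to desired-state changes on the thing shadow and translate
    // a hypothetical 'command' field into drone actions.
    var awsIot = require('aws-iot-device-sdk');
    var arDrone = require('ar-drone');

    var drone = arDrone.createClient();
    var thingShadows = awsIot.thingShadow({
      keyPath: 'private.pem.key',       // placeholder credentials
      certPath: 'certificate.pem.crt',
      caPath: 'root-CA.crt',
      clientId: 'uberjarvis-pi',
      host: 'your-endpoint.iot.us-east-1.amazonaws.com'
    });

    thingShadows.on('connect', function () {
      thingShadows.register('uberjarvis'); // hypothetical thing name
    });

    // 'delta' fires when the desired state differs from the reported state.
    thingShadows.on('delta', function (thingName, stateObject) {
      var command = stateObject.state.command; // hypothetical field
      if (command === 'deploy') {
        drone.takeoff();
      } else if (command === 'flip') {
        drone.animate('flipLeft', 1000);       // the flip maneuver
      } else if (command === 'land') {
        drone.land();
      }
      // Report the handled command back so the delta clears.
      thingShadows.update(thingName, { state: { reported: { command: command } } });
    });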

Finally, we added a small web server to the Raspberry Pi to provide visual feedback in a webpage. Similar to how we launch the drone, we can tell uberJARVIS to show a specific page in the browser.

Raspberry Pi with relay mounted. I took apart this inexpensive RF controller and connected the GPIO from the Pi to control the on/off functions. These are activated by the node.js script running on the Pi to turn on a fog machine.

Alexa Skills Kit (ASK) and JSON make it Simple

Here is an example of what the ASK request/response looks like. In this example, we are sending the simple command 'Tell Jarvis to power on', which ASK parses and turns into a very easy-to-handle response that gets sent to Lambda. Take a look at the ASK interaction model if you are interested in building your own skill for Amazon's Echo. uberJARVIS is the skill we created, and it has an invocation phrase of 'jarvis'. To invoke it, we simply tell the Echo 'Alexa, tell jarvis to do something'.
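A trimmed-down IntentRequest of the sort ASK sends for a command like that looks roughly like this; the application ID, intent name, and slot are hypothetical, not the actual uberJARVIS interaction model.

    {
      "version": "1.0",
      "session": {
        "new": true,
        "application": { "applicationId": "amzn1.ask.skill.example" }
      },
      "request": {
        "type": "IntentRequest",
        "requestId": "EdwRequestId.example",
        "intent": {
          "name": "PowerIntent",
          "slots": { "state": { "name": "state", "value": "on" } }
        }
      }
    }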

The service simulator allows you to test your skill without having to talk to your Echo. It's very helpful for troubleshooting and testing during development.

uberJARVIS Github Repo

Full disclosure. I am more of a pinch-hit developer, so don’t laugh too hard at some of the spaghetti code.

Purpose

I am working on the steps to grab the firmware from the Amazon IoT Button and flash it onto the $5 DASH buttons (think Tide/Cottonelle/Ziploc). There are a few really good existing articles that detail the steps, but I had some difficulty getting started. To be fair, the articles are great, but I am a complete newbie to things like:

So, with that said, here are the steps I used to get everything set up to successfully flash the firmware on the $5 buttons.

Why snag the firmware?

The Amazon Web Services (AWS) IoT firmware version of the DASH button lets you interface with services like DynamoDB, Lambda, etc. During configuration, the firmware allows you to upload a public/private key set that enables communication with the AWS IoT service. The $5 version runs on the same v1.0 hardware, so if we can get the IoT firmware, we should be able to make the $5 versions act like the more expensive $20 IoT Button.

There have been a lot of really great articles detailing the teardown and internals of Amazon's DASH button, as well as flashing the firmware. These should get you up to speed with what we are doing here:

The Adafruit article discusses the solder and prototyping wire sizing; it just needs to be small enough to work with how tiny the connections are.

UPDATE: As Bjorn noted below in the comment section, you can bypass the soldering steps with this fancy breakout board: http://circuitmaker.com/Projects/BCF37BFD-E524-41E2-B370-649701462F82

Check out Adafruit’s article on soldering the connections. Once you have that done, it should look something like this:

Now you are ready to get OpenOCD and the ST-Link utility going.

BATTERY NOTE: I couldn’t find any clear documentation, but I have to DISCONNECT the battery to successfully connect with OpenOCD.

VCC 3.3 NOTE: Although the ST-Link v2 programmer has a 3.3v pin and ground, I found references stating that using it would mess up the SWCLK. So, I only used (3) of the pins on the programmer (shown below) and connected an external 3.3v source:

Programming Tools

Windows ST-Link Utility

Newbie Note: Copy firmware from/to device

This tool provides a great visual indication that things are working. It also allowed me to upload/download the firmware once connected to the DASH.

Download: You will need to register for a free account, but this will give you access to the utility and also the latest ST-Link v2 programmer firmware. I had to update mine to version V2J27S6.

ST-Link Connectivity Notes:

Newbie Note: Connects to device, then lets you telnet to OpenOCD and interact with the device.

This tool can do what the ST-Link utility did, but most importantly, it has the 'stm32f2x unlock 0' command to unlock the DASH firmware.

Ubuntu NOTE: I had to download and install OpenOCD 0.9 to get this to work. The version installed with 'apt-get install openocd' was 0.7, and unfortunately it seems that some of the stm32 commands aren't supported; I received this error: invalid command name 'jtag_ntrst_delay'
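A typical OpenOCD invocation for an ST-Link v2 programmer talking to the DASH's STM32F2 part looks something like the following; the bundled config file names can vary between OpenOCD versions, so treat it as a starting point.

    sudo openocd -f interface/stlink-v2.cfg -f target/stm32f2x.cfg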

Now you can establish a telnet session to the OpenOCD process which will let you interact with the device

Here is a successful connection attempt with OpenOCD

Telnet to the OpenOCD process to access the On-Chip Debugger.

This will let you interact with the DASH microcontroller

Launch a separate terminal window. Now that openocd is running (see connecting above), you can ‘telnet localhost 4444’.

Here is a successful telnet connection

Now run the following commands to view the device details

Run ‘flash banks’ command

Run 'stm32f2x unlock num' (here, 'stm32f2x unlock 0') to unlock the flash contents

Reboot for the unlock to take effect. To reboot run ‘reset init’

Once the device reboots, OpenOCD should automatically reconnect. You may have to relaunch the telnet session.

Unlocked Firmware Status

Dump Firmware Using OpenOCD

Run the following:

flash list

flash probe 0

flash banks

Note the size is 0x00100000 (that's 1024 KB, the size of the flash)

Run:

dump_image dash_fw.bin 0x0 0x100000.

I also tried to use the location 0x08000000.

Empty Firmware Dump: In both cases, the firmware I dumped was empty. This may be due to disabling the flash protection. If you know how to disable the protection bit and still recover the flash, please leave a comment.

This dumps the firmware to the local directory you ran the openocd command from.

Unlock STM32 Flash (This unlocks the flash, but may wipe the firmware)

Next Steps and challenges

[Need to unlock firmware without overwriting] During the STM32 flash unlock process, I found the source firmware unreadable. In the ST-Link utility it appears as ASCII character 152 (ÿ). I have not been able to find a way to disable the memory protection and also save the firmware. It is possible (see the Cottonelle firmware retrieved here: https://github.com/dekuNukem/Amazon_Dash_Button/).

[3D Printers] OK, so after soldering about (4) buttons, I am getting better at it. The problem is, it's time-consuming and the likelihood of turning one into a paperweight is high. I want to create a 3D-printable model where you can insert header pins that will make contact with the (5) pins required to flash the DASH. Check out Adafruit's article above for more detail, but the basic pins are SWCLK, SWDATA, Vcc 3.3, Reset, and Ground. Here is a link to the image: https://learn.adafruit.com/assets/27092. Also, not sure why, but someone created a fake DASH button on Thingiverse; we may be able to use this model to start: http://www.thingiverse.com/thing:766551/#files

Help!

OK, so I need your help. In order to make Part 2 of this series, I need someone to continue where I left off. If anyone has successfully pulled the firmware and cares to share, please leave a comment below or hit me up @ubergiek.