<h2>Understanding the Pascal GPU Instruction Pipeline</h2>
<p>There is lots of literature on instruction pipelines in CPUs, but GPUs remain poorly understood.
GPUs execute instructions in a wildly different manner, and many common compiler transformations that are effective for CPUs – such as strength reduction, or partial dead code elimination – can actually hurt GPU performance. Through the lens of NVidia’s latest series of GPUs, let’s take a look at how instructions are actually executed and how that affects us as programmers and compiler developers.</p>
<h3 id="the-nvidia-gp100">The NVidia GP100</h3>
<p>The latest GPU from NVidia is downright impressive. Each GPU contains 56 SMs, each capable of issuing 4 warp-instructions per cycle, for a monstrous theoretical 7168 instructions per cycle. This theoretical limit is, sadly, not reachable, due to a relatively meager set of functional units behind the massive pipeline front-end. Each SM is divided into 2 symmetric halves, each containing 2 warp dispatchers, 8 memory load/store units, 16 double-precision floating-point units, 8 special function units (SFUs), and 32 general-purpose cores.
Due to unit saturation, the actual maximum IPC is only 5376.</p>
<p>So why set up a GPU this way? The answer lies in the architecture, which can only rarely push instructions maximally through the front of the pipeline. Previous CUDA architectures often left many functional units idle, so the Pascal architecture reduced their number, instead adding additional SMs.</p>
<h3 id="mapping-the-grid">Mapping the Grid</h3>
<p>Before we can discuss instruction execution, we need to understand what happens when a CUDA kernel is invoked. When you invoke a kernel such as <code class="highlighter-rouge">kernel&lt;&lt;&lt;gridDim, threadDim, sMem, stream&gt;&gt;&gt;(d_ptr)</code>, the following things happen:</p>
<ol>
<li>If the stream provided is a blocking stream, we will wait for the stream to become idle before continuing</li>
<li>The CUDA binary is selected as follows:
<ol>
<li>If there is a cached binary for this kernel and compute architecture on the GPU, it is used.</li>
<li>If there is a binary included in the executable for this kernel and compute architecture, it is sent to the GPU cache, and used.</li>
<li>If there is PTX for this kernel in the executable, it is assembled by the CUDA runtime into a binary, sent to the GPU cache, and used.</li>
</ol>
</li>
<li>A compute work-unit is appended to the work queue for each unique grid index in gridDim</li>
<li>Streaming Multiprocessors (SMs) each accept work units until a resource limit is hit. Possible limits include the number of threads (2048), blocks (32), registers (65536), or available shared memory (64K).</li>
<li>The Streaming Multiprocessor assigns all required resources for all accepted threads, including registers and shared memory assignments. This allows for zero-overhead context switching, used later.</li>
</ol>
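<p>The resource-limit step in the list above can be made concrete with a toy model of how many blocks a single SM could accept. The limits are the Pascal numbers from the list; the block shape in the example is hypothetical:</p>

```python
# Toy model: an SM accepts blocks until one per-SM resource runs out.
def blocks_accepted(threads_per_block, regs_per_thread, smem_per_block):
    limits = {
        "threads": 2048 // threads_per_block,
        "blocks": 32,
        "registers": 65536 // (threads_per_block * regs_per_thread),
        "shared_mem": (64 * 1024) // smem_per_block if smem_per_block else 32,
    }
    # The tightest limit wins.
    return min(limits.values())

# Hypothetical kernel: 256-thread blocks, 32 registers/thread, 8K shared memory.
print(blocks_accepted(256, 32, 8 * 1024))  # 8 blocks per SM
```

<p>Here both the thread limit (2048 / 256) and the register limit (65536 / 8192) cap the SM at 8 blocks, so adding shared memory below 8K per block wouldn’t change anything.</p>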
<h3 id="the-pascal-pipeline">The Pascal Pipeline</h3>
<p>At this point, each SM is ready to execute instructions. SMs manage threads in <em>warps</em> of 32, and decode/issue/execute those groups simultaneously.</p>
<p><img src="/img/gpu/pascalsm.png" alt="GP100 Streaming Multiprocessor. Source: NVidia Pascal Architecture Whitepaper" /></p>
<p>Instructions flow from the instruction cache into the instruction buffer, are scheduled onto a dispatch unit, and then execute on the appropriate functional unit.
Like many modern CPUs, Pascal GPUs contain a dedicated instruction cache with an extremely high hit rate. Unlike on CPUs, when there is an instruction cache miss, the latency can often be hidden by other instructions already present in the instruction buffer.</p>
<p>The instruction buffer is where the SM first partitions into its two halves. At initialization time, threads are assigned to one half of the SM or the other. NVidia has never published exactly how this split is done, but we can assume something simple like even/odd warp IDs is used. The instruction buffer does not just hold instructions, but rather (instruction, warpID, threadMask) tuples. The warpID is used to calculate the offset into the register file for each thread within the warp, and the threadMask is used to specify threads that should not execute the current instruction.
Within the instruction buffer, entries are divided into ready and not-ready sets, where instructions in the ready set have all data available and are ready for execution.</p>
<p>For the moment, let’s step over the warp scheduler. It’s important, and we’ll get back to it, but we need to understand some other parts first. Let’s talk about dispatch. The dispatcher is responsible for ensuring threads get executed. It takes the given instruction, the warp ID, and the thread mask, and calculates the thread IDs that will actually execute the instruction. Using these thread IDs, it calculates absolute register addresses for each thread. Then, the dispatcher sends each thread-instruction to a free, applicable functional unit. If there are insufficient functional units, then the dispatch will queue the remaining thread-instructions for the following cycle.</p>
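<p>The dispatch bookkeeping just described can be sketched as follows. The warpID-major register layout and the per-thread register count are my assumptions; NVidia does not document the exact formula:</p>

```python
# Expand a 32-bit thread mask into executing lane IDs, then compute an
# absolute register-file offset for each thread (hypothetical layout).
REGS_PER_THREAD = 8   # assumed per-thread register allocation

def active_lanes(thread_mask):
    # Lanes whose bit is set execute the instruction.
    return [lane for lane in range(32) if (thread_mask >> lane) & 1]

def register_base(warp_id, lane, reg=0):
    # warpID-major layout: warp 0 lanes 0-31, then warp 1, and so on.
    thread_index = warp_id * 32 + lane
    return thread_index * REGS_PER_THREAD + reg

mask = 0x0000FFFF                        # lower half-warp executes
print(active_lanes(mask)[:4])            # [0, 1, 2, 3]
print(register_base(warp_id=1, lane=0))  # 256
```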
<p>So what do we have for functional units? The Pascal architecture defines 4 different types of addressable functional unit: load/store memory units, double-precision floating-point units, special function units (used for approximate transcendental functions, such as sqrt, sin, etc.), and “cores”, responsible for all remaining operations.</p>
<p>It’s at this point we reach the warp scheduler. The job of the warp scheduler is simple: each cycle, try to fill both dispatch units, and if you can’t, stall one or both of them. Complicating this job are some requirements:</p>
<ol>
<li>Sometimes, one or both of the dispatch units will still be busy.</li>
<li>Don’t dispatch an instruction for which no functional units are free.</li>
<li>If we dispatch 2 instructions, they must come from the same warp.</li>
</ol>
<p>Each cycle, the warp scheduler receives the number of available functional units and the number of free dispatchers, as well as the instructions in the ready set.
If both dispatchers are free, the warp scheduler preferentially selects a pair of sequential, independent warp instructions (assuming both instructions have free functional units).
Failing that, it stalls one of the dispatchers and selects a single instruction with functional units available. Notably, the warp scheduler does not attempt to maximize functional unit usage.</p>
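<p>That selection rule can be sketched as a toy scheduler. Representing instructions as (warp, sequence, independent, unit) tuples is my simplification, not NVidia’s actual format:</p>

```python
# Toy warp scheduler: prefer dual-issuing two back-to-back independent
# instructions from one warp; otherwise single-issue and stall a dispatcher.
def schedule(ready, free_units):
    # Dual issue: same warp, consecutive, independent, units free for both.
    for a in ready:
        for b in ready:
            if (a is not b and a[0] == b[0] and b[1] == a[1] + 1
                    and b[2] and free_units.get(a[3], 0) > 0
                    and free_units.get(b[3], 0) > 0):
                return [a, b]
    # Single issue: any ready instruction with a free unit.
    for a in ready:
        if free_units.get(a[3], 0) > 0:
            return [a]
    return []  # stall both dispatchers

ready = [(0, 4, True, "core"), (0, 5, True, "core"), (1, 9, True, "fp64")]
print(schedule(ready, {"core": 2, "fp64": 1}))  # dual-issues warp 0's pair
print(schedule(ready, {"core": 0, "fp64": 1}))  # falls back to warp 1
```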
<p>So far, we’ve discussed how thread masks are used, but not how they are generated. Thread masks are generated by the CUDA cores when a conditional branch is executed and threads within the warp evaluate the condition differently. You can imagine a <em>stack</em> of thread masks, where each divergent condition pushes an additional thread mask onto the stack. When some threads in a warp take a conditional branch, a mask is generated and pushed on top of the stack, and the not-taken path executes first. Masked execution continues until a compiler-injected <em>merge point</em> is reached. Then, the complement of the mask is generated and execution begins again from the taken branch target. When the merge point is reached again, the mask is popped off the stack.</p>
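<p>The mask arithmetic for a single divergent branch looks like this. It is a simplification of the mask-stack scheme: real hardware also tracks the reconvergence point for each entry:</p>

```python
# Mask arithmetic for one divergent branch inside a 32-thread warp.
WARP_MASK = 0xFFFFFFFF

def diverge(active_mask, taken_mask):
    # Lanes that did NOT take the branch execute first under one mask,
    # then the complement executes the taken path.
    not_taken_first = active_mask & ~taken_mask & WARP_MASK
    taken_second = active_mask & taken_mask
    return not_taken_first, taken_second

first, second = diverge(WARP_MASK, 0x000000FF)  # lanes 0-7 take the branch
print(hex(first))   # 0xffffff00
print(hex(second))  # 0xff
```

<p>Note that the two masks are disjoint and together cover the whole warp, which is exactly why divergence serializes the two paths.</p>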
<h3 id="key-differences-from-cpus">Key Differences from CPUs</h3>
<p>Some of the above sounds a little bit odd, but it’s worth pointing out how this system differs from traditional CPUs. The key idea behind the GPU architecture is clear: <em>We don’t care about instruction latency</em>.
GPUs can execute an instruction from any thread, and each functional unit is fully pipelined (within reason, the memory subsystem has a finite request buffer). This means GPUs don’t have to wait for an instruction to complete before moving on, they just do something else while they wait. As long as there’s other work to be done, the GPU is working.</p>
<p>Pascal GPUs do not attempt to do register renaming, or pass results backwards. There’s no branch prediction, and most of the addressable functional units are multipurpose, to reduce the required complexity on the warp scheduler and dispatcher. Memory access is SLOW, with cache hit times upwards of 50 cycles, and cache misses climbing above 500 cycles. The goal of the GPU is to always be doing something else to hide latency, rather than trying to reduce it.</p>
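<p>Little’s law gives a rough feel for how much parallelism that strategy demands: in-flight work must roughly equal issue rate times latency. The 4 warp-instructions/cycle issue rate is the GP100 figure from earlier; treating every instruction as a memory access is a worst-case assumption:</p>

```python
# In-flight warp-instructions needed to keep one SM's issue rate
# sustained while each instruction waits out its latency (Little's law).
def in_flight_needed(latency_cycles, issue_per_cycle=4):
    return latency_cycles * issue_per_cycle

print(in_flight_needed(50))   # 200 warp-instructions to cover a cache hit
print(in_flight_needed(500))  # 2000 to cover a cache miss
```

<p>No single warp supplies thousands of independent instructions, which is why the takeaways below stress both thread-level and instruction-level parallelism.</p>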
<h3 id="takeaways">Takeaways</h3>
<p>Aside from learning about a downright odd computing architecture, I hope you take some things away from this post. Here’s what I learned while writing this:</p>
<ol>
<li>In order to write performant code on Pascal GPUs, both thread-level and block-level parallelism must be extremely high.</li>
<li>Branch divergence within a warp can halve performance because both paths must be executed, but rephrasing the problem as arithmetic may not help if it consumes more functional units.</li>
<li>Using instructions that target previously idle functional units can unlock additional performance.</li>
<li>Even as GPUs emphasize massive thread-level parallelism, instruction-level parallelism is still required for full performance.</li>
</ol>
<p>If you made it this far, you’re probably way too interested in GPU architecture. I hope you learned something! Feel free to comment below, or subscribe to my RSS feed. Thanks!</p>
<h2>Connecting a Dumb TV to Google Home</h2>
<p>For Christmas, I received a Google Home and some Philips Hue lightbulbs. Within 15 minutes, I had every light in my apartment responding to voice controls!
Naturally rather than accepting that life is awesome and that I live in the future, I was immediately annoyed that I still had to use a remote to change inputs on my TV.
Here’s how I went about fixing that.</p>
<p><img src="/img/ir-home.jpg" alt="Google Home" /></p>
<p>The Google Home is an interface to the Google Assistant, which currently only supports integration through <a href="https://developers.google.com/actions/">Actions on Google</a>. At the moment, the only way to integrate as a random developer is to implement a Conversation Action, which means controlling my TV would sound a little bit like this:</p>
<p>Me - “Ok Google, I’d like to talk to my TV”</p>
<p>Home - “Here’s your TV!”</p>
<p>Me - “Let’s play Wii”</p>
<p>Home - “Setting your TV to Wii”</p>
<p>Me - “Exit”</p>
<p>Yikes, that’s not going to fly. Fortunately, Google has allowed IFTTT to implement Direct Actions, which let a developer specify special phrases to trigger an IFTTT recipe.
Even more fortunately, IFTTT <em>finally</em> implemented a Maker channel, which allows arbitrary web requests. With that part sorted, let’s look at how we can control the TV.</p>
<p>I ordered a USB IR blaster from <a href="http://www.iguanaworks.net">IguanaWorks</a>, and jumped at the chance to use one of my fancy new Raspberry Pi 3s. I loaded up a fresh copy of Raspbian, installed LIRC from the repository, copied in an LG remote profile from this <a href="http://lirc-remotes.sourceforge.net/remotes-table.html">list</a>, and tried my first command:</p>
<p><code class="highlighter-rouge">$ irsend send_once lg KEY_POWER</code></p>
<p><code class="highlighter-rouge">irsend: hardware does not support sending</code></p>
<p>Hmm. It turns out there’s quite a process left here, as detailed in the IguanaWorks <a href="http://www.iguanaworks.net/wiki/doku.php?id=usbir:gettingstarted">Getting Started</a> page. I had to build LIRC from source including the iguanaIR driver, and configure my hardware.conf. However with that finished, and the end of my IR blaster artfully taped over my TV’s IR sensor, I could toggle the TV power!</p>
<p>So let’s take stock here: we have a Raspberry Pi that can trigger IR commands on my TV, and the ability to call a website with a voice command. Clearly we’re missing some glue here. At this point there are many ways to solve the problem, the easiest of which is to open a hole in my router and let IFTTT call my Raspberry Pi directly. Since I’m a glutton for punishment, though, and have been interested in AWS for a while, let’s try to solve this problem with AWS IoT.</p>
<p>The plan is simple. I’ll set up an AWS IoT device to represent my TV, and have my Raspberry Pi subscribe to the device state, updating the actual TV whenever the state changes on AWS. Then I can have Amazon API Gateway invoke a Lambda function that updates the AWS IoT device.</p>
<h4 id="aws-iot">AWS IoT</h4>
<p>Configuring devices in AWS IoT is surprisingly simple for home use. Amazon pre-defines tons of <em>types</em> with pre-defined attributes, but for my purposes I really just want to track the current input and the current power state, so I created my own <em>Thing</em> without a type, with two attributes for input and power.</p>
<p><img src="/img/ir-home/aws-iot-attributes.png" alt="IoT configuration" /></p>
<p>Once the <em>Thing</em> (I called mine TV) is created, it will be visible on the AWS IoT dashboard.</p>
<p><img src="/img/ir-home/aws-iot.png" alt="IoT dashboard" /></p>
<p>AWS IoT manages the desired state of devices through what it calls <em>Shadows</em>. The shadow holds the most recently requested state, which the device updates itself to reflect at its next online check-in.
In order to update an IoT shadow from a lambda function, you must grant the lambda function a special IAM permission. I used the following inline IAM code to allow my lambda function to publish to my Amazon IoT devices:</p>
<div class="highlighter-rouge"><span class="p">{</span><span class="w">
</span><span class="nt">"Version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2012-10-17"</span><span class="p">,</span><span class="w">
</span><span class="nt">"Statement"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nt">"Effect"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Allow"</span><span class="p">,</span><span class="w">
</span><span class="nt">"Action"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"iot:Publish"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nt">"Resource"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"*"</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></div>
<p>And the following Python code is my lambda function itself. It contains a mapping from devices to inputs, and updates the input and power state accordingly. In addition, this lambda function takes a password key that it requires before updating any IoT state. This lambda function is going to be publicly accessible, and I don’t really want strangers fiddling with my TV!</p>
<div class="language-python highlighter-rouge"> <span class="kn">from</span> <span class="nn">__future__</span> <span class="kn">import</span> <span class="n">print_function</span>
<span class="kn">import</span> <span class="nn">boto3</span>
<span class="kn">import</span> <span class="nn">json</span>
<span class="k">print</span><span class="p">(</span><span class="s">'Loading function'</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">respond</span><span class="p">(</span><span class="n">err</span><span class="p">,</span> <span class="n">res</span><span class="o">=</span><span class="bp">None</span><span class="p">):</span>
<span class="k">return</span> <span class="p">{</span>
<span class="s">'statusCode'</span><span class="p">:</span> <span class="s">'400'</span> <span class="k">if</span> <span class="n">err</span> <span class="k">else</span> <span class="s">'200'</span><span class="p">,</span>
<span class="s">'body'</span><span class="p">:</span> <span class="n">err</span><span class="o">.</span><span class="n">message</span> <span class="k">if</span> <span class="n">err</span> <span class="k">else</span> <span class="n">json</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">res</span><span class="p">),</span>
<span class="s">'headers'</span><span class="p">:</span> <span class="p">{</span>
<span class="s">'Content-Type'</span><span class="p">:</span> <span class="s">'application/json'</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">}</span>
<span class="k">def</span> <span class="nf">inputMap</span><span class="p">(</span><span class="n">name</span><span class="p">):</span>
<span class="n">m</span> <span class="o">=</span> <span class="p">{</span>
<span class="s">"xbox"</span><span class="p">:</span> <span class="s">"HDMI1"</span><span class="p">,</span>
<span class="s">"chromecast"</span><span class="p">:</span> <span class="s">"HDMI2"</span><span class="p">,</span>
<span class="s">"netflix"</span><span class="p">:</span> <span class="s">"HDMI2"</span><span class="p">,</span>
<span class="s">"youtube"</span><span class="p">:</span> <span class="s">"HDMI2"</span><span class="p">,</span>
<span class="s">"wii"</span><span class="p">:</span> <span class="s">"HDMI3"</span><span class="p">,</span>
<span class="s">"nintendo"</span><span class="p">:</span> <span class="s">"AV1"</span><span class="p">,</span>
<span class="s">"n64"</span><span class="p">:</span> <span class="s">"AV1"</span>
<span class="p">}</span>
<span class="k">if</span> <span class="n">name</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">m</span><span class="p">:</span>
<span class="k">return</span> <span class="s">"HDMI2"</span>
<span class="k">return</span> <span class="n">m</span><span class="p">[</span><span class="n">name</span><span class="o">.</span><span class="n">lower</span><span class="p">()]</span>
<span class="k">def</span> <span class="nf">lambda_handler</span><span class="p">(</span><span class="n">event</span><span class="p">,</span> <span class="n">context</span><span class="p">):</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Received event: "</span> <span class="o">+</span> <span class="n">json</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">event</span><span class="p">,</span> <span class="n">indent</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
<span class="n">payload</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">loads</span><span class="p">(</span><span class="n">event</span><span class="p">[</span><span class="s">'body'</span><span class="p">])</span>
<span class="k">if</span> <span class="s">"password"</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">payload</span> <span class="ow">or</span> <span class="n">payload</span><span class="p">[</span><span class="s">"password"</span><span class="p">]</span> <span class="o">!=</span> <span class="s">"SECRET_CODE"</span><span class="p">:</span>
<span class="k">return</span> <span class="n">respond</span><span class="p">(</span><span class="ne">Exception</span><span class="p">(</span><span class="s">'Forbidden'</span><span class="p">))</span>
<span class="n">client</span> <span class="o">=</span> <span class="n">boto3</span><span class="o">.</span><span class="n">client</span><span class="p">(</span><span class="s">'iot-data'</span><span class="p">,</span> <span class="n">region_name</span><span class="o">=</span><span class="s">'us-east-1'</span><span class="p">)</span>
<span class="n">update</span> <span class="o">=</span> <span class="p">{</span>
<span class="s">"state"</span><span class="p">:</span> <span class="p">{</span>
<span class="s">"desired"</span><span class="p">:</span> <span class="p">{}</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="k">if</span> <span class="s">"input"</span> <span class="ow">in</span> <span class="n">payload</span><span class="p">:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Received input selection "</span> <span class="o">+</span> <span class="n">payload</span><span class="p">[</span><span class="s">"input"</span><span class="p">]</span> <span class="o">+</span> <span class="s">" which maps to "</span> <span class="o">+</span> <span class="n">inputMap</span><span class="p">(</span><span class="n">payload</span><span class="p">[</span><span class="s">"input"</span><span class="p">]))</span>
<span class="n">update</span><span class="p">[</span><span class="s">"state"</span><span class="p">][</span><span class="s">"desired"</span><span class="p">][</span><span class="s">"input"</span><span class="p">]</span> <span class="o">=</span> <span class="n">inputMap</span><span class="p">(</span><span class="n">payload</span><span class="p">[</span><span class="s">"input"</span><span class="p">])</span>
<span class="k">if</span> <span class="s">"power"</span> <span class="ow">in</span> <span class="n">payload</span><span class="p">:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Received power selection "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">payload</span><span class="p">[</span><span class="s">"power"</span><span class="p">])</span> <span class="p">)</span>
<span class="n">update</span><span class="p">[</span><span class="s">"state"</span><span class="p">][</span><span class="s">"desired"</span><span class="p">][</span><span class="s">"power"</span><span class="p">]</span> <span class="o">=</span> <span class="n">payload</span><span class="p">[</span><span class="s">"power"</span><span class="p">]</span>
<span class="c"># Change topic, qos and payload</span>
<span class="n">response</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="n">publish</span><span class="p">(</span>
<span class="n">topic</span><span class="o">=</span><span class="s">'$aws/things/TV/shadow/update'</span><span class="p">,</span>
<span class="n">qos</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">payload</span><span class="o">=</span><span class="n">json</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">update</span><span class="p">)</span>
<span class="p">)</span>
<span class="k">return</span> <span class="n">respond</span><span class="p">(</span><span class="bp">False</span><span class="p">,</span> <span class="s">""</span><span class="p">)</span>
</div>
<p>With that set up in AWS Lambda, it’s fairly trivial to set up an API Gateway proxy. Just be sure to deploy your API so that your changes are publicly visible!</p>
<h4 id="ifttt">IFTTT</h4>
<p><img src="/img/ir-home/ifttt.png" alt="IFTT" /></p>
<p>From there, I set up an IFTTT applet that listens for the magic words “let’s watch $”, “let’s play $”, or “turn the TV to $”. It fires a request at my API, which updates my IoT shadow.</p>
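<p>For reference, here is a sketch of the request body that applet sends. The field names match the lambda above, but the endpoint URL and the password are placeholders:</p>

```python
import json

def maker_body(spoken_input, password="SECRET_CODE"):
    # Body the IFTTT Maker applet POSTs; "input" carries the $ ingredient.
    return json.dumps({"password": password, "input": spoken_input})

body = maker_body("wii")
print(body)

# With requests (not run here), against a hypothetical API Gateway URL:
#   requests.post("https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/tv",
#                 data=body, headers={"Content-Type": "application/json"})
```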
<h4 id="setting-up-the-pi">Setting up the Pi</h4>
<p>So, we could carefully try to track the current state, and send the right number of power toggles and input selects, but that sounds like a recipe for disaster. Although they aren’t on your remote control, most TVs support idempotent IR commands like <em>POWER_ON</em>, <em>POWER_OFF</em>, and <em>INPUT_HDMI1</em>. The trick is simply finding them!
I downloaded the manual for my TV from LG.com, and read the section on IR codes. I learned 2 things:</p>
<ol>
<li>My TV definitely supports idempotent IR commands.</li>
<li>There is no way I’ll be able to interpret these codes.</li>
</ol>
<p>Enter the internet. A <a href="http://www.remotecentral.com/cgi-bin/mboard/rc-discrete/thread.cgi?7244">post on RemoteCentral.com</a> contains raw IR codes for all of the commands I need, and <a href="http://www.harctoolbox.org/IrScrutinizer.html">IRScrutinizer</a> can read these codes and output them for LIRC. If you, like me, have an LG TV from 2000-2013, these codes should work for you.</p>
<div class="highlighter-rouge">begin remote
name lg
bits 32
flags SPACE_ENC
eps 30
aeps 100
zero 573 573
one 573 1694
header 9041 4507
ptrail 573
repeat 9041 2267
gap 36000
repeat_bit 0
frequency 38400
begin codes
POWER_ON 0x20DF23DC
POWER_OFF 0x20DFA35C
AV1 0x20DF5AA5
AV2 0x20DF0BF4
COMP1 0x20DFFD02
COMP2 0x20DF2BD4
HDMI1 0x20DF738C
HDMI2 0x20DF33CC
HDMI3 0x20DF9768
end codes
end remote
</div>
<p>With this setup, I can now update my TV to a known state, even if I don’t know the current state. Win.
While I was testing though, I noticed that my TV will only respond to POWER requests while in standby. If I want to change the input, I have to wait until the TV is on. However, if the TV is already on, I’d like to change inputs immediately. So, when selecting for example, HDMI1, I send the following command:</p>
<div class="highlighter-rouge">$ irsend send_once lg HDMI1
$ irsend send_once lg POWER_ON
$ sleep 10
$ irsend send_once lg HDMI1
</div>
<p>Powering off is easier: I just send <code class="highlighter-rouge">$ irsend send_once lg POWER_OFF</code></p>
<p>Finally, the Pi needs to be connected to AWS IoT.
The AWS IoT system will build you a custom python SDK for each device, complete with access keys. I downloaded that, and then wrote the following small script to actually send the IR codes.</p>
<div class="language-python highlighter-rouge"><span class="k">def</span> <span class="nf">turnOff</span><span class="p">():</span>
<span class="n">return_code</span> <span class="o">=</span> <span class="n">subprocess</span><span class="o">.</span><span class="n">call</span><span class="p">(</span><span class="s">"irsend send_once lg POWER_OFF"</span><span class="p">,</span> <span class="n">shell</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">return_code</span> <span class="o">!=</span> <span class="mi">0</span><span class="p">:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Error turning off TV: "</span><span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">return_code</span><span class="p">))</span>
<span class="k">return</span> <span class="n">return_code</span>
<span class="k">def</span> <span class="nf">setInput</span><span class="p">(</span><span class="nb">input</span><span class="p">):</span>
<span class="n">return_code</span> <span class="o">=</span> <span class="n">subprocess</span><span class="o">.</span><span class="n">call</span><span class="p">(</span><span class="s">"irsend send_once lg "</span> <span class="o">+</span> <span class="nb">input</span> <span class="o">+</span>
<span class="s">"; irsend send_once lg POWER_ON; sleep 11; irsend send_once lg "</span> <span class="o">+</span> <span class="nb">input</span><span class="p">,</span> <span class="n">shell</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">return_code</span> <span class="o">!=</span> <span class="mi">0</span><span class="p">:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Error setting TV to "</span><span class="o">+</span><span class="nb">input</span><span class="o">+</span><span class="s">": "</span><span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">return_code</span><span class="p">))</span>
<span class="k">return</span> <span class="n">return_code</span>
<span class="c"># Custom Shadow callback</span>
<span class="k">def</span> <span class="nf">customShadowCallback_Delta</span><span class="p">(</span><span class="n">payload</span><span class="p">,</span> <span class="n">responseStatus</span><span class="p">,</span> <span class="n">token</span><span class="p">):</span>
<span class="c"># payload is a JSON string ready to be parsed using json.loads(...)</span>
<span class="c"># in both Py2.x and Py3.x</span>
<span class="k">print</span><span class="p">(</span><span class="n">responseStatus</span><span class="p">)</span>
<span class="n">payloadDict</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">loads</span><span class="p">(</span><span class="n">payload</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="s">"++++++++DELTA++++++++++"</span><span class="p">)</span>
<span class="k">if</span> <span class="s">"power"</span> <span class="ow">in</span> <span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">]:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"power: "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">][</span><span class="s">"power"</span><span class="p">]))</span>
<span class="k">if</span> <span class="s">"input"</span> <span class="ow">in</span> <span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">]:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"input: "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">][</span><span class="s">"input"</span><span class="p">]))</span>
<span class="k">print</span><span class="p">(</span><span class="s">"version: "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">payloadDict</span><span class="p">[</span><span class="s">"version"</span><span class="p">]))</span>
<span class="k">print</span><span class="p">(</span><span class="s">"+++++++++++++++++++++++</span><span class="se">\n\n</span><span class="s">"</span><span class="p">)</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">]</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s">"power"</span><span class="p">,</span> <span class="bp">True</span><span class="p">):</span>
<span class="n">turnOff</span><span class="p">()</span>
<span class="k">elif</span> <span class="s">"input"</span> <span class="ow">in</span> <span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">]:</span>
<span class="n">setInput</span><span class="p">(</span><span class="n">payloadDict</span><span class="p">[</span><span class="s">"state"</span><span class="p">][</span><span class="s">"input"</span><span class="p">])</span>
</div>
<p>The full file is available for download: <a href="/data/tvListener.py">tvListener.py</a></p>
<p>And there we have it! An end-to-end system for controlling my dumb LG TV with a Google Home. I’ve been using this for a little over a week, and it’s working great, though there is occasionally a multi-second delay before the TV responds. If you have any recommendations for how I should change things, let me know in the comments below, or contact me.</p>