I was pretty excited to get my Pioneer AVH-4100NEX head unit, which supports Android Auto. I installed it yesterday, and much to my disappointment, Android Auto doesn’t work. I banged my head against the wall for an hour or so before giving up. Today I called Crutchfield tech support (Crutchfield is awesome, by the way), and they informed me that Google hasn’t yet released the required apps. Apparently Google decided to hold the release back by a week, and that was on the 13th (of March, 2015). So, who knows, maybe I’ll actually be able to play with my shiny new toy on Friday. I’m not holding my breath, though, software schedules being what they are.

So, if you find yourself wondering why your brand new head unit doesn’t provide the one feature you bought the thing for, hopefully this clears things up for you. I know there were no search results when I went looking. 🙂

I am addicted to pinball. There is one pinball app for android, in particular, that I can’t stop playing. Unfortunately, playing on my Nexus 7 really cramps my thumbs. The solution? Well, make a game controller, of course!

Now, the downside to this particular game is that it does not support keyboard or joystick control options, so you are limited to touch events. Injecting touch events from one application to another is a no-no (there are obvious security issues associated with that). So, the only other option for creating a custom controller for this game is to emulate a mouse. Mouse clicks will register as touch events. The downside to this approach is that you can’t click your mouse in two places at once, which means you can’t have both flippers up at the same time… or can you? It turns out that if you are pressing on the left side of the screen (and hence the left flipper is up), then you can slide your finger to the right (which causes a release of the left flipper, and a press on the right), and then slide BACK to the left. When you do this last motion, both flippers will remain in the up position. Well, sometimes they both remain up. It’s actually a bit wonky, so I can only assume this is a bug and not a feature. However, we can definitely try to make that happen; it’s a simple matter of programming (SMOP).😉

To get started, I figured a bluetooth mouse would be best. So, I sourced some parts:

The arcade buttons are SPDT. Initially, I hooked them directly up to the microcontroller’s input pins, and boy did that cause a lot of confusion for my code! Needless to say, debouncing circuits are a must. To give you an idea of what the input looks like to the microcontroller without any debouncing circuitry in place, here is a screen grab from my logic analyzer during a button press:

Channels 1 and 2 are connected to NO and NC on the switch. Those transitions look pretty clean, right? Let’s take a closer look at the transition just after 1s:

Notice how both channels are showing a logic 1 for almost 4 ms! Clearly we need to fix that. Luckily, I stumbled upon this web site, with a very comprehensive explanation of debouncing: http://www.ganssle.com/debouncing-pt2.htm. Following the advice on that page, I constructed an SR Latch using two NAND gates.
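The latch’s behavior is easy to sanity-check in software before wiring it up. Here is a minimal C++ simulation of the cross-coupled NAND pair (inputs are active-low, matching pulled-up switch contacts; the struct and names are mine, not from the project code). It shows why a mid-travel bounce, where both contacts are momentarily open, leaves Q unchanged:

```cpp
// Cross-coupled NAND SR latch. s_n is wired to the switch's NO
// contact, r_n to the NC contact; both idle high (pulled up) and
// go low when grounded by the switch.
struct Latch {
    bool q = false, qn = true;

    void update(bool s_n, bool r_n) {
        // Let the pair settle; two passes are enough for a latch.
        for (int i = 0; i < 2; ++i) {
            q  = !(s_n && qn);
            qn = !(r_n && q);
        }
    }
};
```

With both inputs high (the bouncing, mid-travel case), each gate’s output is held by the other, so Q simply keeps its last value. That hold state is exactly what cleans up the 4 ms of chatter seen on the analyzer.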

NAND SR Latch

The output from the first gate (Q) is wired to the input pin on the microcontroller. After putting this circuitry in place, I no longer saw any bouncing from the arcade buttons. Success! Here is a schematic for the NAND gate:

And here is what the circuit looks like when breadboarded:

The connections on the BlueSMiRF are pretty straightforward. You have VCC, GND, RX-I, TX-O, RTS and CTS. I wired RTS directly to CTS, since the arduino doesn’t really support hardware flow control. RX-I goes to the arduino’s TX pin, and TX-O goes to the RX pin.

Next up, I had to configure the bluetooth modem. I chose to set it up as a mouse, and use raw packets to send position and button information to the host (my android tablet in this case). To set the device up, I had a rube goldberg-esque configuration of cables. Into the PC, I plugged a USB to serial adapter. From there, I have a Sparkfun serial breakout board (which shifts levels down from the serial port, but not back up from ttl). I connected the output from that into the BlueSMiRF breakout board (which is very tolerant of voltages, by the way). With all that in place, I was able to send serial commands from minicom (a serial communication program) to configure the radio.

The modem supports three authentication modes:

- keyboard I/O mode (a verification code is displayed on the host, which should match the code displayed on the device)
- Secure Simple Pairing (SSP), or “just works” mode
- PIN code mode

Since android was specifically called out in the documentation for keyboard I/O mode, I tried that first. It worked, but only the first time connecting to the device. Subsequent attempts to connect simply didn’t work. Clicking on the device name in the bluetooth settings on the tablet did not do anything. Literally, there was no feedback, no error message, no evidence that I had ever even pushed the button. I tried changing the HID profile of the device and changing the authentication mode to no avail. So, I hooked up adb and checked the logs:

How is that for a cryptic message? It comes from here in the sources, on the off chance that someone happens to be interested. I didn’t want to dig into the problem any further at that moment, so I scrounged through all of my android devices until I found one that worked: the galaxy tab 10.1.

For the enclosure, I wanted it to be roughly the same size as the front-end of a pinball cabinet. The most crucial dimension is the width, which comes in at either 22 or 24″, depending on the machine. I decided to give 24″ a try. The depth of the enclosure has to be large enough to accommodate the hardware, and the height must be such that it can fit the buttons. I settled on a front height of about 4″ and a depth of 8″. Because this is just a prototype to test the size for comfort, I did a real hack job on the enclosure, using whatever scraps I had lying around. It isn’t pretty, but it gets the job done. And, in fact, it is actually comfortable to use.

I left an opening in the back to get components in and out. Here you see the arduino and breadboard stuffed in there.

Now, on to the code. I didn’t want to implement a polling mode driver, as it is too easy to miss events, and the code isn’t quite as clean. Instead, I wired the outputs from the SR latches to pins 2 and 3 on my Arduino Uno, which are the external interrupt pins. Any change (press or release) on these pins will trigger an interrupt, and the registered interrupt service routine (ISR) will be called. In the ISR, I simply add the event to a ring buffer, and increment a counter indicating there is work to do in non-interrupt context. Then, inside the loop() function, I check this counter and, if it is non-zero, pull an event off of the ring for processing. In this way, we can ensure that events are processed in the order in which they were received. And given how little code executes in interrupt context, we can be fairly certain that we won’t miss events. One thing to be careful of, though, is disabling interrupts when checking any variable that will be accessed by the ISR.
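The ISR-to-loop() hand-off described above can be sketched as plain C++ (the struct and function names are mine; on the Arduino, push() runs in the ISR and the pop() side is bracketed with noInterrupts()/interrupts(), which is what makes the pending check safe):

```cpp
// A button press or release, as recorded by the ISR.
struct Event { unsigned char pin; bool pressed; };

template <unsigned N>          // N must be a power of two
struct RingBuffer {
    Event buf[N];
    // 'volatile' because these are shared between the ISR and loop().
    volatile unsigned head = 0, tail = 0, pending = 0;

    // Called from the ISR: record the event, flag work for loop().
    void push(Event e) {
        buf[head & (N - 1)] = e;
        ++head;
        ++pending;
    }

    // Called from loop(); on the Arduino, wrap the 'pending' check
    // in noInterrupts()/interrupts() so the ISR can't race it.
    bool pop(Event &out) {
        if (pending == 0) return false;
        out = buf[tail & (N - 1)];
        ++tail;
        --pending;
        return true;
    }
};
```

Because push() only appends and pop() only consumes from the other end, events come out in exactly the order they arrived, which is the property the flipper logic depends on.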

The prototype works, and is actually very usable. There is some lag, which I’ve grown used to, but new users seem to have a harder time with it. Naturally, I’d like that to go away, so I decided to see where that lag was coming from. In order to track it down, I decided to hook the logic analyzer up to several points in my circuit. First, the NC and NO switch connections, along with the Q output from the SR latch. This will tell how long it takes for a button press to be debounced. Then, I also wanted to see how long it would take to propagate the button press to the bluetooth modem, essentially measuring the overhead of my code. So, I put a probe on the RX pin on the BlueSMiRF breakout board. I also wired up TX, but that proved uninteresting, as the board never sends any serial data back to the microcontroller. This covers everything that I have direct control over. The result of a button press looks like this:

As you can see, I attached an “Async Serial” protocol analyzer to the “RX” channel, which I used to verify that the expected series of bytes is sent to the module. More importantly, though, we can see that it takes 2ms for the button press to make its way through the SR latch (the delay between NO going high and Q going high). After that, it takes a little under 40μs to start sending data to the bluetooth module. After another 1.1ms, the two packets have been sent to the modem, making for a total time of around 3ms for a button press event to be sent to the bluetooth modem. Based on this data, I think I can safely rule out my circuit and code as the source of the lag.
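The “two packets” on the wire are raw HID mouse reports: one with the button bit set, one with it cleared. A sketch of the report builder is below; the byte layout (0xFD marker, length, report ID 2 for mouse, button bitmask, then signed X/Y/wheel deltas) follows my reading of the RN-42 HID user’s guide, so treat it as an assumption and verify against your firmware revision:

```cpp
#include <cstdint>
#include <vector>

// Build a raw HID mouse report for the RN-42 in HID raw mode.
// Layout assumed from the RN-42 HID user's guide: 0xFD marker,
// payload length (5), report ID 2 (mouse), button bitmask,
// then signed X delta, Y delta, and wheel delta.
std::vector<uint8_t> mouseReport(uint8_t buttons, int8_t dx, int8_t dy,
                                 int8_t wheel = 0) {
    return { 0xFD, 0x05, 0x02, buttons,
             static_cast<uint8_t>(dx),
             static_cast<uint8_t>(dy),
             static_cast<uint8_t>(wheel) };
}
```

On the Arduino side, each report would just be streamed out with Serial.write(), which lines up with the ~1.1 ms of serial traffic visible on the RX channel.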

Given that I have no control over the lag (without digging into the bluetooth stack on my tablet, that is), and given that android support for the RN-42-HID seems so spotty, I think the next obvious step would be to implement the controller as a USB HID device. It turns out you can flash the arduino with USB HID firmware, discussed here. That means that the android device will have to support USB host mode, but that’s not a problem for most of the devices I own.

I just received a ball shooter assembly, and it looks like it will fit into the prototype cabinet!

My next update will hopefully see that installed and working. I think I’ll use an IR distance sensor to determine how far the plunger is pulled out, and translate those offsets to mouse click and drag events.

The source code can be found here. Leave a comment if there’s anything I didn’t cover here that you’d like to know (more) about.

I’d like to use a linear distance sensor in an upcoming project, but the commercially available sensors seem a bit overpriced (probably because they are targeted at precision machining applications). Given the parts I have on hand, it seemed worthwhile to try to make one out of just an led, a CdS sensor, a drinking straw, electrical tape and a microcontroller:

Pay no attention to the circuit on the left side of the breadboard; it’s unrelated.

A drinking straw is wrapped in electrical tape (to keep the ambient light out), and then a CdS sensor is inserted into one end:

The led is inserted into the other end of the straw. I had to use a 3mm led, since a 5mm wouldn’t fit. Then, as the led is moved inside the straw, the reading on the CdS sensor changes. A simple Arduino sketch shows a very approximate granularity of 48 units per inch (as reported by the ADC).
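Assuming the response is roughly linear over the straw’s length, converting a raw reading to a distance is a one-liner. The ~48 counts-per-inch slope (and any zero offset) are calibration values specific to my led/straw/sensor combination; the function name is mine:

```cpp
// Convert an ADC reading to an approximate distance in inches.
// countsPerInch (~48) and adcAtZero are calibration constants
// measured for this particular led/straw/CdS sensor assembly.
float adcToInches(int reading, int adcAtZero = 0) {
    const float countsPerInch = 48.0f;
    return (reading - adcAtZero) / countsPerInch;
}
```

In practice a CdS cell’s response is not perfectly linear, so a small lookup table of measured points would likely beat this single slope if more precision is needed.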

I followed this guide on the Arduino site for hooking up the CdS sensor.

In the end, I’m not sure whether this implementation will be practical for my use case. I’ll have to wait for more parts to arrive to determine whether the mechanics will work out. It was a fun little experiment, though! You can find more photos of the circuit here.

There’s a command prompt on http://creation.redbullusa.com, which got me wondering what sorts of commands it would accept. So, as any curious hacker would do, I started poking at the sources. That eventually led me to some .swf files, which I decompiled, and ended up finding these gems (in addition to the commands listed in the HELP menu) for you to enjoy:

First, the background. I bought the Nexus S when it first came out. I had come to rely on the notification LED on my previous phone, and this phone’s lack of one was quite annoying. So, I set out to fix my problem by writing NotificationPlus (source here), an Android app that provides recurring notifications via a ringtone and/or the vibrator.

There are a couple of things such an app has to be able to detect. First, it has to detect incoming events, such as SMS, missed calls, voicemail, email, etc. I’ll leave the problems with the Android API in that space for another post. In this post, we’ll focus on the second thing the app has to accomplish, and that is to tell whether or not the user is actively using the phone. Sounds simple, right? In my first take, I just checked to see if the screen was turned on. Surely, if the screen is unblanked, that means someone is looking at the phone, right? Wrong! Let’s explore the variety of ways the screen becomes unblanked, shall we?

- the user pushed the power button, either intentionally or not (think butt dialing)
- an application decided to unblank the screen, such as:
  - the phone app, which unblanks the screen for an incoming call
  - messaging apps, such as GoSMS, which unblank the screen when a message arrives
  - any other app may do this

As you can see, you can’t infer from the ACTION_SCREEN_ON intent that the screen was turned on by the user. Thus, it cannot be relied upon for determining when to disable the repeating notification. I’m sure many of the complaints in the Android Market comments of the form, “did not work, uninstalled,” boil down to this problem.

My next crack at fixing the problem was to utilize the very promising ACTION_USER_PRESENT intent. Surely this would be the ticket! Nope. Not even close. ACTION_USER_PRESENT is broadcast only if there is a lock screen enabled. So, if the lock screen preference is set to none, this intent is never broadcast. That sounds minor, as I doubt many people run this way. However, there is another, related problem. How do you determine when the user is no longer present? You would think that this sort of intent would have a complement, right? Like ACTION_USER_ABSENT (;-)) or maybe just ACTION_SCREEN_LOCKED. There is no such intent. Why does it matter? Well, when a lock screen, such as pattern lock or pin, is configured, the user has the option to delay locking after the screen has blanked. So, let’s say you’re reading a web page, and the screen blanks before you finish. You hit the power button, and the phone turns back on without requiring the unlock code (and hence without firing the ACTION_USER_PRESENT intent). If you were relying on a screen blank to tell when the user is no longer present, then you are screwed. Now, this would be ok, so long as there was a way to query the system preferences to tell what the lock delay was set to, but there isn’t.

So, that’s the end of my ranting. This basically means that, without asking the user a bunch of questions about their configuration, there is no way to have a one size fits all solution to this problem. I can write a ton of heuristics, but they are bound to fail for some corner case or another. All of this could be avoided if the android OS just provided a recurring notification option in the settings. Or, you know, they could fix the API.

In my most recent project, I put together a voice controlled iRobot Create using the Android ADK, an iRobot Create and my Nexus S. The Android speech recognition API takes care of listening for speech, determining when to end the speech input, and also sending the resulting recording off to “the cloud” for processing. In the end, what you get back is a list of possible matches (this isn’t an exact science, after all).

There are two ways to incorporate speech recognition into an application. In the first approach, your application fires an ACTION_RECOGNIZE_SPEECH intent using startActivityForResult. The results are obtained by defining an onActivityResult method in your class. As you can see in the Voice Recognition API demo, it is very simple to write an application using this interface! The problem I had with this approach is that there was too little control over the speech recognition error handling. Also, I really wanted the speech recognition to be running all of the time. So, in the end I decided to use the second approach, using the SpeechRecognizer directly in my code. This actually didn’t make the code all that much more complicated. As an added bonus, your application is not being paused and resumed in order to get the results from the speech recognition activity.

With the mechanics out of the way, the next thing I did was to create a list of voice commands. The list of speech recognition matches was compared against the command list. If there was a match, I added the entire list of matches to a hash table, storing the actual command as the value. Thus, any time a close match came up, it would be found in the hash table, with the entry being the (hopefully) intended command.

Now we have the name of a voice command. We could write another if/else statement to perform the appropriate function call for each of the commands, or we could do something a little fancier. Using reflection, I turned the command name into a method call. So, to implement the command “forward,” you simply have to add a method called forward to the class!

Now, it isn’t quite that slick. I still keep an if/else statement in order to get a match on the speech recognition results, and to store close matches in the hash table. I’ll have to experiment with removing that code to see how it fares.
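The matching-plus-dispatch logic described above has roughly this shape. This is a C++ sketch rather than the project’s actual Java: C++ has no reflection, so an explicit handler table stands in for the reflection call, and all the names here are illustrative:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

struct Robot {
    std::string lastCommand;
    void forward() { lastCommand = "forward"; }
    void stop()    { lastCommand = "stop"; }
};

// Known commands mapped to handlers. (In the Java version, the
// command name is turned into a method call via reflection instead.)
std::unordered_map<std::string, std::function<void(Robot&)>> commands = {
    { "forward", [](Robot &r) { r.forward(); } },
    { "stop",    [](Robot &r) { r.stop(); } },
};

// Cache mapping recognizer near-misses ("fore ward") to the
// command they resolved to, as described in the post.
std::unordered_map<std::string, std::string> closeMatches;

// Scan the recognizer's candidate list for a known (or previously
// cached) command; on a hit, cache every candidate and dispatch.
bool dispatch(const std::vector<std::string> &matches, Robot &robot) {
    for (const auto &m : matches) {
        const std::string *cmd = nullptr;
        auto cached = closeMatches.find(m);
        if (cached != closeMatches.end()) cmd = &cached->second;
        else if (commands.count(m))       cmd = &m;
        if (cmd) {
            for (const auto &alt : matches) closeMatches[alt] = *cmd;
            commands[*cmd](robot);
            return true;
        }
    }
    return false;
}
```

After the first successful match, even a candidate list containing only near-misses resolves immediately through the cache, which is the payoff of storing the whole match list.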

I recently created an instructable on hooking together the Android ADK, an iRobot Create, and (of course) an Android cell phone. The result is a voice-controlled robot, which you can find here. I also just uploaded the code for this project to google’s code repository. You can browse the sources here, or clone a copy using the following command:

hg clone http://adk-moto.googlecode.com/hg/ adk-moto

In future posts, I’ll walk through some of the code, explaining how the voice recognition is done, and why I structured things the way I did.