I recently “completed” a home project that came right down to the wire. It was an unusual project in several respects. For one, the goal was to build an LED-laden helmet with sophisticated animation, something resembling the helmets worn by the two members of the music group Daft Punk. Another unusual thing about this project is that it had a rigid deadline, which home projects rarely have: I was building the helmet solely to wear during a once-a-year performance of a band I’m a member of. The whole thing also had to be completely portable.
Here’s a short video of the helmet in operation:
The base helmet is a snowmobile helmet purchased on eBay for $20, then spray painted. The main LEDs in the front of the helmet are made from strips of “neopixels.” These are strings of RGB LEDs that are individually addressable via a single-wire serial interface. The “ears” of the helmet are cut sections of PVC pipe with neopixel rings (similar to the strips) inserted. Inside the helmet, an Arduino Uno drives the ears, and another board called FadeCandy is used for “low level” drive of the strips. The FadeCandy board performs color dithering and interpolation from scene to scene, and it receives new scene information via a USB connection to a Raspberry Pi.
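For the curious, the Pi-to-FadeCandy hop can be sketched at the protocol level. FadeCandy’s host-side server (fcserver) accepts frames over Open Pixel Control, a tiny TCP format: a 4-byte header (channel, command, 16-bit big-endian payload length) followed by 3 bytes of RGB per pixel. Here’s a minimal sketch, assuming fcserver is running locally on its default port 7890; the pixel count and host are illustrative:

```python
import socket
import struct

OPC_SET_PIXELS = 0  # OPC command 0: "set 8-bit pixel colors"

def opc_packet(pixels, channel=0):
    """Build one Open Pixel Control message from a list of (r, g, b) tuples."""
    data = bytes(c for rgb in pixels for c in rgb)
    # 4-byte header: channel, command, 16-bit big-endian payload length
    return struct.pack(">BBH", channel, OPC_SET_PIXELS, len(data)) + data

def send_frame(host, port, pixels):
    """Push one frame to a running fcserver (default TCP port is 7890)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(opc_packet(pixels))

# A 64-pixel all-red frame: header bytes 00 00 00 C0, then 192 bytes of RGB
frame = [(255, 0, 0)] * 64
packet = opc_packet(frame)
```

A content-generation program, whatever language it’s written in, ultimately just has to produce these frames fast enough; the FadeCandy takes care of dithering and interpolation between them.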
The Raspberry Pi is running Linux, and its only purpose is to generate media content to be displayed on the LED strips. I’m a hardware engineer, and before starting this project I didn’t know the first thing about generating cool-looking content for LED strips. Sure, I could write simple C programs to make the LEDs fade from one color to the next, or do a chase down the strip of pixels, but I wanted the helmet to look more interesting than that! I knew from the beginning of the project that content generation would be one of the most important parts, and would also probably be one of the most difficult. This brings me back to the title of this post: Another Example of Where “Fail Quickly” Would Have Helped.
I always knew that in the final configuration, the content generation would be happening on a Raspberry Pi. However, because of the relative difficulty of developing on the Pi (at least from my inexperienced perspective), I decided to do my early development on a PC. Many examples are provided with the FadeCandy project, and I quickly latched on to using the Processing “language” to develop pixel content. Processing allows one to create content very easily. While I was going through the Processing tutorials, I can remember being very impressed that I was able to write a program in about 4 lines that allowed me to open a video in a window and alter individual pixels on the fly.
I researched early on whether or not I could get Processing up and running on the Pi. Indeed, you can, and it’s not even too difficult. However, my big mistake was not actually doing this early in development. Instead, I focused on writing interesting content-generation algorithms. The big problem showed up about 2 weeks before the deadline. I had finally assembled all of the hardware and was starting final integration. I got Processing running on the Pi, and ran one of my programs. Much to my chagrin, the frame rate of the scene generation was less than one frame per second. This made for some very ugly content on the pixel strips. Yes, Processing is impressive in that it allows you to do so much with so little code, but one of the reasons it can do that is that it’s built on layers and layers of abstraction, which also have to run when you execute a program. I should have realized this long before I did.
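The cheap insurance here would have been a throwaway benchmark on the Pi itself, back in week one. Even a sketch as small as the following, where the render function is just an illustrative stand-in for real scene generation, would have exposed the frame-rate problem months earlier:

```python
import time

def benchmark(render_frame, n_frames=200):
    """Measure achieved frames per second for a content-generation callable."""
    start = time.perf_counter()
    for t in range(n_frames):
        render_frame(t)
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

def render_frame(t):
    """Stand-in scene: 64 pixels fading between blue and red over time."""
    level = (t * 4) % 256
    return [(level, 0, 255 - level)] * 64

fps = benchmark(render_frame)
print(f"{fps:.1f} frames per second")
```

Run the same thing on the PC and on the Pi: a large gap between the two numbers is exactly the failure you want to meet in week one rather than two weeks before the deadline.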
After overclocking the Pi and several other failed efforts, I finally realized that I would have to move to a different solution. Because it was so late in the game, I had to resort to simply trying out all of the examples that came with the FadeCandy board and picking out something that looked close to what I was after. The Python examples ran at a decent clip, but the example content wasn’t very impressive. What I ended up with, and what you see in the video above, is compiled C. It runs lightning fast compared to the alternatives, and the example content generation looks pretty neat when it’s running.
If I had failed early, meaning if I had tried my notional solution on the final hardware from the start, I would have been in much better shape. Since I found out so late that Processing wouldn’t really work, I wasn’t able to port any of the content-generation code I had spent so much time on. Instead, I was left with only the example code that came with the FadeCandy. I was fortunate to find an example that was close to what I was after; I could have easily ended up with pixels chasing pixels along the strip.
“Fail Quickly” is a term I learned almost a year ago. It’s a great two-word mantra to keep in your mind, and it describes a design philosophy that says you should prototype early and find all the problems you didn’t think about initially. Then you can iterate quickly and get closer to the ideal solution. For me personally, I will have a new sub-bullet below Fail Quickly, and that will be: Fail Quickly on representative hardware. One other thing that’s great about this term is that it reminds you that you shouldn’t be afraid to fail. In fact, it tells you to go out and fail! But do it quickly, so you can get those failures out of the way early in your design cycle and move on to successes!