Saturday, November 7, 2015

DRSSTC Δt5: MIDI Streaming and Shutter Syncing

Until last week, I hadn't really touched my Tesla coil setup since I moved out to Seattle. Maybe because the next step was a whole bunch of software writing. As of Δt3, I had written a little test program that could send some basic frequency (resonant and pulse generation) and pulse shaping commands to the driver. But it was just for a fixed frequency of pulse generation and of course I really wanted to make a multi-track MIDI driver for it.

The number I had in mind was three tracks, to match the output capabilities of MIDI Scooter. While the concept was cool, the parsing/streaming implementation was flawed and the range of notes that you can play with a motor is kinda limited by the power electronics and motor RPM. So I reworked the parser and completely scrapped and rebuilt the streaming component of it. (More on that later.) Plus I did a lot of preliminary thinking on how best to play three notes using discrete pulses. As it turns out, the way that works best in most scenarios is also the easiest to code, since it uses the inherent interrupt prioritization and preemption capabilities that most microcontrollers have.


So despite my hesitation to start on the new software, it actually turned out to be a pretty simple coding task. It did require a lot of communication protocol code on both the coil driver and the GUI to support sending commands and streaming MIDI events at the same time, but it went pretty smoothly. I don't think I've ever written this many lines of code and had them mostly work on the first try. The result is a MIDI parser/streamer that I can finally be proud of. Here it is undergoing my MIDI torture test, a song known only as "Track 1" from the SNES game Top Gear.




The note density is so high that it makes a really good test song for MIDI streaming. I only wish I had even more tracks...


The workflow from .mid file to coil driver is actually pretty similar to MIDI Scooter's. First, I load and parse the MIDI, grouping events by track/channel. Then, I pick the three track/channel combinations I want to make up the notes for the coil. These get restructured into an array with absolute timestamps (still in MIDI ticks). The array is streamed wirelessly, 64 bytes at a time, to a circular buffer on the coil driver. The coil driver reports back which event index is currently playing, so the streamer can wait if the buffer is full.
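
Roughly, the host-side streaming loop boils down to something like this. It's just a sketch, not the actual code: the event layout, the buffer size, and the send_packet() / get_reported_play_index() helpers are all assumptions standing in for the real wireless protocol.

#include <stdint.h>

#define DRIVER_BUF_EVENTS 256   /* assumed size of the coil driver's circular buffer */

/* One pre-parsed MIDI event: absolute time in MIDI ticks, which of the
 * three note slots it targets, and the note itself (0 = note off). */
typedef struct {
    uint32_t tick;
    uint8_t  slot;       /* 0, 1, or 2 */
    uint8_t  note;       /* MIDI note number, 0 for note off */
    uint8_t  velocity;
    uint8_t  pad;
} midi_event_t;

uint32_t get_reported_play_index(void);              /* hypothetical: from driver status packets */
void send_packet(const void *data, uint32_t bytes);  /* hypothetical: one 64-byte wireless frame */

/* Stream the whole event array, stalling whenever the driver's buffer is nearly full. */
void stream_events(const midi_event_t *events, uint32_t count)
{
    uint32_t send_index = 0;
    while (send_index < count) {
        uint32_t play_index = get_reported_play_index();
        if (send_index - play_index >= DRIVER_BUF_EVENTS - 8) {
            continue;                                /* wait for the coil to catch up */
        }
        uint32_t n = 64 / sizeof(midi_event_t);      /* events per 64-byte frame */
        if (n > count - send_index) {
            n = count - send_index;
        }
        send_packet(&events[send_index], n * (uint32_t)sizeof(midi_event_t));
        send_index += n;
    }
}
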


On the coil driver itself, there are five timers. In order of interrupt priority:

  • The pulse timer, which controls the actual gate drive, runs in the 120-170kHz range and just cycles through a pre-calculated array of duty cycles for pulse shaping. It's always running, but it only sets duty cycles when a pulse is active. 
  • Then, there are three note timers that run at lower priority. Their rollover frequencies are the three MIDI note frequencies. When they roll over, they configure and start a new pulse and then wait for the pulse to end (including ringdown time). They're all equal priority interrupts, so they can't preempt each other. This ensures no overlap of pulses.
  • Lastly, there's the MIDI timer, which runs at the MIDI tick frequency and has the lowest interrupt priority. It checks for new MIDI events and updates the note timers accordingly. I'm actually using SysTick for this (sorry, SysTick) since I ran out of normal timers.
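
As a sketch of how that interrupt layering works out in practice (the names, table size, and hardware hooks below are made up; only the priority structure follows what's described above):

#include <stdint.h>
#include <stdbool.h>

void set_gate_duty(float duty);               /* hypothetical gate-drive PWM hook */
void apply_next_midi_events(uint32_t tick);   /* hypothetical: pop buffered events due at this tick */

volatile bool     pulse_active = false;
volatile uint32_t pulse_step = 0;
volatile uint32_t pulse_length = 0;           /* resonant periods in the current pulse */
volatile float    duty_table[64];             /* pre-calculated pulse shape, sized for the longest pulse */
volatile uint32_t note_pulse_length[3];       /* per-note pulse lengths (set by channel volume) */
volatile uint32_t midi_tick = 0;

/* Highest priority: the pulse timer, running at the resonant frequency
 * (120-170 kHz).  Always running, but it only drives the gates while a
 * pulse is active. */
void pulse_timer_isr(void)
{
    if (pulse_active) {
        set_gate_duty(duty_table[pulse_step]);
        if (++pulse_step >= pulse_length) {
            set_gate_duty(0.0f);
            pulse_active = false;             /* a real version would also pad for ringdown */
        }
    }
}

/* Middle priority: one of the three note timers, rolling over at its
 * note's frequency.  All three share a priority level, so they can't
 * preempt each other, and each spins until the previous pulse is done
 * (the higher-priority pulse timer clears pulse_active), so pulses
 * never overlap. */
void note_timer_isr(uint8_t slot)
{
    while (pulse_active) { }                  /* wait out the previous pulse */
    pulse_step = 0;
    pulse_length = note_pulse_length[slot];
    pulse_active = true;
}

/* Lowest priority: the MIDI tick timer (SysTick).  Advances the song
 * position and reprograms the note timers from the event buffer. */
void midi_tick_isr(void)
{
    midi_tick++;
    apply_next_midi_events(midi_tick);
}
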
There are three levels of volume control involved as well. Relative channel volume is set by configuring the pulse length (how many resonant periods each pulse lasts). But since the driver was designed to be hard-switched, I'm also using duty cycle control for individual note volume. And there is a master volume that scales all the duty cycles equally. All of this is controlled through the GUI, which can send commands simultaneously while streaming notes, as shown in the video.
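
For concreteness, the three layers might combine something like this (a sketch; the ranges and scaling here are assumptions, not the real firmware):

#include <stdint.h>

/* Channel volume sets how many resonant periods each pulse lasts. */
uint32_t pulse_length_for_channel(uint8_t channel_volume)          /* 0-127 */
{
    const uint32_t min_periods = 4, max_periods = 48;              /* assumed range */
    return min_periods + ((max_periods - min_periods) * channel_volume) / 127;
}

/* Note velocity and master volume both scale the pre-calculated duty table. */
float duty_scale_for_note(uint8_t velocity, float master_volume)   /* 0-127, 0.0-1.0 */
{
    return ((float)velocity / 127.0f) * master_volume;
}
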


It's really nice to have such a high level of control over the pulse generation. For example, I also added a test mode that produces a single long pulse with gradually ramped duty cycle. This allows for making longer, quieter sparks with low power...good for testing.
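
That test mode is really just a different pre-calculated duty table, along these lines (lengths and limits are illustrative only):

#include <stdint.h>

/* Fill the pulse-shaping table with a slow linear ramp for the
 * single-long-pulse test mode. */
void build_ramp_table(float *duty_table, uint32_t steps, float max_duty)
{
    if (steps < 2) {
        return;
    }
    for (uint32_t i = 0; i < steps; i++) {
        duty_table[i] = max_duty * (float)i / (float)(steps - 1);
    }
}
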

I also got to set up an experiment that I've wanted to do ever since I got my Grasshopper 3 camera. The idea is to use the global shutter and external trigger capabilities of the industrial camera to image a Tesla coil arc at a precise time. Taking it one step further, I also have my favorite Tektronix 2445 analog oscilloscope and a current transformer. I thought that it would be really cool to have the scope trace of primary current and the arc in the same image at the same time, and then to sweep the trigger point along the duration of the pulse to see how they both evolve.




The setup for this was a lot of fun.

Camera is in the foreground, taped to a tripod because I lost my damn tripod adapter.
Using a picture frame glass as a reflective surface with a black background (and a neon bunny!).
I knew I wanted to keep the scope relatively far from the arc itself, but I still wanted the image of the scope trace to appear near the spark and be in focus. So, I set up a reflective surface at a 45° angle and placed the scope about the same distance to the left as the arc is behind the mirror, so they could both be in focus. When imaged straight on, they appear side by side, but the scope trace is horizontally flipped, which is why the pulse progresses from right to left.


This picture is a longer exposure, so you can see the entire pulse on the scope. To make it sweep out the pulse and show the arc condition, I set the exposure to 20-50μs and had the trigger point sweep from the very start of the pulse to the end over successive pulses. So, each frame is actually a different pulse (which should be clear from the arcs being differently shaped), but the progression still shows the life cycle of the spark, including the ring-up period before the arc even forms.

The pulse timer fires the trigger at the right point in the pulse through a GPIO on the microcontroller. Luckily, the trigger input on the camera is already optocoupled, so it didn't seem to have any issues with EMI.
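
The trigger logic itself is tiny. Something like the following sketch (the GPIO call and sweep step are made up; the real version hangs off the pulse timer interrupt):

#include <stdint.h>
#include <stdbool.h>

extern volatile uint32_t pulse_step;       /* resonant periods elapsed in the current pulse */
extern volatile uint32_t pulse_length;
void camera_trigger_gpio(bool level);      /* hypothetical GPIO write to the GS3 trigger input */

static uint32_t trigger_offset = 0;        /* where in the pulse to fire the shutter */

/* Called from the pulse timer interrupt: fire the (optocoupled) camera
 * trigger when the pulse reaches the current offset. */
void camera_trigger_check(void)
{
    if (pulse_step == trigger_offset) {
        camera_trigger_gpio(true);         /* cleared again by lower-priority code */
    }
}

/* Called once per pulse: step the trigger point so successive frames
 * sweep through the whole life of the spark. */
void advance_trigger_offset(void)
{
    trigger_offset += 2;                   /* sweep rate is arbitrary here */
    if (trigger_offset > pulse_length) {
        trigger_offset = 0;
    }
}
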

Seeing the pulse shape and how it relates to arc length is really interesting. It might be useful for tuning to be able to watch the primary current waveform and arc images as different things are adjusted. No matter what, the effect is cool and I like that it can only really be done with old-school analog methods and mirrors (no smoke, yet).

Tuesday, September 22, 2015

GS3 / SurfaceCam Multipurpose Adapter

It's been a while since I've made anything purely mechanical, so I had a bit of fun this weekend putting together a multipurpose adapter for my Grasshopper 3 camera.


The primary function of the adapter is to attach the camera to a Microsoft Surface Pro tablet, which acts as the monitor and recorder via USB 3.0. I was going to make a simple adapter but got caught up in linkage design and figured out a way to make it pivot 180° to fold flat in either direction.



Some waterjet-cut parts and a few hours of assembly later, and it's ready to go. The camera cage has 1/4-20 mounts on top and bottom for mounting to a tripod or attaching accessories, like an audio recorder in this case. There's even a MōVI adapter for attaching just the Surface Pro 2 to an M5 handlebar for stabilizer use. (The camera head itself goes on the gimbal inside a different cage, if I can ever find a suitable USB 3.0 cable.)

Anyway, quick build, quick post. Here are some more recent videos I've done with the GS3 and my custom capture and color processing software.


Plane spotting at SeaTac using the multipurpose adapter and a 75mm lens (250mm equivalent).

Slow motion weed trimming while testing out an ALTA prototype. No drones were harmed in the making of this video.


Freefly BBQ aftermath. My custom color processing software was still a bit of a WIP at this point.

Sunday, January 18, 2015

Three-Phase Color

I was doing a bit more work on my DirectX-based .raw image viewer when I came across a nice mathematical overlap with three-phase motor control theory. It has to do with conversion from red/green/blue (RGB) to hue/saturation/lightness (HSL), two different ways of representing color. Most of the conversion methods are piecewise-linear, with max(), min(), and conditionals to break up the color space. But motors are round and color wheels are round, so I figured I'd try applying a motor phase transform to [R, G, B] to see what happens.


The transform of interest is the Clarke transform, which converts a three-phase signal into two orthogonal components (α and β) and a zero-sequence component (γ) that is just the average of the three phases. In motor control with symmetric three-phase signals, γ is usually zero. Applied to [R, G, B], it's just the intensity, one measure of lightness.

In motor control, it's common to find the phase and magnitude of the vector defined by α and β, for example to determine the amplitude and electrical angle of sinusoidal back EMF in a PMSM. It turns out the phase and magnitude are useful in color space as well, representing the hue and saturation, respectively. It might not adhere exactly to the definitions of those terms, but rather than rambling on about hexagons and circles, I'll just say it's close enough for me. (The Wikipedia article's alternate non-hexagon hue (H2) and chroma (C2) calculation is exactly the Clarke transform and magnitude/phase math.)
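
Written out, it's just the (amplitude-invariant) Clarke transform followed by a rectangular-to-polar conversion, which is exactly what the shader below computes:

\[
\begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix}
=
\begin{bmatrix}
  \tfrac{2}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} \\
  0 & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{3}} \\
  \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3}
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix},
\qquad
H = \operatorname{atan2}(\beta, \alpha), \quad
S = \sqrt{\alpha^{2} + \beta^{2}}
\]
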

So I added this hue and saturation adjustment method to the raw viewer's pixel shader:






I'm particularly happy about the fact that it occupies barely 15 lines of HLSL code:

// Clarke Transform Color Processing:
// Forward Clarke transform: treat R, G, B as three "phases".
// alpha/beta span the color plane; gamma is the average (intensity).
c_alpha = 0.6667f * tempcolor.r - 0.3333f * tempcolor.g - 0.3333f * tempcolor.b;
c_beta = 0.5774f * tempcolor.g - 0.5774f * tempcolor.b;
c_gamma = 0.3333f * tempcolor.r + 0.3333f * tempcolor.g + 0.3333f * tempcolor.b;
// Rectangular to polar: phase is hue, magnitude is saturation.
c_hue = atan2(c_beta, c_alpha);
c_sat = sqrt(pow(abs(c_alpha), 2) + pow(abs(c_beta), 2));
// Apply the user's saturation and hue adjustments, then go back to rectangular.
c_sat *= saturation;
c_hue += hue_shift;
c_alpha = c_sat * cos(c_hue);
c_beta = c_sat * sin(c_hue);

// Inverse Clarke transform: back to R, G, B.
tempcolor.r = c_alpha + c_gamma;
tempcolor.g = -0.5f * c_alpha + 0.8660f * c_beta + c_gamma;
tempcolor.b = -0.5f * c_alpha - 0.8660f * c_beta + c_gamma;

I doubt it's the most computationally efficient way to do it (with the trig and all), but it does avoid a bunch of conditionals from the piecewise methods. And as I mentioned in the last post, the pixel shader is far from the performance bottleneck of the viewer right now.

Updated HLSL Source: debayercolor.fx

Updated Viewer Source (VB 2012 Project): RawView_v0_2.zip
Built for .NET 4.0 64-bit. Requires the SlimDX SDK.

And for fun, here's some 150fps video of a new kitchen appliance I just received and hope to put to good use soon: