MMXVIII: the year in review

Image of a performance of Sympathy at SubZERO, with noise and smoke.
Performing at SubZERO Festival in June. Photo by Lisa Teng

It’s been a prolific year for Lucidbeaming: multimedia art by Joshua Curry. It began with a new art studio and finished up with a host of winter projects.

The main theme has been expansion. I took my music and found ways to incorporate performance and sculptural elements. The video work has been scaled up to building size and fed into monitors for physical effect. I pushed my limits on public interaction by making 9(!) appearances with a booth at SubZERO/First Fridays.

Personally, I’ve found new creative friendships and nurtured existing ones. I have no interest in doing any of this alone, even though my studio life is very private. It’s just more interesting to find other people also putting their energy into something non-commercial, independent, and fucking weird.

The Citadel Studio

I had to give up my apartment at the beginning of the year. Instead of trying to find another (expensive) combined live/work spot, I took the plunge and leased an art studio. It turned out to be a good decision because my creative environment has been stable while the sleep spots have come and gone.

It has generous storage up top and a separate room for music production. My whole workflow and process has grown because of the space. I feel very fortunate to have this.

Panorama of my art studio at the Citadel Complex in downtown San Jose, CA
Just moved in.
Back wall of the studio with video projection
Video projection and staging area.

Wolves

I have a thing for wolves, especially wolves living at the Chernobyl nuclear disaster site. This work is the beginning of a long-term project about wolves that requires “vectorized” images of wolves in motion.

Making these digital drawings involves a variety of new skill-sets and hardware for me. I have worked with animators and graphic designers who have experience digitizing images and working with stylus devices, but never had much opportunity to dive in myself.

I couldn’t afford a high-end Wacom tablet or iPad Pro, but I did find an older tablet/laptop hybrid at my aunt’s house one Thanksgiving. She used it for teaching before her retirement. When it was new, hybrid tablet/PCs were novel and sounded great, in theory.

When I got it, the battery was dead and Windows 7 had been locked by security and update issues. I got a new battery and installed Linux Ubuntu. Setup was not flawless, but it has ended up working fine (including all the stylus/touchscreen features).

To do the rotoscoping of video footage, I exported all the video frames with ffmpeg and then used Inkscape to draw over the top of them. So far, so good. It’s time consuming and manual work, but meditative and interesting.
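For anyone curious about the mechanics, the frame export is a one-liner. A minimal sketch, assuming a clip named wolf-clip.mov and a 12fps export rate (both placeholders):

mkdir -p frames
ffmpeg -i wolf-clip.mov -vf fps=12 frames/wolf_%04d.png

Each numbered PNG then gets opened in Inkscape and traced by hand.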

Rotoscoping wolf motion with an old laptop
Rotoscoping wolves on a tc4400 running Inkscape on Ubuntu.

Critters gets reviewed on Badd Press

I released my second full-length album, Critters, late last year. To promote it, I used more organic methods than with my first album, Spanner. Basically, I sent it out to a lot of blogs that cover ambient and experimental music. It’s tough to cut through the volume of submissions they get. One of the people who did respond was Kevin Press at Badd Press.

It was strange but gratifying to read his review when it got posted early this year. For many years, I worked at an alt-weekly newspaper in Charleston, SC and saw lots of bands and artists try to get reviewed or covered. I also saw lots of them get worked up about the reviews. I admit to feeling a little nervous about what he might say. His review was thoughtful and generous.

Cover image for the album Critters

Multimedia artist and experimental composer Joshua Curry in San Jose, California can lay claim to a unique accomplishment. His November release Critters is its own language. It is unlike anything we’ve heard. Mixing recordings of wildlife at sunset with synthesizers and a genuinely unique approach to composition, Curry has produced a phenomenal 15-track album.

— Kevin Press, Badd Press

Read his full review of Critters on the Badd Press website.

Critters on college radio

The first time I heard my music on the radio was in April of this year. I was on my way to the DMV to handle the smog certification for my vehicle. On the radio was KFJC, a local college radio station that has a huge broadcast reach in this valley. I heard a song that sounded really familiar and after a few seconds I realized it was mine. I got chills.

It was such a rad feeling to catch it on the radio at random. A month before, I had packed up 40 or so custom CDs of my album Critters and shipped them out to college radio stations across the U.S. and Canada. So much was going on at that time that I didn’t follow up to see if any of the stations played it.

A stack of Critters CDs ready to ship out to radio stations.
CDs ready to be shipped to college radio stations.

After some web searches later that night I found that lots of stations had picked it up and put it into regular rotation. I didn’t even know. KALX in Berkeley, WNYU in New York, KFJC here in San Jose, CISM in Montreal, KBOO in Portland and many more had been playing songs from Critters.

Radio station charts for Critters from KALX, WNYU, and KFJC
Songs from Critters on playlist charts from KALX, WNYU, and KFJC.

It’s hard to say what the tangible impact of the airplay really is, though. My Bandcamp sales and Spotify streams had bumps in their numbers, but not a huge amount.

One thing I can say is that I’ve learned the entire process of making music and getting it on the air: recording and post-production, mastering and export for streaming and CD masters, online distribution, building radio mailing lists, packaging, UPC labeling, shipping, and verifying airplay.

That experience will probably come in handy in the future.

Neuroprinter

Well, it was an interesting failure.

Built with the SubZERO festival in mind, I thought Neuroprinter might be an interesting sculpture for people to interact with at an outside festival. I was able to complete it in time for the festival, but rushed through some of the fabrication and it showed.

The original idea was to build a back projection box for flickering film loops. It grew into a memory machine that took the process of memory imprinting and visualized it as a sci-fi prop. The final presentation lacked context and connection, but I learned a lot about the processes to execute the individual stages.

Although it wasn’t meant to be a piece of clean, hi-tech sculpture, the metalwork ended up being too rough and poorly supported. I intended a patched-together kind of aesthetic, but it was too much.

People thought it was cool, but it required way too much explanation to survive as any kind of sculptural object. I have since dismantled the piece, but have plans for the components as individual pieces.

Animated GIF of the 8mm film projection
Clip of the projection.

MaChinE

This was a sleeper of a project that had been on my mind for years. Back in 2002, I made a Flash-based drum machine/sampler using scanned machine parts and sounds from circuit bent toys. It was produced for the E.A.A.A. (Electronic Arts Alliance of Atlanta) annual member show and lived on as a lonely website on an obscure server.

Screenshot of MaChinE

I always thought it would be cool to build a kiosk for people to use it. Over the years, Flash was eventually phased out and my plans to port it to HTML5 were always deferred to something shinier and newer.

At surplus electronics stores this year, I noticed that they were dumping fairly nice flat-screen VGA monitors for peanuts. I picked one up and found some wire screen and miscellaneous junk to build an object base. It runs on a Raspberry Pi with an old version of Flash.

Tech folks see it as a novelty and laugh when I tell them it was made with Flash. Kids love it though, and I’m glad to see it out in the world with people playing with it.

Machine at SubZERO
The standalone piece on display at SubZERO.

Noise toys

Last year I built two Raspberry Pi-based synthesizers using ZynAddSubFX and Fluidsynth. I still use them to make music, but they are more software than hardware. They sound great, but don’t have external controls for LFO or filter changes.

Recent efforts are more tactile and simple. With more outboard effects and amplifiers available in the studio, I’ve focused more on basic sound generators and sequencers/timers. One of the noisier ones is a Velleman KA02 Audio Shield I picked up locally. It has some timing quirks that I took advantage of to generate some great percussive noise.

The memo recorder is cycling through a bunch of short recordings from a police scanner.
Getting close to permanent installation on the little Kustom amp I have.
I made some new patches for ZynAddSubFX so the Raspberry Pi synth I made was more relevant to the music I actually make. This is a rhythmic glitch sound coming from an arpeggiation.

Krusher and Sympathy

Countdown timer for performance of Sympathy
Krusher on the left and Sympathy on the right, metal sculptures for noise performance.

Built from steel pipes, heavy duty compression springs, and contact mics, these metal sculptures are primal noise instruments. The smaller one, Krusher, was the first version. I wanted to build a kind of pile driver drum machine. After considering mechanical means of driving it, I had more fun just playing the damn thing through a cheap amp.

The tall one, Sympathy, came later and with more contact mics attached. After playing them together, an idea for a performance was born.

Instagram clip: Crusher feedback test #noise #metal #diy

SubZERO

The view from inside my booth at SubZERO.

For the past couple of years, SubZERO Festival and subsequent South FIRST FRIDAYS have become primary destinations for the kind of work I’m doing. It’s a great chance to gauge reaction to the work and motivation to finish projects.

It can also be nerve wracking and challenging. This year I chose an ambitious timeline and also debuted three distinct pieces and performances at the same time. In the end it all worked out, but things got pretty stressful towards the last minute. I had to take shortcuts with execution and I wasn’t happy with some of the consequences of those compromises.

Looks like something from Sanford and Son.
Projected imagery coming from the side of the booth.

The peak of the festival for me (and all of 2018, really) was the performance of the sculptures I had made, in a piece I simply called Sympathy. It was loud, intense, and had tons of multi-colored smoke. I did two cycles, one on each night of the festival. I also did one last performance in October, at the end of First Fridays.

Main performance of Sympathy on June 2 in downtown San Jose, CA.

80s skating

Back in the late 80s, I was living in south San Jose and was a skateboarder along with most of my friends. It was a huge part of my life, and my first professional work as a photographer was produced during that time. I went on to be a professional photographer and multimedia artist for the next 30 years.

Taking advantage of the foot traffic during Cinequest this year, I picked four skating photographs from 1988-1990 and had them printed as large scale posters. I chose images of Tony Henry, Brandon Chapman, Tommy Guerrero, and Jun Ragudo.

I talked to Bob Schmelzer, owner of Circle-A skateshop in downtown San Jose about hanging them in his windows temporarily. He was totally cool about it and the photos were seen by hundreds during that time.

I left the posters at his shop and when he finished some work on the back wall, he re-installed them to face 2nd Street. They are still there now and I’m stoked to walk downtown and see them hanging.

One of the images of Tommy Guerrero was seen by Jim Thiebaud of Real Skateboards. He asked if he could use it on a limited edition deck to raise money for medical costs of the family of Refuge Skateshop. Of course, I said yes. They all sold out and the Refuge family got the funds. I also managed to snag a Real deck with my photo on it. Fucking rad.

Noise and Waffles

A couple of years ago I went to Norcal Noisefest in Sacramento. At that time, most of my exposure to live experimental music was around San Francisco and was electronic and tech oriented.

After seeing some videos of booked performers, I knew I had to check it out. I went for two days and had a mind blowing experience. I had never seen that level of pure volume and abstraction. It was more metal than any metal show I had seen.

Most importantly, I was impressed by the community. The noise scene around there is one of the last refuges of true experimental sound without institutional gatekeepers. Keeping everything together was Lob Instagon, the festival organizer.

When I got back to San Jose, my whole musical world was upside down because of that festival. I started to explore a much heavier side of sound. I also wanted to have something to perform live that wasn’t centered around a laptop or screen.

After building Krusher and Sympathy, I posted some video of me playing it that eventually got back to Lob. He reached out and invited me to perform at one of the weekly Sacramento Audio Waffle shows he runs at the Red Museum in Sacramento.

I was stoked to say yes. The show was a lot of fun and I liked the other groups that played. Also, I got to hear Sympathy on a substantial P.A. with big bass cabinets. That shit rumbled the roof.


Poster for Sacramento Audio Waffle #47

Cassette

One of the things I noticed at Norcal Noisefest was how little they cared about online distribution. Lots of tapes on tables and even some vinyl releases. CDs were there, but not as many as cassettes.

A limitation of the live performance at SubZERO was the lack of powerful amps to drive the bass tones. Lots of sub-75 Hz tones get generated by the steel pipes and springs.

So, I made some full range recordings of both and ran them through a little EQ and compression. Here is the tape of that effort, inspired by the noise heads in Sacramento. Fun to make.

Self-published cassette of recordings of Sympathy performances.

Teleprofundo

Having my own studio space has expanded the scale I can work in. The back wall gets used regularly for video projection experiments. Most of what I do with projection is pretty old school. I don’t use VJ tools or Final Cut or Adobe Premiere for this.

It’s just a few cheap office projectors, an old Canon Hi-8 camera for feedback, and a variety of video source footage. Now with the 8mm film projector, I can add even more footage to the mix.

I found an interesting source of public domain film footage, the New York Public Library. Their online archives are outstanding.

Recently, I picked up some monochrome security monitors and have been running all kinds of feedback and low-fi video signals through them.

Mixing stock film footage from 50s Hollywood
Multiple layers of feedback
Monochrome composite monitors chaining camera feedback

Macroglitch

While trying to find smooth ways of converting 24fps video to 30fps, I stumbled across niche online communities that are into high frame rates. I was looking for simple but high-fidelity frame interpolation; they are into generating slow-motion and high-fps videos of video games.

One of the most interesting tools I found is Butterflow. It uses a combination of ffmpeg and OpenCV to generate impressive motion interpolated frame generation. Things got really interesting when I started running short, jumpy, and abstract video clips through the utility.

Below is a video clip I shot of a thistle from two inches away, at 24fps. With Butterflow and ffmpeg, I stretched it out more than 10X. It’s kind of like Paulstretch for video. The line effect is from a sobel filter in ffmpeg.

butterflow -s a=0,b=end,spd=0.125 in.mov --poly-s 1.4 -ff gaussian -r 29.97 -v -o out.mp4
An early test using a thistle.
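For reference, the edge treatment is just ffmpeg’s built-in sobel video filter. A rough sketch, with placeholder filenames:

ffmpeg -i stretched.mp4 -vf "format=gray,sobel,format=yuv420p" -c:v libx264 lines.mp4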

Since then I’ve expanded this project in many directions. I’ve set up all kinds of table top macro video shots with plants, dead insects, shells, electronics, and more.

Generating so much stretched footage has taken days of rendering and filled terabytes of space. One of the first finished pieces I made was this music video for the song Aerodome. The audio waveforms were generated with ffmpeg.
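The waveform renders come from ffmpeg’s showwaves filter. Something along these lines, with placeholder filenames and dimensions:

ffmpeg -i aerodome.wav -filter_complex "[0:a]showwaves=s=1280x720:mode=line,format=yuv420p[v]" -map "[v]" -map 0:a -c:v libx264 -c:a aac waveform.mp4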

Short form abstract video

When I released Spanner, I found out how tricky it is to deal with audio on social media. Sharing things with SoundCloud kept the interest trapped in the SoundCloud ecosystem and people rarely visited my website or Bandcamp page. I used Audiograms for a while, but didn’t like the aesthetic.

So, I loaded the raw audio files on my phone and started using clips in 60 second videos I would make with video apps. That was two years ago. Since then, I’ve made around 200 videos for social, mostly Instagram.

I try to keep them unique and don’t use the presets that come with the apps. A lot of the videos represent multiple generations through multiple apps in different workflows. Also, most of the recent videos have custom audio tracks I make with soft synths and granular sample manglers.

I get asked how I make them all the time. So, here are all the “secrets”.

I start out with geometric images or video clips, like buildings or plants or something repetitive. Most of the time I capture in slow-motion. Then I import clips or images into LumaFusion and crop them square and apply all kinds of tonal effects like monochrome, hi/low key, halftone. For static images, I’ll apply a rotation animation with x/y movement.

Then I’ll make some audio in Fieldscaper, Synthscaper, Filtatron, or use a clip of one my songs. That gets imported into a track in LumaFusion. Then I trim the clip so it’s just below 60 seconds, which is the limit for Instagram and useful for others.

After exporting at 720×720, I open it in Hyperspektiv, Defekt, or maybe TiltShift Video. I pick a starting transformation and then export it, bringing it back into Lumafusion or maybe running it through more effects.

That process gets repeated a few times until I end up with something I like or the clip starts to get fried from too much recompression. The key is to keep working until I get something distinctive and not an iTunes visualizer imitation.

It’s funny that I have people who follow all these little videos and don’t realize I do all kinds of more substantial work. But, I’m glad to have something people enjoy and they are fun to make.

Where is Embers?

Embers is alive and well at Kaleid gallery in downtown San Jose, CA. It’s been there for a while now and I still enjoy going by the gallery to watch people interact with it.

The future is uncertain though. It’s a fairly large piece and made of lots of rice paper. I hope to find a permanent home for it this coming year.


Next year

I’m not really a goal-oriented planner. Most of my life and creative work is process-oriented. I learn from doing, and often there is something finished at the end of it.

I hope the upcoming year offers more chances to learn, get loud, and work with like-minded folks.

Running Fluidsynth on a Raspberry Pi Zero W

One of the reasons I’ve spent so much time experimenting with audio software on Raspberry Pis is to build standalone music sculpture. I want to make machines that explore time and texture, in addition to generating interesting music.

The first soft synth I tried was Fluidsynth. It’s one of the few that can run headless, without a GUI. I set it up on a Pi 3 and it worked great. It’s used as a basic General MIDI synthesizer engine for a variety of packages and even powers game soundtracks on Android.

This video is a demo of the same sound set used in this project, but on an earlier iteration using a regular Raspberry Pi 3 and a Pimoroni Displayotron HAT. I ended up switching to the smaller Raspberry Pi Zero W and using a webapp instead of a display.

The sounds are not actually generated from scratch, like a traditional synthesizer. It draws on a series of predefined sounds collected and mapped in SoundFonts. The .sf2 format was made popular by the now defunct Sound Blaster AWE32 sound card that was ubiquitous on 90s PCs.

Back then, there was a niche community of people producing custom SoundFonts. Because of that, development in library tools and players was somewhat popular. Fluidsynth came long after, but benefits from the early community work and a few nostalgic archivists.

The default SoundFont that comes with common packages is FluidR3_GM. It is a full General MIDI set with 128 instruments and a small variety of drum kits. It’s fine for building a basic keyboard or MIDI playback utility. But, it’s not very high fidelity or interesting.

What hooked me was finding a repository of commercial SoundFonts. That site has an amazing collection of 70s-90s synths in SoundFont format, including Jupiter-8, TB-303, Proteus 1/2/3, Memory Moog, and an E-MU Modular. The E-MU Modular sounds pretty rad and is the core of the sound set I put together for this. They’re all cheap and I picked up a few to work with. The sound is excellent.

Raspberry Pi Zero W

For this particular project, I ended up using a Raspberry Pi Zero W for its size and versatility. Besides running Fluidsynth, it also serves up a Node.js webapp over wifi for changing instruments. It’s controllable by any basic USB MIDI keyboard and runs on a mid-sized USB battery pack for around 6 hours. Pretty good for such a tiny footprint and it costs around $12.

Setting it up

If you want to get something working fast or just want to make a kid’s keyboard, setup is a breeze.

After configuring the Pi Zero and audio:

sudo apt-get install fluidsynth

That’s it.

But, if you want more flexibility or interactivity, things get a bit more complex. The basic setup is the same as what I laid out in my ZynAddSubFX post.

Download Jessie Lite and find a usable Micro SD card. The following is for Mac OS. Instructions for Linux are similar and Windows details can be found on the raspberrypi.org site.

Insert the SD card into your computer and find out what designation the OS gave it. Then unmount it and write the Jessie Lite image to it.

diskutil list

/dev/disk1 (external, physical):
   #:                       TYPE NAME       SIZE       IDENTIFIER
   0:     FDisk_partition_scheme           *8.0 GB     disk1
   1:             Windows_FAT_32 NO NAME    8.0 GB     disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

Pull the card out and reinsert it. Then, add two files to the card to make setup a little faster and skip a GUI boot.

cd /Volumes/boot
touch ssh

sudo nano wpa_supplicant.conf

Put this into the file you just opened.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
}

Put the card in the Pi Zero and power it up, then configure the box with raspi-config. One trick I learned was not to change the root password and expand the file system at the same time. I’m not sure what the problem is, but often it corrupts the ssh password to do both at the same time.

Update the Pi:

sudo apt-get update
sudo apt-get upgrade

Fluidsynth needs a higher thread priority than the default, so I use the same approach as setting up Realtime Priority. It might be overkill, but it’s consistent with the other Pi boxes I set up. Add the user “pi” to the group “audio” and then set expanded limits.

Pi commands

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

The file should be empty. Add this to it.

@audio - rtprio 80
@audio - memlock unlimited

If you’re not using an external USB audio dongle or interface, you don’t need to do this. But, after you hear what the built-in audio sounds like, you’ll want something like this.

sudo nano /boot/config.txt

Comment out the built-in audio driver.

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

sudo nano /etc/asound.conf

Set the USB audio to be default. It’s useful to use the name of the card instead of the stack number.

pcm.!default {
    type hw
    card Device
}
ctl.!default {
    type hw
    card Device
}
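The name to use (here, “Device”) is whatever ALSA shows in the square brackets when you list the cards:

cat /proc/asound/cards
# e.g.  0 [Device         ]: USB-Audio - USB Audio Device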

Reboot and then test your setup.

sudo reboot

aplay -l

lsusb -t

speaker-test -c2 -twav

A voice should speak out the left and right channels. After verifying that, it’s time to set up Fluidsynth.

The reason I compile it from the git repo is to get the latest version. The version in the default Raspbian repository used by apt-get is 1.1.6-2. The latest is 1.1.6-4. The reason we need this is Telnet.

That’s right, Fluidsynth uses Telnet to receive commands and as its primary shell. It’s a classic text based network communication protocol used for remote administration. Think Wargames.
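It also means you can poke at a running instance by hand. A quick sketch, assuming the default shell port of 9800:

telnet localhost 9800
fonts
inst 1
select 0 1 0 4

fonts lists the loaded SoundFonts, inst 1 lists the instruments in SoundFont 1, and select works the same way as in the config file further down.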

Telnet

But, there’s a bug in the standard package that causes remote sessions to get rejected in Jessie. It’s been addressed in the later versions of Fluidsynth. I needed it to work to run the web app.

Grab the dependencies and then compile Fluidsynth. It’s not complicated, but there are some caveats.

sudo apt-get install git libgtk2.0-dev cmake cmake-curses-gui build-essential libasound2-dev telnet

git clone git://git.code.sf.net/p/fluidsynth/code-git

cd code-git/fluidsynth
mkdir build
cd build
cmake ..
sudo make install

The install script misses a key path definition that aptitude usually handles, so I add it manually. It’s needed so libfluidsynth.so.1 can be found. If you see an error about that file, this is why.

sudo nano /etc/ld.so.conf

Add this line:

/usr/local/lib

Then:

sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib

Now we need to grab the default SoundFont. This is available easily with apt-get.

sudo apt-get install fluid-soundfont-gm

That’s it for Fluidsynth. It should run fine and you can test it with a help parameter.

fluidsynth -h

Now to install Node.js and the webapp to change instruments with.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back into an ssh session. That makes nvm available.

nvm install v6.10.1

Grab the webapp from my repo and install it.

git clone https://github.com/lucidbeaming/Fluidsynth-Webapp.git fluidweb

cd fluidweb

npm install --save

Find the IP address of your Pi on your local network. Visit <ip address> port 7000 on any other device.

http://192.168.1.20:7000
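If you’re not sure what the address is, the quickest check from the Pi itself is:

hostname -I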

If Fluidsynth isn’t running, it will display a blank page. If it is running, it will dynamically list all the available instruments. This won’t be much of a problem once the launch script is set up. It launches Fluidsynth, connects any keyboards attached through ALSA, and launches the webapp.

Create the script and add the following contents. It’s offered as a guideline and probably won’t work if copied and pasted. You should customize it according to your own environment, devices, and tastes.

sudo nano fluidsynth.sh
#!/bin/bash

if pgrep -x "fluidsynth" > /dev/null
then
    echo fluidsynth already flowing
else
    fluidsynth -si -p "fluid" -C0 -R0 -r48000 -d -f ./config.txt -a alsa -m alsa_seq &
fi

sleep 3

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
    aconnect 'Arturia MINILAB':0 'fluid':0
    echo MINIlab connected
elif [[ $mpk ]]
then
    aconnect 'MPKmini2':0 'fluid':0
    echo MPKmini connected
elif [[ $mio ]]
then
    aconnect 'mio':0 'fluid':0
    echo Mio connected
else
    echo "No known midi devices available. Try aconnect -l"
fi

cd fluidweb
node index.js
cd ..

exit

Note that I included the settings -C0 -R0 in the Fluidsynth command. Those turn off chorus and reverb, which saves a bit of processor power; the built-in effects don’t sound good anyway.

Now, create a configuration file for Fluidsynth to start with.

sudo nano config.txt
echo "Exploding minds"
gain 3
load "./soundfonts/lucid.sf2"
select 0 1 0 0
select 1 1 0 1
select 2 1 0 2
select 3 1 0 3
select 4 1 0 4
select 5 1 0 5
select 6 1 0 6
select 7 1 0 7
select 8 1 0 8
select 10 1 0 9
select 11 1 0 10
select 12 1 0 11
select 13 1 0 12
select 14 1 0 13
select 15 1 0 14
echo "bring it on"

The select command chooses instruments for various channels.

select <channel> <soundfont> <bank> <program>

Note that channel 9 is the drumkit.

To get the launch script to run on boot (or session), it needs to have the right permissions first.

sudo chmod a+x fluidsynth.sh

Then, add the script to the end of .bash_profile. I do that instead of other options for running scripts at boot so that fluidsynth and node.js run as a user process for “pi” instead of root.

sudo nano .bash_profile

At the end of the file…

./fluidsynth.sh

Reboot the Pi Zero and when it gets back up, it should run the script and you’ll be good to go. If you run into problems, a good place to get feedback is LinuxMusicians.com. They have an active community with some helpful folks.

Raspberry Pi Zero W in a case

Here’s another quick demo I put together. Not much in terms of my own playing, haha, but it does exhibit some of the sounds I’m going for.

 

Embers: a breath-powered interactive installation celebrating collaboration

Photo by Jerry Berkstresser

It started with incendiary memories: looking at a fading bonfire with friends at the end of a good day, stoking the fire in a pot belly stove, and watching Haitian women cooking chicken over a bed of coals.

I wanted to build something with modern technology that evoked these visceral feelings and associations. Without using screens or typical presentations, the goal was to create an artwork that a wide variety of people could relate to and connect with. It also had to be driven by their own effort.

The initial work began at the Gray Area Foundation for the Arts in San Francisco, during the 2017 Winter Creative Code Immersive. I was learning the mechanics of building interactive art and was looking for a project to bridge my past experience with modern tools.

In January, I travelled to Washington D.C. to photograph the Women’s March and the Presidential Inauguration. They were very different events, but I was struck by the collective effort that went into both. Ideological opposites, they were still the products of powerful collaboration.

When I got back, I heard a lot of fear and anxiety. I had worked in Haiti with an organization called Zanmi Lakay and it had blossomed into effectiveness through group collaboration. I wanted to harness some of that energy and make art that celebrated it.

Embers was born. The first glimpses came from amber hued blinking LEDs in a workroom at Gray Area. 4 months later, the final piece shimmered radiantly in front of thousands of people at the SubZERO art festival in San Jose, CA. In the end, the project itself was a practical testament to collaboration grounded in its conceptual beginnings.

Building the Prototype

For the Gray Area Immersive Showcase, I completed a working study with 100 individually addressable LEDs, 3 Modern Device Wind Sensors (Rev. C), an Arduino Uno, and 100 hand folded rice paper balloons as diffusers. I worked on it alone at my house and didn’t really know how people would respond to it.

When it debuted at the showcase, it was a hit. People were drawn to the fading and evolving light patterns and were delighted when it lit up as they blew on it. I didn’t have to explain much. People seemed to really get it.

The Dare

In early May, I showed a video clip of the study to local gallery owner Cherri Lakay of Anno Domini. She surprised me by daring me to make it for an upcoming art festival called SubZERO. I hesitated, mostly because I thought building the prototype had already been a lot of work. Her fateful words: “You should get all Tom Sawyer on it.”

So, a plan gestated while working on some music for my next album. It was going to be expensive and time consuming and I wasn’t looking forward to folding 1,500 rice paper balloons. A friend reminded me about the concept of the piece itself, “isn’t it about collaboration anyway? Get some people to help.”

I decided to ask 10 people to get 10 more people together for folding parties, with the goal of coming up with 150 balloons at each party. I would give a brief speech and demo the folding. The scheme was simple enough, but became a complex web of logistics I hadn’t counted on.

In the end, it turned out to be an inspiring and fun experience. 78 people helped out in all, with a wide range of ages and backgrounds.

Building Embers

The prototype worked well enough that I thought scaling it up would just be a matter of quantity. But, issues arose that I hadn’t dealt with in the quick-paced immersive workshop. Voltage stabilization and distribution, memory limitations, cost escalation, and platform design were all new challenges.

The core of the piece was an Arduino Mega 2560, followed by 25 strands of 50-count WS2811 LEDs, 16 improved Modern Device wind sensors (Rev. P), and 300 ft. of calligraphy grade rice paper. Plenty of trips to Fry’s Electronics yielded spools of wire in many gauges, CAT6 cabling for the data squids, breadboards, and much more.

My living room was transformed into a mad scientist lab for the next month.

Installation

Just a few days before SubZERO, my house lit up in an amber glow. The LED arrays were dutifully glittering and waning in response to wavering breaths. The power and data squids had been laid out and the Arduino script was close to being finished.

I was confident it would work and was only worried about ambient wind at that point. A friend had built a solid platform table for the project and came over the day of the festival to pick up the project. We took it downtown and found my spot on First St. After unloading and setting up the display tent, I began connecting the electronics.

After a series of resource compromises, I had ended up with 1,250 LEDs and around 1,400 paper balloons. The balloons had to be attached to each LED by hand and that took a while. I tested the power and data connections and laid out the sensors.

Winding the LED strands in small mounds on the platform took a long time and I was careful not to crush the paper balloons. It was good to have friends and a cousin from San Luis Obispo for help.

Lighting the Fire

I flipped the switches for the Arduino assembly, the LED power brick, and then the sensor array. My friends watched expectantly as precisely nothing happened. After a half hour of panicked debugging, it started to light up but with all the wrong colors and behavior. It wasn’t working.

I spent the first night of the two day festival with the tent flap closed, trying to get the table full of wires and paper to do what I had intended. It was pretty damn stressful. Mostly, I was thinking about all the people who had helped and what I’d tell them. I had to get it lit.

Around 10 minutes before midnight (when the festival closed for the night), it finally began to glow amber and red and respond to wind and breath. Around 10 people got to see it before things shut down. But, it was working. I was so relieved.

It turns out that a $6.45 breakout board had failed. It’s a tiny chip assembly that ramps up the voltage for the data line. I can’t recommend the SparkFun TXB0104 bi-directional voltage level translator as a result. The rest of what I have to say about that chip is pretty NSFW.

I went home and slept like a rock.

The next day was completely different. I showed up a bit early and turned everything on. It worked perfectly throughout the rest of the festival.

People really responded to it and I spent hours watching people laugh and smile at the effect. They wanted to know how it worked, but also why I had made it. I had some great conversations about where it came from and how people felt interacting with it.

It was an amazing experience and absolutely a community effort.

Photo by Jerry Berkstresser

Photo by Joshua Curry

Thanks to all the people and organizations that helped make this a reality:

Grey Area Art Foundation for the Arts, Anno Domini, SubZERO, Diane Sherman, Tim, Brooklyn Barnard, Anonymous, Chris Carson, Leila Carson-Sealy, Cristen Carson, Jonny Williams, Michael Loo, Elizabeth Loo, Kieran Vahle, Jasmina Vahle, Peter Vahle, Kilty Belt-Vahle, Sara Vargas, Sydney Twyman, Annie Sablosky, Martha Gorman, Nancy Scotton, Melody Traylor, Morgan Wysling, Bianca Smith, Susan Bradley, Jen Pantaleon, Guy Pantaleon, Carloyn Miller, Paolo Vescia, Amelia Hansen, Maddie Vescia, Natalie Vescia, Cathi Brogden, Evelyn Lay Snyder, Alice Adams, Lisa Sadler-Marshall, Gena Doty Sager, Mack Doty, Mary Doty, James W. Murray, Greg Cummings, Vernon Brock, Jerry Berkstresser, Lindsey Cummings, Kyle Knight, Liz Hamm, Rebecca Kohn, Shannon Knepper, John Truong, DIane Soloman, Stephanie Patterson, Robertina Ragazza, Sarah Bernard, Jarid Duran, Deb Barba, Astrogirl, Tara Fukuda, CHristina Smith, Yumi Smith, NN8 Medal Medal, Gary Aquilina, Pamela Aquilina, Dan Blue, Chris Blue, Judi Shade, Dave Shade, Margaret Magill, Jim Magill, Brody Klein, Chip Curry, Jim Camp, Liz Patrick, Diana Roberts, Connie Curry, Tom Lawrence, Maria Vahle Klein, Susan Volmer, Jana Levic

 

Joshua Curry is on Instagram, Twitter, and Facebook as @lucidbeaming

Setting up a Raspberry Pi 3 to run ZynAddSubFX in a headless configuration

Most of my music is production oriented and I don’t have a lot of live performance needs. But, I do want a useful set of evocative instruments to take to strange places. For that, I explored the options available for making music with Raspberry Pi minicomputers.

The goal of this particular box was to have the Linux soft-synth ZynAddSubFX running headless on a battery powered and untethered Raspberry Pi, controllable by a simple MIDI keyboard and an instrument switcher on my phone.

Getting things to run on the desktop version of Raspbian and ZynAddSubFX was pretty easy, but stripping away all the GUI and introducing command line automation with disparate multimedia libraries was a challenge. Then, opening it up to remote control over wifi was a rabbit hole of its own.

But, I got it working and it sounds pretty amazing.

Setting up the Raspberry Pi image

I use Jessie Lite because I don’t need the desktop environment. It’s the same codebase without a few bells and whistles. When downloading from raspberrypi.org, choose the torrent for a much faster transfer than getting the ZIP directly from the site. These instructions below are for Mac OS X, using Terminal.

diskutil list

/dev/disk1 (external, physical):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:        FDisk_partition_scheme                        *8.0 GB     disk1
1:                 DOS_FAT_32 NO NAME                 8.0 GB     disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

After the image gets written, I create an empty file on the boot partition to enable ssh login.

cd /Volumes/boot
touch ssh

Then, I set the wifi login so it connects to the network on first boot.

sudo nano wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
 }

The card gets removed from the laptop and inserted into the Pi. Then, after it boots up I go through the standard setup from the command line. The default login is “pi” and the default password is “raspberry”.

sudo raspi-config

[enable ssh,i2c. expand filesystem. set locale and keyboard.]

After setting these, I let it restart when prompted. When it comes back up, I update the codebase.

sudo apt-get update
sudo apt-get upgrade

Base configuration

Raspberry config for ZynAddSubFX

ZynAddSubFX is greedy when it comes to processing power and benefits from getting a bump in priority and memory resources. I add the default user (pi) to the group “audio” and assign the augmented resources to that group, instead of the user itself.

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

...
@audio - rtprio 80
@audio - memlock unlimited
...

The Raspbian version of Jessie Lite has CPU throttles, or governors, set to conserve power and reduce heat from the CPU. By default, they are set to “on demand”. That means the voltage to the CPU is reduced until general use hits 90% of CPU capacity. Then it triggers a voltage (and speed) increase to handle the load. I change that to “performance” so that it has as much horsepower available.

This is done in rc.local:

sudo nano /etc/rc.local
...
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
...

Note that it gets set for all four cores, since the Raspberry Pi is multi-core. For more info about governors and even overclocking, this is a good resource.

Virtual memory also needs to get downgraded so there is little swap activity. Zynaddsubfx is power hungry but doesn’t use much memory, so it doesn’t need VM.

sudo /sbin/sysctl -w vm.swappiness=10
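That only lasts until the next reboot. To make it stick, the usual approach is to add the same setting to /etc/sysctl.conf as well:

sudo nano /etc/sysctl.conf

vm.swappiness = 10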

Now, to set up the audio interface. For my ZynAddSubFX box, I use an IQaudio Pi-DAC+. I’ve also used a standard USB audio interface and have instructions for that in my post about the Pi Zero. Raspbian uses Device Tree overlays to load I2C, I2S, and SPI interface modules. So, instead of separate drivers to install, I just edit config.txt to include the appropriate modules for the Pi-DAC+. Note that I also disabled the crappy built-in audio by commenting out “dtparam=audio=on”. This helps later on when setting the default audio device used by the system.

sudo nano /boot/config.txt

...

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

dtoverlay=i2s-mmap
dtoverlay=hifiberry-dacplus

...

For Jack to grab hold of the Pi-DAC+ for output, the default user (pi) needs a DBus security policy for the audio device.

sudo nano /etc/dbus-1/system.conf

...
<!-- Only systemd, which runs as root, may report activation failures. -->
<policy user="root">
<allow send_destination="org.freedesktop.DBus"
    send_interface="org.freedesktop.systemd1.Activator"/>
</policy>
<policy user="pi">
    <allow own="org.freedesktop.ReserveDevice1.Audio0"/>
</policy>
...

Next, ALSA gets a default configuration for which sound device to use. Since I disabled the built-in audio earlier, the Pi-DAC+ is now “0” in the device stack.

sudo nano /etc/asound.conf

pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}

sudo reboot

Software installation

ZynAddSubFX has thick dependency requirements, so I collected the installers in a bash script. Most of it was lifted from the Zynthian repo. Download the script from my Github repo to install the required packages and run it. The script also includes rtirq-init, which can improve performance on USB audio devices and give ALSA some room to breathe.

wget https://raw.githubusercontent.com/lucidbeaming/pi-synths/master/ZynAddSubFX/required-packages.sh

sudo chmod a+x required-packages.sh

./required-packages.sh

Now the real meat of it all gets cooked. There are some issues with build optimizations for SSE and Neon (incompatible with ARM processors), so you’ll need to disable those in the cmake configuration.

git clone https://github.com/zynaddsubfx/zynaddsubfx.git
cd zynaddsubfx
mkdir build
cd build
cmake ..
ccmake .
[remove SSE parameters and NoNeonplease=ON]
sudo make install

It usually takes 20-40 minutes to compile. Now to test it out and get some basic command line options listed.

zynaddsubfx -h

Usage: zynaddsubfx [OPTION]

-h , --help                      Display command-line help and exit
-v , --version                   Display version and exit
-l file, --load=FILE             Loads a .xmz file
-L file, --load-instrument=FILE  Loads a .xiz file
-r SR, --sample-rate=SR          Set the sample rate SR
-b BS, --buffer-size=BS          Set the buffer size (granularity)
-o OS, --oscil-size=OS           Set the ADsynth oscil. size
-S , --swap                      Swap Left <-> Right
-U , --no-gui                    Run ZynAddSubFX without user interface
-N , --named                     Postfix IO Name when possible
-a , --auto-connect              AutoConnect when using JACK
-A , --auto-save=INTERVAL        Automatically save at interval (disabled with 0 interval)
-p , --pid-in-client-name        Append PID to (JACK) client name
-P , --preferred-port            Preferred OSC Port
-O , --output                    Set Output Engine
-I , --input                     Set Input Engine
-e , --exec-after-init           Run post-initialization script
-d , --dump-oscdoc=FILE          Dump oscdoc xml to file
-u , --ui-title=TITLE            Extend UI Window Titles

The web app

Webapp to switch ZynAddSubFX instruments

I also built a simple web app to switch instruments from a mobile device (or any browser, really). It runs on Node.js and leverages Express, Socket.io, OSC, and Jquery Mobile.

First, a specific version of Node is needed and I use NVM to grab it. The script below installs NVM.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back in to have NVM available to you.

nvm install v6.10.1

My Node app is in its own repo. The dependencies Express, Socket.io, and OSC will be installed with npm from the included package.json file.

git clone https://github.com/lucidbeaming/ZynAddSubFX-WebApp.git
cd ZynAddSubFX-WebApp
npm install

Test the app from the ZynAddSubFX-WebApp directory:

node index.js

On a phone/tablet (or any browser) on the same wifi network, go to:

http://<IP address of the Raspberry Pi>:7000

Image of webapp to switch instruments

You should see a list of instruments to choose from. It won’t do anything yet, but getting the list to come up is a sign of initial success.

Now, for a little secret sauce. The launch script I use is from achingly long hours of trial and error. The Raspberry Pi is a very capable machine but has limitations. The command line parameters I use come from the best balance of performance and fidelity I could find. If ZynAddSubFX gets rebuilt with better multimedia processor optimizations for ARM, this could change. I’ve read that improvements are in the works. Also, this runs Zynaddsubfx without Jack and just uses ALSA. I was able to get close to RTprio with the installation of rtirq-init.

#!/bin/bash

export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket

if pgrep zynaddsubfx
then
    echo Zynaddsubfx is already singing
    exit 0
else
    zynaddsubfx -U -A=0 -o 512 -r 96000 -b 512 -I alsa -O alsa -P 7777 -L "/usr/local/share/zynaddsubfx/banks/Choir and Voice/0034-Slow Morph_Choir.xiz" &
    sleep 4

    if pgrep zynaddsubfx
    then
        echo Zyn is singing
    else
        echo "Zyn blorked. Epic Fail."
    fi
fi

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
    aconnect 'Arturia MINILAB':0 'ZynAddSubFX':0
    echo Connected to MINIlab
elif [[ $mpk ]]
then
    aconnect 'MPKmini2':0 'ZynAddSubFX':0
    echo Connected to MPKmini
elif [[ $mio ]]
then
    aconnect 'mio':0 'ZynAddSubFX':0
    echo Connected to Mio
else
    echo "No known midi devices available. Try aconnect -l"
fi

exit 0

I have 3 MIDI controllers I use for these things and this script is set to check for any of them, in order of priority, and connect them with ZynAddSubFX. Also, I have a few “sleep” statements in there that I’d like to remove when I find a way of including graceful fallback and error reporting from a bash script. For now, this works fine.
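One direction I’d like to try is replacing the fixed sleep with a short polling loop. A rough, untested sketch that waits up to 10 seconds for the ZynAddSubFX ALSA port to show up:

# poll for the synth's ALSA port instead of sleeping a fixed amount
for i in $(seq 1 10); do
    if aconnect -o | grep -q "ZynAddSubFX"; then
        break
    fi
    sleep 1
done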

I add this line to rc.local to launch Zynaddsubfx automatically on boot and connect MIDI.

su pi -c '/home/pi/zynlaunch.sh >> /tmp/zynaddsubfx.log 2>&1 &'

Unfortunately, Node won’t launch the web app from rc.local, so I add some conditionals to /home/pi/.profile to launch the app after the boot sequence.

if pgrep zynaddsubfx
then
echo Zynaddsubfx is singing
fi

if pgrep node
then
echo Zyn app is up
else
node /home/pi/ZynAddSubFX-WebApp/index.js
fi

Making music

This ended up being a pad and drone instrument in my tool chest. ZynAddSubFX is really an amazing piece of software and can do much more than I’m setting up here. The sounds are complex and sonically rich. The GUI version lets you change or create instruments with a deep and precise set of graphic panels.

For my purposes, though, I want something to play live with that has very low resource needs. This little box does just that.

Raspberry Pi 3 with Pi-DAC+