MMXX: signals, sounds, sights

I spent most of the year in my art studio while the city around me contracted and calcified due to Covid. I was fortunate that my plans coincided with the timing and degree of changes in the world. It could have very easily gone the other way, as I’ve seen firsthand. Lots of my friends in the art community are struggling.

My work this year reflects more studio- and internet-based processes. Previous years always included public festivals, performances, and collaborations. Some of that change was to save money, but it was also an effort to make use of what I had around me, to stay present, and to maintain momentum on ongoing projects.

I did actually manage to pull off a few public projects, including a portable projection piece that had animated wolves running on rooftops. I savored that experience and learned a lot from the constraints of lock-down art performances.

Looking back on this year, I see new priorities being formed. While the coding and online projects were effective, the amount of screen time required took a toll. I relished the drawing projects I had and hope to keep working in ways that make a huge mess.

Sightwise

My studio complex has a co-op of artists called FUSE Presents. We hold regular group art shows in normal times and for each show, two artists get featured. I was one of the featured artists for the March 2020 show. That meant I got extra gallery space and special mention in marketing materials.

The work I picked was drawn from a variety of efforts in the previous two years. As a grouping, it represented my current best efforts as a multimedia artist. I worked hard to finalize all the projects and really looked forward to the show.

It combined abstract video, traditional photography, sculptural video projection, installation work, and works on paper.

I designed the show’s poster in open source software called Inkscape.

Unfortunately, the show happened right as the first announcements of local Covid spread began. People were already quarantining and we heard about the first deaths in our county. That news didn’t exactly motivate people to come out to the art show. Attendance was sparse at best. But, all that work is finished now and ready for future exhibits.

Camel

I found a cigarette tin that had been used as a drug paraphernalia box and decided to build a synthesizer out of it. I had been experimenting with a sound synthesis library called Mozzi and was ready to make a standalone instrument with it. I spent about a month on the fabrication and added a built-in speaker and battery case to make it portable. Sounds pretty rad.

I released my code as open source in a Github repo, and a follower from Vienna, Austria replicated my synth using a cake box from Hotel Sacher (apparently famous for its luxury cakes?).

Wolves

The Wolves project was a major undertaking that took place over 2 years. It began with an interest in the Chernobyl wolves that became a whole genre of art for me.

I began hand digitizing running wolves from video footage and spent a year adding to that collection. I produced hundreds and hundreds of hand drawn SVG frames and wrote some javascript that animated those frames in a variety of ways. I got to the point where I could run a Raspberry Pi and a static video projector with the wolves running on it. I took a break from the project after that.

By the time I returned to the project, the Covid lockdown was in full swing and American city streets looked abandoned. We all started seeing footage of animals wandering into urban areas. It made sense to finish the Wolves project as an urban performance, projecting onto buildings from empty streets.

Building a stable, self-powered and portable rig that could be pulled by bicycle turned out to be harder than I thought. There were so many details and technical issues that I hadn’t imagined. Every time I thought I was a few days from launch, I would have to rebuild something that added weeks.

The first real ride with this through Japantown in northern San Jose was glorious. Absolutely worth the effort. I ended up taking it out on the town many times in the months to come.

Power up test in the backyard
San José City Hall
Japantown, north of downtown San José

The above video is from Halloween, which was amazing because so many people were outside walking around. That’s when the most people got to see it in the wild.

But, my favorite moment was taking it out during a power blackout. Whole neighborhoods were dark, except for me and my wolves. I rode by one house where a bunch of kids lived and the family was out in the yard with flashlights. The kids saw my wolves and went crazy, running after them and making wolf howl sounds while the parents laughed. Absolute highlight of the year.

Videogrep

Videogrep is a tool to make video mashups from the time markers in closed captioning files. It’s the kind of thing where you can take a politician’s speech and make them say whatever you want by rearranging the parts where they say specific words. It was a novelty in the mid-2000s, seen on talk shows and such as a joke. But the computational process behind the tool is very useful.

I didn’t create videogrep; Sam Lavigne did, and he released his code on Github. (The “grep” in videogrep comes from the Unix utility of the same name, used to search text.) What I did do is use it to find other things besides words, such as breathing noises and partial words. I used videogrep to accentuate mistakes and sound glitches as much as standalone speech and words.

Here is a typical series of commands I would use.

Transcribe the source video:

videogrep --input videofile.mp4 -tr

Build a frequency-sorted word list from the transcription:

cat videofile.mp4.transcription.txt | tr -s ' ' '\n' | sort | uniq -c | sort -r | awk '{ print $2, $1 }' | sed '/^[0-9]/d' > words.txt

Cut a supercut of clips that match a search term:

videogrep -i videofile.mp4 -o outputvideo.mp4 -t -p 25 -r -s 'keyword' -st word

Stretch and stylize the result with ffmpeg:

ffmpeg -i outputvideo.mp4 -filter_complex "frei0r=nervous,minterpolate='fps=120:scd=none',setpts=N/(29.97*TB),scale=1920:1080:force_original_aspect_ratio=increase,crop=1920:480" -filter:a "atempo=.5,atempo=.5" -r 29.97 -c:a aac -b:a 128k -c:v libx264 -crf 18 -preset veryfast -pix_fmt yuv420p if-stretch-big.mp4

Below is a stretched supercut of the public domain Orson Welles movie The Stranger. I had videogrep search for sounds that were similar to speech but not actual words or language. Below that clip is a search of a bunch of 70s employee training films for the word “blue”. Last is a supercut of one of the Trump/Biden debates where the words “football” and “racist” are juxtaposed.

Specific repeated words used in a 2020 Presidential Debate: fear, racist, and football

Vid2midi

While working on the videos produced by videogrep, I found a need for soundtracks that were timed to jumps in image sequences. After some experimenting with OpenCV and Python, I found a way to map various image characteristics to musical notation.

I ended up producing a standalone command-line utility called vid2midi that converts videos into MIDI files. The MIDI file can be used in most music software to play instruments and sounds in time with the video. Thus, the problem of mapping music to image changes was solved.
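The core of the mapping is easy to sketch. Below is a minimal illustration, not the actual vid2midi code, assuming OpenCV and the mido library: read each frame, reduce it to a single brightness value, and scale that value to a MIDI note.

import cv2
import mido

cap = cv2.VideoCapture("videofile.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30
mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
ticks_per_frame = int(mid.ticks_per_beat * 2 / fps)  # rough timing, assumes 120 bpm

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    note = 36 + int(gray.mean() / 255.0 * 48)  # map mean brightness to notes 36-84
    track.append(mido.Message("note_on", note=note, velocity=80, time=0))
    track.append(mido.Message("note_off", note=note, velocity=0, time=ticks_per_frame))

mid.save("videofile.mid")

The actual utility maps various image characteristics, not just brightness, but even a sketch like this produces a MIDI file that follows the cuts and flicker of a video.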

It’s now open source and available on my Github site.

The video above was made with a macro lens on a DSLR and processed with a variety of video tools I use. The soundtrack is controlled by a MIDI file produced by vid2midi.

Bad Liar

This project was originally conceived as a huge smartphone made from a repurposed big screen TV. The idea is that our phones reflect ourselves back to us, but as lies.

It evolved into an actual mirror after I saw a “smart mirror” in some movie. The information in the readout scrolling across the bottom simulates a stock market ticker. Except, this is a stock market for emotions. The mirror measures your varying emotional states and sells them to network buyers in a simulated commodities exchange.

Screen test showing emotional stock market
Final demo in the studio

Hard Music in Hard Times

TQ zine is an underground experimental music zine from the U.K. I subscribed a few years ago after reading a seminal essay about the “No audience underground”. I look forward to it each month because it’s unpretentious and weird.

They ran an essay contest back in May and I was one of the winners! My prize was a collection of PCBs to use in making modular synthesizers. I plan to turn an old metal lunchbox into a synth with what I received.

Here is a link to the winning essay:

Lunetta Synth PCB prizes from @krustpunkhippy

Books

I spent much of my earlier art career as a documentary photographer. I still make photographs but the intent and subject matter have changed. I’m proud of the photography I made throughout the years and want to find good homes for those projects.

Last year I went to the SF Art Book Fair and was inspired by all the publishers and artists. Lots of really interesting work is still being produced in book form.

Before Covid, I had plans to make mockups of books of my photographs and bring them to this year’s book fair to find a publisher. Of course, the fair was cancelled. I took the opportunity to do the pre-production work anyway. Laying out a book is time consuming and represents a standalone art object in itself.

I chose two existing projects and one new one. American Way is a collection of photos I made during a 3 month American road trip back in 2003. Allez La Ville gathers the best images I made in Haiti while teaching there in 2011-13 and returning in 2016. The most recent, Irrealism, is a folio of computer generated “photographs” I made using a GAN tool.

It was a thrill to hold these books in my hands and look through them, even if they are just mockups. After all these years, I still want my photos to exist in book form in some way.

Allez La Ville, American Way, Irrealism

Art Review Generator

Working on the images for the Irrealism book mentioned above took me down a rabbit hole into the world of machine learning and generative art. I know people who only focus on this now and I can understand why. There is so much power and potential available from modern creative computing tools. That can be good and bad though. I have also seen a lot of mediocre work cloaked in theory and bullshit.

I gained an understanding of generative adversarial networks (GAN) and the basics of setting up Linux boxes for machine learning with Tensorflow and PyTorch. I also learned why the research into ML and artificial intelligence is concentrated at tech companies and universities. It’s insanely expensive!

My work is absolutely on a shoestring budget. I buy old computer screens from thrift stores. I don’t have the resources to set up cloud compute instances with stacked GPU configurations. I have spent a lot of time trying to figure out how to carve a workflow from free tiers and cheap hardware. It ain’t easy.

One helpful resource is Google Colab. It lets “researchers” exchange notebooks with executable code. It also offers free GPU usage (for now, anyway). That’s crucial for any machine learning project.

When I was laying out the Irrealism book, I wanted to use a computer generated text introduction. But, the text generation tools available online weren’t specialized enough to produce “artspeak”. So, I had the idea to build my own art language generator.

The short story is that I accessed 57 years of art reviews from ArtForum magazine and trained a GPT-2 language model with the results. Then I built a web app that generates art reviews using that model, combined with user input. Art Review Generator was born.
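Once the fine-tuning is done, generating text from the model is the easy part. Here is a rough sketch using the Hugging Face transformers library; it is not my actual pipeline, and the checkpoint path is a placeholder for wherever a fine-tuned GPT-2 model lives.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "./artforum-gpt2" is a placeholder path for a fine-tuned checkpoint
tokenizer = GPT2Tokenizer.from_pretrained("./artforum-gpt2")
model = GPT2LMHeadModel.from_pretrained("./artforum-gpt2")

prompt = "The paintings in this exhibition"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

The web app wraps a call like this, using the visitor’s input to form the prompt.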

This really was a huge project and if you’re interested in the long story, I wrote it up as a blog post a few months ago. See link below.

See examples of generated results and make your own.

Kiosk

Video as art can be tricky to present. I’m not always a fan of the little theaters museums create to isolate viewers. But, watching videos online can be really limited in fidelity of image or sound. Projection is usually limited by ambient light.

I got the idea for this from some advertising signage. It was seeded with a monitor donation (thanks Julie Meridian!) and anchored with a surplus server rack I bought. The killer feature is that the audio level rises and falls depending on whether someone is standing in front of it. That way, all my noise and glitch soundtracks aren’t at top volume all the time.

This plays 16 carefully selected videos in a loop and runs autonomously. No remote control or start and stop controls. Currently installed at Kaleid Gallery in downtown San Jose, CA.
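The presence-sensing audio is simple in principle. Here’s a minimal sketch of the idea, assuming a PIR motion sensor on GPIO pin 4 and an ALSA mixer control named “Master”; both are assumptions for illustration, not the actual kiosk hardware.

import subprocess
import time
from gpiozero import MotionSensor

pir = MotionSensor(4)  # assumption: PIR sensor wired to GPIO 4

def set_volume(percent):
    # "Master" is a placeholder; the real control name depends on the audio device
    subprocess.run(["amixer", "sset", "Master", f"{percent}%"], check=False)

while True:
    # loud when someone is standing in front of it, quiet otherwise
    set_volume(85 if pir.motion_detected else 30)
    time.sleep(1)

Whatever sensor gets used, the point is the same: top volume only when someone is actually standing there.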

Holding the Moment

Hanging out in baggage claim with no baggage or even a flight to catch

In July, the San José Office of Cultural Affairs announced a call for submissions for a public art project called Holding the Moment. The goal was to showcase local artists at Norman Y. Mineta San José International Airport.

COVID-19 changed lives everywhere — locally, nationally, and internationally. The Arts, and individual artists, are among those most severely impacted. In response, the City of San José’s Public Art Program partnered with the Norman Y. Mineta San José International Airport to offer local artists an opportunity to reflect and comment on this global crisis and the current challenging time. More than 327 submissions were received and juried by a prominent panel of Bay Area artists and arts professionals. Ultimately 96 artworks by 77 San José artists were awarded a $2,500 prize and a place in this six-month exhibition.

SAN JOSE OFFICE OF CULTURAL AFFAIRS

Two of my artworks were chosen for this show and they are on display at the airport until January 9. They picked two challenging pieces, PPE and Mask Collage, each with an interesting backstory of its own.

Here are the stories of the two pieces they chose for exhibition.

PPE

The tale of this image begins in Summer of 1998. I had a newspaper job in Louisiana that went badly. One of the few consolations was a box of photography supplies I was able to take with me. In that box was a 100′ bulk roll of Ilford HP5+ black and white film. My next job happened to involve teaching digital photography so I stored that bulk roll, unopened and unused, for decades. I kept it while I moved often, always thinking there would be some project where I would need a lot of black and white film.

Earlier this year, I was inspired to buy an old Nikon FE2 to make some photos with. I just wanted to do some street photography. After Covid hit, there weren’t many people in the streets to make photos of. But, I did break out that HP5+ that I kept for decades and loaded it into cassettes for use in the camera I had bought. I also pulled out a Russian Zenitar 16mm f2.8 that I used to shoot skateboarding with.

This past Summer, I went to Alviso Marina County Park often. It’s a large waterfront park near my house that has access to the very bottom of San Francisco bay. People would wear masks out in the park and I even brought one with me. It was absolutely alien to wear protective gear out in a huge expanse like that.

So, my idea was to make a photo that represented that feeling. I brought my FE2 with the old film and Zenitar fisheye to the park, along with a photo buddy to actually press the button. People walking by were weirded out by the outfit, but that’s kind of the desired effect.

This image was enlarged and installed in the right-hand cabinet at the airport show.

An interesting side note to this project was recycling the can that the old film came in. Nowadays that would be made of plastic but they still shipped bulk film in metal cans back then. I took that can and added some knobs and switches to control a glitching noisemaker I had built last year. So, that old film can is now in use as a musical instrument.

The film can that used to hold 100′ of Ilford HP5+ is now a glitch sound machine

Mask Collage

Face masks are a part of life now but a lot of people are really pissed that they have to wear them. I was in the parking lot of a grocery store and a guy in front of me was talking to himself, angry about masks. Turns out he was warming up to argue with the security guard and then the manager. While I was inside shopping (~20 minutes) he spent the whole time arguing loudly with the manager. It was amazing to me how someone could waste that much time with that kind of energy.

When I got back to my studio I decided to draw a picture of that guy in my sketchbook. That kicked off a whole series of drawings over the next month.

I have a box of different kinds of paper I have kept for art projects since the early 90s. In there was a gift from an old roommate: a stack of blank blood test forms. I used those forms as the backgrounds for all the drawings. Yellow and red spray ink from an art colleague who moved away provided the context and emotional twists.

The main image is actually a collage of 23 separate drawings. It was enlarged and installed in the left-hand cabinet at the airport show.

Internet Archive

A few weeks ago, my video Danse des Aliénés won 1st place in the Internet Archive Public Domain Day film contest. It was made entirely from music and films released in 1925.

Danse des Aliénés

Film and music used:

In Youth, Beside the Lonely Sea

Das wiedergefundene Paradies
(The Newly Found Paradise)
Lotte Lendesdorff and Walther Ruttmann

Jeux des reflets et de la vitesse
(Games on Reflection and Speed)
Henri Chomette

Koko Sees Spooks
Dave Fleischer

Filmstudie
Hans Richter

Opus IV
Walther Ruttmann

Joyless Street
Georg Wilhelm Pabst

Danse Macabre Op. 40 Pt 1
(Dance of Death)
Camille Saint-Saëns
Performed by the Philadelphia Symphony Orchestra

Danse Macabre Op. 40 Pt 2
(Dance of Death)
Camille Saint-Saëns
Performed by the Philadelphia Symphony Orchestra

Plans? What plans?

Vaccines are on the way. Hopefully, we’ll see widespread distribution in the next few months. Until then, I’ll still be in my studio working on weird tech art and staying away from angry mask people.

I am focused on future projects that involve a lot of public participation and interactivity. I think we will need new ways of re-socializing and I want to be a part of positive efforts in that direction.

I also have plans for a long road trip from California to the east coast and back again. It will be a chance to rethink the classic American photo project and find new ways to see. But, that depends on how things work out with nature’s plans.

Hard Music in Hard Times

This essay was my winning entry in a TQ zine essay contest back in June of 2020. TQ is an underground music zine that hails from Northumberland, England.

We live in a golden age of irreverent and unsentimental hard music, released by armies of atonal warriors onto Bandcamp, Soundcloud, and cassette tape. We can listen to hundreds of hours of crushed white noise decorated with screams and clipped crunches.

Performers can boldly destroy any expectation of comfort or familiarity. It’s a full body embrace of the anxiety and struggle that people feel in a society that produces so much disconnected sound pollution in service of consumption.

But where does it fit when we live in a pandemic, with people suffering and dying? Should we still be making harsh music in harsh times? Who is it for if so many people are flocking to feel-good music and movies, nostalgia, and any other cultural salve they can find? Are folks really spending months in isolation, listening to Merzbow?

Abso-fucking-lutely.

Plenty of crazy bastards are not only listening, they are making more of it. What else is there to do, watch more vapid bullshit on the internet?

If you spend any time around firefighters, you’ll notice many of them smoke cigarettes. Seems strange to do something you have to avoid while doing your job. It is a way of getting used to something you will have to deal with eventually. Firefighters don’t get to hold their breath and ignore the smoke while fighting fires.

The noise of city life, cheap vehicles, and expensive phones surrounds us. Brooms brush and scrape. Street machines move us back and forth between two places to earn money to buy new machines. A new nowness is needed to remember to listen. Listen to all these things around us instead of filtering them out. Use the noise to feed noise.

In 2015, Tasha Howe from Humboldt State University published a paper about the midlife status of metalheads from the 80s. It reported they were better adjusted as adults than a similar cohort of non-metal fans.

I’ve found that to be true in my own life and among friends I grew up around. Anecdotally, I’ve seen friends that were into heavy shit in adolescence end up as healthy and interesting adults. More importantly, they tend to have a bit more empathy than the people I knew who were into pop music. I have no real explanation of that other than a belief that people who confront struggle and pain in their lives do much better in emotional maturity than people who ignore the same.

That Humboldt study was about adolescent fandom, though. How about grown folks having a hard time in the middle of a pandemic? For someone who isn’t already a fan of challenging music, listening during a painful time probably won’t do much. Telling them about the noise project you’re into on social media probably won’t get a whole lot of interest either.

Performers cry, bleat, and moan about their metrics. Nobody gave a shit before the pandemic, why would they now? Hahahaha. N.A.U., motherfuckers.

I have been building a lot of noise instruments during the lockdown. Playing them is fulfilling and liberating. There is a physicality and connection to them and the sounds they make. Even a lousy day around that kit is so much better than any Marvel movie or Game of Drones mental mush.

I could probably put out a decent full-length of well constructed ambient right now. Something soothing and somatic. It would get more likes and downloads I suppose. But, I don’t feel that way. This idle time and solitude has inspired a visceral reaction.

I want to make sure my mind stays alive. Opting for intensity keeps me in the now with an undeniable sonic force. I don’t want to tune the world out. When they announce that more people are hurting or have died, I want to know that and feel it for real.

Ignoring the news and letting Netflix hijack my empathy with the melodramas of fictional people can only lead to something bad down the line. I plan on retaining my emotional life.

So, here’s to feedback, squelches, cracks, and booms. All of it. Snip some diodes in your pedals and point your amps at each other. Turn off every screen you can find. Smoke nothing. Drink nothing. Be radically present. Say everything you think out loud into a microphone. Then say it again louder. Scream it.

Pulverize craniums boldly. Celebrate the resonance of the real and serenade the suffering. Let go of irony and cleverness. Record nothing. Play for your plants and animals. Liberate your intent from ego.

Above all, stay human. Keep feeling. Live loudly.

Prizes included PCBs galore

One of the rules was that the first three words of at least three paragraphs had to start with P, C, and B. The length was also set at a minimum of 650 words.

I was thrilled to win this because the prizes included a bunch of electronics components for building modular audio gear. My plan is to turn this metal lunch box I bought at a thrift store into a portable synthesizer rig.

Running Fluidsynth on a Raspberry Pi Zero W

One of the reasons I’ve spent so much time experimenting with audio software on Raspberry Pis is to build standalone music sculpture. I want to make machines that explore time and texture, in addition to generating interesting music.

The first soft synth I tried was Fluidsynth. It’s one of the few that can run headless, without a GUI. I set it up on a Pi 3 and it worked great. It’s used as a basic General MIDI synthesizer engine for a variety of packages and even powers game soundtracks on Android.




This video is a demo of the same sound set used in this project, but on an earlier iteration using a regular Raspberry Pi 3 and a Pimoroni Displayotron HAT. I ended up switching to the smaller Raspberry Pi Zero W and using a webapp instead of a display.

The sounds are not actually generated from scratch like a traditional synthesizer. Fluidsynth draws on a series of predefined samples collected and mapped in SoundFonts. The .sf2 format was made popular by the now-defunct Sound Blaster AWE32 sound card that was ubiquitous on 90s PCs.

Back then, there was a niche community of people producing custom SoundFonts. Because of that, development in library tools and players was somewhat popular. Fluidsynth came long after, but benefits from the early community work and a few nostalgic archivists.

The default SoundFont that comes with common packages is FluidR3_GM. It is a full General MIDI set with 128 instruments and a small variety of drum kits. It’s fine for building a basic keyboard or MIDI playback utility. But, it’s not very high fidelity or interesting.

What hooked me was finding a repository of commercial SoundFonts (no longer active). That site had an amazing collection of 70s-90s synths in SoundFont format, including Jupiter-8, TB-303, Proteus 1/2/3, Memory Moog, and an E-MU Modular. They were all cheap and I picked up a few to work with. The E-MU Modular sounds pretty rad and is the core of the sound set I put together for this project. The sound is excellent.

Raspberry Pi Zero W

For this particular project, I ended up using a Raspberry Pi Zero W for its size and versatility. Besides running Fluidsynth, it also serves up a Node.js webapp over wifi for changing instruments. It’s controllable by any basic USB MIDI keyboard and runs on a mid-sized USB battery pack for around 6 hours. Pretty good for such a tiny footprint and it costs around $12.

Setting it up

If you want to get something working fast or just want to make a kid’s keyboard, setup is a breeze.

After configuring the Pi Zero and audio:

sudo apt-get install fluidsynth

That’s it.

But, if you want more flexibility or interactivity, things get a bit more complex. The basic setup is the same as what I laid out in my ZynAddSubFX post.

Download Jessie Lite and find a usable Micro SD card. The following is for Mac OS. Instructions for Linux are similar and Windows details can be found on the raspberrypi.org site.

Insert the SD card into your computer and find out what designation the OS gave it. Then unmount it and write the Jessie Lite image to it.

diskutil list

/dev/disk1 (external, physical):
 #: TYPE NAME SIZE IDENTIFIER
 0: FDisk_partition_scheme *8.0 GB disk1
 1: Windows_FAT_32 NO NAME 8.0 GB disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

Pull the card out and reinsert it. Then, add two files to the card to make setup a little faster and skip a GUI boot.

cd /Volumes/boot
touch ssh

sudo nano wpa_supplicant.conf

Put this into the file you just opened.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
}

Put the card in the Pi Zero and power it up, then configure the box with raspi-config. One trick I learned was not to change the root password and expand the file system at the same time. I’m not sure what the problem is, but doing both at the same time often corrupts the ssh password.

Update the Pi:

sudo apt-get update
sudo apt-get upgrade

Fluidsynth needs a higher thread priority than the default, so I use the same approach as setting up Realtime Priority. It might be overkill, but it’s consistent with the other Pi boxes I set up. Add the user “pi” to the group “audio” and then set expanded limits.

Pi commands

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

The file should be empty. Add this to it.

@audio - rtprio 80
@audio - memlock unlimited

If you’re not using an external USB audio dongle or interface, you don’t need to do this. But, after you hear what the built-in audio sounds like, you’ll want something like this.

sudo nano /boot/config.txt

Comment out the built-in audio driver.

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on
sudo nano /etc/asound.conf

Set the USB audio to be default. It’s useful to use the name of the card instead of the stack number.

pcm.!default {
    type hw
    card Device
}
ctl.!default {
    type hw
    card Device
}

Reboot and then test your setup.

sudo reboot

aplay -l

lsusb -t

speaker-test -c2 -twav

A voice should speak out the left and right channels. After verifying that, it’s time to set up Fluidsynth.

The reason I compile it from the git repo is to get the latest version. The version in the default Raspbian repository used by apt-get is 1.1.6-2. The latest is 1.1.6-4. The reason we need this is Telnet.

That’s right, Fluidsynth uses Telnet to receive commands and as its primary shell. It’s a classic text based network communication protocol used for remote administration. Think Wargames.

Telnet

But, there’s a bug in the standard package that causes remote sessions to get rejected in Jessie. It’s been addressed in the later versions of Fluidsynth. I needed it to work to run the web app.
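Once Fluidsynth is running with its server enabled (the -s in the -si flags of the launch script below), a client just opens a TCP connection and sends the same shell commands you would type interactively. The Node webapp does essentially that. Here is a rough Python sketch of the conversation, assuming the default server port of 9800.

import socket

# Send a few commands to Fluidsynth's TCP command shell (9800 is the default port)
with socket.create_connection(("127.0.0.1", 9800), timeout=5) as sock:
    for cmd in ["fonts", "select 0 1 0 12", "gain 2"]:
        sock.sendall((cmd + "\n").encode("ascii"))
        print(sock.recv(4096).decode("ascii", errors="replace"))

The webapp speaks the same plain-text protocol from Node.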

Grab the dependencies and then compile Fluidsynth. It’s not complicated, but there are some caveats.

sudo apt-get install git libgtk2.0-dev cmake cmake-curses-gui build-essential libasound2-dev telnet

git clone git://git.code.sf.net/p/fluidsynth/code-git

cd code-git/fluidsynth
mkdir build
cd build
cmake ..
sudo make install

The install script misses a key path definition that aptitude usually handles, so I add it manually. It’s needed so libfluidsynth.so.1 can be found. If you see an error about that file, this is why.

sudo nano /etc/ld.so.conf

Add this line:

/usr/local/lib

Then:

sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib

Now we need to grab the default SoundFont. This is available easily with apt-get.

sudo apt-get install fluid-soundfont-gm

That’s it for Fluidsynth. It should run fine and you can test it with a help parameter.

fluidsynth -h

Now to install Node.js and the webapp to change instruments with.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back into an ssh session. That makes nvm available.

nvm install v6.10.1

Grab the webapp from my repo and install it.

git clone https://github.com/lucidbeaming/Fluidsynth-Webapp.git fluidweb

cd fluidweb

npm install --save

Find the IP address of your Pi on your local network. Visit <ip address> port 7000 on any other device.

http://192.168.1.20:7000

If Fluidsynth isn’t running, it will display a blank page. If it is running, it will list all available instruments dynamically. This won’t be much of a problem once the launch script is set up. That script launches Fluidsynth, connects any keyboards attached through ALSA, and starts the webapp.

Create the script and add the following contents. It’s offered as a guideline and probably won’t work if copied and pasted. You should customize it according to your own environment, devices, and tastes.

sudo nano fluidsynth.sh
#!/bin/bash

if pgrep -x "fluidsynth" > /dev/null
then
echo fluidsynth already flowing
else
fluidsynth -si -p "fluid" -C0 -R0 -r48000 -d -f ./config.txt -a alsa -m alsa_seq &
fi

sleep 3

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
aconnect 'Arturia MINILAB':0 'fluid':0
echo MINIlab connected
elif [[ $mpk ]]
then
aconnect 'MPKmini2':0 'fluid':0
echo MPKmini connected
elif [[ $mio ]]
then
aconnect 'mio':0 'fluid':0
echo Mio connected
else
echo No known midi devices available. Try aconnect -l
fi

cd fluidweb
node index.js
cd ..

exit

Note that I included the settings -C0 -R0 in the Fluidsynth command. That turns off reverb and chorus, which saves a bit of processor power; they don’t sound good anyway.

Now, create a configuration file for Fluidsynth to start with.

sudo nano config.txt
echo "Exploding minds"
gain 3
load "./soundfonts/lucid.sf2"
select 0 1 0 0
select 1 1 0 1
select 2 1 0 2
select 3 1 0 3
select 4 1 0 4
select 5 1 0 5
select 6 1 0 6
select 7 1 0 7
select 8 1 0 8
select 10 1 0 9
select 11 1 0 10
select 12 1 0 11
select 13 1 0 12
select 14 1 0 13
select 15 1 0 14
echo "bring it on"

The select command chooses instruments for various channels.

select <channel> <soundfont> <bank> <program>

Note that channel 9 is the drumkit.

To get the launch script to run on boot (or session start), it needs to have the right permissions first.

sudo chmod a+x fluidsynth.sh

Then, add the script to the end of .bash_profile. I do that instead of other options for running scripts at boot so that fluidsynth and node.js run as a user process for “pi” instead of root.

sudo nano .bash_profile

At the end of the file…

./fluidsynth.sh

Reboot the Pi Zero and when it gets back up, it should run the script and you’ll be good to go. If you run into problems, a good place to get feedback is LinuxMusicians.com. They have an active community with some helpful folks.

Raspberry Pi Zero W in a case

Here’s another quick demo I put together. Not much in terms of my own playing, haha, but it does exhibit some of the sounds I’m going for.




Setting up a Raspberry Pi 3 to run ZynAddSubFX in a headless configuration

Most of my music is production oriented and I don’t have a lot of live performance needs. But, I do want a useful set of evocative instruments to take to strange places. For that, I explored the options available for making music with Raspberry Pi minicomputers.

The goal of this particular box was to have the Linux soft-synth ZynAddSubFX running headless on a battery powered and untethered Raspberry Pi, controllable by a simple MIDI keyboard and an instrument switcher on my phone.

Getting ZynAddSubFX to run on the desktop version of Raspbian was pretty easy, but stripping away all the GUI and introducing command line automation with disparate multimedia libraries was a challenge. Then, opening it up to remote control over wifi was a rabbit hole of its own.

But, I got it working and it sounds pretty amazing.




Setting up the Raspberry Pi image

I use Jessie Lite because I don’t need the desktop environment. It’s the same codebase without a few bells and whistles. When downloading from raspberrypi.org, choose the torrent for a much faster transfer than getting the ZIP directly from the site. These instructions below are for Mac OS X, using Terminal.

diskutil list

/dev/disk1 (external, physical):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:        FDisk_partition_scheme                        *8.0 GB     disk1
1:                 DOS_FAT_32 NO NAME                 8.0 GB     disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

After the image gets written, I create an empty file on the boot partition to enable ssh login.

cd /Volumes/boot
touch ssh

Then, I set the wifi login so it connects to the network on first boot.

sudo nano wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
 }

The card gets removed from the laptop and inserted into the Pi. Then, after it boots up I go through the standard setup from the command line. The default login is “pi” and the default password is “raspberry”.

sudo raspi-config

[enable ssh,i2c. expand filesystem. set locale and keyboard.]

After setting these, I let it restart when prompted. When it comes back up, I update the codebase.

sudo apt-get update
sudo apt-get upgrade

Base configuration

Raspberry config for ZynAddSubFX

ZynAddSubFX is greedy when it comes to processing power and benefits from getting a bump in priority and memory resources. I add the default user (pi) to the group “audio” and assign the augmented resources to that group, instead of the user itself.

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

...
@audio - rtprio 80
@audio - memlock unlimited
...

The Raspbian version of Jessie Lite has CPU throttles, or governors, set to conserve power and reduce heat from the CPU. By default, they are set to “on demand”. That means the voltage to the CPU is reduced until general use hits 90% of CPU capacity. Then it triggers a voltage (and speed) increase to handle the load. I change that to “performance” so that it has as much horsepower available.

This is done in rc.local:

sudo nano /etc/rc.local
...
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
...

Note that it gets set for all four cores, since the Raspberry Pi is multi-core. For more info about governors and even overclocking, this is a good resource.

Virtual memory also needs to get downgraded so there is little swap activity. Zynaddsubfx is power hungry but doesn’t use much memory, so it doesn’t need VM.

sudo /sbin/sysctl -w vm.swappiness=10

Now, to set up the audio interface. For my ZynAddSubFX box, I use an IQaudio Pi-DAC+. I’ve also used a standard USB audio interface and have instructions for that in my post about the Pi Zero. Raspbian uses Device Tree overlays to load I2C, I2S, and SPI interface modules. So, instead of separate drivers to install, I just edit config.txt to include the appropriate modules for the Pi-DAC+. Note that I also disabled the crappy built-in audio by commenting out “dtparam=audio=on”. This helps later on when setting the default audio device used by the system.

sudo nano /boot/config.txt

...

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

dtoverlay=i2s-mmap
dtoverlay=hifiberry-dacplus

...

For Jack to grab hold of the Pi-DAC+ for output, the default user (pi) needs a DBus security policy for the audio device.

sudo nano /etc/dbus-1/system.conf

...
<!-- Only systemd, which runs as root, may report activation failures. -->
<policy user="root">
<allow send_destination="org.freedesktop.DBus"
    send_interface="org.freedesktop.systemd1.Activator"/>
</policy>
<policy user="pi">
    <allow own="org.freedesktop.ReserveDevice1.Audio0"/>
</policy>
...

Next, ALSA gets a default configuration for which sound device to use. Since I disabled the built-in audio earlier, the Pi-DAC+ is now “0” in the device stack.

sudo nano /etc/asound.conf

pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}

sudo reboot

Software installation

ZynAddSubFX has thick dependency requirements, so I collected the installers in a bash script. Most of it was lifted from the Zynthian repo. Download the script from my Github repo to install the required packages and run it. The script also includes rtirq-init, which can improve performance on USB audio devices and give ALSA some room to breathe.

wget https://raw.githubusercontent.com/lucidbeaming/pi-synths/master/ZynAddSubFX/required-packages.sh

sudo chmod a+x required-packages.sh

./required-packages.sh

Now the real meat of it all gets cooked. There are some issues with build optimizations for SSE and Neon (incompatible with ARM processors), so you’ll need to disable those in the cmake configuration.

git clone https://github.com/zynaddsubfx/zynaddsubfx.git
cd zynaddsubfx
mkdir build
cd build
cmake ..
ccmake .
[remove SSE parameters and NoNeonplease=ON]
sudo make install

Usually takes 20-40 minutes to compile. Now to test it out and get some basic command line options listed.

zynaddsubfx -h

Usage: zynaddsubfx [OPTION]

-h , --help Display command-line help and exit
-v , --version Display version and exit
-l file, --load=FILE Loads a .xmz file
-L file, --load-instrument=FILE Loads a .xiz file
-r SR, --sample-rate=SR Set the sample rate SR
-b BS, --buffer-size=SR Set the buffer size (granularity)
-o OS, --oscil-size=OS Set the ADsynth oscil. size
-S , --swap Swap Left <--> Right
-U , --no-gui Run ZynAddSubFX without user interface
-N , --named Postfix IO Name when possible
-a , --auto-connect AutoConnect when using JACK
-A , --auto-save=INTERVAL Automatically save at interval (disabled with 0 interval)
-p , --pid-in-client-name Append PID to (JACK) client name
-P , --preferred-port Preferred OSC Port
-O , --output Set Output Engine
-I , --input Set Input Engine
-e , --exec-after-init Run post-initialization script
-d , --dump-oscdoc=FILE Dump oscdoc xml to file
-u , --ui-title=TITLE Extend UI Window Titles

The web app

Webapp to switch ZynAddSubFX instruments

I also built a simple web app to switch instruments from a mobile device (or any browser, really). It runs on Node.js and leverages Express, Socket.io, OSC, and Jquery Mobile.

First, a specific version of Node is needed and I use NVM to grab it. The script below installs NVM.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back in to have NVM available to you.

nvm install v6.10.1

My Node app is in its own repo. The dependencies Express, Socket.io, and OSC will be installed with npm from the included package.json file.

git clone https://github.com/lucidbeaming/ZynAddSubFX-WebApp.git
cd ZynAddSubFX-WebApp
npm install

Test the app from the ZynAddSubFX-WebApp directory:

node index.js

On a phone/tablet (or any browser) on the same wifi network, go to:

http://<IP address of the Raspberry Pi>:7000

Image of webapp to switch instruments

You should see a list of instruments to choose from. It won’t do anything yet, but getting the list to come up is a sign of initial success.

Now, for a little secret sauce. The launch script I use is from achingly long hours of trial and error. The Raspberry Pi is a very capable machine but has limitations. The command line parameters I use come from the best balance of performance and fidelity I could find. If ZynAddSubFX gets rebuilt with better multimedia processor optimizations for ARM, this could change. I’ve read that improvements are in the works. Also, this runs Zynaddsubfx without Jack and just uses ALSA. I was able to get close to RTprio with the installation of rtirq-init.

#!/bin/bash

export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket

if pgrep zynaddsubfx
 then
 echo Zynaddsubfx is already singing
 exit 0
 else
 zynaddsubfx -U -A=0 -o 512 -r 96000 -b 512 -I alsa -O alsa -P 7777 -L "/usr/local/share/zynaddsubfx/banks/Choir and Voice/0034-Slow Morph_Choir.xiz" &
 sleep 4

   if pgrep zynaddsubfx
   then
   echo Zyn is singing
   else
   echo Zyn blorked. Epic Fail.
   fi

fi

mini=$(aconnect -o | grep "MINILAB")
 mpk=$(aconnect -o | grep "MPKmini2")
 mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
 then
 aconnect 'Arturia MINILAB':0 'ZynAddSubFX':0
 echo Connected to MINIlab
 elif [[ $mpk ]]
 then
 aconnect 'MPKmini2':0 'ZynAddSubFX':0
 echo Connected to MPKmini
 elif [[ $mio ]]
 then
 aconnect 'mio':0 'ZynAddSubFX':0
 echo Connected to Mio
 else
 echo No known midi devices available. Try aconnect -l
 fi

exit 0

I have 3 MIDI controllers I use for these things and this script is set to check for any of them, in order of priority, and connect them with ZynAddSubFX. Also, I have a few “sleep” statements in there that I’d like to remove when I find a way of including graceful fallback and error reporting from a bash script. For now, this works fine.

I add this line to rc.local to launch Zynaddsubfx automatically on boot and connect MIDI.

su pi -c '/home/pi/zynlaunch.sh >> /tmp/zynaddsubfx.log 2>&1 &'

Unfortunately, Node won’t launch the web app from rc.local, so I add some conditionals to /home/pi/.profile to launch the app after the boot sequence.

if pgrep zynaddsubfx
then
echo Zynaddsubfx is singing
fi

if pgrep node
then
echo Zyn app is up
else
node /home/pi/ZynAddSubFX-WebApp/index.js
fi

Making music

This ended up being a pad and drone instrument in my tool chest. ZynAddSubFX is really an amazing piece of software and can do much more than I’m setting up here. The sounds are complex and sonically rich. The GUI version lets you change or create instruments with a deep and precise set of graphic panels.

For my purposes, though, I want something to play live with that has very low resource needs. This little box does just that.

Raspberry Pi 3 with Pi-DAC+