MMXXIII: Beyond Berlin

This year was filled with travel around Europe and also back to the USA. I had a new job and a stable place to live, so I explored further into Berlin and around the region. I had my first Berlin art show and started working in new mediums. Trips to Stockholm and London brought me in contact with the international art world and I learned more about how it works, for better or worse. A trip to Auschwitz was humbling and yielded insights into the human experience of that era. I have been here 2 years now and feel like there is still so much to experience.

Surrendering the San Jose studio

At the beginning of the year I flew back to San Jose, California to move out of my art studio. It was physically and economically difficult.

I had planned on maintaining that space until I returned. The cost of that was right at the edge of what I could afford. Then the landlord notified everyone in the studio complex that he was raising rents 50-200%. Mine was set to double. There was no way I could afford that, so I made plans to fly back, put everything into storage, and let go of the space. Fortunately, my new employer let me take a week off to do that. Waiting longer would have been even more expensive.

In addition to my power tools, art supplies and equipment, all of my personal belongings had been put there. It took 3 days just to consolidate and pack everything. Then, the actual moving took 3 more days by myself. It was around 7 U-Haul van loads in all. A lot of that shit was really heavy. I was still dealing with 9 hour jet lag as well.

I found a good storage spot, though. It’s secure, weather protected, and reasonably priced. It was a huge effort, but it all worked out in the end. I saved $1000s by handling it quickly. Letting go of that particular art studio was a bummer though. I made a lot of art there and was able to handle dirty fabrication and clean tech work in the same space. That’s rare to have.

On a positive note, everything is consolidated and well stored, so when I return I don’t have to deal with my stuff right away. It gives me more options.

Atlanta College of Art reunion

I had never been to any school reunions. They just weren’t something I was very interested in. I’ve definitely visited old friends though and maintaining connections is important to me. While I was planning my trip back to California, I got an invitation to an Atlanta College of Art reunion that covered many years of graduates.

At first, I thought ‘no’ because I was already spending the money on a return to the U.S., but I saw a lot of names of people I hadn’t seen since the mid 90s. Our paths had diverged widely, and some people were having hard times. Something in my gut said I wouldn’t see many of them ever again. So, at the last minute I bought a ticket and committed to attending.

Neil Carver with five dollars of fun

It was a mixed experience. Obviously, it was cool to see so many people from a fun time in our lives. But, our stories since then had plenty of struggle. Some of the people we thought would be art stars ended up doing nothing and some we thought were just hanging around did quite well. I see the same upside down trend in younger artists here in Berlin. Also, Atlanta has changed so much. Whatever connection and nostalgia I had for that city is gone. That chapter is done.

I also learned that 2 people I had known well were now dead. There were stories of others on the edge. Sure enough, within 3 months of the reunion, I heard 3 more had died. I won’t get into details for privacy, but it wasn’t from natural causes. We lost some interesting and creative people. That particular group of people from that time, myself included, seem to be connected by a hardship we rarely talked about openly.

Overall, I’m glad I went. I saw some friends I really wanted to keep in touch with, in person. It was crucial to make human contact with friends that had become online avatars. Social media is useful, but it’s not real. I also see our experience in terms of what I’ve learned about art history. The paths of historic artists are not as clean and heroic as we make them out to be. For every artist we remember, there are thousands that didn’t make it into history. Nonetheless, they had amazingly prolific and creative lives making art.

Supermarket in Stockholm

Supermarket is an international art fair focused on art collectives and artist run spaces. It’s held annually in Stockholm, Sweden to coincide with the more commercial Market Art Fair. I wanted to learn more about how different collectives are managed and run, so I signed up for their Meetings Extended package. That let me attend the fair with access similar to a group exhibitor, but as an individual artist.

I came back thinking I had seen the future. There was a broad range of community organizations. Very little was institutional or connected to a single person. They had formed logistic groups and had access to property and diverse funding sources. Co-operative art groups are nothing new, but the internet absorbed much of the energy people used to put into organizing in person. It was refreshing and inspiring to see so many effective collectives from around the world.

It was especially notable because of the failure I saw at Documenta 15 last year. That famous art festival turned over control to a collective made of other collectives. It was ambitious, but the art was weak and overly dependent on buying into the theoretical and ideological structures they based it on. It offered an example of how collectives don’t work. But, at Supermarket, I had a glimpse of more diverse and structured approaches to collectivism that do work.

Some interesting groups I connected with:

I was there for 5 days and learned practical approaches to organizing and met lots of interesting people. When I return to the U.S., I hope to bring that logistical knowledge with me. American art institutions, in so many ways, are collapsing. These collectives are the way forward.

The new lucidbeaming.com

This year, I completely redesigned my website to match how people actually use it. You’re looking at it now.

I’m a member of a local tech art group called Creative Code Berlin. Each month, a variety of people give demos of projects they’re working on and a short presentation. At the end, people post some kind of contact info. Overwhelmingly, people share their Instagram accounts and not much else. A few have dedicated websites, but most depend on Instagram to do the heavy lifting of showing off their work.

That has serious drawbacks. It’s a commercial platform with limits on content, and it ranks that content using a proprietary algorithm. It’s also designed as a feed and not an archive, so it’s much more difficult to see historical activity in context. Art is not always made like that. Artists aren’t content machines. We’re human.

That gave me the idea to keep the useful aspects of sharing a single account link while retaining context and ownership. I simplified the main template to be mobile-first and fast-loading. Interactivity is basic and JavaScript features are limited. People are used to swiping and scrolling static content, so that’s what they get. Not many slideshows or video carousels. Everything is embedded on-page and doesn’t require accounts on external platforms. I even removed Google Analytics and don’t have any personal tracking at all.

It’s been the most effective redesign yet and I still get compliments on how simple it is. Most importantly, I don’t have to depend on Instagram to be my internet presence.

3D scans

My phone has lidar, a tiny laser system that can be used to make volumetric scans of objects and areas. That means I can make 3D images of lots of things. I used it to scan a variety of sculptures I saw this year. Here is a video with the best results.

Leidkultur at HilbertRaum

I had 5 pieces in a group show here in Berlin, at a gallery called HilbertRaum. I was invited based on the glitch portraits I made last year of problematic sculptures at the Zitadelle. They are fairly political and I’m an American, so I was surprised to get asked to show art about German history at a German gallery.

It was an excellent grouping and the space was well suited for my work. I presented them using transparencies on large light panels. The portraits themselves were detailed, vibrant, and somewhat unsettling. Showing them in a format used for bus stop advertisements was a gamble, but I’m happy with how they turned out.

ICC Berlin

The Internationales Congress Centrum Berlin was built in the 70s as the largest conference center in Europe. It’s a massive facility and looks futuristic and monumental. It never lived up to the hype though and has been closed for many years.

photo from bz-berlin.de

They opened it this year for visitors on just one weekend. I had to reserve a time slot weeks ahead of time. I wanted to make some photos inside with my old Polaroid SX-70. I also brought my digital camera and made many 10X layered multiple exposures.

Here is a slideshow…

Berghain Box

I built a kick drum machine out of a cigarette tin. It’s named after a famous dance club in Berlin called Berghain that stays open for days at a time.

Near my apartment is a weekly flea market in a place called Mauerpark. It has declined in quality since I moved here, but there are still some vendors that have authentic objects from pre-unification East Germany and Eastern Europe. It also seems to do brisk business in selling the belongings of dead people from retirement homes. The number of very high quality family photo albums is somewhat disturbing.

One of my favorite booths sells old cigarette and tea containers made of metal. The graphic design is classic and the boxes are a handy size. I bought a few to hold small Arduino synthesizers I make. The latest is this drum machine.

It has no patterns, shuffle, or other drum sounds. It only cycles a kick drum endlessly. It has tap tempo, plus controls for pitch, timbre, and filter. There is no stop/start. I made it as a machine to sync other instruments to and to drive a techno track. I intend to use it with the Nachtbox effect box I made last year. Used together, they yield a noisy, glitched-out industrial drum collider.
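
The firmware is a small Arduino program, but the tap tempo logic at its heart is simple enough to sketch. Here it is in Python as a hypothetical illustration, not the box's actual code: average the gaps between recent taps to get the kick period.

import time

class TapTempo:
    """Average the intervals between recent taps to derive a beat period."""

    def __init__(self, max_taps=4, timeout=2.0):
        self.taps = []
        self.max_taps = max_taps  # how many recent taps to average
        self.timeout = timeout    # seconds of silence before starting over

    def tap(self):
        now = time.monotonic()
        if self.taps and now - self.taps[-1] > self.timeout:
            self.taps = []        # stale sequence, begin a new one
        self.taps.append(now)
        self.taps = self.taps[-self.max_taps:]

    def period(self, default=0.5):
        if len(self.taps) < 2:
            return default        # hold 120 BPM until enough taps arrive
        gaps = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return sum(gaps) / len(gaps)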

Auschwitz

This September, I visited the Auschwitz concentration camp in Poland. It was something I’ve wanted to do since I arrived in Germany. It took a while to get the logistics timed right, but things came together in the Fall.

I took this as I was leaving, right before the site closed.

Auschwitz is an important place in human history, not only for what actually happened there but for what it represents. During World War II, the Nazi regime of Germany gassed and incinerated 1.1 million people at this site, from 1941-1944. They were overwhelmingly Jewish and most were killed immediately upon arrival. They were told they were being relocated and, when they arrived, were led into what looked like showers for bathing. Then they were gassed with toxic chemicals. Prisoners at the camp loaded the bodies into nearby ovens and they were cremated. It’s not the only place where this happened, but it is the most notorious and the only one left intact as a memorial and museum.

I knew about this history and had read plenty about the context. But, I really wanted to see the place for myself. I wanted to know what it felt like to stand on the grounds and see the trees and hear the natural sounds of the area. I also wanted to make drawings of the buildings and interiors, as a way of staying present and focused.

In the morning I took the group tour, which I hated. The people I was with were there for very different reasons. I think most of them just wanted to see the spectacle of it all, like a haunted house. For them, it was one of many places they breezed through while touring Europe or Poland. People took selfies and did video streaming, got bored and talked over important information from the guide. Thankfully, I had another ticket for later that let me roam the grounds independently. That ended up being the most meaningful experience.

One building had 100s of prisoner portraits in the hallways. I made some drawings of a few faces, but I wished I had more time to do studies. I discovered that most of the images were made by a Polish photographer who was captured and put to work in the camp. His name was Wilhelm Brasse. When I returned to Berlin I watched a documentary about him and also found an archive of many of the images he took.

I started to make drawings of the prisoners, a few each day after work. I also visited the site of the Wannsee Conference. It was the meeting where the practical plans for the elimination of Jews were made. Auschwitz was a direct result of that meeting. I made drawings of the Nazi organizers who were there.

My drawing style is too cartoonish to do justice to these images. I kept drawing anyway because it felt like a constructive way of actually connecting to the individual people in these photographs. Instead of thinking of them as “the Jews”, I thought a bit more about what their individual stories might have been. I have no idea what I’ll do with these.

One of the biggest impacts of that visit came when I returned. From landing at the airport to riding subways, I was seeing the Germans next to me differently. I didn’t think they were Nazis. But I wondered: if they had been put in that time, in that context, would they have ended up that way? It’s similar to thoughts I had growing up in the Southern U.S. If I had been born in 1850, would I have been a Confederate or had friends in the KKK?

We tend to have these easy conclusions in hindsight, from decided history. It’s very easy to pick the right side of history. I’m skeptical when I hear people make righteous proclamations about what they would have done in historical times. It’s crucial that we continue to reflect on our principles as people and societies, to make sure they continue to come from a place inside us that is real and enduring.

A few doors down from my apartment in Berlin are these markers. They indicate that the people living there were extracted and sent to Auschwitz, where they died. Markers like this are placed all over Berlin as incorporated history. Reminders that we are all living right where the terror began.

Frieze London

One of my goals while living here was to visit a major international art fair – the kind that gets written about in ArtForum. I considered Art Basel in Switzerland or Frieze London in the United Kingdom, or possibly both.

I picked Frieze London and attended in October. I sort of knew what to expect. Mingling with the global super rich in the heart of London was bizarre and not very revealing. I got the feeling there wasn’t much beyond the obvious there. It was such a contrast to the independent art scene in Berlin and the co-ops of Supermarket in Stockholm.

This was hardcore transactional culture on parade. If you didn’t understand the art it was because you didn’t know the right names or follow the right galleries. Most importantly, you had to say it all sucked, because putting down artists and gossiping is the social currency there. I was so far out of my element in that place.

I really tried to walk around and find art to connect to, but it was hard. So much of it was designed to be seen and bought, not felt or resonated with. I’m sure many of those artists had other work that offered that, but not there. The whole place reeked of cocaine sweat and plastic wrap, with breezes of expensive perfumes.

Here are a few pieces I did like.

New work

My own art was pretty scattered and sparse this year. Full-time work and limited space suppressed my creative output. I did find some success, though. I made the best of my small working area and focused on new techniques and mediums that were appropriate for that. That meant works on paper with Polaroid transfers, screen printing, leather collage, drawings, lino prints, and print transfers.

I haven’t done this kind of work in a long time. Sometimes I was pushing too hard to get things “right”. I had to just let go and accept that I was a beginner again when it came to printmaking. I also tried to stay focused on imagery and vibes that are natural to me, instead of imitating something I saw on the internet. That impulse turned out to be the most constructive result. I had a regular activity that pulled from somewhere real within my life.

Upcoming plans

I’ve been here 2 years now. It’s been a visceral experience and I’m not done yet. In the next year, I hope to be in more shows and make more solid connections with other artists and gallerists. That isn’t something that can be forced though. I just need to stay active and engaged. Eventually, I’ll meet the right people for me. That’s the way it’s always been.

At the end of this year, I have begun to think about what my life will be when I return. I have no intention of living as an expat. I’ve met many Americans here and they don’t have lives I want to emulate. This move was always time-boxed. I’m going to learn what I can and then return to continue a life in art.

I have some technical projects and more multimedia work that is unfinished. I’m not posting many of them here, but there are all kinds of tech projects cooking right now. They will probably start to manifest in Spring. That will be a whole new chapter of this experience in Berlin.

MMXXII: Berlin

Last February, I moved to Berlin, Germany to connect with the global art world and explore new ideas in technology and art. It has been challenging, surprising, and fulfilling.

The move was inspired by a failed California Arts Council grant application. I had planned to use that grant to visit Berlin during some key art festivals. While I was waiting to hear about the application, I spent a lot of time researching Berlin and how to make the best use of the time that grant would enable.

I didn’t get that grant. The feedback I got from the review was conflicting and confusing, but I didn’t dwell on it. Instead, I made plans to drive across the United States as an art making expedition. That turned out well and I returned to my San Jose art studio to work through multiple bodies of work generated out on the road.

The owner of the house I lived in decided to sell it, so I was looking at options for where to live the next year. Post-Covid Silicon Valley didn’t look very appealing and all my options were very expensive. It dawned on me that I might be able to move to Berlin instead. I had done a lot of research already, so I knew how to explore that option. Within a week, I decided to take the gamble.

I started applying for jobs in Berlin and 3 months later, I had one. A month after that, I arrived at Brandenburg airport with a backpack, a laptop, and some clothes. A year has passed since then and the experience has been intense and inspiring.

Impressions

Berlin is a very modern large city. It has a strong public transportation network and diverse civic infrastructure. It has been able to absorb many classes and cultures from abroad and offers a strong social support network. Culturally, it has the most active and engaged art audience I have ever seen in a city. More than New York.

It is not a skyscraper city, but more spread out into neighborhoods defined by rows and rows of 6-story apartment buildings. Much of the architecture is relatively new. Berlin was heavily bombed in WWII. But, there are integrations of historic places everywhere. The past is not forgotten here.

The East/West culture of the late 80s and the re-integration that followed still dominate the city’s cultural history. The art scene is steeped in stories of an explosion of culture in the early 90s. But, recent shifts include immigration from the Arabic world and the Global South. Also, the internet has diffused many of the cultural silos that defined European regionalism. Many books have been written about all these topics. It’s fascinating to be in the middle of it all.

Berlin Art Sites

One of the first things I did when I arrived was look for art events. There are so many here, it was hard to sift through them all. Specifically, finding galleries showing work I was interested in was a chore. There are over 300 galleries here. Then you have regional institutions on top of that. Going through their websites turned out to be a big hassle. Many weren’t made well and it was a slow process.

I wanted a quick way to look at all of the sites without all the pop-ups and GDPR notices. So, I created a web script that made screenshots of all of them and put the images in a folder. That turned out to be pretty useful and I thought others would be interested in the results.

So, I built Berlin Art Websites (berlinartgalleries.de). It uses a tool called Puppeteer to make screenshots of all the websites, each Sunday morning. The results are always up to date and show the latest work on their front pages. It just runs on its own, quietly grabbing the latest from all the galleries around Berlin.
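
The real site drives Puppeteer from Node.js. Here is a minimal sketch of the same weekly loop, written in Python with pyppeteer, a port of Puppeteer; the gallery URLs and output folder are hypothetical placeholders, not the site's actual list.

import asyncio
import os
from pyppeteer import launch

# Hypothetical subset of the ~300 gallery URLs the real site crawls.
GALLERIES = [
    "https://example-gallery-one.de",
    "https://example-gallery-two.de",
]

async def capture_all():
    os.makedirs("shots", exist_ok=True)
    browser = await launch()
    page = await browser.newPage()
    await page.setViewport({"width": 1280, "height": 1024})
    for url in GALLERIES:
        try:
            await page.goto(url, {"waitUntil": "networkidle2", "timeout": 30000})
            name = url.split("//")[1].strip("/").replace(".", "_")
            await page.screenshot({"path": f"shots/{name}.png"})
        except Exception as exc:
            print(f"skipping {url}: {exc}")  # a dead site shouldn't stop the run
    await browser.close()

# A Sunday-morning cron job would run this script.
asyncio.run(capture_all())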

Screenshot of the top of the Berlin Art Websites main landing page

I posted a link to it on Reddit and the response was huge. People seemed to really want something like this.

Artwork

In the past year, I have gone to 1-5 art events a week. I’ve seen the best and worst of what Berlin’s art scene offers. I now have a collection of hundreds of postcards and brochures from the shows. Not sure how I’m going to get them all back to the U.S., actually.

Here is a slideshow of some of the highlights.

190 images in a 15 minute video slideshow, with background music

Art takeaways

Berlin has a strong set of art institutions that are well-funded and staffed. It also attracts legions of fresh art school graduates from around the world. There is a good variety of art in the middle as well.

NFTs were very popular when I arrived, and multiple galleries were dedicated to only that kind of work. By the end of Summer, many of them were gone or in decline. The legacy of that is screen-based art everywhere. Even if they don’t show NFTs, many galleries went all-in with video screens. Whole rooms with nothing but screens.

Nostalgia for the early 90s, after the re-unification of East and West Berlin, is still popular. There were a lot of middle-aged artists doing work that first began then. It was hit and miss though. All nostalgia is like that. It’s an optimized memory and not a real connection. Art needs something more real to survive.

Diaspora art was prominent. There were shows dedicated to the Global South, the Middle East, and Ukraine. It felt like every other independent show had the word “de-colonize” in a curatorial statement. In Kassel, southwest of Berlin, the spectacular failure of Documenta 15 (organized by a Jakarta art collective) was in all the art media.

Projects

Laser

Finding a useful workspace in Berlin turned out to be more difficult than I thought. Most art spaces take connections or lots of money to get in. There aren’t as many maker spaces, either. I could only find 3 that were public.

I settled on a smaller lab in north Berlin that was close to a subway stop and hardware store. It’s called Happylab and is focused on hobbyists and some electronic makers. It has a small storage space that ended up being really useful for someone getting around by subway all the time.

Typical of modern maker spaces, they have a laser cutter. I had never used one for art and wasn’t sure what I would do with it. But, I ended up exploring a few different directions for lighting and as a drawing tool.

10X

Using a technique I stumbled onto during my cross-country drive, I’ve continued to make layered abstract photographs. These were made at the Botanischer Garten, Museum für Naturkunde, and Park am Gleisdreieck.

Geist

Germany has a difficult history and has gone to great lengths to incorporate its past into the present, using lessons learned from decades of accountability and scholarship. The Zitadelle is a museum in West Berlin that houses a unique exhibit for this purpose. Throughout the region, monuments to problematic past leaders were built from 1849 to 1986. Many of the men they memorialize had terrible legacies. They include religious leaders, Prussian military leaders, businessmen, and mythical representations of men in power at the time.

These memorials were being destroyed and vandalized after re-unification. Archivists and historians were left with a dilemma: how to preserve these artifacts without perpetuating the cultural impact they were intended for. They decided to move them all to a central location, a side gallery at the Zitadelle. There they are presented without pedestals or plaques, living on in anonymity and stripped of iconography.

Due to the political upheavals in the 20th century, monuments that represented problematic or even threatening reminders or appreciation of the old ways were removed from public spaces by the new governments. The museum offers an opportunity to come to terms with the great symbols of the German Empire, the Weimar Republic, National Socialism and the GDR, which were supposed to be buried and forgotten – and now serve a new function as testimonies to German history. Instead of commanding reverence, they make historical events tangible in the truest sense of the word.

“Unveiled. Berlin and its monuments” – Zitadelle Museum

I think that’s a fascinating and powerful solution that can be explored in the United States for our monumental legacy throughout the South.

I photographed the faces and decided to re-contextualize their appearances. In the past decade social media has resurrected some of the worst ideologies in history. They were dying out until anonymous politics became a thing and rekindled their popularity. My idea is to use these statues to build illustrations of these old dying ideas that are empowered by online culture.

Different steps to create a vector mask

Video

I brought an archive of multimedia files I have created over the years. I thought it would be useful, in the absence of a proper fabrication space, to have some computer based art projects to work on. I also shot some new video footage and got certified to fly my drone in the E.U.

Here is a piece I made using time-lapse footage at a famous subway stop called Alexanderplatz:

This abstract video is made from drone footage over Nevada salt flats:

The most recent work combines drone footage from a decommissioned airport with a generative computer art tool called Primitive:

NachtBox

Finished assembly connected to recorder

Back in the U.S. I go to thrift stores fairly regularly. I look for small obsolete electronics I can repurpose and dated hardware to build sculptures around. I tried the same here, but most of the thrift stores are focused on clothing. Buying 2nd hand clothing in Europe and reselling it online is a huge underground business.

I did find a place run by the recycling agency, called NochMall. It’s grossly overpriced but is a source of occasional treasure for electronic art making.

There I found a micro cassette recorder that a German man had used to record himself playing guitar with TV shows in the background. That tape was the real gold. I decided to use it as the core of a music machine that played the tape through a variety of effects. It turned out to be a long-term complex project.

I chose a Teensy 4.1 micro-controller as the main engine for processing the audio. Besides being fast and having decent memory, the manufacturer has an excellent audio library to make use of. It allowed me to prototype very quickly and get to the noise making steps fast. I’m pretty stoked on how this turned out and look forward to performing with it soon.

Next year

I plan on staying in Berlin for at least another 2 years. I have been paying rent on my San Jose art studio, hoping to return to it when I finish my experience here. Unfortunately, problems with the landlord are forcing me to let go of that work space and move everything into storage. It has been a difficult and expensive conclusion to that place.

However, I feel like I am just getting to know this city. This first year has been interesting, but it feels like I’ve just seen the surface. I’m looking forward to getting to know more artists and gallery folks, as well as the creative coding community. After all, it’s the people that define a community, not just the place.

MMXXI: art in the age of COVID

Making art in the pandemic age requires new perspectives on context, value, and presentation. I had to deal with these challenges just like every other artist this year. I was already producing work that occupied a hybrid online and physical space. But, the new context relied much heavier on virtual space. I handled that for a while but got burned out with online life.

So, I spent a month driving across the country while we were still on lockdown in March. I deleted or suspended most of my social media and headed east from the California coast. I brought a wide variety of multimedia recording tools and came back with a harvest of imagery, sounds, and experiences. The rest of the year was heavily influenced by that trip.

I did find exhibition opportunities despite so many institutions being shut down. It was important to me to keep momentum going when it came to in-person art exhibitions. It was very difficult though, because attendance was low even when I managed to carve out a space. COVID-19 was a tough adversary.

I made it through this year healthy and am very grateful for that. It has given me a new appreciation for what I do while still stomping around on the planet. My life is fully focused on making art now and I hope to sustain that through the years to come.

Camel

The first release of the year was a noise tape made with a synthesizer I built. I’m happy with the recording and the tape, but the promo for it might be more interesting than the thing itself. I guess that’s the age we live in.

Promo video for Camel
cassette tape
Official 60 minute cassette

The full recording can be heard and purchased on Bandcamp.

music synthesizer
Made with this DIY synth

Jojo Crawdad

In March, I embarked on an epic trip across the country. It was expensive, dangerous, cold, and isolating. I’m so glad I did it.

map of the United States with route
6704 miles in 28 days

It all started with a need to visit a library in Slidell, Louisiana. I used to work for a newspaper there and it went out of business without leaving a digital archive behind. One of the few records of the work I did there is on microfiche at the local library. It is only accessible in person. I needed to get copies of one particular story I did and began thinking of a way to get there.

Driving there offered a chance to make art along the way. But, it’s a loooong drive and if I’m going that far, why not all the way across? So, a trip across the country was born.

March was still cold and once I got into the mountains, even colder. The roads were empty in long stretches and even more in the country backroads I took. I rarely got above 65mph or took the mega interstates.

I brought a drone for aerial shots, a DSLR, a GoPro, car mounted cameras, and an audio recorder. My intent was to harvest a wide variety of media for use in post-production over the next year. There wasn’t much of a preconceived concept or aesthetic I tried to realize. It was just to be present, over and over, far from home.

The title Jojo Crawdad comes from combining words I picked up once I crossed the Mississippi. A jojo is a small potato wedge that gas stations serve fried. Crawdad is a familiar name for crayfish, the tiny lobster-shaped crustaceans that folks eat by the bushel in Louisiana.

I was on the road for a full 28 days.

empty gambling hall
Callahan, FL
abstract image
Huntsville, AL
skateboard park from above
Charleston, SC

The abstract images that look blended are not made with Photoshop or on a computer. They are the result of 10 multiple exposures made on the same frame inside the camera. They were made on-site, and once they were shot, there was no fixing or changing them. I got to the point where it was a little dance movement to get a variety of viewpoints in each frame.

Some of the photography ended up in my solo show at Art Ark in August. A video piece, Vulture, was recently in a film festival. There is still so much to work with. I feel like this trip will be paying dividends for many years.

Wolves

The Wolves project I started last year was chosen by the Palo Alto Public Art Program for a 5 night performance in May. Each night I rode around a pre-chosen area near downtown Palo Alto, projecting the animation onto houses and businesses.

Of all the public art projects I’ve done, this one got the most press coverage by far. A large profile in the local section of the San Jose Mercury News was a highlight. It was followed by coverage from ABC 7, Palo Alto Online, Hoodline, Content Magazine, and some online aggregators. It also got shared a lot from those outlets and I spotted it on social media a few times.

newspaper clipping

For this event, I created a Wolf Tracker web app for people to find my location when I was going through their neighborhoods. At first I tried to use the location sharing feature of my phone, but that was too limited. So I bought a GPS module for the projection cart and wrote some Python scripts to get that data to my server. It was well intentioned and worked fine in testing, but I had issues out in the wild. It turned out that certain locations around Palo Alto actually blocked the GPS signals. I have no idea how to explain it, but I was able to verify the block multiple times. Weird.
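
Those scripts aren't published in this post, but their shape was simple: read fixes from the GPS module and post them to the server. Here is a sketch under assumed details, with NMEA sentences arriving over serial and a hypothetical /location endpoint on my server.

import time

import pynmea2   # NMEA sentence parser
import requests
import serial    # pyserial, for reading the GPS module

PORT = "/dev/ttyUSB0"                      # assumed serial port of the GPS module
ENDPOINT = "https://example.com/location"  # hypothetical tracker endpoint

with serial.Serial(PORT, 9600, timeout=1) as gps:
    while True:
        line = gps.readline().decode("ascii", errors="ignore").strip()
        if not line.startswith("$GPGGA"):
            continue  # only fix sentences carry latitude/longitude
        msg = pynmea2.parse(line)
        requests.post(ENDPOINT, json={
            "lat": msg.latitude,
            "lon": msg.longitude,
            "ts": time.time(),
        }, timeout=5)
        time.sleep(10)  # one update every ten seconds is plenty for a bicycle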

A friend from Haiti came by one of the nights and put together this cool video.

Refactor

A local gallery, Kaleid, had recently cleared out an extra room and was offering it as an installation space. I had a variety of tech art and video I had finished the previous year. Although it was a small show, it got some foot traffic and was a good size for the smaller interactive work. It ended up being a retrospective of the past 5 years in multimedia.

Bad Liar
Closer
Delphi
Embers
Spanner player
Tintype

Beacon

In August, I had a large solo show of recent work at Art Ark in downtown San Jose, California. It’s a beautiful space and has hosted many top notch shows. I was offered the Summer residency and used that to put together 62 artworks for display.

entrance to art gallery
Entrance

Logistically, the biggest challenge was framing. Only a handful of the pieces were framed. To save money, I decided to frame them myself. I thought it was a good idea at the time, but it ended up being a massive effort.

I cut down around 2700ft of oak strips into 192 pieces with rabbets and angled ends. Then I assembled and stapled the frames by hand. The work was in different sizes, so the frames were made in sized batches. The acrylic came at a discount from Tap Plastics (thanks guys!). The window mats were also cut by hand, and that was time consuming.

table with cut wood
Fresh cut oak

The work paid off though because the presentation was really nice. Consistent and clean. Also, I now have a lot of framed work for distribution and exhibition. I sent some of them to national shows that I describe later in this post.

newspaper clipping
Pick of the week in San Jose Metro

Going National

This year, I made a real effort to exhibit across the country and in other contexts as well. I made use of CaFE (Calls for Entry) and FilmFreeway for film festivals. There is so much competition for just a handful of exhibitions now. The internet makes more available to people like me, but can also overwhelm organizers with thousands of entries. A small cottage industry has sprouted up around the whole enterprise and there are a lot of sketchy “pay-to-play” shows. That means I’m supposed to pay for the privilege of showing my work. I’m not doing that.

But, I did find some interesting opportunities out there. It was mostly regional group shows. I didn’t get to attend, but I thought it would be good to know the process of getting work to them from beginning to end. Shipping anything fragile is incredibly expensive now. I wasn’t anticipating that.

I had a piece in a virtual show at the San Luis Obispo Museum of Modern Art. A pianist named Ting Luo saw that and reached out for some collaboration. She founded an organization called New Arts Collaboration and wanted to know if I wanted to contribute some video. It took some time going back and forth, but we put together a collaborative multimedia piece. Brett Austin composed the music, titled How Deep is the Valley, and I made a custom video piece for it. She performed it live in San Francisco a few months later.

Ting Luo, New Arts Collaboration, San Francisco, CA
How Deep is the Valley
Main Street Arts, Clifton, NY
Gray Loft Gallery, Oakland, CA
Another Hole in the Head film festival, San Francisco, CA
Decode Gallery, Tucson, AZ

Alviso Aviary

A nearby nature preserve is one of my favorite day hikes. It’s very open and clear and has a rich variety of birds. I took some footage of lost seagulls one day and was inspired to make some animation assets. This is a minor project that will eventually get folded into some other context.

Original footage
illustration of bird wing movements
Conversion process
Final animation

Bric-a-brac

I found a gold frame in a trash can that had upholstery fabric inside. It was probably a mount for a Virgin of Guadalupe statue, which is common around here. I brought it back to my studio and was inspired to make use of this Pirelli tank car graphic I had been digitizing. After some cleanup work in Inkscape, I sent the outline to my vinyl cutter with some black vinyl material. The result was pretty sweet.

graphic of car on fabric inside a frame

Fresh from the success of my trash collage, I decided to do a whole series. I scoured some local thrift stores and a few yard sales for frames. My art studio neighbor had a bunch he donated. I ended up with 12 cheapo gold frames of various sizes.

My grandfather used to work re-upholstering furniture and I thought there might be a supply store for that kind of work. San Jose didn’t have any furniture specific shops, but there are a lot of fabric stores. One of the best was actually close to my house, Fabrics R Us. They had a bunch of inexpensive, but ornate and metallic, upholstery fabric. I came back with a nice variety that I added some donated wallpaper to.

A trip to Home Depot got me some MDF that I cut up and wrapped in fabric. I took some photos of the results and began to lay out designs in Inkscape. I had done a lot of vector drawing for the Wolves project, so I had a workflow ready.

I spent many months collecting imagery and designs to convert to graphic outlines. I avoided the internet and made use of the local library and even some old Playboys I bought at a record shop. I have plenty to work with now, but the digitizing and plotting has been time consuming.

[ dogs ]

After 4 years of experimenting, fabrication, and sound design, I premiered [ dogs ] at Anno Domini Gallery in December. It’s an interactive sound art experience that involves 9 people carrying around autonomous speakers with computers inside them. The speakers bark, snarl, or squeal if near one group and resonate within a tone chord if near another group.

Each speaker can detect the distance and disposition of all the other speakers. Some are friends and others enemies. Participants discover which is which by walking around and getting real-time audio feedback from those around them.

The project began with the purchase of 10 cheap sub-woofers from the now defunct Weird Stuff Warehouse. They were unpowered and basically empty. I decided to adopt them and figure out what to do later. At first, I thought I might build an independent 10 channel synthesizer. That would require some kind of communication between them so I bought a bunch of Raspberry Pi Zeros and got to work.
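
The post doesn't spell out how the speakers talk to each other, but one plausible minimal scheme for Pi Zeros on a shared Wi-Fi network is a UDP broadcast beacon, sketched below. The speaker IDs and team names are hypothetical, and actual distance sensing would be a separate layer on top of this.

import json
import socket
import threading
import time

SPEAKER_ID = 3   # hypothetical: each of the 10 speakers gets a unique ID
TEAM = "tone"    # hypothetical friend/enemy grouping
PORT = 5005

def announce():
    """Broadcast who and what we are, once a second."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    beacon = json.dumps({"id": SPEAKER_ID, "team": TEAM}).encode()
    while True:
        sock.sendto(beacon, ("255.255.255.255", PORT))
        time.sleep(1)

def listen():
    """Note which peers are around; a real speaker would trigger sounds here."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        peer = json.loads(data)
        if peer["id"] == SPEAKER_ID:
            continue  # ignore our own broadcasts
        mood = "friend" if peer["team"] == TEAM else "enemy"
        print(f"peer {peer['id']} at {addr[0]} is a {mood}")

threading.Thread(target=announce, daemon=True).start()
listen()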

After many different approaches and some new inspiration from Norcal Noisefest, I decided to make them loud and antagonistic. This came at a time when social media conflict was off the charts, for reasons I’m sure everyone knows. The noise and anger level got so high, it was hard to tell who was saying what and why.

A chance encounter at Streetlight Records brought a CD full of animal sounds into my studio. I used many of those clips as the basis for layering and pitching individual sounds for each speaker. The result is a tiered collection of distorted and loud samples of animals in distress. The psychological effect of working on those sounds for hours and hours was pretty intense.

In the end, the experience is heavily influenced by the people participating. 2 rounds of experiences with different groups yielded very different results. It achieved the status of social experiment above whatever artistic intent I wielded. The conversations after each experience were really interesting and lasted a while.

Next year

On my trip across the country I had a lot of time to think. I wondered about the distribution and reception of this art work. I thought about what I really wanted to accomplish. I managed to carve an art life out of this crazy year and I’m proud of that. It was exhausting, though.

A couple of months ago, I got some consequential personal news. I had to decide where I was going to live in 2022 because the space I am renting is being sold. So, I decided to take the leap on a move I have been considering for a while.

I’m moving to Berlin, Germany. I hope to connect with the art community there and globally. So, next year my annual art year recap will come from Berlin.

MMXX: signals, sounds, sights

I spent most of the year in my art studio while the city around me contracted and calcified due to Covid. I was fortunate that my plans coincided with the timing and degree of changes in the world. It could have very easily gone the other way, as I’ve seen firsthand. Lots of my friends in the art community are struggling.

My work this year reflects more studio and internet based processes. Previous years always included public festivals, performances, and collaborations. Some of that change was to save money, but it was also an effort to make use of what I had around me. It was to stay present and maintain momentum with ongoing projects.

I did actually manage to pull off a few public projects, including a portable projection piece that had animated wolves running on rooftops. I savored that experience and learned a lot from the constraints of lock-down art performances.

Looking back on this year, I see new priorities being formed. While the coding and online projects were effective, the amount of screen time required took a toll. I relished the drawing projects I had and hope to keep working in ways that make a huge mess.

Sightwise

My studio complex has a co-op of artists called FUSE Presents. We hold regular group art shows in normal times and for each show, two artists get featured. I was one of the featured artists for the March 2020 show. That meant I got extra gallery space and special mention in marketing materials.

The work I picked was drawn from a variety of efforts in the previous two years. As a grouping, it represented my current best efforts as a multimedia artist. I worked hard to finalize all the projects and really looked forward to the show.

It combined abstract video, traditional photography, sculptural video projection, installation work, and works on paper.

I designed the show’s poster in open source software called Inkscape.

Unfortunately, the show happened right as the first announcements about the local spread of Covid had begun. People were already quarantined and we heard about the first deaths in our county. That news didn’t exactly motivate people to come out to the art show. Attendance was sparse at best. But, all that work is finished now and ready for future exhibits.

Camel

I found a cigarette tin that had been used as a drug paraphernalia box and decided to build a synthesizer out of it. I had been experimenting with a sound synthesis library called Mozzi and was ready to make a standalone instrument with it. I spent about a month on the fabrication and added a built-in speaker and battery case to make it portable. Sounds pretty rad.

I released my code as open source in a Github repo and a follower from Vienna, Austria replicated my synth using a cake box from Hotel Sacher. (apparently famous for their luxury cakes?)

Wolves

The Wolves project was a major undertaking that took place over 2 years. It began with an interest in the Chernobyl wolves that became a whole genre of art for me.

I began hand digitizing running wolves from video footage and spent a year adding to that collection. I produced hundreds and hundreds of hand-drawn SVG frames and wrote some JavaScript that animated those frames in a variety of ways. I got to the point where I could run a Raspberry Pi and a static video projector with the wolves running on it. I took a break from the project after that.
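
The digitizing started from stills pulled out of the footage. Here is a sketch of that frame-extraction step with OpenCV; the filenames are hypothetical, and every exported frame was then traced by hand into an SVG.

import os

import cv2  # OpenCV

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("wolves_source.mp4")  # hypothetical source clip
step, index = 3, 0  # export every 3rd frame as a still for hand tracing
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/wolf_{index:05d}.png", frame)
    index += 1
cap.release()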

By the time I returned to the project, the Covid lockdown was in full swing and American city streets looked abandoned. We all started seeing footage of animals wandering into urban areas. It made sense to finish the Wolves project as an urban performance, projecting onto buildings from empty streets.

Building a stable, self-powered and portable rig that could be pulled by bicycle turned out to be harder than I thought. There were so many details and technical issues that I hadn’t imagined. Every time I thought I was a few days from launch, I would have to rebuild something that added weeks.

The first real ride with this through Japantown in northern San Jose was glorious. Absolutely worth the effort. I ended up taking it out on the town many times in the months to come.

Power up test in the backyard
San José City Hall
Japantown, north of downtown San José

The above video is from Halloween, which was amazing because so many people were outside walking around. That’s when the most people got to see it in the wild.

But, my favorite moment was taking it out during a power blackout. Whole neighborhoods were dark, except for me and my wolves. I rode by one house where a bunch of kids lived and the family was out in the yard with flashlights. The kids saw my wolves and went crazy, running after them and making wolf howl sounds while the parents laughed. Absolute highlight of the year.

Videogrep

Videogrep is a tool to make video mashups from the time markers in closed captioning files. It’s the kind of thing where you can take a politician’s speech and make him/her say whatever you want by rearranging the parts where they say specific words. It was a novelty in the mid-2000s that was seen on talk shows and such, as a joke. Well, the computer process behind the tool is very useful.

I didn’t create videogrep; Sam Lavigne did, and he released his code on Github. (The “grep” in videogrep comes from the Unix utility grep, used to search text.) What I did do is use it to find other things besides words, such as breathing noises and partial words. I used videogrep to accentuate mistakes and sound glitches as much as standalone speech and words.

Here is a typical series of commands I would use:

# 1. Transcribe the video, producing a transcript with time markers.
videogrep --input videofile.mp4 -tr

# 2. Build a word-frequency list from the transcript to find promising search terms.
cat videofile.mp4.transcription.txt | tr -s ' ' '\n' | sort | uniq -c | sort -r | awk '{ print $2, $1 }' | sed '/^[0-9]/d' > words.txt

# 3. Cut a supercut of clips matching 'keyword', with padding and randomized order.
videogrep -i videofile.mp4 -o outputvideo.mp4 -t -p 25 -r -s 'keyword' -st word

# 4. Stretch and stylize the result: frei0r "nervous" effect, motion-interpolated slow motion, audio at quarter speed.
ffmpeg -i outputvideo.mp4 -filter_complex "frei0r=nervous,minterpolate='fps=120:scd=none',setpts=N/(29.97*TB),scale=1920:1080:force_original_aspect_ratio=increase,crop=1920:480" -filter:a "atempo=.5,atempo=.5" -r 29.97 -c:a aac -b:a 128k -c:v libx264 -crf 18 -preset veryfast -pix_fmt yuv420p if-stretch-big.mp4

Below is a stretched supercut of the public domain Orson Welles movie The Stranger. I had videogrep search for sounds that were similar to speech but not actual words or language. Below that clip is a search of a bunch of 70s employee training films for the word “blue”. Last is a supercut of one of the Trump/Biden debates where the words “football” and “racist” are juxtaposed.

Specific repeated words used in a 2020 Presidential Debate: fear, racist, and football

Vid2midi

While working on the videos produced by videogrep, I found a need for soundtracks that were timed to jumps in image sequences. After some experimenting with OpenCV and Python, I found a way to map various image characteristics to musical notation.

I ended up producing a standalone command-line utility called vid2midi that converts videos into MIDI files. The MIDI file can be used in most music software to play instruments and sounds in time with the video. Thus, the problem of mapping music to image changes was solved.
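
As a rough sketch of the core idea only (not vid2midi's actual code), here is mean frame brightness mapped to MIDI pitch with OpenCV and mido. The brightness-to-note mapping and the assumed 120 BPM tempo are illustrations, not the tool's exact internals.

import cv2
import mido

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30
mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
ticks_per_frame = int(mid.ticks_per_beat * 2 / fps)  # one frame's length, assuming 120 BPM

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    note = 36 + int(gray.mean() / 255 * 48)  # brightness 0-255 maps to notes 36-84
    track.append(mido.Message("note_on", note=note, velocity=64, time=0))
    track.append(mido.Message("note_off", note=note, velocity=64, time=ticks_per_frame))

cap.release()
mid.save("output.mid")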

It’s now open source and available on my Github site.

The video above was made with a macro lens on a DSLR and processed with a variety of video tools I use. The soundtrack is controlled by a MIDI file produced by vid2midi.

Bad Liar

This project was originally conceived as a huge smartphone made from a repurposed big screen TV. The idea is that our phones reflect our selves back to us, but as lies.

It evolved into an actual mirror after seeing a “smart mirror” in some movie. The information in the readout scrolling across the bottom simulates a stock market ticker. Except, this is a stock market for emotions. The mirror is measuring your varying emotional states and selling them to network buyers in a simulated commodities exchange.

Screen test showing emotional stock market
Final demo in the studio

Hard Music in Hard Times

TQ zine is an underground experimental music zine from the U.K. I subscribed a few years ago after reading a seminal essay about the “No audience underground”. I look forward to it each month because it’s unpretentious and weird.

They ran an essay contest back in May and I was one of the winners! My prize was a collection of PCBs to use in making modular synthesizers. I plan to turn an old metal lunchbox into a synth with what I received.

Here is a link to the winning essay:

Lunetta Synth PCB prizes from @krustpunkhippy

Books

I spent much of my earlier art career as a documentary photographer. I still make photographs but the intent and subject matter have changed. I’m proud of the photography I made throughout the years and want to find good homes for those projects.

Last year I went to the SF Art Book Fair and was inspired by all the publishers and artists. Lots of really interesting work is still being produced in book form.

Before Covid, I had plans to make mockups of books of my photographs and bring them to this year’s book fair to find a publisher. Of course, the fair was cancelled. I took the opportunity to do the pre-production work anyway. Laying out a book is time consuming and represents a standalone art object in itself.

I chose two existing projects and one new one. American Way is a collection of photos I made during a 3 month American road trip back in 2003. Allez La Ville gathers the best images I made in Haiti while teaching there in 2011-13 and returning in 2016. The most recent, Irrealism, is a folio of computer generated “photographs” I made using a GAN tool.

It was a thrill to hold these books in my hands and look through them, even if they are just mockups. After all these years, I still want my photos to exist in book form in some way.

Allez La Ville, American Way, Irrealism

Art Review Generator

Working on the images for the Irrealism book mentioned above took me down a rabbit hole into the world of machine learning and generative art. I know people who only focus on this now and I can understand why. There is so much power and potential available from modern creative computing tools. That can be good and bad though. I have also seen a lot of mediocre work cloaked in theory and bullshit.

I gained an understanding of generative adversarial networks (GAN) and the basics of setting up Linux boxes for machine learning with Tensorflow and PyTorch. I also learned why the research into ML and artificial intelligence is concentrated at tech companies and universities. It’s insanely expensive!

My work is absolutely on a shoestring budget. I buy old computer screens from thrift stores. I don’t have the resources to set up cloud compute instances with stacked GPU configurations. I have spent a lot of time trying to figure out how to carve a workflow from free tiers and cheap hardware. It ain’t easy.

One helpful resource is Google Colab. It lets “researchers” exchange notebooks with executable code. It also offers free GPU usage (for now, anyway). That’s crucial for any machine learning project.

When I was laying out the Irrealism book, I wanted to use a computer generated text introduction. But, the text generation tools available online weren’t specialized enough to produce “artspeak”. So, I had the idea to build my own art language generator.

The short story is that I accessed 57 years of art reviews from ArtForum magazine and trained a GPT-2 language model with the results. Then I built a web app that generates art reviews using that model, combined with user input. Art Review Generator was born.
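
As a sketch of what the fine-tuning and generation steps look like with the gpt-2-simple library, not the project's exact pipeline: the corpus filename and hyperparameters below are hypothetical.

import gpt_2_simple as gpt2

MODEL = "355M"  # a mid-sized GPT-2 checkpoint

gpt2.download_gpt2(model_name=MODEL)
sess = gpt2.start_tf_sess()

# artforum_reviews.txt: hypothetical plain-text dump of the review corpus.
gpt2.finetune(sess,
              dataset="artforum_reviews.txt",
              model_name=MODEL,
              steps=1000,
              save_every=200,
              sample_every=100)

# Generation, roughly as a Flask endpoint would call it with user input as the prefix.
reviews = gpt2.generate(sess,
                        prefix="The artist's new work explores",
                        length=200,
                        temperature=0.9,
                        return_as_list=True)
print(reviews[0])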

This really was a huge project and if you’re interested in the long story, I wrote it up as a blog post a few months ago. See link below.

See examples of generated results and make your own.

Kiosk

Video as art can be tricky to present. I’m not always a fan of the little theaters museums create to isolate viewers. But, watching videos online can be really limited in fidelity of image or sound. Projection is usually limited by ambient light.

I got the idea for this from some advertising signage. It was seeded with a monitor donation (thanks Julie Meridian!) and anchored with a surplus server rack I bought. The killer feature is that the audio level rises and falls depending on whether someone is standing in front of it. That way, all my noise and glitch soundtracks aren’t at top volume all the time.

This plays 16 carefully selected videos in a loop and runs autonomously. No remote control or start and stop controls. Currently installed at Kaleid Gallery in downtown San Jose, CA.
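
The post doesn't say how the kiosk senses a viewer, so here is one plausible way to get that rise-and-fall behavior on a Linux media box: webcam frame differencing driving the ALSA mixer. The motion threshold and volume levels are hypothetical.

import subprocess

import cv2

def set_volume(percent):
    # Assumes a standard ALSA "Master" control on the kiosk's computer.
    subprocess.run(["amixer", "set", "Master", f"{percent}%"], check=True)

cam = cv2.VideoCapture(0)
prev = None
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is not None:
        motion = cv2.absdiff(prev, gray).mean()
        # Someone moving in front of the kiosk brings the volume up; otherwise keep it low.
        set_volume(80 if motion > 4 else 20)
    prev = gray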

Holding the Moment

Hanging out in baggage claim with no baggage or even a flight to catch

In July, the San José Office of Cultural Affairs announced a call for submissions for a public art project called Holding the Moment. The goal was to showcase local artists at Norman Y. Mineta San José International Airport.

COVID-19 changed lives everywhere — locally, nationally, and internationally. The Arts, and individual artists, are among those most severely impacted. In response, the City of San José’s Public Art Program partnered with the Norman Y. Mineta San José International Airport to offer local artists an opportunity to reflect and comment on this global crisis and the current challenging time. More than 327 submissions were received, and juried by a prominent panel of Bay Area artists and arts professionals. Ultimately 96 artworks by 77 San José artists were awarded a $2,500 prize and a place in this six-month exhibition.

SAN JOSE OFFICE OF CULTURAL AFFAIRS

Two of my artworks were chosen for this show and they are on display at the airport until January 9. They picked some challenging pieces, PPE and Mask collage, with interesting back stories of their own.

Here are the stories of the two pieces they chose for exhibition.

PPE

The tale of this image begins in Summer of 1998. I had a newspaper job in Louisiana that went badly. One of the few consolations was a box of photography supplies I was able to take with me. In that box was a 100′ bulk roll of Ilford HP5+ black and white film. My next job happened to involve teaching digital photography so I stored that bulk roll, unopened and unused, for decades. I kept it while I moved often, always thinking there would be some project where I would need a lot of black and white film.

Earlier this year, I was inspired to buy an old Nikon FE2 to make some photos with. I just wanted to do some street photography. After Covid there weren’t many people in the streets to make photos of. But, I did break out that HP5+ that I kept for decades and loaded it onto cassettes for use in the camera I had bought. I also pulled out a Russian Zenitar 16mm f2.8 that I used to shoot skateboarding with.

This past Summer, I went to Alviso Marina County Park often. It’s a large waterfront park near my house that has access to the very bottom of San Francisco bay. People would wear masks out in the park and I even brought one with me. It was absolutely alien to wear protective gear out in a huge expanse like that.

So, my idea was to make a photo that represented that feeling. I brought my FE2 with the old film and Zenitar fisheye to the park, along with a photo buddy to actually press the button. People walking by were weirded out by the outfit, but that’s kind of the desired effect.

This image was enlarged and installed in the right-hand cabinet at the airport show.

An interesting side note to this project was recycling the can that the old film came in. Nowadays that would be made of plastic but they still shipped bulk film in metal cans back then. I took that can and added some knobs and switches to control a glitching noisemaker I had built last year. So, that old film can is now in use as a musical instrument.

The film can that used to hold 100′ of Ilford HP5+ is now a glitch sound machine

Mask Collage

Face masks are a part of life now but a lot of people are really pissed that they have to wear them. I was in the parking lot of a grocery store and a guy in front of me was talking to himself, angry about masks. Turns out he was warming up to argue with the security guard and then the manager. While I was inside shopping (~20 minutes) he spent the whole time arguing loudly with the manager. It was amazing to me how someone could waste that much time with that kind of energy.

When I got back to my studio I decided to draw a picture of that guy in my sketchbook. That kicked off a whole series of drawings over the next month.

I have a box of different kinds of paper I have kept for art projects since the early 90s. In there was a gift from an old roommate: a stack of blank blood test forms. I used those forms as the backgrounds for all the drawings. Yellow and red spray ink from an art colleague who moved away provided the context and emotional twists.

The main image is actually a collage of 23 separate drawings. It was enlarged and installed in the left-hand cabinet at the airport show.

Internet Archive

A few weeks ago, my video Danse des Aliénés won 1st place in the Internet Archive Public Domain Day film contest. It was made entirely from music and films released in 1925.

Danse des Aliénés

Film and music used:

In Youth, Beside the Lonely Sea

Das wiedergefundene Paradies
(The Newly Found Paradise)
Lotte Lendesdorff and Walter Ruttmann

Jeux des reflets et de la vitesse
(Games on Reflection and Speed)
Henri Chomette

Koko Sees Spooks
Dave Fleischer

Filmstudie
Hans Richter

Opus IV
Walter Ruttmann

Joyless Street
Georg Wilhelm Pabst

Danse Macabre Op. 40 Pt 1
(Dance of Death)
Camille Saint-Saëns
Performed by the Philadelphia Symphony Orchestra

Danse Macabre Op. 40 Pt 2
(Dance of Death)
Camille Saint-Saëns
Performed by the Philadelphia Symphony Orchestra

Plans? What plans?

Vaccines are on the way. Hopefully, we’ll see widespread distribution in the next few months. Until then, I’ll still be in my studio working on weird tech art and staying away from angry mask people.

I am focused on future projects that involve a lot of public participation and interactivity. I think we will need new ways of re-socializing and I want to be a part of positive efforts in that direction.

I also have plans for a long road trip from California to the east coast and back again. It will be a chance to rethink the classic American photo project and find new ways to see. But, that depends on how things work out with nature’s plans.

Fine-tuning a GPT-2 language model and generating text with a Flask web app

This is a long blog post. I included many details that were part of the decision process at each phase. If you are looking for a concise tech explainer, try this post instead.

I recently published a book of computer generated photographs and wanted to also generate the introductory text for it. I looked around for an online text generator that lived up to the AI hype, but they were mostly academic or limited demos. I also wanted a tool that would yield language specific to art and culture. It didn’t exist, so I built it.

My first impulse was to make use of an NVIDIA Jetson Nano that I had bought to do GAN video production. I had spent a few months trying to get that up and running but got frustrated with dependency hell. I pulled it back out and started from scratch using recent library updates from NVIDIA.

Long story short: it was a failure. Getting that little GPU machine running with modern PyTorch and Tensorflow was a huge ordeal and it is just too under-powered. Specifically, 4gb of RAM isn’t enough to load even basic models for manipulation. I was asking much more from it than its design intended, but I was hoping it was hackable. Nope.

FWIW, I did come up with a Gist that got it very close to a ML configuration. Others may find it valuable if lost in that rabbit hole.

The breakthrough came while I was digging around the community for Huggingface.co tutorials that focused on deploying language models. Somebody recommended a Google Colab notebook by Max Woolf that simplified the training process. I discovered that Google Colab is not only a free service, it also allows attaching a GPU runtime for use in scripts. That’s a big deal because online GPU resources can be expensive and complicated to set up.

In learning to use that notebook I realized I needed a large dataset to train the GPT-2 language model in the kind of writing I wanted it to produce. A few years ago I had bought a subscription to ArtForum magazine in order to read through the archives. I was, and still am, interested in the art criticism of the 60s and 70s because so much of it came from disciplined and strong voices. Art criticism was still a big deal back then and taken very seriously.

I went back to the ArtForum website and found the archives were complete back to 1963 and presented with a very consistent template system. Some basic web scraping could yield the data I needed.

Scraping with Python into an SQLite3 database

The first thing I did was pay for access to the archive. It was worth the price and I got a subscription to the magazine itself. Everything I did after that was as a logged in user, so nothing too sneaky here.

I used Python with the Requests and Beautiful Soup libraries to craft the scraping code. There are many tutorials for web scraping out there, so I won’t get too detailed here.

I realized there might be circuit breakers and automated filtering on the server, so I took steps to avoid hitting those. First, I rotated the User-Agent headers to avoid fingerprinting, and I used a VPN proxy to request from different IP addresses. Additionally, I put a random delay of ~1 second between requests so the script didn’t hammer the server. That was probably more generous than it needed to be, but I wanted the script to run unattended, so I erred on the side of caution. There was no real hurry.

headers_list = [
    # iphone
     {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-us",
        "Connection": "keep-alive",
        "Accept-Encoding": "br, gzip, deflate",
        "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 13_1_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.1 Mobile/15E148 Safari/604.1"
    },
    # ipad
    {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-us",
        "Connection": "keep-alive",
        "Accept-Encoding": "br, gzip, deflate",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.1 Safari/605.1.15"
    },
    # mac chrome
    {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-us",
        "Connection": "keep-alive",
        "Accept-Encoding": "br, gzip, deflate",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36"
    },
    # mac firefox 
    {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-us",
        "Connection": "keep-alive",
        "Accept-Encoding": "br, gzip, deflate",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:70.0) Gecko/20100101 Firefox/70.0" 
    }
]

This was a 2-part process. The first was a script that collected all the links in the archive from the archive landing page, grouped by decade. The larger part came next: requesting all 21,304 collected links and isolating the text within.
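The first pass looked something like this sketch. The index URL and CSS class are placeholders, since the real selectors came from inspecting the archive pages:

# Sketch of the first pass: harvest review links from the archive index.
# The URL and class name below are placeholders, not the real selectors.
import requests
from bs4 import BeautifulSoup

links = []
index = requests.get("https://www.artforum.com/print/reviews/archive")  # placeholder
soup = BeautifulSoup(index.content, 'html.parser')
for a in soup.find_all('a', class_='archive__link'):  # placeholder class
    links.append((a['href'], a.text.strip()))

The second pass walked that list of links, pulled out each review’s text, and stored it: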

import random
import sqlite3
from time import sleep

import requests
from bs4 import BeautifulSoup

# 'links' comes from the first pass above; 'headers_list' is defined earlier
proxies = [{"https": "http://10.10.1.10:3128"}]  # placeholder VPN endpoints
sesh = requests.Session()
conn = sqlite3.connect("reviews.db")
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS reviews(date, title, text)")
quantity = len(links)

for x in range(1, quantity):
    # rotate proxy and browser fingerprint every 10 requests
    if (x == 1) or ((x % 10) == 0):
        proxy = random.choice(proxies)
        headers = random.choice(headers_list)
        sesh.headers = headers
    sleep(random.uniform(.5, 1.5))  # random delay so the server isn't hammered
    URL = links[x][0]
    page = sesh.get(URL, proxies=proxy)
    soup = BeautifulSoup(page.content, 'html.parser')
    h1 = soup.find_all('h1', class_='reviews__h1')
    body = soup.find_all('section', class_='review__content')
    date = URL[39:43]  # the year sits at a fixed position in the archive URLs
    title = " ".join(str(h1[0].text).split())
    text = " ".join(str(body[0].text).split())
    # strip curly quotes
    text = text.replace(u"\u2018", "").replace(u"\u2019", "").replace(u"\u201c", "").replace(u"\u201d", "")
    try:
        cursor.execute("INSERT INTO reviews(date,title,text) VALUES (?,?,?)", (date, title, text))
    except sqlite3.Error as error:
        print("Failed to insert", error)

conn.commit()

Once all the reviews were collected, I ran some cleaning queries and regex to get rid of punctuation. Then it was a simple export to produce a CSV file from the database.
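The export itself only takes a few lines. A simplified sketch (the real cleaning queries were more involved than this one regex):

# Simplified sketch of the cleanup and CSV export.
import csv
import re
import sqlite3

conn = sqlite3.connect("reviews.db")
cur = conn.cursor()

with open("reviews.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for date, title, text in cur.execute("SELECT date, title, text FROM reviews"):
        text = re.sub(r"[^\w\s]", "", text)  # strip leftover punctuation
        writer.writerow([date, title, text])

conn.close()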

Training the GPT-2 model

Now that I had a large collection of language examples to work with, it was time to feed the language model. This stage requires the most compute power out of any part of the project. It also makes use of specialized libraries that run most efficiently with a GPU. On a typical desktop computer system, the GPU is usually the video card and comes from a variety of vendors.

Last decade, the rise of cryptocurrency mining absorbed large stocks of GPU hardware. People built huge GPU farms to generate this new virtual gold. That drove innovation and research capital in the GPU manufacturing market. Modern machine learning and neural network implementations reap the benefits of that fast progress.

In academic and corporate environments, custom onsite infrastructure is an option. For smaller businesses, startups, and independent developers, that can be cost prohibitive. What has evolved is a new GPU provisioning economy. In some ways it’s a throwback to the mainframe timeshare ecosystems of the 70s. Full circle.

For this project, my budget was zero. GPU attached server instances come at a premium starting at $.50/hr. ($360 a month). So, I looked into all kinds of free tiers and promotional servers. I even asked around at Grey Area hoping some alpha geek had her own GPU cluster she was willing to share. No dice.

What I did find was a Tensorflow training tutorial using Google Colab, which offers FREE GPU compute time as part of the service. I didn’t know about Colab, but I had heard plenty about Jupyter notebooks from former co-workers. They are sharable research notebooks that can run code. Jupyter depends on the underlying capabilities of the host machine. Google has vast resources available, so their notebook service includes GPU capability.

The tutorial is straightforward and easy. After months of wrestling the Jetson Nano into a stalemate, watching Colab load Tensorflow and connect to my CSV so fast was shocking. I had simple training working in less than an hour. Generated text came only a few minutes later. I was in business.

There are a few options for the training function and I spent some time researching what they meant and tinkering. The only option that had a relevant effect was the number of training steps, which defaults to 1000.

import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()

gpt2.finetune(sess,
              dataset=file_name,  # the CSV of scraped reviews
              model_name='355M',  # medium GPT-2
              steps=10000,
              restore_from='latest',
              run_name='runmed',
              print_every=10,
              sample_every=200,
              save_every=500,
              overwrite=True
              )

I had interesting results at 200 training steps, good results at 1000, better at 5000 steps. I took that to mean more is always better, which is not true. I ended up training for 20000 steps and that took two nights of 6 hour training sessions. Based on the results, I think I’m getting the best it is capable of and more training wouldn’t help. Besides, I have a suspicion that I over-trained it and now have overfitting.

Something I was very fortunate with, but didn’t realize until later, was the length of the original reviews. They are fairly consistent in length and structure. By structure I mean paragraph length and having an opening or closing statement. They are mostly in the third person as well.

But it was the length that was key. I hit upon the sweet spot of what GPT-2 can produce with the criteria I had. It’s not short form, but they aren’t novels either. 400-600 words is a good experimental length to work with.

Another benefit of training like this was being able to generate results so soon in the process. It was really helpful to know what kind of output I could expect. The first few prompts were a lot of fun and I was pleasantly surprised to see so many glitches and weirdness. I was excited about sharing it, too. I thought that if more people could experiment without having to deal with any of the tech, they might be inspired to explore it as a creative tool.

Into the wild

Now that I had a trained language model, the challenge of deploying it was next. The Colab notebook was fine for my purposes, but getting this in front of average folks was going to be tricky. Again, I had to confront the issue of compute power.

People have high expectations of online experiences now. Patience and attention spans are short. I wasn’t intending a commercial application, but I knew people would expect something close to what they are given in consumer contexts. That meant real-time or near real-time results.

The Hugging Face crew produced a close-to-real-time GPT-2 demo called Talk to Transformer that was the inspiration for producing an app for this project. That demo produces text results pretty fast, but they’re limited in length.

I made one last lap around the machine learning and artificial intelligence ecosystem, trying to find a cheap way to deploy a GPU support app. Google offers GPUs for their Compute Engine, Amazon has P3 instances of EC2, Microsoft has Azure NC-series, IBM has GPU virtual servers, and there are a bunch of smaller fish in the AI ocean. Looking through so much marketing material from all of those was mind-numbing. Bottom line: it’s very expensive. A whole industry has taken shape.

I also checked my own web host, Digital Ocean, but they don’t offer GPU augmentation yet. They do offer high-end multi-core environments in traditional configurations, though. Reflecting on my struggle with the Jetson Nano, I remembered an option when compiling Tensorflow. There was a flag for --config=cuda that could be omitted, yielding a CPU-only version of Tensorflow.

That matters because Tensorflow is at the core of the training and generation functions and is the main reason I needed a GPU. I knew CPU-only would be way too slow for training, but maybe the generator would be acceptable. To test this out, I decided to spin up a high powered Digital Ocean droplet because I would only pay for the minutes it was up and running without a commitment.

I picked an upgraded droplet and configured and secured the Linux instance. I also installed all kinds of dependencies from my original gist because I found that they were inevitably used by some installer. Then I tried installing Tensorflow using the Python package manager pip. That dutifully downloaded Tensorflow 2 and installed it from the resulting wheel. Then I tried to install the gpt-2-simple repository that was used in the Colab tutorial. It complained.

The gpt-2-simple code uses Tensorflow 1.x, not 2. It is not forward compatible either. Multiple arcane exceptions were thrown and my usual whack-a-mole skills couldn’t keep up. Downgrading Tensorflow was required, which meant I couldn’t make use of the pre-built binaries from package managers. My need for a CPU-only version was also an issue. Lastly, Tensorflow 1.x doesn’t work with Python 3.8.2. It requires 3.6.5.
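If you head down the same path, it’s worth checking those constraints up front instead of discovering them through stack traces. A minimal guard, based on the version requirements above:

# Fail fast if the environment won't work with gpt-2-simple,
# per the version constraints described above.
import sys

import tensorflow as tf

assert tf.__version__.startswith("1."), \
    "gpt-2-simple needs Tensorflow 1.x, found " + tf.__version__
assert sys.version_info[:2] == (3, 6), \
    "Tensorflow 1.x wants Python 3.6, found %d.%d" % sys.version_info[:2]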

I reset the Linux instance and got ready to compile Tensorflow 1.15 from source. Tensorflow uses a build tool called Bazel, and v0.26.1 of the tool is specifically required for these needs. I set up Bazel and cloned the repo. After launching the build, I thought it was going fine but realized it was going to take a looooong time, so I let it run overnight.

The next day I saw that it had failed with OOM (Out Of Memory) in the middle of the night. My Digital Ocean droplet had 8gb of RAM so I bumped that up to 16gb. Thankfully I didn’t have to rebuild the box. I ran it again overnight and this time it worked. It took around 6 hours on a 6 core instance to build Tensorflow 1.15 CPU-only. I was able to downgrade the droplet afterwards so I didn’t have to pay for the higher tier any more. FWIW, compiling Tensorflow cost me about $1.23.

I then loaded gpt-2-simple, the medium GPT-2 (355M) model, and my checkpoint folder from fine-tuning in Google Colab. That forms the main engine of the text generator I ended up with. I was able to run some manual Python tests and get generated results in ~90 seconds. Pretty good! I had no idea how long it was going to take when I started down this path. My hunch that a CPU-only approach would be enough for generation paid off.
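The manual test amounted to something like this; the prompt is just an example:

# Rough sketch of the CPU-only generation test: load the fine-tuned
# checkpoint once, then time a single generation pass.
import time

import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name='runmed')  # checkpoint folder copied down from Colab

start = time.time()
texts = gpt2.generate(sess,
                      run_name='runmed',
                      prefix="The paintings feel like",  # example prompt
                      length=400,
                      return_as_list=True)
print(texts[0])
print("generated in %.1f seconds" % (time.time() - start))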

Now I had to build a public facing version of it.

The Flask app

Robotron

The success of the project so far came from Python code. So, I decided to deploy it also using Python, as a web application. I’ve been building websites for ~20 years and used many different platforms. When I needed to connect to server processes, I usually did it through an API or some kind of PHP bridge code.

In this case I had my own process I needed to expose and then send data to. I figured having a Python web server would make things easier. It was definitely easier at the beginning when I was experimenting, but as I progressed the code became more modular and what had been easy became a liability. Flask is a Python library used to build web services (mostly websites) and it has a simple built-in web server. I knew plenty about it, but had never used it in a public-facing project.

One of the first development decisions I made was to split the web app and text generator into separate files. I could have tried threading, but there was too much overhead already with Tensorflow and I doubted my ability to correctly configure a balanced multi-process application in a single instance.* I wanted the web app to serve pages regardless of the state of the text generation. I also wanted them to have their own memory pools that the server would manage, not the Python interpreter.

* I did end up using Threading for specific parts of the generator at a later stage of the project.

Every tutorial I found said I should not use the built-in Flask web server in production. I followed that advice and instead used NGINX and registered the main Python script as a WSGI service. After I had already researched those configurations, I found this nginx.conf generator that would have made things faster and easier.

After securing the server and getting a basic Hello World page to load, I went through the Let’s Encrypt process to get an SSL certificate. I sketched out a skeleton of the site layout and the pages I would need. The core of the site is a form to enter a text prompt, a Flask route to process the form data, and a route and template to deliver the generated results. Much of the rest is UX and window dressing.

A Flask app can be run from anywhere on the server, not necessarily the /html folder typically found in a PHP-based site. In order to understand page requests and deliver relevant results, a Python script describes the overall environment and the routes that will each yield a web page. It is a collection of Python functions, one for each route.

@app.route("/")
def index():
    current_url = base_url
    return render_template('index.html', page_title='Art Review Generator', current_url=current_url, copyright_year=today.year)

@app.route("/generate/")
def generate():
    current_url = base_url + "/generate/"
    return render_template('generate.html', page_title='Generate a review', current_url=current_url, copyright_year=today.year)

@app.route("/examples/")
def examples():
    current_url = base_url + "/examples/"
    return render_template('examples.html', page_title='Examples of generated art reviews', current_url=current_url, copyright_year=today.year)

For the actual pages, Flask uses a built-in template engine called Jinja. It is very similar to Twig, which is popular with PHP projects. There are also Python libraries for UI and JavaScript, but it felt like overkill to grab a bunch of bloatware for this basic site. My CSS and JS includes are local, static, and limited to what I actually need. There is no inherent template structure in Flask, so I rolled my own with header, footer, utility, and content includes of modular template files.

Python

return render_template('index.html', page_title='Art Review Generator')

HTML

<title>{{ page_title }}</title>

Based on experience, the effort you put into building out a basic modular approach to templates pays dividends deep into the process. It’s so much easier to manage.

{% include 'header.html' %}
{% include 'navigation.html' %}
	<div class="container">
		<div class="row">
			<div class="col">
				<p>This site generates art reviews</p>
			</div>
		</div>
	</div>
{% include 'footer.html' %}

After building out the basic site pages and navigation, I focused on the form page and submission process. The form itself is simple but does have some JavaScript to validate fields and constrain the length of entries. I didn’t want people to copy and paste a lot of long text. On submission, the form is sent as POST to a separate route. I didn’t want the URL parameters that a GET request would expose, because that’s less secure and generates errors if all the parameter permutations aren’t sanitized.

The form processing is done within the Flask app, in the function for the submission route. It checks and parses the values, stores the values in a SQLite database row, and then sends a task to the RabbitMQ server.

@app.route("/submission", methods=["POST"])
def submission():
    payload = request.form
    if spam_check:
        email = payload["submission_email"]
        valid_email = is_email(email, check_dns=True)
        if valid_email:
            # collecting form results
            prompt = payload["prompt"]
            # empty checkbox caused error 400 so change parser
            if not request.form.get('update_check'):
                subscribe = 0
            else:
                subscribe = 1
            eccentricity = int(payload["eccentricity"])
            ip = request.environ.get('HTTP_X_REAL_IP', request.remote_addr)
            result = {"email": email, "prompt": prompt, "eccentricity": eccentricity, "subscribe": subscribe, "ip": ip}

            # create database entry
            dConn = sqlite3.connect("XXXXXXXX.db")
            dConn.row_factory = sqlite3.Row
            cur = dConn.cursor()

            # get id of last entry and hash it
            cur.execute("SELECT * FROM xxxxxx ORDER BY rowid DESC LIMIT 1")
            dRes = cur.fetchone()
            id_plus_one = str(int(dRes["uid"]) + 1).encode()
            b.update(id_plus_one)
            urlhsh = b.hexdigest()
            now = str(datetime.now(pytz.timezone('US/Pacific')).strftime("%Y-%m-%d %H:%M:%S"))

            # insert into database
            try:
                cur.execute("INSERT INTO xxxxxx(ip,email,eccentricity,submit_date,prompt,urlhsh,subscribed) VALUES(?,?,?,?,?,?,?)", (ip, email, eccentricity, now, prompt, urlhsh, subscribe))
                logging.debug('Prompt %s submitted %s', urlhsh, now)
            except sqlite3.Error as error:
                logging.exception(error)

            dConn.commit()
            cur.close()
            dConn.close()

            # notify task queue
            rabbit_connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
            rabbit_channel = rabbit_connection.channel()
            rabbit_channel.queue_declare(queue='task_queue', durable=True)
            rabbit_channel.basic_publish(
                exchange='',
                routing_key='task_queue',
                body=urlhsh,
                properties=pika.BasicProperties(
                    delivery_mode=2,  # make message persistent
                ))
            rabbit_connection.close()

            # confirmation page
            return render_template('submission.html', page_title='Request submitted', result=result, copyright_year=today.year)
        else:
            ed = email + " is not a valid email address"
            return render_template('error.html', page_title='Error', error_description=ed, copyright_year=today.year)
    else:
        return render_template('submission.html', page_title='Request submitted', result="spam", copyright_year=today.year)

Chasing the rabbit

RabbitMQ is a server-based message broker that I use to relay data between the submission form and the generator. Although the two scripts are both Python, it’s better to have them running separately. There is no magical route between concurrent Python scripts, so some sort of data broker is helpful. The first script creates a task and tells RabbitMQ it is ready for processing. The second script checks in with RabbitMQ, finds the task, and executes it. RabbitMQ is the middleman between the two.

It does all this very fast, asynchronously, and persistently (in a crash or reboot it remembers the tasks that are queued). Also, it was easy to use. Other options were Redis and Amazon SQS, but I didn’t need the extra overhead or features those offer, or want the dependencies they require.

It was easy to install and I used the default configuration. Specifically I chose to limit connections to localhost for security. That is the default setting, but it can absolutely be set up to allow access from another server. So, I had the option of running my web app on one server and the generator on another. Something to consider when scaling for production or integrating into an existing web property.

sudo apt install rabbitmq-server
sudo rabbitmq-diagnostics status
Status of node rabbit@xxxxxx ...
Runtime

OS PID: 909
OS: Linux
Uptime (seconds): 433061
Is under maintenance?: false
RabbitMQ version: 3.8.8
Node name: rabbit@xxxxxx
Erlang configuration: Erlang/OTP 23 [erts-11.0.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:64]
Erlang processes: 294 used, 1048576 limit
Scheduler run queue: 1
Cluster heartbeat timeout (net_ticktime): 60

Delivering results

I chose to deliver results with an email notification instead of (near) real-time for a number of reasons. The primary issue was compute time. My best tests were getting results in 93 seconds. That’s with no server load and an ideal environment. If I tried to generate results while people waited on the page, the delay could quickly climb to many minutes or hours. Also, the site itself could hang while chewing on multiple submissions. I don’t have a GPU connected to the server, so everything goes through normal processing cores.

When Facebook first started doing video, the uploading and processing times were much longer than they are now. So, to keep people clicking around on the site they set up notifications for when the video was ready. I took that idea and tried to come up with delayed notifications that didn’t require a login or keeping a tab/window open. That’s very important for mobile users! Nobody is going to sit there holding their phone while this site chews on Tensorflow for 10 minutes.

I also thought of the URL shortener setup, where random characters serve as a bookmark for a link. Anybody can use the link and it carries no content or identity signifiers in the URL.

The delivery process was divided into three stages: processing, notification, and presentation.

Processing stage

The main computational engine of the whole project is a Python script that checks RabbitMQ for tasks, executes the generating process with Tensorflow, stores the result, and sends an email to the user when it is ready.

Checking RabbitMQ for tasks

import functools
import threading

import pika

# each task runs 'motor' (defined below) in its own thread so this
# blocking consumer loop doesn't stall while Tensorflow is working
def on_message(channel, method_frame, header_frame, body, args):
    (connection, threads) = args
    delivery_tag = method_frame.delivery_tag
    t = threading.Thread(target=motor, args=(channel, delivery_tag, header_frame, body))
    t.start()
    threads.append(t)


connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
channel.basic_qos(prefetch_count=1)
threads = []
on_message_callback = functools.partial(on_message, args=(connection, threads))
channel.basic_consume(queue='task_queue', on_message_callback=on_message_callback)

channel.start_consuming()

The reason I have to use threading is that querying RabbitMQ is a blocking process. It’s a very fast and lightweight blocking process, but it can absolutely monopolize resources when running. I found that out the hard way: the script kept silently crashing when it asked for new tasks at the same time it was using Tensorflow. Trust me, it took a few days of logging and debugging to figure out what was causing the generation process to simply disappear without error.

Retrieve prompt from db and start Tensorflow

    logging.debug("Rabbit says %r" % body.decode())

    # get record using rabbitmq msg
    urlhsh = body.decode()
    dConn = sqlite3.connect("XXXXXXXX.db")
    dConn.row_factory = sqlite3.Row
    cur = dConn.cursor()
    try:
        cur.execute("SELECT * FROM xxxxxx WHERE urlhsh = ?", (urlhsh,))
    except sqlite3.Error as error:
        logging.exception(error)
    dRes = cur.fetchone()
    logging.debug("Found %r, processing..." % urlhsh)
    row_id = dRes["uid"]
    temperature = int(dRes["eccentricity"]) * .1

    # the main generation block, graph declaration because threading
    with graph.as_default():
        result = gpt2.generate(
            sess,
            run_name='porridge',
            length=400,
            temperature=temperature,
            prefix=dRes["prompt"],
            truncate="<|endoftext|>",
            top_p=0.9,
            nsamples=5,
            batch_size=5,
            include_prefix=False,
            return_as_list=True,
        )
    result = json.dumps(result)

Because I’m using threading for RabbitMQ, I had to declare a graph for Tensorflow so it had access to the memory reserved for the model.

    with graph.as_default():

Store generated result and send notification email

    # store generated results in db
    now = str(datetime.now(pytz.timezone('US/Pacific')).strftime("%Y-%m-%d %H:%M:%S"))
    try:
        cur.execute("UPDATE xxxxxx SET gen_date = ?, result = ? WHERE uid = ?", (now, result, row_id))
        logging.debug("Published %s on %s", urlhsh, now)
    except sqlite3.Error as error:
        logging.exception(error)
    dConn.commit()
    dConn.close()

    # send notification email
    prompt = dRes["prompt"]
    submit_date = dRes["submit_date"]
    email_address = dRes["email"]
    link = "https://artreviewgenerator.com/review/" + urlhsh
    if len(dRes["prompt"]) < 50:
        preview_text = prompt
    else:
        preview_text = prompt[:50] + "..."
    email = {
        'subject': 'Your results are ready',
        'from': {'name': 'Joshua Curry', 'email': 'info@artreviewgenerator.com'},
        'to': [
            {'email': email_address}
        ],
        "template": {
            'id': '383338',
            'variables': {
                'preview_text': preview_text,
                'prompt': prompt,
                'submit_date': submit_date,
                'link': link
            }
        },
    }
    rest_email = [{'email': email_address, 'variables': {}}]
    try:
        SPApiProxy.smtp_send_mail_with_template(email)
        logging.debug("Notification sent to " + email_address)
        if int(dRes["subscribed"]) == 1:
           SPApiProxy.add_emails_to_addressbook(SP_maillist, rest_email)
    except Exception:
        logging.error("Problem with SendPulse mail: ", exc_info=True)

I’m using SendPulse for the email service instead of my local SMTP server. There are a couple of good reasons for this. Primarily, I want to use this project to start building an email list for my art and tech projects. So, I chose a service that has mailing list features in addition to API features. SendPulse operates somewhere between the technical prowess of Twilio and the friendly features of MailChimp. Also important: their free tier allows 12,000 transactional emails per month, while most other services focus on subscriber counts and treat API access as a value-add to premium plans. Another thing I liked about them was their verification process for SMTP sending. They take spam and spoofing prevention seriously.

Presentation

Upon receipt of the notification email, users are given a link to see the results that have been generated. If I were designing a commercial service, I probably would have delivered the results in the actual email. It would be more efficient. But I also wanted people to share the results they get, so having a short permalink URL that’s easy to copy and paste was important. I also wanted the option of showcasing recent entries. That isn’t turned on now, but I thought it would be interesting to have a gallery in the future. It would also be pretty useful for SEO if I went down that path.

https://artreviewgenerator.com/review/8e92db17

The characters at the end are a hash of the unique id of the SQLite row that was created when the user submission was recorded. Specifically, they are hashed using the Blake2b “message digest” (aka secure hash) from the built-in hashlib library of Python 3. I chose it because the library offers an adjustable digest length for that hash type, unlike others that are fixed at longer lengths.

from hashlib import blake2b

b = blake2b(digest_size=4)
# get id of last entry and hash it
cur.execute("SELECT * FROM xxxxxx ORDER BY rowid DESC LIMIT 1")
dRes = cur.fetchone()
id_plus_one = str(int(dRes["uid"]) + 1).encode()
b.update(id_plus_one)
urlhsh = b.hexdigest()

When the URL is visited, the Flask app loads the page using a simple db retrieval and Jinja template render.

# inside the Flask route for /review/<urlhsh>, after fetching the row
return render_template(
  'review.html',
  page_title=truncated_title,
  prompt=dRes["prompt"],
  review=result_list,
  gen_date=dRes["gen_date"],
  urlhsh=urlhsh,
  current_url=current_url,
  copyright_year=today.year
)

Eventually I would like to offer a gallery of user generated submissions, but I want to gauge participation in the project and get a sense of what people submit. Any time you open up a public website with unmoderated submissions, people can be tempted to submit low value and offensive content.

That is why you see a link beneath the prompt on the results page. I built in the ability for people to report a submission. It’s live and immediately blocks the URL from loading. So feel free to block your own submissions.
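The blocking mechanism is nothing fancy. A sketch of the idea, with the column and template names as stand-ins like the redacted names in the other snippets:

# Sketch of the report route; 'reported' and the template name are stand-ins.
# The /review/ route checks this flag before rendering anything.
@app.route("/report/<urlhsh>")
def report(urlhsh):
    dConn = sqlite3.connect("XXXXXXXX.db")
    cur = dConn.cursor()
    try:
        cur.execute("UPDATE xxxxxx SET reported = 1 WHERE urlhsh = ?", (urlhsh,))
        dConn.commit()
    except sqlite3.Error as error:
        logging.exception(error)
    finally:
        cur.close()
        dConn.close()
    return render_template('reported.html', page_title='Submission blocked', copyright_year=today.year)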

Epilogue

This project has been running for a month now without crashing or hitting resource limits. I’m pretty happy with how it turned out, but I now have a new mouth to feed in the form of hosting costs.

The actual generator results are interesting to me on a few levels. They reveal intents, biases, and challenges that come from describing our culture at large. They also make interesting mistakes. From my point of view, interesting mistakes are a critical ingredient for creative output.

Now go generate your own art review

MMXIX: time, noise, light

This year saw the completion of new sound sculptures and large installation work. It offered up new performance contexts and an expansion of exhibition options. The projects have grown in scale and scope, but the internal journey continues.

Wheel of Misfortune

A few years ago I noticed neighborhood kids putting empty water bottles into the spokes of the back wheels of their bikes. They got a cool motorcycle sound out of it. One of them had two bottles offset, and that produced a rhythmic but offbeat cycle that sounded interesting.

It gave me the idea to use a bicycle wheel for repeating patterns the way drum machines and sequencers do. I also thought it would be an interesting object to build from a visual standpoint.

It took a while, but having the workspace to lay out larger electronics assemblies was helpful. I settled on five sensors in a bladed array reading magnets attached to the spokes.

A first performance at local gallery Anno Domini with Cellista was fun, but the sounds I had associated with the triggers lacked bite. I reworked the Raspberry Pi running Fluidsynth and built 14 new instruments using a glitched noise sound pack I released a few years ago.

To switch between the instruments I came up with a contact mic trigger using a chopstick and an Arduino. It has a satisfying crack when tapped and cycles the noise patches effectively.

The Wheel got a loud and powerful public test at Norcal Noisefest. People responded not only to the novelty of the bicycle wheel, but to the badass sound it could make.

https://www.instagram.com/p/B0XYC6hjkVq/?utm_source=ig_web_copy_link

Oracle

I get asked to do sound performances more often these days and it can be challenging because I don’t have much outboard musical gear. So, I’ve been making a general effort to create more gear to use live. A common need is an interesting way of triggering longform loops I created in my studio.

Taking a cue from the grid controllers used by Ableton Live, I had the idea to build a player that keyed off objects placed under a camera. Reading location and size, it could arrange loops in a similar way.

Computer vision test for Oracle

The project kicked off with an analog video stand I found that was used for projecting documents in a business presentation. I connected that to a primitive but very effective computer vision board for Arduino called the Video Experimenter.

After months of testing with different objects I settled on small white rocks that brought inherent contrast. At a library sale I picked up a catalog of pictograms from Chinese oracle bones that had fascinating icons to predict the future with.

Oracle stones

That clinched the theme of an “oracle” divining the future of a musical performance rather than a musician executing a planned performance.

It has turned out to be really flexible for performances and is a crowd favorite, especially when I let people place the stones themselves.

Oracle at First Friday

Delphi

Smashed tv screen for Oracle
Looks cool, huh? I wish I could say it was intentional. I smashed the screen while loading the equipment for SubZERO this year. Meh, I just went with it.

People give me things, usually broken things. I don’t collect junk though. I learned the hard way that some things take a lot of work to get going for very little payoff. Also, a lot of modern tech is mostly plastic with melted rivets and tabs instead of screws or bolts. They weren’t meant to be altered or repaired.

Big screen TVs are a good example. One of the ways they got so cheap is the modular way they’re made, with parts that weren’t meant to last. I got a fairly large one from Brian Eder at Anno Domini and was interested in getting it running again.

Unfortunately, a smashed HDMI board required some eBay parts and it took more time than expected. Once it was lit up again and taking signal I started running all kinds of content through the connector boards.

When hung vertically, it resembled one of those point-of-purchase displays you see in cell phone stores. I thought about all the imagery they use to sell things and it gave me the idea of showing something more human and real.

In a society that fetishizes youth culture and consumption, we tend to fear aging. I decided to find someone at a late stage of life to celebrate and display four feet high.

That person turned out to be Frank Fiscalini. At 96 years old he has led a full rich life and is still in good health and spirits. It took more than a few conversations to explain why I wanted to film a closeup of his eyes and face, but he came around.

I set the TV up in my studio with his face looping for hours, slowly blinking. I had no real goal or idea of the end. I just lived with Mr. Fiscalini’s face for a while.

I thought a lot about time and how we elevate specific times of our lives over others. In the end, time just keeps coming like waves in the ocean. I happen to have a fair amount of ocean footage I shot with a waterproof camera.

With the waves projected behind his face, my studio was transformed into a quiet meditation on time and humanity.

Other contributions, building scaffolding and P.A. speakers, formed the basis of a large-scale installation. Around this time, I had also been reading a strange history of the Oracle of Delphi.

At first the “oracle” was actually a woman whose insane rants were likely the result of hallucinations from living over gas vents. A group of men interpreted what she said and ended up manipulating powerful leaders from miles around.

Thus Delphi was formed conceptually. The parallels to modern politics seemed plain, but I’ve been thinking a lot about the futility of trying to control or predict the future. This felt like a good time for this particular project.

Balloon synth

The annual SubZERO Festival here in San Jose has been an anchor point for the past few years. One challenge I’ve faced is the strong breeze that blows through in the hour before sunset. For delicate structures and electronics on stands, it’s a problem. Instead of fighting it this year, I decided to make use of it.

I had an idea to put contact mics on balloons so when the wind blew, the balloons would bounce against each other. I thought they might be like bass bumping wind chimes.

Thanks to a generous donation by Balloonatics, I had 15 beautiful green balloons for the night of the festival. Hooked up to mics and an amplifier, they made cool sounds. But it took more force than the breeze could muster to move them enough.

https://www.instagram.com/p/BycRPL2jQn-/?utm_source=ig_web_copy_link

Kids figured out they could bump and play with the balloons and they would make cool noises. Sure enough, it drew a huge crowd quickly. People came up to the balloons all night and punched and poked them to get them to make noise.

On the second night, though, the balloons were beat. Some rowdy crowds got too aggro and popped a bunch of them. Anyway, they were a big hit and it was fun to have something like that around.

Belle Foundation grant

An early surprise of the year was getting an envelope from the Belle Foundation with an award for one of the year’s grants. I was stoked to be included in this group.

My application was simple and I talked a lot about SubZERO projects and working with older technology. In other words, what I actually do. To get chosen while being real about the art I make was refreshing.

Content Magazine profile

Before I moved back to California in 2012, I worked at an alt-weekly newspaper in Charleston, SC. I photographed all kinds of cultural events and wrote profiles of artists and musicians. But, I was always on the other side of the interview, as the interviewer.

Daniel Garcia from local magazine Content reached out at the beginning of this year and said they were interested in profiling me and my work. The tables had turned.

Content Magazine spread
Opening portrait and write-up in Content

Writer Johanna Hickle came by my Citadel art studio and spent a generous amount of time listening to me ramble about tech and such. Her write-up was solid and she did a good job distilling a lot of info.

Content Magazine spread
Collage and write-up in Content magazine

It was nerve-wracking for me, though. I knew the power they had to shape the story in different directions. I was relieved when it came out fine and had fun showing it to people.

Norcal Noisefest

In 2017, I went to the Norcal Noisefest in Sacramento. It had a huge impact on my music and approach to anything live. I came back feeling simultaneously assaulted and enlightened.

Over the past two years, I’ve built a variety of live sound sculptures and performed with most of them. This year the focus was on the new Wheel of Misfortune. I reached out to Lob Instagon, who runs the festival, and signed up for a slot as a performer at Norcal Noisefest in October.

Coincidentally, I met Rent Romus at an Outsound show in San Francisco and told him about performing at Noisefest. Rent puts on all kinds of experimental shows in SF and he suggested a preview show at the Luggage Store.

So I ended up with a busy weekend with those shows and an installation at First Friday.

Norcal Noisefest was a blast and I got to see a bunch of rad performances. My set sounded like I wanted, but I have a ways to go when it comes to stage presence. Other people were going off. I have to step things up if I’m going to keep doing noise shows.

Flicker glitch

I have been making short-form abstract videos for the past few years. Most have a custom soundtrack or loop I make. This year I collected the best 87 out of over 250 and built a nice gallery for them on this site.

Every once in a while I get requests from other groups and musicians to collaborate or make finished visuals for them. Most people don’t realize how much time goes into these videos and I’m generally reluctant to collaborate in such an unbalanced way.

I was curious about making some longer edited clips though. I responded to two people who reached out and made “music videos” for their pre-existing music. It wasn’t really collaborative, but I was ok with that because email art direction can be tricky.

The first, Sinnen, gave me complete freedom and was releasing an album around the same time. His video was a milestone in my production flow. It was made entirely on my iPhone 7, including original effects, editing and titles. I even exported at 1080p, which is a working resolution unthinkable for a small device just five years ago. They could shoot at that fidelity, but not manipulate or do complex editing like that.

The next video was much more involved. It was for a song by UK metal band Damim. The singer saw my videos on Instagram and reached out for permission to use some of them. I offered to just make a custom video instead.

All the visuals were done on my iPhone, with multiple generations and layers going through multiple apps. I filled up my storage on a regular basis and was backing it up nightly. Really time consuming. Also, that project required the horsepower and flexibility of Final Cut Pro to edit the final results.

I spent six months in all, probably 50 hours or so. I was ok with that because it was a real-world test of doing commissioned video work for someone else’s music. Now I know what it takes to produce a video like that and can charge fairly in the future.

New photography

Yes, I am still a photographer. I get asked about it every once in a while. This year I came out with two small bodies of work, shooting abstracts and digitizing some older work.

Photographs on exhibit at the Citadel
Grounded series at a Citadel show near downtown San Jose, CA

These monochromatic images are sourced from power wires at the local light rail (VTA) sub-station on Tasman Rd. I drove by this cluster every day on a tech job commute for about a year. I swore that when the contract was over I would return and photograph all the patterns I saw overhead.

I did just that and four got framed and exhibited at Citadel. One was donated to Works gallery as part of their annual fundraiser.

Donated photograph at Works
Importance of being grounded at Works

The Polaroids come from a project I had in mind for many years. Back when Polaroid was still manufacturing SX-70 instant prints, I shot hundreds of them. I always envisioned enlarging them huge to totally blow out the fidelity (or lack of it).

Polaroids
Enlarged Polaroid prints

This year I began ordering 4-foot test prints on different mounting substrates. To that end, I scanned a final edit of 14 from hundreds. Seeing them lined up on the screen ready for output was a fulfilling moment. Having unfinished work in storage was an issue for me for a long time. This was a convergent conclusion of a range of artistic and personal issues.

Passing it on

Now that I have a working art studio, I have a place to show people when they visit from out of town. The younger folks are my favorite because they think the place is so weird and like it because of that. I share that sentiment.

My French cousins Toullita and Nylane came by for a day and we made zines. Straight up old school xerox zines with glue and stickers and scissors. It was a rad day filled with weird music and messy work.

More locally, I had two younger cousins from San Francisco, Kieran and Jasmina, spend a day with me. They’ve grown up in a world immersed in virtual experiences and “smart” electronics. My choice for them was tinkering with Adafruit Circuit Playground boards.

Tinkering with Circuit Playgrounds
Cousins collaborating on code for apple triggered capacitance synths

They got to mess with Arduino programs designed to make noise and blink lights. At the end they each built capacitive touch synthesizers that we hooked up to apples. Super fun. Later that night we took them to a family dinner and they got to explain what they had made and put on a little demo.

Next up

The wolves are still howling and running. My longtime project to build standalone wolf projections made a lot of progress this year. I had hoped to finish it before the last First Friday of the year, but that wasn’t in the cards.

https://www.instagram.com/p/B4raBM0DS6Z/?utm_source=ig_web_copy_link

Getting something to work in the studio is one thing. Building it so it is autonomous, self-powered, small, and can handle physical bumps, is a whole different game. But, I do have the bike cargo trailer and power assembly ready. The young cousins even got a chance to help test it.

A new instrument I’ve been working on is a Mozzi-driven Arduino synth enclosed in an old metal Camel cigarettes tin. It has been an evergreen project this year, offering low-stakes programming challenges to tweak the sounds and optimize everything for speed.

https://www.instagram.com/p/B5zjHqYDaMi/?utm_source=ig_web_copy_link

One need I had was a precise drill for specific holes. A hand drill could do it, but I had a cleaner arrangement in mind. As luck would have it, another cousin in San Luis Obispo had an extra drill press to donate. Problem was, it was in rough shape and rusted pretty bad.

I brought it back and doused it in PB B’Laster Penetrating Catalyst. That made quick work of the frozen bolts, and a range of grinders and rotary brushes handled the rest of the rust. It looks great and is ready to make holes for the Camel synth.

Finis

It’s been a good year artistically. I had some issues with living situations and money, but it all evened out. I’m grateful to have this kind of life and look forward to another year of building weird shit and making freaky noise.

MMXVIII: the year in review

Image of a performance of Sympathy at SubZERO, with noise and smoke.
Performing at SubZERO Festival in June. Photo by Lisa Teng

It’s been a prolific year for Lucidbeaming: multimedia art by Joshua Curry. Beginning with a new art studio and finishing up with a host of Winter projects.

The main theme has been expansion. I took my music and found ways to incorporate performance and sculptural elements. The video work has been scaled up to building size and fed into monitors for physical effect. I pushed my limits on public interaction by making 9(!) appearances with a booth at SubZERO/First Fridays.

Personally, I’ve found new creative friendships and nurtured existing ones. I have no interest in doing any of this alone, even though my studio life is very private. It’s just more interesting to find other people also putting their energy into something non-commercial, independent, and fucking weird.

The Citadel Studio

I had to give up my apartment at the beginning of the year. Instead of trying to find another (expensive) combined live/work spot, I took the plunge and leased an art studio. It turned out to be a good decision because my creative environment has been stable while the sleep spots have come and gone.

It has generous storage up top and a separate room for music production. My whole workflow and process has grown because of the space. I feel very fortunate to have this.

Panorama of my art studio at the Citadel Complex in downtown San Jose, CA
Just moved in.
Back wall of the studio with video projection
Video projection and staging area.

Wolves

I have a thing for wolves, especially wolves living at the Chernobyl nuclear disaster site. This work is the beginning of a long-term project about wolves that requires “vectorized” images of wolves in motion.

Making these digital drawings involves a variety of new skill-sets and hardware for me. I have worked with animators and graphic designers who have experience digitizing images and working with stylus devices, but never had much opportunity to dive in myself.

I couldn’t afford a high-end Wacom tablet or iPad Pro, but I did find an older tablet/laptop hybrid at my aunt’s house one Thanksgiving. She used it for teaching before her retirement. When it was new, hybrid tablet/PCs were novel and sounded great, in theory.

When I got it, the battery was dead and Windows 7 had been locked by security and update issues. I got a new battery and installed Linux Ubuntu. Setup was not flawless, but it has ended up working fine (including all the stylus/touchscreen features).

To do the rotoscoping of video footage, I exported all the video frames with ffmpeg and then used Inkscape to draw over the top of them. So far, so good. It’s time consuming and manual work, but meditative and interesting.

Rotoscoping wolf motion with an old laptop
Rotoscoping wolves on a tc4400 running Inkscape on Ubuntu.
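The frame export is a one-liner with ffmpeg. Wrapped in Python it looks something like this; filenames are examples:

# Export every frame of a clip as numbered PNGs for tracing in Inkscape.
import os
import subprocess

os.makedirs("frames", exist_ok=True)
subprocess.run([
    "ffmpeg",
    "-i", "wolf_run.mp4",  # example source footage
    "frames/%04d.png",     # numbered stills: 0001.png, 0002.png, ...
], check=True)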

Critters gets reviewed on Badd Press

I released my second full-length album, Critters, late last year. To promote it, I used more organic methods than with my first album, Spanner. Basically, I sent it out to a lot of blogs that cover ambient and experimental music. It’s tough to cut through the volume of submissions they get. One of the people who did respond was Kevin Press at Badd Press.

It was strange but gratifying to read his review when it got posted early this year. For many years, I worked at an alt-weekly newspaper in Charleston, SC and saw lots of bands and artists try to get reviewed or covered. I also saw lots of them get worked up about the reviews. I admit to feeling a little nervous about what he might say. His review was thoughtful and generous.

Cover image for the album Critters

Multimedia artist and experimental composer Joshua Curry in San Jose, California can lay claim to a unique accomplishment. His November release Critters is its own language. It is unlike anything we’ve heard. Mixing recordings of wildlife at sunset with synthesizers and a genuinely unique approach to composition, Curry has produced a phenomenal 15-track album.

— Kevin Press, Badd Press

Read his full review of Critters on the Badd Press website.

Critters on college radio

The first time I heard my music on the radio was in April of this year. I was on my way to the DMV to handle the smog certification for my vehicle. On the radio was KFJC, a local college radio station that has a huge broadcast reach in this valley. I heard a song that sounded really familiar and after a few seconds I realized it was mine. I got chills.

It was such a rad feeling to catch it on the radio at random. A month before, I had packed up 40 or so custom CDs of my album Critters and shipped them out to college radio stations across the U.S. and Canada. So much was going on at that time that I didn’t follow up to see if any of the stations played it.

A stack of Critters CDs ready to ship out to radio stations.
CDs ready to be shipped to college radio stations.

After some web searches later that night I found that lots of stations had picked it up and put it into regular rotation. I didn’t even know. KALX in Berkeley, WNYU in New York, KFJC here in San Jose, CISM in Montreal, KBOO in Portland and many more had been playing songs from Critters.

Radio station charts for Critters from KALX, WNYU, and KFJC
Songs from Critters on playlist charts from KALX, WNYU, and KFJC.

It’s hard to say what the tangible impact of the airplay really is, though. My Bandcamp sales and Spotify streams had bumps in their numbers, but not a huge amount.

One thing I can say is that I’ve learned the entire process of making music and getting it on the air: recording, post-production, mastering and export for streaming and CD, online distribution, building radio mailing lists, packaging, UPC labeling, shipping, and verifying airplay.

That experience will probably come in handy in the future.

Neuroprinter

Well, it was an interesting failure.

Built with the SubZERO festival in mind, I thought Neuroprinter might be an interesting sculpture for people to interact with at an outside festival. I was able to complete it in time for the festival, but rushed through some of the fabrication and it showed.

The original idea was to build a back projection box for flickering film loops. It grew into a memory machine that took the process of memory imprinting and visualized it as a sci-fi prop. The final presentation lacked context and connection, but I learned a lot about the processes to execute the individual stages.

Although it wasn’t meant to be a piece of clean, high-tech sculpture, the metalwork ended up being too rough and poorly supported. I intended a patched-together kind of aesthetic, but it was too much.

People thought it was cool, but it required way too much explanation to survive as any kind of sculptural object. I have since dismantled the piece, but have plans for the components as individual pieces.

Animated GIF of the 8mm film projection
Clip of the projection.

MaChinE

This was a sleeper of a project that had been on my mind for years. Back in 2002, I made a Flash-based drum machine/sampler using scanned machine parts and sounds from circuit bent toys. It was produced for the E.A.A.A. (Electronic Arts Alliance of Atlanta) annual member show and lived on as a page on a lonely web server (now removed because Flash is dead).

Screenshot of MaChinE

I always thought it would be cool to build a kiosk for people to use it. Over the years, Flash was eventually phased out and my plans to port it to HTML5 were always deferred to something shinier and newer.

At surplus electronics stores this year, I noticed that they were dumping fairly nice flat-screen VGA monitors for peanuts. I picked one up and found some wire screen and miscellaneous junk to build an object base. It runs on a Raspberry Pi with an old version of Flash.

Tech folks see it as a novelty and laugh when I tell them it was made with Flash. Kids love it though and I’m glad to see it out in the world with people playing with it.

Machine at SubZERO
The standalone piece on display at SubZERO.

Noise toys

Last year I built two Raspberry Pi based synthesizers using ZynAddSubFx and Fluidsynth. I still use them to make music, but they are more software based than hardware. They sound great, but don’t have external controls for LFO or filter changes.

Recent efforts are more tactile and simple. With more outboard effects and amplifiers available in the studio, I’ve focused more on basic sound generators and sequencers/timers. One of the noisier ones is a Velleman KA02 Audio Shield I picked up locally. It has some timing quirks that I took advantage of to generate some great percussive noise.

The memo recorder is cycling through a bunch of short recordings from a police scanner.
Getting close to permanent installation on the little Kustom amp I have.
I made some new patches for ZynAddSubFx so the Raspberry Pi synth I made was more relevant to the music I actually make. This is a rhythmic glitch sound coming from an arpeggiation.

Krusher and Sympathy

Countdown timer for performance of Sympathy
Krusher on the left and Sympathy on the right, metal sculptures for noise performance.

Built from steel pipes, heavy duty compression springs, and contact mics, these metal sculptures are primal noise instruments. The smaller one, Krusher, was the first version. I wanted to build a kind of pile driver drum machine. After considering mechanical means of driving it, I had more fun just playing the damn thing through a cheap amp.

The tall one, Sympathy, came later and with more contact mics attached. After playing them together, an idea for a performance was born.

https://www.instagram.com/p/BeZmAh0nqo9/

SubZERO

The view from inside my booth at SubZERO.

For the past couple of years, SubZERO Festival and subsequent South FIRST FRIDAYS have become primary destinations for the kind of work I’m doing. It’s a great chance to gauge reaction to the work and motivation to finish projects.

It can also be nerve wracking and challenging. This year I chose an ambitious timeline and also debuted three distinct pieces and performances at the same time. In the end it all worked out, but things got pretty stressful towards the last minute. I had to take shortcuts with execution and I wasn’t happy with some of the consequences of those compromises.

Looks like something from Sanford and Son.
Projected imagery coming from the side of the booth.

The peak of the festival for me (and all of 2018, really) was the performance of the sculptures I had made, in a piece I simply called Sympathy. It was loud, intense, and had tons of multi-colored smoke. I did two cycles, one on each night of the festival. I also did one last performance in October, at the end of First Fridays.

80s skating

Back in the late 80s, I was living in south San Jose and was a skateboarder along with most of my friends. It was a huge part of my life and my first professional work as a photographer was produced during that time. I went on to be a professional photographer and multimedia artist for the next 30 years.

Taking advantage of the foot traffic during Cinequest this year, I picked four skating photographs from 1988-1990 and had them printed as large scale posters. I chose images of Tony Henry, Brandon Chapman, Tommy Guerrero, and Jun Ragudo.

I talked to Bob Schmelzer, owner of Circle-A skateshop in downtown San Jose about hanging them in his windows temporarily. He was totally cool about it and the photos were seen by hundreds during that time.

I left the posters at his shop and when he finished some work on the back wall, he re-installed them to face 2nd Street. They are still there now and I’m stoked to walk downtown and see them hanging.

One of the images of Tommy Guerrero was seen by Jim Thiebaud of Real Skateboards. He asked if he could use it on a limited edition deck to raise money for the medical costs of the family behind Refuge Skateshop. Of course, I said yes. They all sold out and the Refuge family got the funds. I also managed to snag a Real deck with my photo on it. Fucking rad.

Noise and Waffles

A couple of years ago I went to Norcal Noisefest in Sacramento. At that time, most of my exposure to live experimental music was around San Francisco and was electronic and tech oriented.

After seeing some videos of booked performers, I knew I had to check it out. I went for two days and had a mind blowing experience. I had never seen that level of pure volume and abstraction. It was more metal than any metal show I had seen.

Most importantly, I was impressed by the community. The noise scene around there is one of the last refuges of true experimental sound without institutional gatekeepers. Keeping everything together was Lob Instagon, the festival organizer.

When I got back to San Jose, my whole musical world was upside down because of that festival. I started to explore a much heavier side of sound. I also wanted to have something to perform live that wasn’t centered around a laptop or screen.

After building Krusher and Sympathy, I posted some video of me playing it that eventually got back to Lob. He reached out and invited me to perform at one of the weekly Sacramento Audio Waffle shows he runs at the Red Museum in Sacramento.

I was stoked to say yes. The show was a lot of fun and I liked the other groups that played. Also, I got to hear Sympathy on a substantial P.A. with big bass cabinets. That shit rumbled the roof.

Poster for Sacramento Audio Waffle #47

Cassette

One of the things I noticed at Norcal Noisefest was how little they cared about online distribution. Lots of tapes on tables and even some vinyl releases. CDs were there, but not as many as cassettes.

A limitation of the live performance at SubZERO was a lack of powerful amps to drive the bass tones. Lots of sub-75 Hz tones get generated by the steel pipes and springs.

So, I made some full range recordings of both and ran them through a little EQ and compression. Here is the tape of that effort, inspired by the noise heads in Sacramento. Fun to make.

Self-published cassette of recordings of Sympathy performances.

Teleprofundo

Having my own studio space has expanded the scale I can work in. The back wall gets used regularly for video projection experiments. Most of what I do with projection is pretty old school. I don’t use VJ tools or Final Cut or Adobe Premiere for this.

It’s just a few cheap office projectors, an old Canon Hi-8 camera for feedback, and a variety of video source footage. Now with the 8mm film projector, I can add even more footage to the mix.

I found an interesting source of public domain film footage, the New York Public Library. Their online archives are outstanding.

Recently, I picked up some monochrome security monitors and have been running all kinds of feedback and low-fi video signals through them.

Mixing stock film footage from 50s Hollywood
Multiple layers of feedback
Monochrome composite monitors chaining camera feedback

Macroglitch

While trying to find smooth ways of converting 24fps video to 30fps, I stumbled across niche online communities that are into high frame rates; they generate slow-motion and high-fps videos of video games. I was looking for simple, but high-fidelity, frame interpolation.

One of the most interesting tools I found is Butterflow. It uses a combination of ffmpeg and OpenCV to do impressive motion-interpolated frame generation. Things got really interesting when I started running short, jumpy, and abstract video clips through the utility.

Below is a video clip I shot of a thistle from two inches away, at 24fps. With Butterflow and ffmpeg, I stretched it out more than 10X. It’s kind of like Paulstretch for video. The line effect is from a Sobel filter in ffmpeg.

butterflow -s a=0,b=end,spd=0.125 in.mov --poly-s 1.4 -ff gaussian -r 29.97 -v -o out.mp4
An early test using a thistle.
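The edge lines come out of ffmpeg’s sobel filter. Something like this sketch, with placeholder filenames:

# convert to grayscale first, then trace edges with the Sobel filter
ffmpeg -i stretched.mp4 -vf "format=gray,sobel" lines.mp4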

Since then I’ve expanded this project in many directions. I’ve set up all kinds of table top macro video shots with plants, dead insects, shells, electronics, and more.

Generating so much stretched footage has taken days of rendering and filled terabytes of space. One of the first finished pieces I made was this music video for the song Aerodome. The audio waveforms were generated with ffmpeg.
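The waveform rendering can be done with something like ffmpeg’s showwaves filter. A rough sketch with assumed filenames, not necessarily the exact command:

# render the audio waveform as video and mux the original track back in
ffmpeg -i aerodome.wav \
  -filter_complex "[0:a]showwaves=s=1920x1080:mode=line[v]" \
  -map "[v]" -map 0:a -pix_fmt yuv420p aerodome-waves.mp4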

https://vimeo.com/306120003

Short form abstract video

When I released Spanner, I found out how tricky it is to deal with audio on social media. Sharing things with SoundCloud kept the interest trapped in the SoundCloud ecosystem and people rarely visited my website or Bandcamp page. I used Audiograms for a while, but didn’t like the aesthetic.

So, I loaded the raw audio files on my phone and started using clips in 60 second videos I would make with video apps. That was two years ago. Since then, I’ve made around 200 videos for social, mostly Instagram.

I try to keep them unique and don’t use the presets that come with the apps. A lot of the videos represent multiple generations through multiple apps in different workflows. Also, most of the recent videos have custom audio tracks I make with soft synths and granular sample manglers.

I get asked how I make them all the time. So, here are all the “secrets”.

I start out with geometric images or video clips, like buildings or plants or something repetitive. Most of the time I capture in slow-motion. Then I import clips or images into LumaFusion and crop them square and apply all kinds of tonal effects like monochrome, hi/low key, halftone. For static images, I’ll apply a rotation animation with x/y movement.

Then I’ll make some audio in Fieldscaper, Synthscaper, Filtatron, or use a clip of one of my songs. That gets imported into a track in LumaFusion. Then I trim the clip so it’s just below 60 seconds, which is the limit for Instagram and useful for others.

After exporting at 720×720, I open it in Hyperspektiv, Defekt, or maybe TiltShift Video. I pick a starting transformation and then export it, bringing it back into Lumafusion or maybe running it through more effects.

That process gets repeated a few times until I end up with something I like or the clip starts to get fried from too much recompression. The key is to keep working until I get something distinctive and not an iTunes visualizer imitation.

It’s funny that I have people who follow all these little videos and don’t realize I do all kinds of more substantial work. But, I’m glad to have something people enjoy and they are fun to make.

Where is Embers?

Embers is alive and well at Kaleid gallery in downtown San Jose, CA. It’s been there for a while now and I still enjoy going by the gallery to watch people interact with it.

The future is uncertain though. It’s a fairly large piece and made of lots of rice paper. I hope to find a permanent home for it this coming year.

https://www.instagram.com/p/Beo6yofHqWp/

Next year

I’m not really a goal-oriented planner. Most of my life and creative work is process-oriented. I learn from doing and often there is something finished at the end of it.

I hope the upcoming year offers more chances to learn, get loud, and work with like-minded folks.

Running Fluidsynth on a Raspberry PI Zero W

One of the reasons I’ve spent so much time experimenting with audio software on Raspberry Pis is to build standalone music sculpture. I want to make machines that explore time and texture, in addition to generating interesting music.

The first soft synth I tried was Fluidsynth. It’s one of the few that can run headless, without a GUI. I set it up on a Pi 3 and it worked great. It’s used as a basic General MIDI synthesizer engine for a variety of packages and even powers game soundtracks on Android.




This video is a demo of the same sound set used in this project, but on an earlier iteration using a regular Raspberry Pi 3 and a Pimoroni Displayotron HAT. I ended up switching to the smaller Raspberry Pi Zero W and using a webapp instead of a display.

The sounds are not actually generated from scratch like a traditional synthesizer’s. Instead, Fluidsynth draws on a series of predefined sounds collected and mapped in SoundFonts. The .sf2 format was made popular by the now-defunct Sound Blaster AWE32 sound card that was ubiquitous in 90s PCs.

Back then, there was a niche community of people producing custom SoundFonts. Because of that, development in library tools and players was somewhat popular. Fluidsynth came long after, but benefits from the early community work and a few nostalgic archivists.

The default SoundFont that comes with common packages is FluidR3_GM. It is a full General MIDI set with 128 instruments and a small variety of drum kits. It’s fine for building a basic keyboard or MIDI playback utility, but it’s not very high fidelity or interesting.

What hooked me was finding a repository of commercial SoundFonts (no longer active). That site had an amazing collection of 70s-90s synths in SoundFont format, including Jupiter-8, TB-303, Proteus 1/2/3, Memory Moog, and an E-MU Modular. They were all cheap and I picked up a few to work with. The E-MU Modular sounds pretty rad and is the core of the sound set I put together for this. The sound is excellent.

Raspberry Pi Zero W

For this particular project, I ended up using a Raspberry Pi Zero W for its size and versatility. Besides running Fluidsynth, it also serves up a Node.js webapp over wifi for changing instruments. It’s controllable by any basic USB MIDI keyboard and runs on a mid-sized USB battery pack for around 6 hours. Pretty good for such a tiny footprint and it costs around $12.

Setting it up

If you want to get something working fast or just want to make a kid’s keyboard, setup is a breeze.

After configuring the Pi Zero and audio:

sudo apt-get install fluidsynth

That’s it.
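To check it works, you can run it interactively against the stock General MIDI SoundFont (install it with sudo apt-get install fluid-soundfont-gm; the path below is where Raspbian puts it):

# interactive test with the stock SoundFont; type "quit" to exit the shell
fluidsynth -a alsa /usr/share/sounds/sf2/FluidR3_GM.sf2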

But, if you want more flexibility or interactivity, things get a bit more complex. The basic setup is the same as what I laid out in my ZynAddSubFX post.

Download Jessie Lite and find a usable Micro SD card. The following is for Mac OS. Instructions for Linux are similar and Windows details can be found on the raspberrypi.org site.

Insert the SD card into your computer and find out what designation the OS gave it. Then unmount it and write the Jessie Lite image to it.

diskutil list

/dev/disk1 (external, physical):
   #:                      TYPE NAME        SIZE       IDENTIFIER
   0:    FDisk_partition_scheme             *8.0 GB    disk1
   1:            Windows_FAT_32 NO NAME     8.0 GB     disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

Pull the card out and reinsert it. Then, add two files to the card to make setup a little faster and skip a GUI boot.

cd /Volumes/boot
touch ssh

sudo nano wpa_supplicant.conf

Put this into the file you just opened.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
}

Put the card in the Pi Zero and power it up, then configure the box with raspi-config. One trick I learned: don’t change the password and expand the file system in the same session. I’m not sure why, but doing both at once often corrupts the ssh password.

Update the Pi:

sudo apt-get update
sudo apt-get upgrade

Fluidsynth needs a higher thread priority than the default, so I use the same approach as setting up Realtime Priority. It might be overkill, but it’s consistent with the other Pi boxes I set up. Add the user “pi” to the group “audio” and then set expanded limits.

Pi commands

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

The file should be empty. Add this to it.

@audio - rtprio 80
@audio - memlock unlimited

If you’re not using an external USB audio dongle or interface, you don’t need to do this. But, after you hear what the built-in audio sounds like, you’ll want something like this.

sudo nano /boot/config.txt

Comment out the built-in audio driver.

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

sudo nano /etc/asound.conf

Set the USB audio to be default. It’s useful to use the name of the card instead of the stack number.

pcm.!default {
    type hw
    card Device
}
ctl.!default {
    type hw
    card Device
}
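The card name (“Device” above) is whatever ALSA reports for your dongle. To see it:

# the short name in brackets is what "card" refers to in asound.conf
cat /proc/asound/cards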

Reboot and then test your setup.

sudo reboot

aplay -l

lsusb -t

speaker-test -c2 -twav

A voice should speak out the left and right channels. After verifying that, it’s time to set up Fluidsynth.

The reason I compile it from the git repo is to get the latest version. The version in the default Raspbian repository used by apt-get is 1.1.6-2. The latest is 1.1.6-4. The newer build matters because of Telnet.

That’s right, Fluidsynth uses Telnet to receive commands and as its primary shell. It’s a classic text-based network protocol used for remote administration. Think Wargames.

Telnet

But, there’s a bug in the standard package that causes remote sessions to get rejected in Jessie. It’s been addressed in the later versions of Fluidsynth. I needed it to work to run the web app.
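Once Fluidsynth is running with the server enabled (the -s flag, folded into the -si in the launch script later), you can poke at that shell directly. A minimal sketch, assuming the default shell port of 9800:

telnet localhost 9800
# inside the session, "help" lists available commands,
# "inst 1" lists the instruments in soundfont 1,
# and "quit" ends the session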

Grab the dependencies and then compile Fluidsynth. It’s not complicated, but there are some caveats.

sudo apt-get install git libgtk2.0-dev cmake cmake-curses-gui build-essential libasound2-dev telnet

git clone git://git.code.sf.net/p/fluidsynth/code-git

cd code-git/fluidsynth
mkdir build
cd build
cmake ..
sudo make install

The install script misses a key path definition that aptitude usually handles, so I add it manually. It’s needed so libfluidsynth.so.1 can be found. If you see an error about that file, this is why.

sudo nano /etc/ld.so.conf

Add this line:

/usr/local/lib

Then:

sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib

Now we need to grab the default SoundFont. This is available easily with apt-get.

sudo apt-get install fluid-soundfont-gm

That’s it for Fluidsynth. It should run fine and you can test it with a help parameter.

fluidsynth -h

Now to install Node.js and the webapp to change instruments with.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back into an ssh session. That makes nvm available.

nvm install v6.10.1

Grab the webapp from my repo and install it.

git clone https://github.com/lucidbeaming/Fluidsynth-Webapp.git fluidweb

cd fluidweb

npm install --save

Find the IP address of your Pi on your local network. Visit <ip address> port 7000 on any other device.

http://192.168.1.20:7000

If Fluidsynth isn’t running, it will display a blank page. If it is running, it will list all instruments available, dynamically. This won’t be much of a problem once the launch script is set up. It launches Fluidsynth, connects any keyboards attached through ALSA, and launches the webapp.

Create the script and add the following contents. It’s offered as a guideline and probably won’t work if copied and pasted. You should customize it according to your own environment, devices, and tastes.

sudo nano fluidsynth.sh
#!/bin/bash

if pgrep -x "fluidsynth" > /dev/null
then
    echo fluidsynth already flowing
else
    fluidsynth -si -p "fluid" -C0 -R0 -r48000 -d -f ./config.txt -a alsa -m alsa_seq &
fi

sleep 3

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
    aconnect 'Arturia MINILAB':0 'fluid':0
    echo MINIlab connected
elif [[ $mpk ]]
then
    aconnect 'MPKmini2':0 'fluid':0
    echo MPKmini connected
elif [[ $mio ]]
then
    aconnect 'mio':0 'fluid':0
    echo Mio connected
else
    echo No known midi devices available. Try aconnect -l
fi

cd fluidweb
node index.js
cd ..

exit

Note that I included the settings -C0 -R0 in the Fluidsynth command. That turns off reverb and chorus, which saves a bit of processor power; they don’t sound great anyway.

Now, create a configuration file for Fluidsynth to start with.

sudo nano config.txt
echo "Exploding minds"
gain 3
load "./soundfonts/lucid.sf2"
select 0 1 0 0
select 1 1 0 1
select 2 1 0 2
select 3 1 0 3
select 4 1 0 4
select 5 1 0 5
select 6 1 0 6
select 7 1 0 7
select 8 1 0 8
select 10 1 0 9
select 11 1 0 10
select 12 1 0 11
select 13 1 0 12
select 14 1 0 13
select 15 1 0 14
echo "bring it on"

The select command chooses instruments for various channels.

select <channel> <soundfont> <bank> <program>

Note that channel 9 is the drumkit.
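So a drum channel would look something like the line below. Bank 128 is the usual fluidsynth convention for SF2 percussion kits, assuming your SoundFont includes one:

select 9 1 128 0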

To get the launch script to run on boot (or session), it needs to have the right permissions first.

sudo chmod a+x fluidsynth.sh

Then, add the script to the end of .bash_profile. I do that instead of other options for running scripts at boot so that fluidsynth and node.js run as a user process for “pi” instead of root.

sudo nano .bash_profile

At the end of the file…

./fluidsynth.sh

Reboot the Pi Zero and when it gets back up, it should run the script and you’ll be good to go. If you run into problems, a good place to get feedback is LinuxMusicians.com. They have an active community with some helpful folks.

Raspberry Pi Zero W in a case

Here’s another quick demo I put together. Not much in terms of my own playing, haha, but it does exhibit some of the sounds I’m going for.




Embers: a breath powered interactive installation celebrating collaboration

Photo by Jerry Berkstresser

It started with incendiary memories: looking at a fading bonfire with friends at the end of a good day, stoking the fire in a pot belly stove, and watching Haitian women cooking chicken over a bed of coals.

I wanted to build something with modern technology that evoked these visceral feelings and associations. Without using screens or typical presentations, the goal was to create an artwork that a wide variety of people could relate to and connect with. It also had to be driven by their own effort.

The initial work began at the Gray Area Foundation for the Arts in San Francisco, during the 2017 Winter Creative Code Immersive. I was learning the mechanics of building interactive art and was looking for a project to bridge my past experience with modern tools.

In January, I travelled to Washington D.C. to photograph the Women’s March and the Presidential Inauguration. They were very different events, but I was struck by the collective effort that went into both. Ideological opposites, they were still the products of powerful collaboration.

When I got back, I heard a lot of fear and anxiety. I had worked in Haiti with an organization called Zanmi Lakay and it had blossomed into effectiveness through group collaboration. I wanted to harness some of that energy and make art that celebrated it.

Embers was born. The first glimpses came from amber hued blinking LEDs in a workroom at Gray Area. 4 months later, the final piece shimmered radiantly in front of thousands of people at the SubZERO art festival in San Jose, CA. In the end, the project itself was a practical testament to collaboration grounded in its conceptual beginnings.

Building the Prototype

For the Gray Area Immersive Showcase, I completed a working study with 100 individually addressable LEDs, 3 Modern Device Wind Sensors (Rev. C), an Arduino Uno, and 100 hand folded rice paper balloons as diffusers. I worked on it alone at my house and didn’t really know how people would respond to it.

When it debuted at the showcase, it was a hit. People were drawn to the fading and evolving light patterns and were delighted when it lit up as they blew on it. I didn’t have to explain much. People seemed to really get it.

The Dare

In early May, I showed a video clip of the study to local gallery owner Cherri Lakay of Anno Domini. She surprised me by daring me to make it for an upcoming art festival called SubZERO. I hesitated, mostly because I thought building the prototype had already been a lot of work. Her fateful words: “you should get all Tom Sawyer on it.”

So, a plan gestated while working on some music for my next album. It was going to be expensive and time consuming and I wasn’t looking forward to folding 1,500 rice paper balloons. A friend reminded me about the concept of the piece itself, “isn’t it about collaboration anyway? Get some people to help.”

I decided to ask 10 people to get 10 more people together for folding parties, with the goal of coming up with 150 balloons at each party. I would give a brief speech and demo the folding. The scheme was simple enough, but became a complex web of logistics I hadn’t counted on.

In the end, it turned out to be an inspiring and fun experience. 78 people helped out in all, with a wide range of ages and backgrounds.

Building Embers

The prototype worked well enough that I thought scaling it up would just be a matter of quantity. But, issues arose that I hadn’t dealt with in the quick-paced immersive workshop. Voltage stabilization and distribution, memory limitations, cost escalation, and platform design were all new challenges.

The core of the piece was an Arduino Mega 2560, followed by 25 strands of 50-count WS2811 LEDs, 16 improved Modern Device wind sensors (Rev. P), and 300 ft. of calligraphy grade rice paper. Plenty of trips to Fry’s Electronics yielded spools of wire in many gauges, CAT6 cabling for the data squids, breadboards, and much more.

My living room was transformed into a mad scientist lab for the next month.

Installation

Just a few days before SubZERO, my house lit up in an amber glow. The LED arrays were dutifully glittering and waning in response to wavering breaths. The power and data squids had been laid out and the Arduino script was close to being finished.

I was confident it would work and was only worried about ambient wind at that point. A friend had built a solid platform table for the project and came over the day of the festival to pick up the project. We took it downtown and found my spot on First St. After unloading and setting up the display tent, I began connecting the electronics.

After a series of resource compromises, I had ended up with 1,250 LEDs and around 1,400 paper balloons. The balloons had to be attached to each LED by hand and that took a while. I tested the power and data connections and laid out the sensors.

Winding the LED strands in small mounds on the platform took a long time and I was careful not to crush the paper balloons. It was good to have friends and a cousin from San Luis Obispo for help.

Lighting the Fire

I flipped the switches for the Arduino assembly, the LED power brick, and then the sensor array. My friends watched expectantly as precisely nothing happened. After a half hour of panicked debugging, it started to light up but with all the wrong colors and behavior. It wasn’t working.

I spent the first night of the two day festival with the tent flap closed, trying to get the table full of wires and paper to do what I had intended. It was pretty damn stressful. Mostly, I was thinking about all the people who had helped and what I’d tell them. I had to get it lit.

Around 10 minutes before midnight (when the festival closed for the night), it finally began to glow amber and red and respond to wind and breath. Around 10 people got to see it before things shut down. But, it was working. I was so relieved.

It turns out that a $6.45 breakout board had failed. It’s a tiny chip assembly that ramps up the voltage for the data line. I can’t recommend the SparkFun TXB0104 bi-directional voltage level translator as a result. The rest of what I have to say about that chip is pretty NSFW.

I went home and slept like a rock.

The next day was completely different. I showed up a bit early and turned everything on. It worked perfectly throughout the rest of the festival.

People really responded to it and I spent hours watching people laugh and smile at the effect. They wanted to know how it worked, but also why I had made it. I had some great conversations about where it came from and how people felt interacting with it.

It was an amazing experience and absolutely a community effort.

Photo by Jerry Berkstresser

Photo by Joshua Curry

Thanks to all the people and organizations that helped make this a reality:

Grey Area Art Foundation for the Arts, Anno Domini, SubZERO, Diane Sherman, Tim, Brooklyn Barnard, Anonymous, Chris Carson, Leila Carson-Sealy, Cristen Carson, Jonny Williams, Michael Loo, Elizabeth Loo, Kieran Vahle, Jasmina Vahle, Peter Vahle, Kilty Belt-Vahle, Sara Vargas, Sydney Twyman, Annie Sablosky, Martha Gorman, Nancy Scotton, Melody Traylor, Morgan Wysling, Bianca Smith, Susan Bradley, Jen Pantaleon, Guy Pantaleon, Carloyn Miller, Paolo Vescia, Amelia Hansen, Maddie Vescia, Natalie Vescia, Cathi Brogden, Evelyn Lay Snyder, Alice Adams, Lisa Sadler-Marshall, Gena Doty Sager, Mack Doty, Mary Doty, James W. Murray, Greg Cummings, Vernon Brock, Jerry Berkstresser, Lindsey Cummings, Kyle Knight, Liz Hamm, Rebecca Kohn, Shannon Knepper, John Truong, DIane Soloman, Stephanie Patterson, Robertina Ragazza, Sarah Bernard, Jarid Duran, Deb Barba, Astrogirl, Tara Fukuda, CHristina Smith, Yumi Smith, NN8 Medal Medal, Gary Aquilina, Pamela Aquilina, Dan Blue, Chris Blue, Judi Shade, Dave Shade, Margaret Magill, Jim Magill, Brody Klein, Chip Curry, Jim Camp, Liz Patrick, Diana Roberts, Connie Curry, Tom Lawrence, Maria Vahle Klein, Susan Volmer, Jana Levic

 

Joshua Curry is on Instagram, Twitter, and Facebook as @lucidbeaming

Setting up a Raspberry Pi 3 to run ZynAddSubFX in a headless configuration

Most of my music is production oriented and I don’t have a lot of live performance needs. But, I do want a useful set of evocative instruments to take to strange places. For that, I explored the options available for making music with Raspberry Pi minicomputers.

The goal of this particular box was to have the Linux soft-synth ZynAddSubFX running headless on a battery powered and untethered Raspberry Pi, controllable by a simple MIDI keyboard and an instrument switcher on my phone.

Getting ZynAddSubFX to run on the desktop version of Raspbian was pretty easy, but stripping away all the GUI and introducing command-line automation across disparate multimedia libraries was a challenge. Then, opening it up to remote control over wifi was a rabbit hole of its own.

But, I got it working and it sounds pretty amazing.




Setting up the Raspberry Pi image

I use Jessie Lite because I don’t need the desktop environment. It’s the same codebase without a few bells and whistles. When downloading from raspberrypi.org, choose the torrent for a much faster transfer than getting the ZIP directly from the site. These instructions below are for Mac OS X, using Terminal.

diskutil list

/dev/disk1 (external, physical):
   #:                      TYPE NAME        SIZE       IDENTIFIER
   0:    FDisk_partition_scheme             *8.0 GB    disk1
   1:                DOS_FAT_32 NO NAME     8.0 GB     disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

After the image gets written, I create an empty file on the boot partition to enable ssh login.

cd /Volumes/boot
touch ssh

Then, I set the wifi login so it connects to the network on first boot.

sudo nano wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
 }

The card gets removed from the laptop and inserted into the Pi. Then, after it boots up I go through the standard setup from the command line. The default login is “pi” and the default password is “raspberry”.

sudo raspi-config

[enable ssh,i2c. expand filesystem. set locale and keyboard.]

After setting these, I let it restart when prompted. When it comes back up, I update the codebase.

sudo apt-get update
sudo apt-get upgrade

Base configuration

Raspberry config for ZynAddSubFX

ZynAddSubFX is greedy when it comes to processing power and benefits from getting a bump in priority and memory resources. I add the default user (pi) to the group “audio” and assign the augmented resources to that group, instead of the user itself.

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

...
@audio - rtprio 80
@audio - memlock unlimited
...

The Raspbian version of Jessie Lite has CPU throttles, or governors, set to conserve power and reduce heat from the CPU. By default, they are set to “on demand”. That means the voltage to the CPU is reduced until general use hits 90% of CPU capacity. Then it triggers a voltage (and speed) increase to handle the load. I change that to “performance” so that it has as much horsepower available as possible.

This is done in rc.local:

sudo nano /etc/rc.local
...
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
...

Note that it gets set for all four cores, since the Raspberry Pi is multi-core. For more info about governors and even overclocking, this is a good resource.
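A quick way to confirm the governors took effect after a reboot:

# should print "performance" once per core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor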

Virtual memory also needs to get downgraded so there is little swap activity. Zynaddsubfx is power hungry but doesn’t use much memory, so it doesn’t need VM.

sudo /sbin/sysctl -w vm.swappiness=10
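Note that sysctl -w only lasts until the next reboot. To make the setting stick, append it to /etc/sysctl.conf:

# persist the swappiness value across reboots
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf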

Now, to set up the audio interface. For my ZynAddSubFX box, I use an IQaudio Pi-DAC+. I’ve also used a standard USB audio interface and have instructions for that in my post about the Pi Zero. Raspbian uses Device Tree overlays to load I2C, I2S, and SPI interface modules. So, instead of separate drivers to install, I just edit config.txt to include the appropriate modules for the Pi-DAC+. Note that I also disabled the crappy built-in audio by commenting out “dtparam=audio=on”. This helps later on when setting the default audio device used by the system.

sudo nano /boot/config.txt

...

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

dtoverlay=i2s-mmap
dtoverlay=hifiberry-dacplus

...

For Jack to grab hold of the Pi-DAC+ for output, the default user (pi) needs a DBus security policy for the audio device.

sudo nano /etc/dbus-1/system.conf

...
<!-- Only systemd, which runs as root, may report activation failures. -->
<policy user="root">
<allow send_destination="org.freedesktop.DBus"
    send_interface="org.freedesktop.systemd1.Activator"/>
</policy>
<policy user="pi">
    <allow own="org.freedesktop.ReserveDevice1.Audio0"/>
</policy>
...

Next, ALSA gets a default configuration for which sound device to use. Since I disabled the built-in audio earlier, the Pi-DAC+ is now “0” in the device stack.

sudo nano /etc/asound.conf

pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}

sudo reboot

Software installation

ZynAddSubFX has thick dependency requirements, so I collected the installers in a bash script. Most of it was lifted from the Zynthian repo. Download the script from my Github repo to install the required packages and run it. The script also includes rtirq-init, which can improve performance on USB audio devices and give ALSA some room to breathe.

curl -O https://raw.githubusercontent.com/lucidbeaming/pi-synths/master/ZynAddSubFX/required-packages.sh

sudo chmod a+x required-packages.sh

./required-packages.sh

Now the real meat of it all gets cooked. There are some issues with the build optimizations for SSE and NEON on ARM processors, so you’ll need to disable those in the cmake configuration.

git clone https://github.com/zynaddsubfx/zynaddsubfx.git
cd zynaddsubfx
mkdir build
cd build
cmake ..
ccmake .
[remove SSE parameters and NoNeonplease=ON]
sudo make install

It usually takes 20-40 minutes to compile. Now to test it out and get some basic command line options listed.

zynaddsubfx -h

Usage: zynaddsubfx [OPTION]

-h , --help                       Display command-line help and exit
-v , --version                    Display version and exit
-l file, --load=FILE              Loads a .xmz file
-L file, --load-instrument=FILE   Loads a .xiz file
-r SR, --sample-rate=SR           Set the sample rate SR
-b BS, --buffer-size=BS           Set the buffer size (granularity)
-o OS, --oscil-size=OS            Set the ADsynth oscil. size
-S , --swap                       Swap Left <--> Right
-U , --no-gui                     Run ZynAddSubFX without user interface
-N , --named                      Postfix IO Name when possible
-a , --auto-connect               AutoConnect when using JACK
-A , --auto-save=INTERVAL         Automatically save at interval (disabled with 0 interval)
-p , --pid-in-client-name         Append PID to (JACK) client name
-P , --preferred-port             Preferred OSC Port
-O , --output                     Set Output Engine
-I , --input                      Set Input Engine
-e , --exec-after-init            Run post-initialization script
-d , --dump-oscdoc=FILE           Dump oscdoc xml to file
-u , --ui-title=TITLE             Extend UI Window Titles

The web app

Webapp to switch ZynAddSubFX instruments

I also built a simple web app to switch instruments from a mobile device (or any browser, really). It runs on Node.js and leverages Express, Socket.io, OSC, and jQuery Mobile.

First, a specific version of Node is needed and I use NVM to grab it. The script below installs NVM.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back in to have NVM available to you.

nvm install v6.10.1

My Node app is in its own repo. The dependencies Express, Socket.io, and OSC will be installed with npm from the included package.json file.

git clone https://github.com/lucidbeaming/ZynAddSubFX-WebApp.git
cd ZynAddSubFX-WebApp
npm install

Test the app from the ZynAddSubFX-WebApp directory:

node index.js

On a phone/tablet (or any browser) on the same wifi network, go to:

http://<IP address of the Raspberry Pi>:7000

Image of webapp to switch instruments

You should see a list of instruments to choose from. It won’t do anything yet, but getting the list to come up is a sign of initial success.

Now, for a little secret sauce. The launch script I use is from achingly long hours of trial and error. The Raspberry Pi is a very capable machine but has limitations. The command line parameters I use come from the best balance of performance and fidelity I could find. If ZynAddSubFX gets rebuilt with better multimedia processor optimizations for ARM, this could change. I’ve read that improvements are in the works. Also, this runs Zynaddsubfx without Jack and just uses ALSA. I was able to get close to RTprio with the installation of rtirq-init.

#!/bin/bash

export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket

if pgrep zynaddsubfx
then
    echo Zynaddsubfx is already singing
    exit 0
else
    zynaddsubfx -U -A=0 -o 512 -r 96000 -b 512 -I alsa -O alsa -P 7777 -L "/usr/local/share/zynaddsubfx/banks/Choir and Voice/0034-Slow Morph_Choir.xiz" &
    sleep 4

    if pgrep zynaddsubfx
    then
        echo Zyn is singing
    else
        echo Zyn blorked. Epic Fail.
    fi
fi

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
    aconnect 'Arturia MINILAB':0 'ZynAddSubFX':0
    echo Connected to MINIlab
elif [[ $mpk ]]
then
    aconnect 'MPKmini2':0 'ZynAddSubFX':0
    echo Connected to MPKmini
elif [[ $mio ]]
then
    aconnect 'mio':0 'ZynAddSubFX':0
    echo Connected to Mio
else
    echo No known midi devices available. Try aconnect -l
fi

exit 0

I have 3 MIDI controllers I use for these things and this script is set to check for any of them, in order of priority, and connect them with ZynAddSubFX. Also, I have a few “sleep” statements in there that I’d like to remove when I find a way of including graceful fallback and error reporting from a bash script. For now, this works fine.
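If I ever do swap out the fixed sleep, it would probably be a polling loop along these lines (an untested sketch, not what the script above does):

# wait up to 10 seconds for ZynAddSubFX to register with ALSA,
# instead of sleeping a fixed 4 seconds
for i in $(seq 1 20)
do
    if aconnect -o | grep -q "ZynAddSubFX"
    then
        break
    fi
    sleep 0.5
done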

I add this line to rc.local to launch Zynaddsubfx automatically on boot and connect MIDI.

su pi -c '/home/pi/zynlaunch.sh >> /tmp/zynaddsubfx.log 2>&1 &'

Unfortunately, Node won’t launch the web app from rc.local, so I add some conditionals to /home/pi/.profile to launch the app after the boot sequence.

if pgrep zynaddsubfx
then
    echo Zynaddsubfx is singing
fi

if pgrep node
then
    echo Zyn app is up
else
    node /home/pi/ZynAddSubFX-WebApp/index.js
fi

Making music

This ended up being a pad and drone instrument in my tool chest. ZynAddSubFX is really an amazing piece of software and can do much more than I’m setting up here. The sounds are complex and sonically rich. The GUI version lets you change or create instruments with a deep and precise set of graphic panels.

For my purposes, though, I want something to play live with that has very low resource needs. This little box does just that.

Raspberry Pi 3 with Pi-DAC+