MMXXV: Exit Berlin

I left Berlin at the end of this year. I had a specific vision for living in Berlin and I feel like I manifested that. I asked Berlin for a lot and it gave me plenty.

This post is part of a series (2022, 2023, 2024) that I made while there. Although it focuses on projects of this year, I have some closing thoughts on living in Berlin in general.

Slitscan

I made use of a custom slitscan camera this year. It was an interesting shift in my approach to photography. I don’t usually go for exotic photographic processes, but this intrigued me.

Last year, I met a guy named Ralph Nivens at the Experimental Photography Festival. His demo of perception-bending slitscan photos was a highlight of the festival. I approached him afterward to find out how he made the cameras. He offered to make me one and it finally arrived in January of this year.

The basic idea of the camera is that it captures images in a series of very thin slices and then assembles them as a single coherent image. Each frame of the camera is 1 pixel wide by 4096 pixels high. It’s very close to a flatbed scanner, but upright and with a lens attached. Nivens wrote the firmware for the camera himself and 3D printed the body.

The exposures are quite long, 10 seconds to 2 minutes. The individual slice captures are pretty fast though, close to a standard shutter speed of 1/60 of a second. Because the exposure is so long and the final picture is assembled, you get some crazy time dilation artifacts. Anything moving in front of the camera gets compressed or expanded depending on the speed of the movement. Below is a street scene and the tall stripes are people. They have been compressed because they walked quickly in the opposite direction of the capture order.
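As a rough illustration (this is my own sketch with NumPy, not Nivens' actual firmware), assembling a slitscan image amounts to stacking the 1-pixel-wide frames as columns in capture order, so the horizontal axis becomes time:

```python
import numpy as np

def assemble_slitscan(slices):
    """Stack a sequence of 1-pixel-wide frames into one image.

    Each slice is a (height,) or (height, 1) array captured a moment
    after the previous one; column x of the result is moment x in time.
    """
    columns = [np.asarray(s).reshape(-1, 1) for s in slices]
    return np.hstack(columns)

# Simulated capture: 240 slices of a 4096-pixel-tall sensor line
frames = [np.full(4096, i % 256, dtype=np.uint8) for i in range(240)]
image = assemble_slitscan(frames)
print(image.shape)  # (4096, 240): height x time
```

This is also why moving subjects stretch or squash: their speed relative to the capture order decides how many columns they occupy.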

Street scene

In general, Germans don’t like to be photographed in public and are very conscious of privacy. This was true when I was doing my street exposures. People gave me grumpy looks and one woman was particularly angry. Open street photography is quite legal in Germany (I looked it up). So, I tried to argue with her, but she insisted that I stop and leave. I didn’t, so she tried to grab the camera. My limited German is not good enough for conflict like that and I also didn’t want to hurt her. Fortunately, others on the street heard the commotion and talked her down. We both walked away and the drama ended with that.

Walkway in snow

I kept making images throughout the winter and decided to experiment with the files generated by the camera. They are monochromatic, so as an RGB image all color channels are the same value. R:200 G:200 B:200 and so on. It’s possible to swap and blend channels from different images to yield unique color combinations.

I wrote a Python script to swap all the channels of folders full of images. That yielded some spectacular color results. It also generated images with unaesthetic collisions of values. My approach was to keep generating swaps until interesting combinations happened. Below are combinations of 2 and 3 channel variants.
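The script itself was just for my own use, but the core idea is simple enough to sketch. This is a minimal illustration (the folder handling and file I/O are omitted, and the function name is my own): each channel of the output RGB image is borrowed from a different monochrome source.

```python
import itertools
import numpy as np

def recombine_channels(images, combo):
    """Build an RGB image by taking each channel from a different
    monochrome source image.

    images: list of 2-D grayscale arrays, all the same shape.
    combo:  3 indices into `images` for the (R, G, B) channels.
    """
    r, g, b = (images[i] for i in combo)
    return np.dstack([r, g, b])

# Two monochrome captures; every ordered choice of channels is a variant
a = np.full((4, 4), 50, dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
variants = [recombine_channels([a, b], combo)
            for combo in itertools.product(range(2), repeat=3)]
print(len(variants))  # 8 combinations from 2 source images
```

With three source images there are 27 ordered combinations, which is why generating batches and then editing down was the practical workflow.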

A larger group of these ended up in my solo show later in the year. The prints were tricky because some of the colors were way out of the printer’s color range.

Echospheric Workshops

I was part of the Echospheric sound art collective this year. We organized and taught multiple workshops at HB55 in Lichtenberg. My part was to teach Arduino-based synthesizer workshops. I have been building little synths for a few years and have a good recipe for experimental sound boxes. The first workshop was just a presentation but the second was a fully developed fabrication workshop.

A group of seven smiling people
(l to r) Lutz Gallmeister, Bipin Rao, Samantha Tiussi, Berenice Llorens, Jolon Dixon, Joshua Curry, Simon Hill (not pictured: Samaquias Lorta)

I put together kits for each student so they could build the synths without soldering. That simplified the in-person experience and still yielded a versatile sound device they could take home.

I had to tune the instruction to a wide variety of technical expertise. Some had never done anything like this and others were fairly advanced. Keeping all skill levels engaged and informed was challenging. But, it worked and they all produced squawking sound machines. The response from the students was excellent and I’m glad I had the chance to teach like that.

An overhead view of a table with laptops and students working on electronics
Students of the main synth workshop

Singing Heart

My friend Benjamin Kjellman-Chapin is a blacksmith in Nes Verk, Norway. We went to the Atlanta College of Art together back in the nineties. I visited him last year and saw his expansive setup there. He is doing some amazing work.

This year he was putting together some work for a solo show and asked if I wanted to collaborate on one of his pieces, Singing Heart. The idea was to craft a large steel heart and put electronics inside that made sound when the heart was moved. It was an unusual and fantastic project.

My part of the project was complicated because of the limits of the chips I bought. I picked devices that looked good on paper, but I hadn’t used them before. When they arrived, there were problems with the different driver libraries. Just because something is theoretically possible doesn’t mean it’s easy. I ran into issues that could only be solved by using a different chipset.

It also needed to be rechargeable and that had its own needs and limitations. For instance, buying raw lithium batteries and having them shipped in Europe requires a special customs arrangement.

I also had to come up with the actual singing. I didn’t have access to a choir or professional singers, but I do have Reason. It’s a platform for making music on a computer and both of my albums were made with it. But, it’s more for electronic music than high fidelity synthetic singing. I found a plug-in that came pretty close and composed some basic melodies and voice blends. It was psychedelic to be in my little apartment making hours and hours of angelic singing.

I finished the first prototype and sent him some videos. There were tweaks and adjustments and then I had a final assembly completed. I was happy with what I had made, but I know from experience that these things have to survive unpredictable environments. I did my best to seal it up and reinforce the connections.

The final result was interesting and engaging. Ben got some flattering comments from people at the show. For me, it was a great opportunity to make something cool with someone I have a lot of respect for.

Gelli Prints

I found my rhythm with a new process for printmaking. It is an effective bridge between photography, computer work, and traditional art practices. It’s called Gelli printing. Instead of a stone, it uses a slab of gelatin as a transfer surface. Acrylic paint is thinly applied on top and then removed, blended, or textured in a variety of ways. The result is a monotype: a one-of-a-kind print.

It doesn’t require other specialized equipment like a press or engraving/cutting tools. The “plate” is actually a laser print that gets consumed with each print. The laser toner resists the paint and the paper absorbs it. When you lay a print on the applied paint, some paint is removed and some is left. The result is a fairly detailed image transfer in whatever color paint was used.

That can be repeated, layered, manipulated, and stenciled. The possibilities are diverse. I used a many-layered technique that had a few detailed images blended with texture layers and surface manipulation. If you’re looking for fidelity and consistency, this is not the process you want. I did the exact same thing multiple times and got completely different results. You have to surrender to the process in many ways.

An overhead view of a kitchen filled with printmaking equipment
No dinner tonight
An overhead view of a clean kitchen counter
Finally get to make dinner

I used the kitchen counter of my tiny flat for printmaking. It would be set up like this for months at a time. That made it difficult to cook complex meals but I worked around it.

An overhead view of a printmaking area
Ready to print

Above is a gelatin block with paint applied. It’s ready for more layers or to get printed. All the newspaper was to make sure I didn’t ruin the counter. I wanted to get my deposit back eventually.

I spent around 3 months printing like this throughout the year. I think the main benefit of this process was that I didn’t need to go somewhere else to work. I would make prints or begin layers at the beginning of the day and then finish them off later. It became a daily practice that was responsive to different feelings and experiences. It wasn’t a diary, but something more like a sketchbook.

I produced over 300 monotypes this way. A small edit of those became the bulk of my solo show in Fall. The others are being cut, collaged, and re-used in all kinds of ways.

Finland

The Helsinki Biennial was an unexpected mind blower, but the real highlight was meeting my long lost uncle.

Helsinki Biennial

I didn’t know anything about the Helsinki Biennial before I decided to visit Finland. It came up as something to check out as a side quest while I was in Helsinki. It turned out to be an incredible collection of contemporary art and was vastly superior to what I saw at Documenta a few years ago.

Organized by HAM (Helsinki Art Museum), most of the event is on Vallisaari island in the bay of Helsinki. There were 37 artists and collectives represented in a vast array of installations on the island and also at the main museum.

The level of craft, concept, and execution was exceptional. Many of the pieces incorporated sound art in well-thought-out ways.

Maija Lavonen

While exploring Helsinki, I visited ADmuseo (Architecture & Design Museum). It had an incredible show of fiber optic tapestries and sculptures by Maija Lavonen. They blend classic techniques of weaving from Finnish culture with the material that carries most of the internet around. These things had a real presence that was more than technical novelty. There was a lot of feeling and history.

Quote on wall: "The same principles apply in art as in the rest of life. Do the job as well as you can. Commit to seeing the task through to completion. Remain open-minded and alert to your surroundings. When you tap into your deepest creativity and find your philosophy, all you need to do is follow through on your principles."

Uncle Mauri

My uncle Mauri was only known through sparse stories in my family. On my mother’s side, I have 4 uncles. Altogether, they are the 5 children of Hannah, my grandmother. Most were born in Illinois and raised as an American family. Mauri was born in Finland, as the first child of Hannah. He grew up completely separate from the rest of the family.

Nobody had actually talked to him. There were a couple of letters in the 70s but that’s it. We knew where he was because Hannah’s ashes were sent to him when she passed. Since Helsinki was an easy flight from Berlin, I decided to meet the man.

I found an old phone number in a tax record online. I had a feeling it might work, but I spoke no Finnish. I found someone in Berlin who was from Finland and she agreed to call him for me. She came over and we sat on my couch and dialed the number.

He actually picked up right away and I was relieved to have a Finnish speaker there. He was initially skeptical (can you blame him?), but was willing to begin communications. We decided to exchange emails for a while because we could use online translation tools.

We emailed for 8 months and then I suggested a visit. He was receptive and we made the plan. I was excited to meet him, but had no idea what to expect.

Man with white hair holding a photo of himself younger
Young uncle Mauri

In Helsinki, he picked me up in his van and we visited my grandmother’s gravesite, which he had arranged. We didn’t know each other’s language, so the 30-minute ride was in silence, except for the radio. It was very strange, but somehow totally familiar because of the long car rides I took with my uncles in Georgia as a child. I felt like I knew him.

We figured out how to use Google Translate in conversation mode on my phone when we got to the cemetery. It’s not foolproof, but it’s fairly effective if you keep your statements short and plain. After the grave visit we came back to Helsinki and sat in a coffee shop for a few hours, talking through the translator.

The connection was immediate and openhearted. We shared family histories and personal stories. Some were very difficult and others were funny and exciting. I’m not going to put them in a blog post like this, but they covered the full range of human experience.

Man dancing on a stage

A twist of fate as a teenager brought Mauri to a ballet studio. The teacher needed him to support the female dancers. He gained a love of dance and spent the next 20 years as a professional dancer in Finland. He was in Hair, West Side Story, and many Finnish productions.

Man holding a ballet dancer above his head

After a long dance career he ended up as a contractor and handyman and raised 4 kids. He had a barn with 3 workshops inside, with a whole array of carpentry projects.

Man sitting in a workshop

My time with Mauri was heartfelt and genuine. It’s amazing to make a family connection like that after so many years. Of all the things I came back from Berlin with, a new uncle is the most amazing.

Der Wendepunkt

I had a large solo show of recent work in early September. It was during Berlin Art Week and located at an unusual space inside the Alexanderplatz transit station. The whole experience was weird, fulfilling, and challenging.

Large radio tower over a train station
Alexanderplatz in front of the Fernsehturm

Back in May, I saw a post by Culterim about a new art space they were making available. It was an empty cosmetic store in Alexanderplatz. I saw the photos of the inside and knew it was perfect to show what I had made this year. I also saw an opportunity to have a non-traditional show during Berlin Art Week at a central location.

Alexanderplatz is not a prestigious place. In fact, many Berliners despise it because of the crowds and commercial vibe of the surrounding complex. When I told friends I was having a show there, most were confused.

To me, it represented a chance to show art directly to regular people in a humble environment. I knew I would get all kinds of people in there. Tens of thousands go through that station each day. It was a chance to reach a broad range of people outside of the Berlin art bubble.

Man smiling in art gallery
Hanging the show

There was no gallery staff or assistants. I had to handle every aspect of that show from hanging to marketing to sitting in the gallery itself during open hours. I hung that whole show in about 6 hours, with levels and magnets. That included carrying all the artwork down there on the tram.

The work I showed was monotypes, lino prints, photography, and some small installations. Most of it was made this year. It was cool to have a lot of work to choose from. This has been one of the most prolific times of my life.

Prints hanging in an art gallery
Main collection of monotypes
Decorative art print
Decorative art print
Decorative art print
Decorative art print
Decorative art print

On Saturday night, I did a noise performance with the synths I made in Berlin. Although short, it was loud. The sound reverberated around the entire station. People were streaming in the side doors to see what was happening. It probably sounded like one of the trains had crashed.

I look at this as my exit show for the whole Berlin cycle. It’s a good anchor point for this time in my life. I didn’t sell any work or make important gallery contacts. It wasn’t meant for that anyway. I got exactly what I wanted out of that experience: a chance to reflect my experience in Berlin back to the city itself. It wasn’t about achievement. It was about connection.

Buchstabenmuseum

I got another opportunity to use my 10X technique for making photos. Buchstabenmuseum was a museum in Berlin dedicated to classic backlit and neon signs from multiple eras of the city’s history. They lost their space and announced a last chance to see the collection.

Making these reminded me of being in Reno, NV and making my first 10X photos during my cross-country drive. I like making connections like that now. They are reminders that beyond technique there is real human experience happening around all these images.

Paris

My last big trip within Europe was a visit to Paris. It was supposed to be amazing but it was just meh. But, I met some cousins there and being with them one last time was great.

I didn’t do much passive tourism in the 4 years I was in Europe. My trips were mostly about art events or personal connections. This time, I saw the sights and kinda cruised around.

For museums, I had an inside track. I was a member of a German artist union called the b.b.k. That had a few perks. One of those is free or discounted entrance to certain institutions. Not only did I get in free to the Louvre and Bourse de Commerce, I was let in through VIP entrances. Very fancy.

In fact, I was at the Louvre with my cousin Corrine just 12 hours before the infamous crown jewel heist. It was bizarre to read international news reports and see photos of where I had been standing the day before.

My favorite place was the Musée de la Chasse et de la Nature. It’s a cross between a hunting lodge and a contemporary art museum. It’s unique among European art institutions. New art and old guns make a volatile mix.

Paris is an interesting city, but it was not the peak of my time in Europe. Too many people had told me I “have” to go there. I don’t see cities the way most people do and I’m much more interested in people than old buildings. I was in Europe to connect and participate, not to observe.

Final Thoughts

Now, I’m in San Francisco writing this blog post. After 4 years in Berlin, I moved back to California. It wasn’t sudden or dramatic. Nothing was wrong and nobody was in a hurry to have me back. It was just time.

I never intended to be an expat. I went to Berlin for a specific purpose and came back when I was done. Lots of people in Berlin and California asked me why I was moving back. Most were unsatisfied with my simple answer. They assumed there must be some drama behind it. Nope. No drama.

Most of the expat Americans I met in Berlin had no intention of returning. They had found their city and were settling in. I didn’t meet many that had lives I wanted. That’s not a judgment of them, just a recognition of my own values.

There is something else, though. Berlin is a special place when it comes to culture. There is lots of structure and funding and interested audiences. Those things are under pressure right now, but compared to American cities they have much more support in place. That structure is not portable. You can’t bring Berlin with you.

So, many artists stay there and want to be incubated. I wanted something to bring back. I wanted to learn how that ecosystem worked. I wanted to know what a city looks like when artists have a fighting chance at survival. I wanted to see how they organized and kept their communities alive.

There was no enlightenment at the mountaintop. I didn’t meet some guru that just laid it all out. What I found was a thousand artists living a thousand paths. But, that was enough.

I got to see all of it for myself. Then, I got multiple chances to take my turn. I had solo shows, group shows, online shows, performances, workshops, and collaborations. I made friends with other artists at all levels. I met gallery people (but not many) and talked to lots of institutional workers. I learned new techniques and then taught them to others. I got to experiment and fail without much drama. I just kept going. All that experience is coming back with me.

Self-determination. Collectivism. Experimentation. Ownership. That’s what it’s all about and that’s what I want to manifest in San Francisco.

Now comes the hard part. I picked one of the most expensive cities in the world to attempt all that. Way more expensive than Berlin. I don’t have some clever solution or plan. I’m just going to hack away at it month by month until I get something going. That’s what has worked so far.

I’ll close with my favorite German word, gelassenheit. It’s a heavy word with a light meaning. It signifies contentment or serenity. The root, lassen, means to let, as in letting go. Gelassenheit is to be in the state of letting go.

Ich hoffe, die Zukunft bringt Gelassenheit.

MMXXIV: Turpentine and tea

As I write this, my apartment smells like turpentine and tea. The turpentine is for cleaning brushes and stencils I’m using for painting on recent prints. I bought the tea in Istanbul and have just a little stash left. The atmosphere feels like this is a place where things are made and life lived. It’s a good time to look back over the highlights of 2024.

Turner’s watercolors in Edinburgh

William Turner was an English painter who lived from 1775 to 1851. He was eccentric but influential and is a pivotal figure in English art history. A group of his watercolors is housed at the Royal Scottish Academy in Edinburgh, Scotland. They can only be seen once a year.

This collection of Turner watercolours was left to the nation in 1900 by the art collector Henry Vaughan. Since then, following Vaughan’s strict guidelines, they have only ever been displayed during the month of January, when natural light levels are at their lowest. Because of this, these watercolours still possess a freshness and an intensity of colour, almost 200 years since they were originally created.  

I planned a trip to see these paintings in person and spend a few days in Edinburgh as well. The weather in January was heavy and my trip was wet and windy, thanks to Storm Isha. I got to see these amazing watercolors though. There is an intensity to them that doesn’t translate to any book or digital image. Seeing them in person was worth it.

Coinciding with that show was a large exhibit by the Royal Scottish Society of Painters in Watercolour at the same place. I wasn’t expecting such a massive collection of contemporary watercolors. Based on that show, I learned that there is a whole tradition of Scottish painters that is still really active. They have their own online gallery of that exhibit that is worth a look.

Turner’s watercolors

Photos I made in Edinburgh

AI landscapes for a problematic project

An AI related art project I started back in 2020 came to fruition in January. It didn’t go smoothly.

In 2020, online image generators were much simpler than they are now. The results were primitive compared to what is common now. ChatGPT and Dall-E didn’t exist yet and there was no real public awareness of these processes. I began tinkering with a tool called Artbreeder because it had the novel ability to generate landscapes instead of just avatars. It also used a model trained on real paintings instead of using procedural techniques that came from 3D and video game software.

Instead of a text prompt, it had a panel of sliders that you moved around to get the results you wanted. I spent a fair amount of time experimenting with that tool and came up with a set of landscapes and objects that were related to my own aesthetic and art practice. Those images were then edited down to a smaller set of coherent and conceptually related images. They reminded me of some of the early Western photographs by Carleton Watkins. I wasn’t sure what direction the project was going to take, so I held onto them for later.

Initially, I considered commissioning painters from Dafen, China, to create large-scale paintings of the images. The results of that would have made a compelling story. But, the logistics would have been costly and time consuming. I was also very involved in the local San Jose, California art community and thought it would be interesting to collaborate with a local painter instead. Maybe they would have contributions I hadn’t thought of.

I saw a painter I knew at Kaleid Gallery and proposed the project to her. She was receptive and the collaboration began soon after. Her first painting (shown here) was a success and I saw good possibilities for a show in the future. I ended up moving to Berlin, but reached out later to see her progress. She had continued work and even made plans to exhibit the work at Kaleid. That show happened this January.

A few weeks before the show, we had a disagreement about attribution, ownership, and money. We had an informal agreement to split everything evenly. But, a last minute contract made very different claims. This is a prime example of the problems that arise from saying, “Don’t worry about it. We’ll work out the details later.” Those details were highly problematic. It was bad enough that I withdrew from the show and refused to participate. Kaleid director Cherri Lakey stepped in and rescued that show from the abyss. It wouldn’t have happened without her diplomacy and she deserves a lot of credit.

It’s a shame because it was the perfect time to stage a show like that. Lots of people were interested and bought some of the paintings. The show did well. If I had staged that here in Berlin, it would have been even more popular. Unfortunately, the paintings that did not sell were destroyed by the painter and our working relationship ended badly.

One of the digital originals I created.
Made by the painter. Some differences but essentially the same image.
Screenshot of the tool I used to make my originals.

New prints

I continued printmaking and tried some new techniques. I still have a long way to go.

I made linocut prints in my kitchen and tried some multiple color techniques. It was difficult to make a good ink impression and I had a lot of paper waste. I kept all the results of that, though. I also bought a Gelli plate and had some interesting success with multiple generation imagery. I’ll continue working with that and have a collection of images I’ll use with it.

A new toy I got to use was an AxiDraw V3 pen plotter. I borrowed it from a colleague at the Creative Code Berlin Meetup. It uses real ink pens to draw paths sent from a computer. It uses the same kind of file I used for Wolves, so I had a good technical foundation.

100 posts in 100 days: an Instagram experiment

I wanted to know if all this posting on Instagram was worth it. It’s not.

Having an online presence is a reality for most working artists. The vast majority choose Instagram to be their main platform. I have been on the internet since the mid-90s and tried all kinds of ways of showing my work online. I don’t like Instagram at all, but it is a necessary evil. I spent a lot of effort building a website but the way people look at content has totally changed in the past 10 years.

So, if I’m going to use it, I want to get some benefit from it. I don’t like the idea of just feeding an algorithm monster for its own profit. I’ve researched, experimented with free and paid solutions, and even paid to boost posts in the past.

This year’s experiment was to try a posting service so I didn’t have to deal with Instagram every day. I had a marketing job years ago and we used Buffer to schedule posts months in advance. It kind of worked in that context.

I put together 100 slideshows, posts, and videos of my art. Then, on Buffer, I scheduled a post every day to see what happened.

So, what was the result? Not much. I got 40 new followers and a bunch of likes. Most of the engagement I got came from people who already knew me. I’m sure they got tired of seeing all those posts.

My takeaway is that none of the paid ways of doing Instagram really matter for individuals. It probably helps for big brands like Pepsi, but it feels pointless for us regular folks.

It really feels like a big scam and I hate being a part of it. But, I have used it as a kind of contact manager for other artists. Many art shows have begun with an Instagram DM. That’s undeniable. But, the amount of time I’ve spent posting has not been very useful.

Layer Cake in Berlin

One of the best art shows I’ve seen since I moved to Berlin was Layer Cake at Urban Nation. It was a collaborative show with the main artist chopping, painting and re-assembling submissions from other artists. Apparently they are all famous street artists, but I didn’t know them. What I saw was at a very high level of aesthetics, though.

Superbooth24

The mother of all modular synth festivals is held each year in Berlin. It’s the Mecca of modular knob twiddlers.

I don’t own this kind of gear, but I like to play with it. I’m not big on commercial synths in general and prefer to build my own for specific audio aesthetics. But, I’m not gonna deny these machines are badass and look very cool. They sound cool, too.

Norwegian heavy metal

My friend Benjamin Kjellman-Chapin is a blacksmith who lives in Nes Verk, Norway. We went to the Atlanta College of Art together back in the mid-90s. Our paths diverged after school, and I didn’t see him again until this year.

Ben was immortalized back then in a painting my friend Neil Carver made, based on a photo I took of him. Neil made a bunch of paintings from my photos and I always thought the one of Ben was one of the best.

He ended up in Norway and has built an amazing life as a blacksmith with his wife Monica. They have a blacksmithing shop next to the Næs Jernverksmuseum a few hours outside of Oslo. Never in a million years did I think I would get to visit him. But, being in Berlin made that trip much more practical and realistic. We managed to arrange it so I was there for a regional blacksmithing festival. It was a whole other world, very different from my tech art life back in Berlin.

Skulls: a more successful AI project

I built a project to identify and track little plastic skulls and then play music based on their position. It won a prize.

The idea came from a more primitive version I made back in San Jose called Oracle. I am always looking for new ways of performing electronic music that don’t depend on a laptop or MIDI controller. The overall approach was inspired by seeing an oracle toss chicken bones to tell the future in a movie.

The earlier approach used simple image thresholding within a predefined grid. It worked but was susceptible to lighting changes. This approach used a Grove Vision AI Module V2 with a custom model I trained myself. It turned out pretty well.

To use it, I scatter a handful of little plastic teeth and skulls underneath a camera connected to an AI module. The module identifies the objects and sends their position to a Raspberry Pi, which interprets that and triggers a software synthesizer running internally on Linux.
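The actual model, labels, and synth protocol aren't something I'll document here, but the mapping logic is easy to sketch. In this hypothetical Python illustration (the function and field names are my own), each detection arrives as a bounding-box center, horizontal position picks a pentatonic pitch, and vertical position sets loudness:

```python
# Hedged sketch of a position-to-note mapping, not the shipped firmware.

SCALE = [0, 3, 5, 7, 10]  # minor pentatonic intervals in semitones
BASE_NOTE = 48            # MIDI C3

def detection_to_note(det, frame_w=640, frame_h=480):
    """Map one detected object to a MIDI-style (note, velocity) pair.

    det: dict with bounding-box center 'x', 'y' in pixels.
    Horizontal position picks the pitch (2 octaves across the frame);
    vertical position sets the velocity (higher in frame = louder).
    """
    step = int(det["x"] / frame_w * len(SCALE) * 2)
    octave, degree = divmod(step, len(SCALE))
    note = BASE_NOTE + 12 * octave + SCALE[degree]
    velocity = int(127 * (1 - det["y"] / frame_h))
    return note, max(1, min(127, velocity))

# One skull near the left edge, high in the frame
note, vel = detection_to_note({"x": 40, "y": 60})
print(note, vel)  # 48 111
```

A toss of several objects then becomes a chord or phrase: each detection is mapped independently and the notes are sent to the synth together.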

This project won first prize in a competition sponsored by the manufacturer of that module.

Experimental Photography Festival in Barcelona

I had a solo show of recent experimental photography in Barcelona.

The Experimental Photography Festival is an analog-centric gathering of people who use alternative processes. That used to mean non-silver processes like cyanotype but has expanded to include all kinds of photographic techniques. This group features few computer-based images and definitely no AI work. It was refreshing and inspiring to be surrounded by such genuine experimentation.

It also attracted a truly international group. I met people from Japan, Hungary, Peru, and more.

The festival curators chose my 10X images to be one of the solo exhibits at the festival. These images are made in camera with 10 multiple exposures. I started this group on my cross-country drive back in 2021. I’ve continued to explore the different composition and color combinations possible. It’s an eclectic group of images now.

I show these as large prints and it was a challenge to get them to Barcelona ready to hang. The prints for this show had been specially made in the U.S.A. and shipped just in time in a large mailing tube. I wanted them to hang flat at the festival so I decided to fly with them in a large flat portfolio case. That turned out to be a big mistake.

As the case was too large for the cabin, I checked it as special baggage. Unfortunately, airline baggage handling systems aren’t designed for flat items, even within size limits. The case missed my connecting flight and was subsequently lost in Frankfurt, beginning a three-day ordeal to locate and deliver the prints to Barcelona.

It was really stressful, but the festival organizers were helpful and understanding. We had some cheap prints made locally as placeholders and hung those until my prints were found. When they finally arrived I had a whole crew of people helping to hang them quickly.

Besides all that, it was a fantastic experience. I made some great connections that turned out to be useful the next month (see below).

Once lost, now found.
Explaining my work at the opening of the festival
The temporary prints we hung (on the right)

Phasenpunkte

This was a last-minute art show, organized and curated by me at HilbertRaum in Berlin. I got the offer to do it just a few days after returning from Barcelona.

The news came on August 7. The show was organized, hung, and then opened by September 7. That’s very fast for all those logistics, especially international. It was a minor miracle that it even happened. I ended up reaching out to 10 people to be in the show, mostly through direct messages on Instagram and Telegram. I knew some of them personally but found the rest through contacts from the recent Experimental Photography Festival.

Eight artists ended up being in the show: me, Samantha Tiussi, Hilde Maassen, Sofia Nercasseau, Gábor Ugray, Cecilia Pez, Hajnal Szolga, and Daniel Kannenberg.

On opening night, I performed music with the new synths I built, and Samantha Tiussi performed with glass clothing fitted with microphones.

Here is the promotional website I made for it.

Overall, it was a huge success. I was exhausted afterward, but how often will I get the chance to do something like that? Life is short.

Phasenpunkte (“phase points”) is a reference to the points at which different materials change phase, like water turning to steam. It is used as a metaphor for the transition points between organic human experience and virtual, synthetic spaces. The human phase points are our feelings, thoughts, and imagination. Our consciousness is the membrane between the virtual and the real. This show is an aesthetic response to that idea. It’s not about technology, it’s about being human.

Programming glass robots

One of the artists in Phasenpunkte, Samantha Tiussi, asked for some help programming the controllers for her glass sculptures. She constructed stepper motor assemblies that hung from the ceiling and raised and lowered pieces of glass according to her instructions. The glass was arranged as human figures and the movements conveyed emotions and a kind of slow dancing.

She used Arduino boards to send signals to the controllers and needed help with the code that ran on the boards. Although she has some technical background, she relied on ChatGPT to generate most of the code. While this initially worked, it proved difficult to modify when she wanted to make custom changes.

I agreed to help and we had coding sessions at her studio in Berlin. Even though tools like ChatGPT can generate functioning code, that code can read like gibberish to a programmer trying to understand it. I ended up rewriting large chunks of it to make the customizations she needed.

A big reward was getting to see her final show performed at the Acker Stadt Palast in central Berlin. It was a touching and interesting show. I was proud to use my coding skills for something like that.

These are all the changes I made to fix the code that ChatGPT had generated. It was a lot of work.

Skateboarding at SFMOMA

After all these years, what got me into the San Francisco Museum of Modern Art is a skateboarding photo I took when I was 17.

Jeffrey Chung contacted me about an upcoming show called Unity through Skateboarding. Tommy Guerrero had mentioned my photo of him riding a board with ‘End Racism’ written on the bottom—a photo that has resurfaced online over the years and garnered attention whenever Tommy shares it on social media.

I took that particular image on a trip to San Francisco with my friend Tony Henry. I took the photo at Bryce Kanights’ ramp in his San Francisco warehouse. I was only 17 at the time, using a camera I had just bought to replace one that had been stolen. I took many photos back then, working to support the cost of film, equipment, and road trips. I was convinced I would have a career as a skateboard photographer.

Back then, I didn’t get much support for that kind of photography. Most adults thought it was frivolous and my friends had no idea how much all that cost. I did get published though and made a little money. Most importantly, I got an amazing life out of it that nobody in my high school could compare with.

Now, 35 years later, that photo is hanging at SFMOMA. I haven’t even seen it yet, as I’ve been in Berlin during the organization and opening. It’s funny how things work out.

Synthesizer brain transplants

I rewrote the sound engines for some recent synths I built. They sound pretty cool now.

My performance setup for Phasenpunkte

Berghainbox

Starbox

Habanos

Modern Istanbul

I’ve been following Olafur Eliasson’s work for many years, even before moving to Berlin. I’m particularly interested in his light installations using glass and reflections, as well as his career trajectory. He doesn’t create traditional art objects, and some of his installations are incredibly complex and likely expensive—a scale and resource level I aspire to work at one day.

He has a large solo show at Istanbul Modern right now. I made plans to go see it so I could get a sense of how he staged it all.

Istanbul, Turkey is relatively close to Berlin, but the flight is still 3 hours. It was the beginning of winter, so crowds weren’t nearly as large as usual.

The show was great, but Istanbul was interesting in itself. It was my first time in a Muslim-majority country. Daily life wasn’t radically different, but I was definitely aware of it. The call to prayer happens multiple times a day throughout the city.

Images I made on the streets of Istanbul

Olafur Eliasson at Istanbul Modern

Olafur Eliasson’s first solo exhibition in Türkiye, “Your unexpected encounter,” reflects the artist’s deep interest in light, color, perception, movement, geometry, and the environment. The artworks also reveal the network of relationships the artist forges between broad areas of research and his multidisciplinary practice. As well as following the personal journey of the artist, the exhibition addresses navigation and orientation on a wider scale, inspired by the site of the museum and its maritime location by the Bosphorus.

Making prints at Kunstquartier Bethanien

A long-planned appointment to work at Bethanien finally happened.

The limits of my kitchen printmaking studio had become too much. I just couldn’t get enough pressure on the lino plates to see consistent results. Also, shutting my kitchen down for printing meant going without cooking for 4-5 days. That’s a pain.

I am a member of the B.B.K. artist union here in Berlin. One of the perks of that is access to the Druckwerkstatt im Kunstquartier Bethanien. Among other things, it is a well-maintained and world-class printmaking facility.

For my skill level it’s way over the top. But, I did want to use the lino presses they have. Appointments are made far in advance, so I did that. The time came after Istanbul and I went in. The prints I got were far superior in every way. I also learned new approaches and got ideas for different work.

In addition to those lino prints, I started a new series of collages using the older mistakes I made back in January. Instead of tossing those, I cut them up and used parts as graphic elements. I exhibited a group of those collages in Phasenpunkte and got some really positive feedback about them. More of those will come soon.

Glide path

2025 will be my last year in Berlin. It was always the plan to return after a few years. My vision for being here is complete, but I have an opportunity to work on my art full-time in the first half of the year. I’ll stay in Berlin to take full advantage of that work window. After that, I think it’s time to go back to California.

Skulls: composing music with computer vision and a custom YOLO5 AI model

A few years ago, I built a primitive computer vision music player (Oracle) using analog video and a basic threshold detector with an Arduino. Since then, outboard AI vision modules have gotten much more specialized and powerful. I decided to try an advanced build of Oracle using the new Grove Vision AI Module V2.

This post describes the approach and build, as well as a few pitfalls to avoid. Seeed sent me one of their boards for free and that was the motivation to try this out. Ultimately, I want to use the lessons learned here to finish a more comprehensive build of Oracle with more capability. This particular project is called Skulls because of the plastic skulls and teeth used for training and inference targeting.

The components are a Grove Vision AI Module V2 (retails for about $26) with an Xiao ESP32 C3 as a controller and interface. The data from the object recognition gets passed to an old Raspberry Pi 3 model A+ using MQTT. The Pi runs Mosquitto as an MQTT broker and client, as well as Fluidsynth to play the resulting music. A generic ESP8266 board is used as a WiFi access point to connect the two assemblies wirelessly.

What worked

Assembling the hardware was very simple. The AI Module is very small and mated well with an ESP32. Each board has a separate USB connector. The AI Module needs that for uploading models and checking the camera feed. The ESP32 worked with the standard Arduino IDE. I added the custom board libraries from Seeed to ensure compatibility.

In the beginning I did most of the AI work directly connected to the AI Module and not through the ESP32. It was the only way to see a video feed of what I was getting.

One of the reasons I put so much effort into this project was to use custom AI models. I wasn’t interested in doing yet another demo of face recognition or pet detection. I’m interested in exploring new human-machine interfaces for creative output. This particular module has the ability to use custom models.

So, I tried to follow the Seeed instructions for creating a model. It was incredibly time consuming and there were many problems. The most effective tip I can offer is to use the actual camera connected to the board to generate training images AND to clean up those images in Photoshop or Gimp. I went through a lot of trial and error with parameters and context. Having clean images fixed a lot of the recognition issues. I generated and annotated 176 images for training. That took 5-6 hours, and the actual training in the Colab notebook took 2-3 hours with different options.

Here is my recipe:

  • Use a simple Arduino sketch to record jpegs from the camera onto an SD card.
  • In an image editor, apply Reduce Noise and Levels to the images to normalize them. Don’t use “Auto Levels” or any other automatic toning.
  • The images will be 240px X 240px. Leave them that size. Don’t export larger.
  • In Roboflow choose “Object Detection”, not “Instance Segmentation”, for the project.
  • When annotating, choose consistent spacing between your bounding box and the edges of your object.
  • Yes, you can annotate multiple objects in a single image. It’s recommended.
  • For preprocessing, I chose “Filter Null” and “Grayscale”.
  • For augmentation, I chose “Rotate 90”, “Rotation”, and “Cutout”. I did NOT use “Mosaic” as recommended in the Seeed Wiki. That treatment already happens in the Colab training script.
  • I exported the dataset using JSON > COCO. None of the other options were relevant.
  • The example Google Colab notebook I used was the rock/paper/scissors version (Gesture_Detection_Swift-YOLO_192). I only had a few objects and it was the most relevant.
  • I left the image size at 192×192 and trained for 150 epochs. The resulting TFLite INT8 model was 10.9 MB.
  • I used the recommended web tool to connect directly to the AI Module and upload the model. It took multiple tries.
  • On the ESP32 I installed MQTT and used that to transmit the data to my Raspberry Pi. I did not use the on-board wifi/MQTT setup of the AI Module.
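The “Levels” normalization in that recipe is just a linear remap of pixel intensities. A minimal sketch of the idea in plain Python (the black/white points here are hypothetical; in practice I did this step in Photoshop or Gimp):

```python
def apply_levels(pixels, black=10, white=245):
    """Linearly remap intensities so `black` maps to 0 and `white` to 255.

    Values outside the range are clamped, which is what stretches a murky
    camera capture into a full-contrast training image.
    """
    span = white - black
    out = []
    for p in pixels:
        v = (p - black) * 255 // span
        out.append(max(0, min(255, v)))
    return out

# A row of sample 8-bit pixel values before and after the remap.
print(apply_levels([0, 10, 128, 245, 255]))  # [0, 0, 128, 255, 255]
```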

This was a difficult project because of very confusing and incomplete documentation at multiple stages. It’s clear to me that the larger companies don’t actually want us to be able to do all this ourselves. There were times it felt intentionally obfuscated to force me to buy a premium tier or some unrelated commercial application. I’m glad I did it though, because I learned some important concepts and limitations of AI training.

Demo

A demo of different sounds and arrangements produced by the assembly.

Conclusion

I’ll use this knowledge to finish a new build of the actual Oracle music composition platform I started. This particular demo is interesting, but somewhat unpredictable and technically fragile. I found the research on generative music to be the most interesting part. As for the AI, I’m sure all this will be simplified and optimized in the future. I just hope the technology stays open enough for artists to use independently.

MMXX: signals, sounds, sights

I spent most of the year in my art studio while the city around me contracted and calcified due to Covid. I was fortunate that my plans coincided with the timing and degree of changes in the world. It could have very easily gone the other way, as I’ve seen firsthand. Lots of my friends in the art community are struggling.

My work this year reflects more studio- and internet-based processes. Previous years always included public festivals, performances, and collaborations. Some of that change was to save money, but it was also an effort to make use of what I had around me, to stay present, and to maintain momentum with ongoing projects.

I did actually manage to pull off a few public projects, including a portable projection piece that had animated wolves running on rooftops. I savored that experience and learned a lot from the constraints of lock-down art performances.

Looking back on this year, I see new priorities being formed. While the coding and online projects were effective, the amount of screen time required took a toll. I relished the drawing projects I had and hope to keep working in ways that make a huge mess.

Sightwise

My studio complex has a co-op of artists called FUSE Presents. We hold regular group art shows in normal times and for each show, two artists get featured. I was one of the featured artists for the March 2020 show. That meant I got extra gallery space and special mention in marketing materials.

The work I picked was drawn from a variety of efforts in the previous two years. As a grouping, it represented my current best efforts as a multimedia artist. I worked hard to finalize all the projects and really looked forward to the show.

It combined abstract video, traditional photography, sculptural video projection, installation work, and works on paper.

I designed the show’s poster in Inkscape, an open-source vector graphics editor.

Unfortunately, the show happened right as the first announcements about the local spread of Covid had begun. People were already quarantined and we heard about the first deaths in our county. That news didn’t exactly motivate people to come out to the art show. Attendance was sparse at best. But, all that work is finished now and ready for future exhibits.

Camel

I found a cigarette tin that had been used as a drug paraphernalia box and decided to build a synthesizer out of it. I had been experimenting with a sound synthesis library called Mozzi and was ready to make a standalone instrument with it. I spent about a month on the fabrication and added a built-in speaker and battery case to make it portable. Sounds pretty rad.

I released my code as open source in a GitHub repo, and a follower from Vienna, Austria replicated my synth using a cake box from Hotel Sacher (apparently famous for their luxury cakes?).

Wolves

The Wolves project was a major undertaking that took place over 2 years. It began with an interest in the Chernobyl wolves that became a whole genre of art for me.

I began hand-digitizing running wolves from video footage and spent a year adding to that collection. I produced hundreds and hundreds of hand-drawn SVG frames and wrote some JavaScript that animated those frames in a variety of ways. I got to the point where I could run a Raspberry Pi and a static video projector with the wolves running on it. I took a break from the project after that.

By the time I returned to the project, the Covid lockdown was in full swing and American city streets looked abandoned. We all started seeing footage of animals wandering into urban areas. It made sense to finish the Wolves project as an urban performance, projecting onto buildings from empty streets.

Building a stable, self-powered and portable rig that could be pulled by bicycle turned out to be harder than I thought. There were so many details and technical issues that I hadn’t imagined. Every time I thought I was a few days from launch, I would have to rebuild something that added weeks.

The first real ride with this through Japantown in northern San Jose was glorious. Absolutely worth the effort. I ended up taking it out on the town many times in the months to come.

Power up test in the backyard
San José City Hall
Japantown, north of downtown San José

The above video is from Halloween, which was amazing because so many people were outside walking around. That’s when the most people got to see it in the wild.

But, my favorite moment was taking it out during a power blackout. Whole neighborhoods were dark, except for me and my wolves. I rode by one house where a bunch of kids lived and the family was out in the yard with flashlights. The kids saw my wolves and went crazy, running after them and making wolf howl sounds while the parents laughed. Absolute highlight of the year.

Videogrep

Videogrep is a tool that makes video mashups from the time markers in closed-captioning files. It’s the kind of thing where you can take a politician’s speech and make them say whatever you want by rearranging the parts where they say specific words. It was a novelty in the mid-2000s, seen on talk shows as a joke. But the computer process behind the tool is very useful.

I didn’t create videogrep; Sam Lavigne did, and he released his code on GitHub. (The “grep” in videogrep comes from the Unix utility of the same name, used to search text.) What I did was use it to find things besides words, such as breathing noises and partial words. I used videogrep to accentuate mistakes and sound glitches as much as standalone speech and words.
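The core mechanism is simple enough to sketch: search a caption file for a keyword and collect the time spans of the matching clips. Here is a minimal pure-Python illustration of that idea (not Lavigne’s actual implementation, which handles word-level timestamps and several caption formats):

```python
import re

# A tiny inline SRT caption file used as sample input.
SRT = """\
1
00:00:01,000 --> 00:00:03,500
a stranger came to town

2
00:00:04,000 --> 00:00:06,000
nobody knew the stranger
"""

def grep_captions(srt_text, keyword):
    """Return (start, end) timestamp pairs for caption blocks containing keyword."""
    spans = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        m = re.match(r"(\S+) --> (\S+)", lines[1])  # the timing line
        text = " ".join(lines[2:])
        if keyword in text.lower():
            spans.append((m.group(1), m.group(2)))
    return spans

print(grep_captions(SRT, "stranger"))
```

The resulting time spans are what a supercut tool feeds to ffmpeg to cut and concatenate the clips.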

Here is a typical series of commands I would use:

# Transcribe the video into a time-stamped transcription file
videogrep --input videofile.mp4 -tr

# Build a word-frequency list from the transcription, dropping timestamp lines
cat videofile.mp4.transcription.txt | tr -s ' ' '\n' | sort | uniq -c | sort -r | awk '{ print $2, $1 }' | sed '/^[0-9]/d' > words.txt

# Make a supercut of every clip containing 'keyword', padded by 25 frames
videogrep -i videofile.mp4 -o outputvideo.mp4 -t -p 25 -r -s 'keyword' -st word

# Post-process: glitch filter, motion interpolation, crop to a wide strip, slow the audio to 25%
ffmpeg -i outputvideo.mp4 -filter_complex "frei0r=nervous,minterpolate='fps=120:scd=none',setpts=N/(29.97*TB),scale=1920:1080:force_original_aspect_ratio=increase,crop=1920:480" -filter:a "atempo=.5,atempo=.5" -r 29.97 -c:a aac -b:a 128k -c:v libx264 -crf 18 -preset veryfast -pix_fmt yuv420p if-stretch-big.mp4

Below is a stretched supercut of the public domain Orson Welles movie The Stranger. I had videogrep search for sounds that were similar to speech but not actual words or language. Below that clip is a search of a bunch of 70s employee training films for the word “blue”. Last is a supercut of one of the Trump/Biden debates where the words “football” and “racist” are juxtaposed.

Specific repeated words used in a 2020 Presidential Debate: fear, racist, and football

Vid2midi

While working on the videos produced by videogrep, I found a need for soundtracks that were timed to jumps in image sequences. After some experimenting with OpenCV and Python, I found a way to map various image characteristics to musical notation.

I ended up producing a standalone command-line utility called vid2midi that converts videos into MIDI files. The MIDI file can be used in most music software to play instruments and sounds in time with the video. Thus, the problem of mapping music to image changes was solved.
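The mapping itself can be sketched without any video libraries. The real vid2midi reads frames with OpenCV; this toy version assumes you already have a per-frame brightness value, and the note range and jump threshold are invented for illustration:

```python
def brightness_to_note(brightness, low=36, high=96):
    """Scale a 0-255 brightness value into a MIDI note between `low` and `high`."""
    return low + brightness * (high - low) // 255

def frames_to_notes(brightnesses, jump=16):
    """Emit a new note only when brightness jumps past a threshold.

    This is how a hard cut in the video becomes a note change in the score,
    while small frame-to-frame flicker is ignored.
    """
    notes = []
    last = None
    for b in brightnesses:
        if last is None or abs(b - last) >= jump:
            notes.append(brightness_to_note(b))
            last = b
    return notes

# Five frames: dark, dark, a hard cut to bright, bright, then a cut to mid-gray.
print(frames_to_notes([10, 12, 200, 205, 90]))  # [38, 83, 57]
```

A real implementation would write these notes into a MIDI file (e.g. with a MIDI library) with timing taken from the frame rate.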

It’s now open source and available on my Github site.

The video above was made with a macro lens on a DSLR and processed with a variety of video tools I use. The soundtrack is controlled by a MIDI file produced by vid2midi.

Bad Liar

This project was originally conceived as a huge smartphone made from a repurposed big-screen TV. The idea is that our phones reflect our selves back to us, but as lies.

It evolved into an actual mirror after seeing a “smart mirror” in some movie. The information in the readout scrolling across the bottom simulates a stock market ticker. Except, this is a stock market for emotions. The mirror is measuring your varying emotional states and selling them to network buyers in a simulated commodities exchange.
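The ticker is pure theater, and its formatting logic amounts to something like this sketch (the emotion symbols and prices are invented for illustration, not the installation’s actual code):

```python
def ticker_line(quotes):
    """Render simulated emotion 'quotes' as one scrolling stock-ticker string.

    Each quote is (symbol, price, change); the arrow shows gain or loss.
    """
    parts = []
    for symbol, price, change in quotes:
        arrow = "▲" if change >= 0 else "▼"
        parts.append(f"{symbol} {price:.2f} {arrow}{abs(change):.2f}")
    return "   ".join(parts)

print(ticker_line([("JOY", 14.20, 0.35), ("DREAD", 9.87, -1.02)]))
```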

Screen test showing emotional stock market
Final demo in the studio

Hard Music in Hard Times

TQ zine is an underground experimental music zine from the U.K. I subscribed a few years ago after reading a seminal essay about the “No audience underground”. I look forward to it each month because it’s unpretentious and weird.

They ran an essay contest back in May and I was one of the winners! My prize was a collection of PCBs to use in making modular synthesizers. I plan to turn an old metal lunchbox into a synth with what I received.

Here is a link to the winning essay:

Lunetta Synth PCB prizes from @krustpunkhippy

Books

I spent much of my earlier art career as a documentary photographer. I still make photographs but the intent and subject matter have changed. I’m proud of the photography I made throughout the years and want to find good homes for those projects.

Last year I went to the SF Art Book Fair and was inspired by all the publishers and artists. Lots of really interesting work is still being produced in book form.

Before Covid, I had plans to make mockups of books of my photographs and bring them to this year’s book fair to find a publisher. Of course, the fair was cancelled. I took the opportunity to do the pre-production work anyway. Laying out a book is time consuming and represents a standalone art object in itself.

I chose two existing projects and one new one. American Way is a collection of photos I made during a 3 month American road trip back in 2003. Allez La Ville gathers the best images I made in Haiti while teaching there in 2011-13 and returning in 2016. The most recent, Irrealism, is a folio of computer generated “photographs” I made using a GAN tool.

It was a thrill to hold these books in my hands and look through them, even if they are just mockups. After all these years, I still want my photos to exist in book form in some way.

Allez La Ville, American Way, Irrealism

Art Review Generator

Working on the images for the Irrealism book mentioned above took me down a rabbit hole into the world of machine learning and generative art. I know people who only focus on this now and I can understand why. There is so much power and potential available from modern creative computing tools. That can be good and bad though. I have also seen a lot of mediocre work cloaked in theory and bullshit.

I gained an understanding of generative adversarial networks (GAN) and the basics of setting up Linux boxes for machine learning with Tensorflow and PyTorch. I also learned why the research into ML and artificial intelligence is concentrated at tech companies and universities. It’s insanely expensive!

My work is absolutely on a shoestring budget. I buy old computer screens from thrift stores. I don’t have the resources to set up cloud compute instances with stacked GPU configurations. I have spent a lot of time trying to figure out how to carve a workflow from free tiers and cheap hardware. It ain’t easy.

One helpful resource is Google Colab. It lets “researchers” exchange notebooks with executable code. It also offers free GPU usage (for now, anyway). That’s crucial for any machine learning project.

When I was laying out the Irrealism book, I wanted to use a computer generated text introduction. But, the text generation tools available online weren’t specialized enough to produce “artspeak”. So, I had the idea to build my own art language generator.

The short story is that I accessed 57 years of art reviews from ArtForum magazine and trained a GPT-2 language model with the results. Then I built a web app that generates art reviews using that model, combined with user input. Art Review Generator was born.

This really was a huge project and if you’re interested in the long story, I wrote it up as a blog post a few months ago. See link below.

See examples of generated results and make your own.

Kiosk

Video as art can be tricky to present. I’m not always a fan of the little theaters museums create to isolate viewers, but watching videos online can be really limited in image and sound fidelity, and projection is usually limited by ambient light.

I got the idea for this from some advertising signage. It was seeded with a monitor donation (thanks Julie Meridian!) and anchored with a surplus server rack I bought. The killer feature is that the audio level rises and falls depending on whether someone is standing in front of it or not. That way, all my noise and glitch soundtracks aren’t at top volume all the time.
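That presence-sensitive volume comes down to easing the level toward a loud target while someone is detected and back toward a quiet floor when they leave. A toy version of that control loop (the step size and levels are made up; the kiosk’s actual sensor and audio plumbing differ):

```python
def next_volume(current, present, step=0.1, quiet=0.1, loud=0.8):
    """Move the audio level one step toward `loud` if a viewer is present,
    or back toward `quiet` if nobody is there, without overshooting."""
    target = loud if present else quiet
    if current < target:
        return min(target, current + step)
    return max(target, current - step)

# Simulate a viewer arriving for three ticks, then walking away.
v = 0.1
levels = []
for present in [True, True, True, False, False]:
    v = next_volume(v, present)
    levels.append(round(v, 2))
print(levels)  # [0.2, 0.3, 0.4, 0.3, 0.2]
```

Ramping instead of switching avoids jarring jumps in the soundtrack when people wander past.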

This plays 16 carefully selected videos in a loop and runs autonomously. No remote control or start and stop controls. Currently installed at Kaleid Gallery in downtown San Jose, CA.

Holding the Moment

Hanging out in baggage claim with no baggage or even a flight to catch

In July, the San José Office of Cultural Affairs announced a call for submissions for a public art project called Holding the Moment. The goal was to showcase local artists at Norman Y. Mineta San José International Airport.

COVID-19 changed lives everywhere — locally, nationally, and internationally. The Arts, and individual artists, are among those most severely impacted. In response, the City of San José’s Public Art Program partnered with the Norman Y. Mineta San José International Airport to offer local artists an opportunity to reflect and comment on this global crisis and the current challenging time. More than 327 submissions were received, and juried by a prominent panel of Bay Area artists and arts professionals. Ultimately 96 artworks by 77 San José artists were awarded a $2,500 prize and a place in this six-month exhibition.

SAN JOSE OFFICE OF CULTURAL AFFAIRS

Two of my artworks were chosen for this show and they are on display at the airport until January 9. They picked some challenging pieces, PPE and Mask Collage, with interesting backstories of their own.

Here are the stories of the two pieces they chose for exhibition.

PPE

The tale of this image begins in Summer of 1998. I had a newspaper job in Louisiana that went badly. One of the few consolations was a box of photography supplies I was able to take with me. In that box was a 100′ bulk roll of Ilford HP5+ black and white film. My next job happened to involve teaching digital photography so I stored that bulk roll, unopened and unused, for decades. I kept it while I moved often, always thinking there would be some project where I would need a lot of black and white film.

Earlier this year, I was inspired to buy an old Nikon FE2 to make some photos with. I just wanted to do some street photography. After Covid there weren’t many people in the streets to make photos of. But, I did break out that HP5+ that I kept for decades and loaded it onto cassettes for use in the camera I had bought. I also pulled out a Russian Zenitar 16mm f2.8 that I used to shoot skateboarding with.

This past Summer, I went to Alviso Marina County Park often. It’s a large waterfront park near my house that has access to the very bottom of San Francisco bay. People would wear masks out in the park and I even brought one with me. It was absolutely alien to wear protective gear out in a huge expanse like that.

So, my idea was to make a photo that represented that feeling. I brought my FE2 with the old film and Zenitar fisheye to the park, along with a photo buddy to actually press the button. People walking by were weirded out by the outfit, but that’s kind of the desired effect.

This image was enlarged and installed in the right-hand cabinet at the airport show.

An interesting side note to this project was recycling the can that the old film came in. Nowadays that would be made of plastic but they still shipped bulk film in metal cans back then. I took that can and added some knobs and switches to control a glitching noisemaker I had built last year. So, that old film can is now in use as a musical instrument.

The film can that used to hold 100′ of Ilford HP5+ is now a glitch sound machine

Mask Collage

Face masks are a part of life now but a lot of people are really pissed that they have to wear them. I was in the parking lot of a grocery store and a guy in front of me was talking to himself, angry about masks. Turns out he was warming up to argue with the security guard and then the manager. While I was inside shopping (~20 minutes) he spent the whole time arguing loudly with the manager. It was amazing to me how someone could waste that much time with that kind of energy.

When I got back to my studio I decided to draw a picture of that guy in my sketchbook. That kicked off a whole series of drawings over the next month.

I have a box of different kinds of paper I have kept for art projects since the early 90s. In there was a gift from an old roommate: a stack of blank blood test forms. I used those forms as the backgrounds for all the drawings. Yellow and red spray ink from an art colleague who moved away provided the context and emotional twists.

The main image is actually a collage of 23 separate drawings. It was enlarged and installed in the left-hand cabinet at the airport show.

Internet Archive

A few weeks ago, my video Danse des Aliénés won 1st place in the Internet Archive Public Domain Day film contest. It was made entirely from music and films released in 1925.

Danse des Aliénés

Film and music used:

In Youth, Beside the Lonely Sea

Das wiedergefundene Paradies
(The Newly Found Paradise)
Lotte Lendesdorff and Walter Ruttmann

Jeux des reflets et de la vitesse
(Games on Reflection and Speed)
Henri Chomette

Koko Sees Spooks
Dave Fleischer

Filmstudie
Hans Richter

Opus IV
Walther Ruttmann

Joyless Street
Georg Wilhelm Pabst

Danse Macabre Op. 40 Pt 1
(Dance of Death)
Camille Saint-Saëns
Performed by the Philadelphia Symphony Orchestra

Danse Macabre Op. 40 Pt 2
(Dance of Death)
Camille Saint-Saëns
Performed by the Philadelphia Symphony Orchestra

Plans? What plans?

Vaccines are on the way. Hopefully, we’ll see widespread distribution in the next few months. Until then, I’ll still be in my studio working on weird tech art and staying away from angry mask people.

I am focused on future projects that involve a lot of public participation and interactivity. I think we will need new ways of re-socializing and I want to be a part of positive efforts in that direction.

I also have plans for a long road trip from California to the east coast and back again. It will be a chance to rethink the classic American photo project and find new ways to see. But, that depends on how things work out with nature’s plans.

Hard Music in Hard Times

This essay was my winning entry in a TQ zine essay contest back in June of 2020. TQ is an underground music zine that hails from Northumberland, England.

We live in a golden age of irreverent and unsentimental hard music, released by armies of atonal warriors onto Bandcamp, SoundCloud, and cassette tape. We can listen to hundreds of hours of crushed white noise decorated with screams and clipped crunches.

Performers can boldly destroy any expectation of comfort or familiarity. It’s a full body embrace of the anxiety and struggle that people feel in a society that produces so much disconnected sound pollution in service of consumption.

But where does it fit when we live in a pandemic, with people suffering and dying? Should we still be making harsh music in harsh times? Who is it for if so many people are flocking to feel-good music and movies, nostalgia, and any other cultural salve they can find? Are folks really spending months in isolation, listening to Merzbow?

Abso-fucking-lutely.

Plenty of crazy bastards are not only listening, they are making more of it. What else is there to do, watch more vapid bullshit on the internet?

If you spend any time around firefighters, you’ll notice many of them smoke cigarettes. It seems strange to do the very thing you have to avoid while doing your job, but it is a way of getting used to something you will have to deal with eventually. Firefighters don’t get to hold their breath and ignore the smoke while fighting fires.

The noise of city life, cheap vehicles, and expensive phones surrounds us. Brooms brush and scrape. Street machines move us back and forth between two places to earn money to buy new machines. A new nowness is needed to remember to listen. Listen to all these things around us instead of filtering them out. Use the noise to feed noise.

In 2015, Tasha Howe from Humboldt State University published a paper about the midlife status of metalheads from the 80s. It reported they were better adjusted as adults than a similar cohort of non-metal fans.

I’ve found that to be true in my own life and among friends I grew up around. Anecdotally, I’ve seen friends that were into heavy shit in adolescence end up as healthy and interesting adults. More importantly, they tend to have a bit more empathy than the people I knew who were into pop music. I have no real explanation of that other than a belief that people who confront struggle and pain in their lives do much better in emotional maturity than people who ignore the same.

That Humboldt study was of adolescents, though. How about grown folks having a hard time in the middle of a pandemic? If they’re not already fans of challenging music, listening during a painful time probably won’t do much for them. Telling them about the noise project you’re into on social media probably won’t get a whole lot of interest either.

Performers cry, bleat, and moan about their metrics. Nobody gave a shit before the pandemic, why would they now? Hahahaha. N.A.U., motherfuckers.

I have been building a lot of noise instruments during the lockdown. Playing them is fulfilling and liberating. There is a physicality and connection to them and the sounds they make. Even a lousy day around that kit is so much better than any Marvel movie or Game of Drones mental mush.

I could probably put out a decent full-length of well-constructed ambient right now. Something soothing and somatic. It would get more likes and downloads, I suppose. But, I don’t feel that way. This idle time and solitude have inspired a visceral reaction.

I want to make sure my mind stays alive. Opting for intensity keeps me in the now with an undeniable sonic force. I don’t want to tune the world out. When they announce that more people are hurting or have died, I want to know that and feel it for real.

Ignoring the news and letting Netflix hijack my empathy with the melodramas of fictional people can only lead to something bad down the line. I plan on retaining my emotional life.

So, here’s to feedback, squelches, cracks, and booms. All of it. Snip some diodes in your pedals and point your amps at each other. Turn off every screen you can find. Smoke nothing. Drink nothing. Be radically present. Say everything you think out loud into a microphone. Then say it again louder. Scream it.

Pulverize craniums boldly. Celebrate the resonance of the real and serenade the suffering. Let go of irony and cleverness. Record nothing. Play for your plants and animals. Liberate your intent from ego.

Above all, stay human. Keep feeling. Live loudly.

Prizes included PCBs galore

One of the rules was that the first three words of at least three paragraphs had to start with P, C, and B. The length was also set at a minimum of 650 words.

I was thrilled to win this because the prizes included a bunch of electronics components for building modular audio gear. My plan is to turn this metal lunch box I bought at a thrift store into a portable synthesizer rig.

Running Fluidsynth on a Raspberry PI Zero W

One of the reasons I’ve spent so much time experimenting with audio software on Raspberry Pis is to build standalone music sculpture. I want to make machines that explore time and texture, in addition to generating interesting music.

The first soft synth I tried was Fluidsynth. It’s one of the few that can run headless, without a GUI. I set it up on a Pi 3 and it worked great. It’s used as a basic General MIDI synthesizer engine for a variety of packages and even powers game soundtracks on Android.

This video is a demo of the same sound set used in this project, but on an earlier iteration using a regular Raspberry Pi 3 and a Pimoroni Displayotron HAT. I ended up switching to the smaller Raspberry Pi Zero W and using a webapp instead of a display.

The sounds are not actually generated from scratch, as in a traditional synthesizer. Instead, Fluidsynth draws on a series of predefined sounds collected and mapped in SoundFonts. The .sf2 format was made popular by the now defunct Sound Blaster AWE32 sound card that was ubiquitous on 90s PCs.

Back then, there was a niche community of people producing custom SoundFonts. Because of that, development in library tools and players was somewhat popular. Fluidsynth came long after, but benefits from the early community work and a few nostalgic archivists.

The default SoundFont that comes with common packages is FluidR3_GM. It is a full General MIDI set with 128 instruments and a small variety of drum kits. It’s fine for building a basic keyboard or MIDI playback utility. But, it’s not very high fidelity or interesting.

What hooked me was finding a repository of commercial SoundFonts (no longer active). That site had an amazing collection of 70s-90s synths in SoundFont format, including the Jupiter-8, TB-303, Proteus 1/2/3, Memory Moog, and an E-MU Modular. They were all cheap and I picked up a few to work with. The sound is excellent, and the E-MU Modular in particular sounds pretty rad; it’s the core of the sound set I put together for this.

Raspberry Pi Zero W

For this particular project, I ended up using a Raspberry Pi Zero W for its size and versatility. Besides running Fluidsynth, it also serves up a Node.js webapp over wifi for changing instruments. It’s controllable by any basic USB MIDI keyboard and runs on a mid-sized USB battery pack for around 6 hours. Pretty good for such a tiny footprint and it costs around $12.

Setting it up

If you want to get something working fast or just want to make a kid’s keyboard, setup is a breeze.

After configuring the Pi Zero and audio:

sudo apt-get install fluidsynth

That’s it.

But, if you want more flexibility or interactivity, things get a bit more complex. The basic setup is the same as what I laid out in my ZynAddSubFX post.

Download Jessie Lite and find a usable Micro SD card. The following is for Mac OS. Instructions for Linux are similar and Windows details can be found on the raspberrypi.org site.

Insert the SD card into your computer and find out what designation the OS gave it. Then unmount it and write the Jessie Lite image to it.

diskutil list

/dev/disk1 (external, physical):
 #: TYPE NAME SIZE IDENTIFIER
 0: FDisk_partition_scheme *8.0 GB disk1
 1: Windows_FAT_32 NO NAME 8.0 GB disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

Pull the card out and reinsert it. Then, add two files to the boot partition: one to enable ssh and one to set up wifi on first boot, so the Pi can be configured headless.

cd /Volumes/boot
touch ssh

sudo nano wpa_supplicant.conf

Put this into the file you just opened.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
}
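For repeat installs, the two boot-partition tweaks above can be collapsed into one step. This is a sketch of mine, not from the original post; the function name is made up, /Volumes/boot is just the macOS example mount point, and the ssid/psk placeholders still need filling in.

```shell
# Sketch: prepare a freshly-imaged card's boot partition in one call.
# The target directory is passed in, so this works on any mount point.
prep_boot_partition() {
  boot=$1
  # An empty file named "ssh" tells Raspbian to enable the ssh server on first boot
  touch "$boot/ssh"
  # Wifi credentials; placeholders left as-is, per the post
  cat > "$boot/wpa_supplicant.conf" <<'EOF'
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="<your_ssid>"
    psk="<your_password>"
}
EOF
}

# Usage on macOS, after the dd step:
# prep_boot_partition /Volumes/boot
```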

Put the card in the Pi Zero and power it up, then configure the box with raspi-config. One trick I learned is not to change the root password and expand the file system at the same time. I’m not sure what the underlying problem is, but doing both in one session often corrupts the ssh password.

Update the Pi:

sudo apt-get update
sudo apt-get upgrade

Fluidsynth needs a higher thread priority than the default, so I use the same approach as setting up Realtime Priority. It might be overkill, but it’s consistent with the other Pi boxes I set up. Add the user “pi” to the group “audio” and then set expanded limits.

Pi commands

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

The file should be empty. Add this to it.

@audio - rtprio 80
@audio - memlock unlimited

If you’re not using an external USB audio dongle or interface, you don’t need to do this. But, after you hear what the built-in audio sounds like, you’ll want something like this.

sudo nano /boot/config.txt

Comment out the built-in audio driver.

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

sudo nano /etc/asound.conf

Set the USB audio to be default. It’s useful to use the name of the card instead of the stack number.

pcm.!default {
  type hw
  card Device
}
ctl.!default {
  type hw
  card Device
}

Reboot and then test your setup.

sudo reboot

aplay -l

lsusb -t

speaker-test -c2 -twav

A voice should announce the left and right channels. After verifying that, it’s time to set up Fluidsynth.

I compile Fluidsynth from the git repo to get the latest version. The version in the default Raspbian repository used by apt-get is 1.1.6-2; the latest is 1.1.6-4. The reason the newer build matters is Telnet.

That’s right, Fluidsynth uses Telnet to receive commands and as its primary shell. It’s a classic text-based network protocol used for remote administration. Think WarGames.

Telnet

But, there’s a bug in the standard package that causes remote sessions to get rejected in Jessie. It’s been addressed in the later versions of Fluidsynth. I needed it to work to run the web app.
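A quick way to decide whether you need the source build is to compare versions with sort -V. This is a sketch of mine, not from the post; the two version strings come from the paragraphs above, and you’d substitute the output of your actual install.

```shell
# Sketch: compare the installed Fluidsynth version against the first
# release with the Telnet fix (version strings from the text above)
need="1.1.6-4"
have="1.1.6-2"   # substitute the version apt-get actually installed

# sort -V orders version strings; if "need" sorts first, then have >= need
if [ "$(printf '%s\n%s\n' "$have" "$need" | sort -V | head -n 1)" = "$need" ]; then
  echo "packaged version is new enough"
else
  echo "build from source for the Telnet fix"
fi
```

With the versions shown it prints "build from source for the Telnet fix".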

Grab the dependencies and then compile Fluidsynth. It’s not complicated, but there are some caveats.

sudo apt-get install git libgtk2.0-dev cmake cmake-curses-gui build-essential libasound2-dev telnet

git clone git://git.code.sf.net/p/fluidsynth/code-git

cd code-git/fluidsynth
mkdir build
cd build
cmake ..
make
sudo make install

The install script misses a key path definition that aptitude usually handles, so I add it manually. It’s needed so libfluidsynth.so.1 can be found. If you see an error about that file, this is why.

sudo nano /etc/ld.so.conf

Add this line:

/usr/local/lib

Then:

sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib

Now we need to grab the default SoundFont. This is available easily with apt-get.

sudo apt-get install fluid-soundfont-gm

That’s it for Fluidsynth. It should run fine and you can test it with a help parameter.

fluidsynth -h

Now to install Node.js and the webapp to change instruments with.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back into an ssh session. That makes nvm available.

nvm install v6.10.1

Grab the webapp from my repo and install it.

git clone https://github.com/lucidbeaming/Fluidsynth-Webapp.git fluidweb

cd fluidweb

npm install

Find the IP address of your Pi on your local network. Visit <ip address> port 7000 on any other device.

http://192.168.1.20:7000

If Fluidsynth isn’t running, it will display a blank page. If it is running, it will dynamically list all the available instruments. This won’t be much of a problem once the launch script is set up, since that script launches Fluidsynth, connects any attached keyboards through ALSA, and starts the webapp.

Create the script and add the following contents. It’s offered as a guideline and probably won’t work if copied and pasted. You should customize it according to your own environment, devices, and tastes.

sudo nano fluidsynth.sh
#!/bin/bash

if pgrep -x "fluidsynth" > /dev/null
then
  echo "fluidsynth already flowing"
else
  fluidsynth -si -p "fluid" -C0 -R0 -r48000 -d -f ./config.txt -a alsa -m alsa_seq &
fi

sleep 3

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
  aconnect 'Arturia MINILAB':0 'fluid':0
  echo "MINIlab connected"
elif [[ $mpk ]]
then
  aconnect 'MPKmini2':0 'fluid':0
  echo "MPKmini connected"
elif [[ $mio ]]
then
  aconnect 'mio':0 'fluid':0
  echo "Mio connected"
else
  echo "No known midi devices available. Try aconnect -l"
fi

cd fluidweb
node index.js
cd ..

exit

Note that I included the settings -C0 -R0 in the Fluidsynth command. Those turn off chorus and reverb respectively, which saves a bit of processor power; they don’t sound good anyway.

Now, create a configuration file for Fluidsynth to start with.

sudo nano config.txt
echo "Exploding minds"
gain 3
load "./soundfonts/lucid.sf2"
select 0 1 0 0
select 1 1 0 1
select 2 1 0 2
select 3 1 0 3
select 4 1 0 4
select 5 1 0 5
select 6 1 0 6
select 7 1 0 7
select 8 1 0 8
select 10 1 0 9
select 11 1 0 10
select 12 1 0 11
select 13 1 0 12
select 14 1 0 13
select 15 1 0 14
echo "bring it on"

The select command chooses instruments for various channels.

select <channel> <soundfont> <bank> <program>

Note that channel 9 is the drumkit.
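The block of select lines in config.txt above follows a simple pattern, so it can be generated instead of typed. A sketch (not from the post): soundfont 1, bank 0, programs assigned in order, with channel 9 skipped for the drum kit.

```shell
# Generate the select lines used in config.txt above.
# Channels 0-15, channel 9 skipped (reserved for the drum kit),
# programs numbered consecutively across the remaining channels.
prog=0
for ch in $(seq 0 15); do
  if [ "$ch" -ne 9 ]; then
    echo "select $ch 1 0 $prog"
    prog=$((prog + 1))
  fi
done
```

Redirect the output into config.txt, or paste it between the load and echo lines.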

To get the launch script to run on boot (or session), it needs to have the right permissions first.

sudo chmod a+x fluidsynth.sh

Then, add the script to the end of .bash_profile. I do that instead of other options for running scripts at boot so that Fluidsynth and Node.js run as user processes for “pi” instead of root.

sudo nano .bash_profile

At the end of the file…

./fluidsynth.sh

Reboot the Pi Zero and when it gets back up, it should run the script and you’ll be good to go. If you run into problems, a good place to get feedback is LinuxMusicians.com. They have an active community with some helpful folks.

Raspberry Pi Zero W in a case

Here’s another quick demo I put together. Not much in terms of my own playing, haha, but it does exhibit some of the sounds I’m going for.

Setting up a Raspberry Pi 3 to run ZynAddSubFX in a headless configuration

Most of my music is production oriented and I don’t have a lot of live performance needs. But, I do want a useful set of evocative instruments to take to strange places. For that, I explored the options available for making music with Raspberry Pi minicomputers.

The goal of this particular box was to have the Linux soft-synth ZynAddSubFX running headless on a battery powered and untethered Raspberry Pi, controllable by a simple MIDI keyboard and an instrument switcher on my phone.

Getting things to run on the desktop version of Raspbian and ZynAddSubFX was pretty easy, but stripping away all the GUI and introducing command line automation with disparate multimedia libraries was a challenge. Then, opening it up to remote control over wifi was a rabbit hole of its own.

But, I got it working and it sounds pretty amazing.

Setting up the Raspberry Pi image

I use Jessie Lite because I don’t need the desktop environment. It’s the same codebase without a few bells and whistles. When downloading from raspberrypi.org, choose the torrent for a much faster transfer than getting the ZIP directly from the site. These instructions below are for Mac OS X, using Terminal.

diskutil list

/dev/disk1 (external, physical):
#:                       TYPE NAME                    SIZE       IDENTIFIER
0:        FDisk_partition_scheme                        *8.0 GB     disk1
1:                 DOS_FAT_32 NO NAME                 8.0 GB     disk1s1

diskutil unmountDisk /dev/disk1

sudo dd bs=1m if=2017-04-10-raspbian-jessie-lite.img of=/dev/rdisk1

After the image gets written, I create an empty file on the boot partition to enable ssh login.

cd /Volumes/boot
touch ssh

Then, I set the wifi login so it connects to the network on first boot.

sudo nano wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="<your_ssid>"
 psk="<your_password>"
 }

The card gets removed from the laptop and inserted into the Pi. Then, after it boots up I go through the standard setup from the command line. The default login is “pi” and the default password is “raspberry”.

sudo raspi-config

[enable ssh and i2c, expand the filesystem, set locale and keyboard]

After setting these, I let it restart when prompted. When it comes back up, I update the codebase.

sudo apt-get update
sudo apt-get upgrade

Base configuration

Raspberry config for ZynAddSubFX

ZynAddSubFX is greedy when it comes to processing power and benefits from getting a bump in priority and memory resources. I add the default user (pi) to the group “audio” and assign the augmented resources to that group, instead of the user itself.

sudo usermod -a -G audio pi

sudo nano /etc/security/limits.d/audio.conf

...
@audio - rtprio 80
@audio - memlock unlimited
...

The Raspbian version of Jessie Lite has CPU throttles, or governors, set to conserve power and reduce heat from the CPU. By default, the governor is “ondemand”: voltage to the CPU is reduced until load hits about 90% of capacity, which triggers a voltage (and speed) increase to handle it. I change that to “performance” so full horsepower is always available.

This is done in rc.local:

sudo nano /etc/rc.local
...
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
...

Note that it gets set for all four cores, since the Raspberry Pi is multi-core. For more info about governors and even overclocking, this is a good resource.
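The four echo lines can also be written as a loop over whatever cores are present, rather than hard-coding cpu0 through cpu3. This is a variation of mine, not the post’s script; the SYSFS_ROOT variable is a hypothetical override that only exists so the loop can be dry-run against a fake directory tree.

```shell
# Variation: set the governor on every core the kernel exposes.
# SYSFS_ROOT defaults to the real /sys; point it elsewhere for a dry run.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}
for gov in "$SYSFS_ROOT"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
  if [ -w "$gov" ]; then
    echo performance > "$gov"
  fi
done
```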

Virtual memory also gets downgraded so there is little swap activity. ZynAddSubFX is power hungry but doesn’t use much memory, so it doesn’t need much swap. Note that sysctl -w does not persist across reboots; add vm.swappiness=10 to /etc/sysctl.conf to make the setting permanent.

sudo /sbin/sysctl -w vm.swappiness=10

Now, to set up the audio interface. For my ZynAddSubFX box, I use an IQaudio Pi-DAC+. I’ve also used a standard USB audio interface and have instructions for that in my post about the Pi Zero. Raspbian uses Device Tree overlays to load I2C, I2S, and SPI interface modules. So, instead of separate drivers to install, I just edit config.txt to include the appropriate modules for the Pi-DAC+. Note that I also disabled the crappy built-in audio by commenting out “dtparam=audio=on”. This helps later on when setting the default audio device used by the system.

sudo nano /boot/config.txt

...

# Enable audio (loads snd_bcm2835)
# dtparam=audio=on

dtoverlay=i2s-mmap
dtoverlay=hifiberry-dacplus

...

For Jack to grab hold of the Pi-DAC+ for output, the default user (pi) needs a DBus security policy for the audio device.

sudo nano /etc/dbus-1/system.conf

...
<!-- Only systemd, which runs as root, may report activation failures. -->
<policy user="root">
<allow send_destination="org.freedesktop.DBus"
    send_interface="org.freedesktop.systemd1.Activator"/>
</policy>
<policy user="pi">
    <allow own="org.freedesktop.ReserveDevice1.Audio0"/>
</policy>
...

Next, ALSA gets a default configuration for which sound device to use. Since I disabled the built-in audio earlier, the Pi-DAC+ is now “0” in the device stack.

sudo nano /etc/asound.conf

pcm.!default {
  type hw
  card 0
}
ctl.!default {
  type hw
  card 0
}

sudo reboot

Software installation

ZynAddSubFX has thick dependency requirements, so I collected the installers in a bash script. Most of it was lifted from the Zynthian repo. Download the script from my Github repo to install the required packages and run it. The script also includes rtirq-init, which can improve performance on USB audio devices and give ALSA some room to breathe.

wget https://raw.githubusercontent.com/lucidbeaming/pi-synths/master/ZynAddSubFX/required-packages.sh

sudo chmod a+x required-packages.sh

./required-packages.sh

Now the real meat of it all gets cooked. There are some issues with build optimizations for SSE and Neon (incompatible with ARM processors), so you’ll need to disable those in the cmake configuration.

git clone https://github.com/zynaddsubfx/zynaddsubfx.git
cd zynaddsubfx
mkdir build
cd build
cmake ..
ccmake .
[remove SSE parameters and set NoNeonplease=ON]
make
sudo make install

It usually takes 20-40 minutes to compile. Now to test it out and get some basic command-line options listed.

zynaddsubfx -h

Usage: zynaddsubfx [OPTION]

-h , --help                      Display command-line help and exit
-v , --version                   Display version and exit
-l file, --load=FILE             Loads a .xmz file
-L file, --load-instrument=FILE  Loads a .xiz file
-r SR, --sample-rate=SR          Set the sample rate SR
-b BS, --buffer-size=BS          Set the buffer size (granularity)
-o OS, --oscil-size=OS           Set the ADsynth oscil. size
-S , --swap                      Swap Left <--> Right
-U , --no-gui                    Run ZynAddSubFX without user interface
-N , --named                     Postfix IO Name when possible
-a , --auto-connect              AutoConnect when using JACK
-A , --auto-save=INTERVAL        Automatically save at interval (disabled with 0 interval)
-p , --pid-in-client-name        Append PID to (JACK) client name
-P , --preferred-port            Preferred OSC Port
-O , --output                    Set Output Engine
-I , --input                     Set Input Engine
-e , --exec-after-init           Run post-initialization script
-d , --dump-oscdoc=FILE          Dump oscdoc xml to file
-u , --ui-title=TITLE            Extend UI Window Titles

The web app

Webapp to switch ZynAddSubFX instruments

I also built a simple web app to switch instruments from a mobile device (or any browser, really). It runs on Node.js and leverages Express, Socket.io, OSC, and jQuery Mobile.

First, a specific version of Node is needed and I use NVM to grab it. The script below installs NVM.

curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | sh

Logout and log back in to have NVM available to you.

nvm install v6.10.1

My Node app is in its own repo. The dependencies Express, Socket.io, and OSC will be installed with npm from the included package.json file.

git clone https://github.com/lucidbeaming/ZynAddSubFX-WebApp.git
cd ZynAddSubFX-WebApp
npm install

Test the app from the ZynAddSubFX-WebApp directory:

node index.js

On a phone/tablet (or any browser) on the same wifi network, go to:

http://<IP address of the Raspberry Pi>:7000

Image of webapp to switch instruments

You should see a list of instruments to choose from. It won’t do anything yet, but getting the list to come up is a sign of initial success.

Now, for a little secret sauce. The launch script I use came from achingly long hours of trial and error. The Raspberry Pi is a very capable machine but has limitations. The command-line parameters I use come from the best balance of performance and fidelity I could find. If ZynAddSubFX gets rebuilt with better multimedia processor optimizations for ARM, this could change. I’ve read that improvements are in the works. Also, this runs ZynAddSubFX without Jack, using plain ALSA. I was able to get close to realtime priority with the installation of rtirq-init.

#!/bin/bash

export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket

if pgrep zynaddsubfx
then
  echo "Zynaddsubfx is already singing"
  exit 0
else
  zynaddsubfx -U -A=0 -o 512 -r 96000 -b 512 -I alsa -O alsa -P 7777 -L "/usr/local/share/zynaddsubfx/banks/Choir and Voice/0034-Slow Morph_Choir.xiz" &
  sleep 4

  if pgrep zynaddsubfx
  then
    echo "Zyn is singing"
  else
    echo "Zyn blorked. Epic Fail."
  fi
fi

mini=$(aconnect -o | grep "MINILAB")
mpk=$(aconnect -o | grep "MPKmini2")
mio=$(aconnect -o | grep "mio")

if [[ $mini ]]
then
  aconnect 'Arturia MINILAB':0 'ZynAddSubFX':0
  echo "Connected to MINIlab"
elif [[ $mpk ]]
then
  aconnect 'MPKmini2':0 'ZynAddSubFX':0
  echo "Connected to MPKmini"
elif [[ $mio ]]
then
  aconnect 'mio':0 'ZynAddSubFX':0
  echo "Connected to Mio"
else
  echo "No known midi devices available. Try aconnect -l"
fi

exit 0

I have three MIDI controllers I use for these things, and this script checks for each of them in order of priority and connects the first one found to ZynAddSubFX. Also, I have a few “sleep” statements in there that I’d like to remove when I find a way of including graceful fallback and error reporting in a bash script. For now, this works fine.
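One way to make the fixed sleep more graceful is a bounded poll: keep checking for the process and give up after a timeout. This is a sketch of mine, not part of the post’s script; the function name is made up.

```shell
# Sketch: poll for a process by name instead of sleeping a fixed 4 seconds.
# Returns 0 once pgrep finds it, 1 after max tries (default 10, one per second).
wait_for_proc() {
  name=$1
  max=${2:-10}
  tries=0
  until pgrep -x "$name" > /dev/null; do
    if [ "$tries" -ge "$max" ]; then
      return 1
    fi
    sleep 1
    tries=$((tries + 1))
  done
  return 0
}

# The launch script could then use:
# wait_for_proc zynaddsubfx 10 || echo "Zyn blorked. Epic Fail."
```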

I add this line to rc.local to launch Zynaddsubfx automatically on boot and connect MIDI.

su pi -c '/home/pi/zynlaunch.sh >> /tmp/zynaddsubfx.log 2>&1 &'

Unfortunately, Node won’t launch the web app from rc.local, so I add some conditionals to /home/pi/.profile to launch the app after the boot sequence.

if pgrep zynaddsubfx
then
echo Zynaddsubfx is singing
fi

if pgrep node
then
echo Zyn app is up
else
node /home/pi/ZynAddSubFX-WebApp/index.js
fi

Making music

This ended up being a pad and drone instrument in my tool chest. ZynAddSubFX is really an amazing piece of software and can do much more than I’m setting up here. The sounds are complex and sonically rich. The GUI version lets you change or create instruments with a deep and precise set of graphic panels.

For my purposes, though, I want something to play live with that has very low resource needs. This little box does just that.

Raspberry Pi 3 with Pi-DAC+