
Happy new year! A happy recap and future plans

We wish you a very happy new year! We had quite a year ourselves, and the future of the EEGsynth looks very promising indeed. I would like to give a short recap of where we are now, and where we will be focusing our hardware and software development in the next couple of months.

It was only about a year ago that we started with a vague idea but vivid inspiration to collaborate on projects that would break us out of our professional work, while still using our personal knowledge and experience. We realized that this would involve a special kind of collaboration where we needed each other to materialize our imagination. We knew we needed to rely on each other’s knowledge and creativity, and trust that in doing so 1+1 would become more than 2, more than we could imagine alone. Most of all, we wanted to have fun and already knew we could rely on each other for that.

During a long drive along the Brazilian coast, after the three of us (Jean-Louis Huhta, Per Huttner and me) did an EEG, music and theatre collaboration with the São Paulo theatre school (post), Per and I drafted a first statement of these ambitions. We wanted to continue the natural chemistry and strange mix of backgrounds that the three of us had. Not much later, Per was able to secure us a small grant from Kulturbryggan (Stockholm) to start working out our plans.

Per and Jean-Louis have known each other since their teens, and I had known Per for many years as well, having travelled the globe together with OuUnPo. Jean-Louis and I got to know each other by visiting each other’s studio/lab and talking more about the brain and music. We started by recording Jean-Louis’ brain activity using magnetoencephalography (MEG) at the National Swedish MEG facility (Stockholm) and converting it to sound (post), which we later used in a performance in the largest club in Stockholm (post).

We realized that Jean-Louis’ work with analogue synthesizers relates to my work in neuroscience on several levels that we have since been exploring. For example, the analysis (de-composition) of physiological time series (e.g. EEG and MEG measured from the brain) is conceptually the opposite of sound synthesis (composition), yet both often use similar technical and mathematical solutions. In other words, we realized that underneath it all we shared a common language that we had each been speaking in our own way. However, while sound synthesis uses the same principles as neuroscience, analogue synthesisers in particular have an instant, hands-on character that analysis in academic neuroscience simply lacks. In neuroscience we can spend weeks grinding down data to get a couple of static, 2D (but often colourful) images. But with some quick patches of cables and turning of knobs, an analogue synthesiser can drive a sine-wave through grating complexities in an instant, creating signal characteristics not dissimilar from the brain activity we try to decode.
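To make that mirror image concrete, here is a minimal Python sketch (an illustration only, not EEGsynth code; the frequencies and amplitudes are made up): composition builds a signal from sine components, exactly as an additive synthesiser would, while de-composition recovers those components with a Fourier transform, just as one would estimate a power spectrum from EEG or MEG data.

```python
import numpy as np

fs = 1000                          # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples

# Composition (synthesis): build a signal from known sine components,
# e.g. a 10 Hz "alpha-like" oscillation plus a weaker 20 Hz harmonic.
signal = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

# De-composition (analysis): recover those components with a Fourier
# transform, as in spectral analysis of EEG/MEG recordings.
spectrum = np.abs(np.fft.rfft(signal)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

for f in (10, 20):
    print(f"amplitude at {f} Hz: {spectrum[freqs == f][0]:.2f}")
```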

The opportunity soon came to see whether other neuroscientists would be as excited as I was to see their analysis come to life, listening for the first time to the effects of filters and envelopes, creating frequency coupling, amplitude modulations, etc. – all those things neuroscientists think about but only read about. I was invited to organise a research retreat for a German neurology group in a beautiful French château. One of the events I organised consisted of an introduction to sound synthesis, followed by a workshop in which neuroscientists and neurologists ‘simulated’ brain processes using two analogue synthesizers purchased and borrowed for the purpose (post). Indeed, it was a lot of fun and very interesting, validating our ideas about the underlying common language and showing us what a playful didactic tool a synthesiser can be (post, post).

Later that summer we decided we needed a short retreat together to fully focus on reconnecting and to see whether we could get to a tangible start with the EEGsynth: actually making music with brain activity! We convened at a friend’s place near Athens, spending the mornings and evenings working and breaking up the day with nice long lunches at the sunny beach (post).

By this point we had purchased an OpenBCI EEG device. OpenBCI is a wonderful crowd-funded open-source project that recently completed their hardware, although their software development is relatively less organised and mature at this point. At work, meanwhile, I spend my days analysing data with the open-source FieldTrip MATLAB toolbox, which is developed for and by neuroscientists. Even better, its main developer, Robert Oostenveld, is a friend and colleague who actively develops very smart and flexible real-time solutions for FieldTrip. It was not hard to get Robert enthusiastic about our project! So while in Athens I coded some first real-time analyses using the FieldTrip framework, which together with Jean-Louis and Per developed into something that could be used for real-time sound synthesis and performance (post).

To communicate with the analogue synthesizers we had started using MIDI. While MIDI is relatively standard and supported across the board (it worked, although a bit buggy initially), we realized it did not fit so well with the artistic style that is the strength of analogue modular synthesis: modules in a synthesizer rack communicate via continuous control voltages (CV) and on/off switches (Gate, also implemented as a voltage switching between 0 and 5 volts). This allows maximum flexibility, creativity and complexity. In the end, we knew we could not compromise on this aspect and needed to generate the CV/Gate signals directly ourselves. In any case, despite the mountain of work ahead, after Athens we had the first beginnings of an EEGsynth.
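To illustrate the difference, here is a hypothetical sketch using the mido Python MIDI library (the port selection and the EEG feature are made up for the example): MIDI quantises a control signal into 128 discrete steps, whereas a CV output maps the same feature onto a continuous voltage range.

```python
import mido  # a common Python MIDI library: pip install mido python-rtmidi

# Open the first available MIDI output; the actual port depends on your setup.
out = mido.open_output(mido.get_output_names()[0])

alpha_power = 0.73  # hypothetical normalised EEG feature in the range 0-1

# MIDI quantises the control signal into 128 discrete steps (0-127) ...
out.send(mido.Message('control_change', control=1,
                      value=int(alpha_power * 127)))

# ... whereas CV would map the same feature onto a continuous 0-5 V range,
# which is what we eventually want to generate directly:
cv_volts = alpha_power * 5.0
```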

By now we had received a second grant from Innovativ Kultur (Stockholm) and were working towards a first public performance. Carima Neusser, a dancer, spent many evenings with us trying out the EEGsynth and discussing its role within the context of contemporary dance (post, post). Finally, we performed at Jean-Louis’ studio for a selected audience from the world of neuroscience, art, dance and music (post). The performance was a success, and even more so the discussion that followed. I’ve never before witnessed such a wonderfully enthusiastic interaction between different disciplines.

At the end of the year I was exchanging Stockholm for Paris, but before leaving I was able to spend a weekend in Stockholm brainstorming with Robert, who by now was fully engaged in the project. It was undeniable to us that while the FieldTrip implementation in MATLAB allowed for rapid development, it would never result in an easily usable, stand-alone, plug-and-play device. We were already thinking of implementing the EEGsynth on a Raspberry Pi, but Robert identified another challenge: we had to design a code infrastructure that would make the device easy to use for laymen, while also allowing it to be developed quickly, interactively and in a scalable way, both in terms of complexity and in the number of contributors. In the end we want to release this project to the neuroscience and art DIY communities, as well as be able to hand over plug-and-play devices to artists for music and artistic experimentation. After watching the many hours of I Dream Of Wires, we were also quite geeked out about the possibility of implementing the EEGsynth as a Eurorack module, which we knew the analogue synthesizer community would embrace with open arms.

After breaking it all down, and after many discussions, Robert was able to conceive of a theoretical code and real-time processing infrastructure that not only covered all these issues, but which we were able to benchmark before he left. In short, the EEGsynth will be coded in Python and run on a Raspberry Pi. Importantly, the code will be split up into separate parallel modules, similar to a modular synthesiser. Communication between modules will happen via simulated (virtual) CV/Gate signals, allowing the different modules to be patched together in many different ways. Although we will start with the basic necessary virtual modules, the number of modules can be extended and developed independently. For example, one of those modules will take care of generating physical CV/Gate signals with a USB-connected serial-to-CV/Gate converter that Robert already made, and which now exists in 1-, 4- and 8-channel versions (post).
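As a rough illustration of this idea (a minimal sketch only; the module names, the shared “patch bay” dict and the use of threads are our assumptions here, not the actual design), two parallel Python modules can be patched together through named virtual CV/Gate signals:

```python
import random
import threading
import time

# A stand-in "patch bay": virtual CV/Gate signals are just named values that
# any module can write or read. The real implementation may use a proper
# message-passing layer; this dict is only for illustration.
patch = {'cv1': 0.0, 'gate1': 0}

def eeg_module():
    """Pretend analysis module: writes a virtual CV from an EEG feature."""
    for _ in range(10):
        patch['cv1'] = random.random() * 5.0       # hypothetical 0-5 V signal
        patch['gate1'] = int(patch['cv1'] > 2.5)   # threshold acts as a gate
        time.sleep(0.1)

def output_module():
    """Pretend output module: would forward the virtual signals to a
    serial-to-CV/Gate converter (the protocol here is left out)."""
    for _ in range(10):
        print(f"CV1 = {patch['cv1']:.2f} V, Gate1 = {patch['gate1']}")
        time.sleep(0.1)

# Each module runs in parallel, "patched" together via the named signals.
for target in (eeg_module, output_module):
    threading.Thread(target=target).start()
```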

While I’ve been busy moving and changing jobs, Robert has made enormous progress in both hardware and software. This week, while visiting the Netherlands for Christmas, we met at his house in Nijmegen, where EEGsynth parts could be seen strewn all around. As you can see from his recent blogposts, he has been doing a lot of cutting, engineering and soldering lately. On top of this, to demonstrate that the underlying code infrastructure works as planned, he made a ‘simple’ virtual sequencer that is virtually patched to a virtual synthesiser. The virtual sequencer is furthermore patched to a MIDI keyboard, while the virtual synthesiser can be controlled with a MIDI controller – changing waveforms, LFO, VCA and even an ADSR envelope! See the video and images below.
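For a flavour of what such a virtual patch does, here is a toy Python sketch (our illustration, not Robert’s actual code; the sampling rate, note pattern and envelope times are arbitrary): a ‘simple’ step sequencer drives a sine oscillator whose output is shaped by an ADSR envelope, which is essentially a VCO patched through a VCA.

```python
import numpy as np

fs = 8000  # audio sampling rate for this toy example

def adsr(n, attack=0.1, decay=0.1, sustain=0.6, release=0.2):
    """Piecewise-linear ADSR envelope over n samples (times as fractions)."""
    a, d, r = int(n * attack), int(n * decay), int(n * release)
    s = n - a - d - r
    return np.concatenate([
        np.linspace(0, 1, a),          # attack: ramp up
        np.linspace(1, sustain, d),    # decay: fall to sustain level
        np.full(s, sustain),           # sustain: hold
        np.linspace(sustain, 0, r),    # release: fade out
    ])

# The 'sequencer': a repeating pattern of pitches in Hz.
sequence = [220, 277, 330, 440]
note_len = fs // 4  # quarter-second notes

# The 'synthesiser': a sine VCO, amplitude-shaped by the ADSR 'VCA'.
audio = np.concatenate([
    np.sin(2 * np.pi * f * np.arange(note_len) / fs) * adsr(note_len)
    for f in sequence
])
# 'audio' now holds four enveloped notes, ready to be sent to a sound card.
```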

It is safe to say we have come a long way this year. In the next couple of months we are planning the following:

  1. We will meet for several days in Paris in February, where we will record our first live tracks based on real-time EEG
  2. Several public performances in the USA are planned for spring (West Coast and Arizona)
  3. A first stand-alone prototype implemented as a Eurorack module
  4. Finally decide whether we will call it the BrainSynth or EEGsynth and work on some consistency 🙂

We are very much looking forward to a new year of fun, art and science, and hope to meet you along the way!
