
Athens working session – technical proceedings

During our work meeting in Athens, we explored and tested the EEGsynth: EEG recordings controlling analogue synthesizers. What follows is a summary of how we solved the main technical and practical issues we encountered, and which ones still need to be dealt with. More about the output of the meeting will follow in the next couple of days.


Design decisions

I suggest that the EEGsynth project becomes a collection of different “patches” that are executed and run independently, each with its own settings for common parameters such as the analysis window size (see below), the MIDI output channel, and the MIDI input and output codes, as well as patch-specific settings such as the frequency of interest for frequency-based patches and the amplitude threshold (in %) for threshold detection on time-courses. The reasons for this are to:

  • Simplify use, by only requiring configuration of the patches that are actually used.
  • Make the most efficient use of processor and memory, since nothing more is run than what the user requires.
  • Mirror the modular approach of what it connects to – the modular synth.
  • Simplify development by splitting the code into manageable pieces, which is efficient and workable with multiple developers, especially those in the wider community: we can maintain and develop the main ‘shell’ and put out examples and specific implementations (patches).

Currently, for practical and development purposes, the MATLAB implementation uses a common data input pipeline (a first cfg struct), after which a per-channel cfg struct (chancfg{i}) determines which patch is run and with which parameters. These patches run iteratively and are displayed together in subplots. Two patches seem generic enough to cover most initial use (a configuration sketch follows the list below):

  • Display of the FFT range, and output of the power at a specific frequency (chancfg{i}.type = ‘pow2cv’)*
  • Display of the time-course, and output of a trigger when the amplitude crosses a threshold value (chancfg{i}.type = ‘amp2gate’)*
  • (*) I know that ‘amplitude’ and ‘power’ do not really distinguish the types of analyses. Suggestions are welcome.
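As a concrete illustration, a minimal configuration for these two patches could look something like the sketch below. Apart from cfg.blocksize, cfg.windowsize and chancfg{i}.type, which are mentioned in this post, the field names and values are hypothetical and only meant to show the idea of common versus patch-specific settings.

    % hypothetical configuration sketch; only blocksize, windowsize and type
    % correspond to fields mentioned in this post, the rest is illustrative

    % common data input settings
    cfg            = [];
    cfg.datasource = 'buffer://localhost:1972';  % FieldTrip buffer filled by openbci2ft
    cfg.blocksize  = 0.1;                        % read new data in blocks of 0.1 s
    cfg.windowsize = 2;                          % analysis window of 2 s (see below)

    % patch 1: band power to a MIDI control value
    chancfg{1}             = [];
    chancfg{1}.type        = 'pow2cv';
    chancfg{1}.channel     = 1;
    chancfg{1}.freq        = [8 12];             % frequency band of interest, in Hz
    chancfg{1}.midichannel = 1;

    % patch 2: threshold detection on a time-course (e.g. ECG) to a MIDI trigger
    chancfg{2}             = [];
    chancfg{2}.type        = 'amp2gate';
    chancfg{2}.channel     = 2;
    chancfg{2}.threshold   = 0.6;                % fraction of the normalized amplitude range
    chancfg{2}.midichannel = 2;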

After testing these two and imagining other foreseeable ones, the following pipeline was implemented (a minimal sketch of one iteration of this loop is given after the numbered list):

  1. New data is read in blocks that are as small as possible (cfg.blocksize). This is important to make the process as close to real-time as possible. It is especially apparent when using threshold detection as in ‘amp2gate’, which needs to trigger as close as possible to the actual event, e.g. a heartbeat or eye-blink. A block size of 50-100 ms seems ideal, although currently this might be unattainable due to the speed of openbci2ft and the delay caused by the analysis time.
  2. The new data updates a longer buffer on which the analysis is done (cfg.windowsize). This seems critical and very useful because it:
    • Allows a reasonable frequency resolution.
    • Allows preprocessing such as demeaning, polynomial removal, and filtering of time-courses that fluctuate, drift, etc.
    • Allows a user-friendly and understandable display of the time-course.
    • Creates a natural smoothing and stabilization of the frequency analysis.
  3. A ‘history’ is kept of each analysis window, at each iteration (i.e. after loading new data). This seems critical and very useful because it:
    • Allows automatic calibration of threshold values or normalization of the amplitude of time-courses (as implemented in the current ‘amp2gate’).
    • Might serve a similar function in certain variations of ‘pow2cv’, although it seemed more flexible and reliable to use manual scaling – especially once MIDI control is integrated.
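To make this concrete, here is a minimal sketch of one iteration of such a loop, using the standard FieldTrip reading functions and assuming the buffer runs at its default address. The normalization, the field names beyond those mentioned above, and the omission of the actual MIDI output are simplifications, not the actual implementation.

    % minimal sketch of the read/analyse loop (stop with Ctrl-C); assumes a
    % FieldTrip buffer at the default address, filled by openbci2ft
    ftbuffer  = 'buffer://localhost:1972';
    hdr       = ft_read_header(ftbuffer);
    blocksmp  = round(cfg.blocksize  * hdr.Fs);   % e.g. 0.1 s -> 25 samples at 250 Hz
    windowsmp = round(cfg.windowsize * hdr.Fs);   % e.g. 2 s   -> 500 samples
    window    = zeros(1, windowsmp);              % sliding analysis window
    history   = [];                               % per-iteration history for auto-calibration
    endsample = hdr.nSamples;                     % start from the first new sample

    while true
      hdr = ft_read_header(ftbuffer);             % poll for newly arrived samples
      if hdr.nSamples >= endsample + blocksmp
        begsample = endsample + 1;
        endsample = endsample + blocksmp;
        dat = ft_read_data(ftbuffer, 'header', hdr, 'begsample', begsample, ...
                           'endsample', endsample, 'chanindx', chancfg{1}.channel);
        window = [window(blocksmp+1:end) dat];    % update the longer analysis window
        window = window - mean(window);           % demean (could also detrend/filter here)

        % 'pow2cv': power in the frequency band of interest from an FFT over the window
        pow_spectrum = abs(fft(window)).^2;
        freqax       = (0:windowsmp-1) * hdr.Fs / windowsmp;
        foi          = freqax >= chancfg{1}.freq(1) & freqax <= chancfg{1}.freq(2);
        pow          = mean(pow_spectrum(foi));

        % keep a history of the analysis for automatic calibration/normalization
        history = [history pow];
        cv      = (pow - min(history)) / (max(history) - min(history) + eps);
        % ... cv (0-1) would then be scaled to a MIDI value and sent out here ...
      end
      pause(0.01);
    end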
  • The gate control signal is so far implemented only as a ‘trigger’, i.e. a MIDI note-on when a threshold is reached, which is quickly turned off again. The gate could also be used more temporally, i.e. using the duration of ‘on’ and ‘off’. Food for thought.
  • An issue that is very much open for further development is the implementation of the CV control signal. We have tested two options, using either the pitch value or the velocity value (a mapping sketch follows this list):
    • The pitch value has the benefit of clarity and simplicity. It seems easier to convert with MIDI-to-CV modules and can be mapped directly to a corresponding pitch on the synthesizer. However, it suffers from a very limited range: only a couple of octaves can be used (5 octaves correspond to 60 pitch values), and these are not necessarily clearly audible or pleasant. Restricting the range further would limit the dynamic range even more.
    • The velocity has the larger MIDI range of 0-127. It seems a bit trickier to convert to CV, and harder to use for pitch modulation. These are probably just things that need to be explored a bit more, with manuals and by talking to some other experts.
    • This issue of limited range, and the seemingly clunky need for MIDI-to-CV conversion, made us really appreciate the possibility of direct DA conversion.
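As an illustration, this is how a normalized control value (0-1), such as the cv in the sketch above, could be mapped onto the two options. The note range of 36-96 is just one example of a 5-octave restriction, and the actual MIDI sending is left out.

    % sketch: map a normalized control value cv (0..1) to MIDI, two options
    cv = 0.73;                                  % e.g. normalized band power from 'pow2cv'

    % option 1: pitch value, restricted to 5 octaves (60 semitones), e.g. MIDI notes 36-96
    lowpitch  = 36;
    highpitch = 96;
    pitch     = round(lowpitch + cv * (highpitch - lowpitch));

    % option 2: velocity value, using the full 0-127 MIDI range
    velocity  = round(cv * 127);

    % the resulting pitch or velocity is then sent as a MIDI note-on message
    % through whichever MIDI output interface is in use (not shown here)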

Testing

  • We tested ECG recordings. These work fabulously, especially with the current implementation of automatic normalization of the data, which makes it a really robust patch.
  • We also tested EMG recordings. It is very clunky at the moment, because the most robust approach seems to be setting a manual threshold. The use of a real-time MIDI controller will make this much more user-friendly. It was absolutely necessary to use a sliding time window, i.e. the larger analysis window, to make the measurements more stable and less jumpy. However, this might not be enough. In addition we might consider smoothing the output, as is done in several commercial EEG headsets; it would be easy to do by using the output of the last iteration to limit the range of the next (a sketch follows after this list). Interestingly, using the MicroBrute’s glide function, I could glide from one pitch to another (this doesn’t work with velocity), making it smoother.
  • The EEG measurements were hard to judge, for reasons explained below. However, attaching the electrodes seemed easy and stable with the Ten20 paste. A headband would provide a bit of extra stability (we used an adjustable cotton hat for now, which worked fine) as well as make the wiring more convenient.
  • We were able to output MIDI control signals to the analogue synthesizer module, as well as make live recordings using Ableton. The latter would also allow us to record, edit and play back brain/body derived control signals.
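As mentioned above, one simple way to smooth the output is an exponential (leaky-integrator) smoother that mixes the previous output with the new value; the weighting factor below is just an example.

    % sketch: exponential smoothing of the control value across iterations,
    % so that the output of the last iteration limits how much the next can jump
    lambda = 0.8;                    % 0 = no smoothing, closer to 1 = heavier smoothing (example value)
    if ~exist('smoothedcv', 'var')
      smoothedcv = cv;               % initialize on the first iteration
    else
      smoothedcv = lambda * smoothedcv + (1 - lambda) * cv;
    end
    % smoothedcv is then mapped to pitch or velocity instead of the raw cv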

We encountered three critical problems, possibly related:

  • The output of the openbci2ft buffer does not behave as expected when it comes to speed: buffers are reported to come in as blocks of 50 samples, but this seems to go much slower than expected. Chunks of 50 samples at 250 Hz would correspond to 5 chunks per second, but the text output rather suggests at most 2 chunks per second.
  • Data is not coming in contiguously; instead, large parts of the data seem to be missing, even when the configuration is set to read the first new data. This makes me suspect a bug in openbci2ft, although I cannot find anything in the source code.
    • Addendum: I now realize that the problem of missing data is probably related to the incoming buffers, which are either smaller or arrive more slowly than expected, and that the problem will occur most clearly when – in the current implementation – more than one patch is run iteratively.
  • The script does not run fast enough, or not as fast as expected. It is not 100% certain that this is not because the processing in MATLAB is too slow, but speeding it up seems to have no effect. Hard to be certain, though. My money is on a problem in the IO between the openBCI board and the FieldTrip buffer.
  • These speed issues suggest that we need to focus on the following:
    • Clarify the above problems – identify the openbci2ft issue and clarify the sequence of buffers from the EEG board, to the Bluetooth link, to the serial port, and to the FieldTrip buffer (a simple throughput check is sketched after this list).
    • Improve the MATLAB code for speed. Someone else might find ways to speed it up.
    • The speed would be improved by allowing parallel processing of several patches at the same time. Let’s think about how to do this in MATLAB and, later, in Python.
    • Perhaps object-oriented programming might help.
    • How fast can we expect the patches to run on the Raspberry Pi? I cannot imagine it not being fast enough, but then again the current setup might not be either.
    • We have not tested MIDI control of the analysis parameters yet, although we have everything we need. The patches make clear what would be needed, but it would be good to make an explicit list of the desired control parameters, e.g. frequencies and thresholds.
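To help pin down where the speed problem sits, one simple check is to poll the FieldTrip buffer header for a while and count how many samples actually arrive per second, independent of any MATLAB analysis; with the board streaming at 250 Hz the count should be close to 250. A minimal sketch, again assuming the buffer at its default address:

    % sketch: measure the effective sample rate arriving in the FieldTrip buffer,
    % independent of any analysis code
    ftbuffer = 'buffer://localhost:1972';
    hdr0 = ft_read_header(ftbuffer);
    tic;
    pause(10);                                   % let data come in for 10 seconds
    hdr1 = ft_read_header(ftbuffer);
    elapsed = toc;
    fprintf('received %.1f samples/second (header says %.1f Hz)\n', ...
            (hdr1.nSamples - hdr0.nSamples) / elapsed, hdr0.Fs);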

The next steps

  • The openBCI board can be configured in many ways, several of them important for accurate use, such as enabling/disabling channels, changing from a common reference to bipolar recordings, etc. These should be accessible to the user. Currently openbci2ft does not permit these options to be used, because we cannot send commands to the port while openbci2ft is running, and it resets the settings at startup. One option would be to allow board commands to be loaded from a text file at startup (a rough sketch of such a file follows below). It has to be noted that settings might not be compatible between different usages, i.e. common vs. bipolar references.
  • Throughout our deliberations on the use of the EEGsynth it became clear that the biological signals (ECG, EMG, EOG) are stable and allow an interesting combination of voluntary and involuntary control. For this purpose cup electrodes are not optimal; clip-on leads and large disposable adhesive electrodes will be more convenient, such as here and here.
  • It is time to make an enclosure for the openBCI board and battery pack. It seems most convenient to attach them to an arm strap. To allow the use of different leads and different setups of common and bipolar referencing, we need 8 (electrodes) x 2 (bipolar) + 2 (positive and negative reference) + bias/ground = 19-20 touch-proof connectors. Even with small sockets such as these, this would still pose some size issues. Let’s see how small we can make it! Also, this would mean that the standard openBCI cup electrodes cannot be used, as they terminate in the board’s clamps.
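As a very rough sketch of what such a startup file for openbci2ft could look like – this is a hypothetical format, and the command characters should be verified against the OpenBCI command protocol documentation for the firmware in use:

    # hypothetical openbci2ft startup file: one board command per line, sent
    # verbatim to the serial port after reset and before streaming starts
    d            # set all channels back to their default settings
    x3060100X    # example 'x...X' channel-settings command, e.g. taking channel 3
                 # off the common reference (SRB2) for bipolar use - check the OpenBCI SDK
    7            # turn channel 7 off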

Per will soon post more about our brainstorm sessions and our future plans concerning performances and sound creation. All in all, a very productive working session, especially given the hellish Hellenic heat!
