Huescore




Introduction

Huescore is a system that allows you to make music by drawing shapes or placing colored objects in front of a camera.
It was developed for the August 2015 Wondershock project in Korea.




Here is one of the strange tracks kids recorded with it:




download src (2015 08 29 version, including a pretty big one-shots sample pack)
github page (with only a few test samples)

It's made of two parts: a Pure Data patch and a Processing applet.
The Processing applet detects graphical features in each frame of a webcam stream, while Pure Data plays the sounds (samples with variations of pitch and envelope). The two communicate with each other using the OSC protocol.
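
For reference, here is a minimal sketch of what that OSC link could look like on the Processing side, assuming the standard oscP5 library; the two port numbers and the "/note" address are made up for illustration, the real message names live in the patch and the applet:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pd;

void setup() {
  osc = new OscP5(this, 12000);             // Processing listens here (assumption)
  pd  = new NetAddress("127.0.0.1", 9000);  // where Pure Data listens (assumption)
}

// send one detected note to Pure Data (illustrative message layout)
void sendNote(int instrument, float pitch, float gain) {
  OscMessage m = new OscMessage("/note");
  m.add(instrument);
  m.add(pitch);
  m.add(gain);
  osc.send(m, pd);
}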

How to use it


- download it from the link above and unzip it (the directory name should not contain any funny characters)
- install processing and pure data
- make sure you have a webcam connected
- launch matriseq/matriseq.pde and puredata/matriseq.pd
- if the Processing applet complains about the camera, try changing the value of the cameraIndex variable (see the sketch after this list)
- in the Pure Data patch, click on "controls" to open the user panel
- point your camera at an empty scene, make sure it doesn't move, then click the bang above "s requestBackgroundCapture" in the Pd user panel
- now try placing colorful objects in front of the camera; that should already work
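
If the camera refuses to start, device selection in a Processing sketch typically looks like the example below; the cameraIndex variable is the one mentioned in the list above, the rest is a generic processing.video example rather than the exact project code:

import processing.video.*;

Capture cam;
int cameraIndex = 0;  // change this value if the wrong camera is picked

void setup() {
  size(640, 480);
  String[] cameras = Capture.list();
  println(cameras);   // prints the available devices to the console
  cam = new Capture(this, cameras[cameraIndex]);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
}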

The Processing window


The horizontal axis represents time (from left to right). The position of an object's left border triggers its "note on" and its right border the "note off".
The color of an object selects the sample that will be played (there can be up to 3 different instruments; the associated colors are automatically adjusted to the ones that seem to make sense on the picture).
The average size of an object is mapped to its gain.
The average position of an object on the vertical axis defines the pitch (with no quantization, so quarter tones or smaller intervals are allowed).
Any group of adjacent pixels with the same color is currently considered a single note with a single set of pitch/gain/timing/sample information, so don't expect a pitch slide when drawing a diagonal line.
It uses mono samples only. Instruments are spread evenly across the stereo field. It should be fairly easy to add more of them with a bit of hacking.
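
To make the mapping concrete, here is a rough sketch of what it could look like in code; every name and formula below is an illustrative guess, not the actual matriseq code:

// illustrative blob-to-note mapping (assumed names and formulas)
int   columns    = 16;  // bar length, the "time signature" in the Pd panel
float pitchRange = 12;  // semitones, the "pitch range" in the Pd panel

void noteFromBlob(float leftX, float rightX, float centerY, int pixelCount) {
  int   noteOnCol  = int(map(leftX,  0, width, 0, columns));  // left border -> note on
  int   noteOffCol = int(map(rightX, 0, width, 0, columns));  // right border -> note off
  float pitch      = map(centerY, height, 0, 0, pitchRange);  // higher on screen = higher, unquantized
  float gain       = pixelCount / float(width * height);      // bigger blob = louder
  println(noteOnCol + " " + noteOffCol + " " + pitch + " " + gain);
}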


1 - Vertical and horizontal lines over a colored zone indicate the average position of a note.
2 - The current beat is colored in transparent white.
3 - The small circle at the left side of a note indicates where the note really starts (quantized to the closest previous beat).

The Pure Data user panel




Here is a description of the elements you can interact with in this Pure Data control panel:

"player" section


- "play/pause" triggers playhead movement
- "static cursor" asks the playhead to loop through a number of beats independant from the bar length
- "retrigger length" defines the looping section length used when activating "static cursor"
- "record next bar" will output exactly one bar (= one picture) to a file
- "record / stop" will record any desired length. The resulting file will be stored in the puredata/result directory
- the toggle below "currently recording" lets you know if the sound is currently being sampled to a file or not
- "gain" controls the overall gain

"metronome" section


- "tempo" sets the speed of the playhead movement (in column per minute)
- "swing" shifts every even column timing from 0 (no change) to 1 (combined with next column)
- "time signature" sets the number of columns in the bar

"samples" section


- "pick three other samples" will modify samples for every instrument
- each button below "pick other samples" will modify one specific instrument
- numbers below "pitch offsets" will separately shift the pitch for each instruments up or down
- "pitch range" will define how much the position of an elements will have an influence on its pitch (in semitones)

Image processing controls


This section sends messages to Processing in order to calibrate the image recognition process.
Values default to typical settings, but you're welcome to experiment with them:
- "requestBackgroundThreshold" defines how much a given picture needs to resemble the known empty background to be ignored
- "requestSaturationThreshold" defines how much parts of the picture need to have a saturated color in order to be considered a note
- "requestMaxTokens" defines the maximum number of notes (very high values might cause crashes)
- "requestSizeThreshold" defines the minimum number of adjacent pixels that may be considered a note
- "requestClosenessLimit" defines how likely two almost similar colors will be considered to be the same instrument
- "requestDetectionW" defines the processed capture width
- "requestDetectionW" defines the processed capture height
- "requestCaptureProcessing" defines if the image is being analyzed (if not, the sound will loop through the last known bar no matter what happens in front of the camera)
- "requestBackgroundPicture" should be banged when there is nothing in front of the camera and helps processing separate the background (no need for it to be white or solid, though it probably helps, but the camera should not move) and the elements

Other useful things


It's possible to synchronize it to other things using MIDI or OSC messages.
To use this, disable the automated playhead (the "play/pause" button in the user panel) and open the "oscHandle" subpatch in Pure Data for more information. Use the OSC "/triggerBeat" message followed by an integer, or MIDI notes from 0 to n, to trigger the specified column.
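
As an example, a tiny external clock written in Processing could drive the patch like this; oscP5 and the port number are assumptions, only the "/triggerBeat" address comes from the patch itself:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pd;
int beat = 0;
int columns = 16;  // should match the "time signature" setting

void setup() {
  osc = new OscP5(this, 12001);
  pd  = new NetAddress("127.0.0.1", 9000);  // Pd's listening port (assumption)
}

void draw() {
  if (frameCount % 30 == 0) {               // about twice per second at 60 fps
    OscMessage m = new OscMessage("/triggerBeat");
    m.add(beat);                            // the column to trigger
    osc.send(m, pd);
    beat = (beat + 1) % columns;
  }
}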

In order to get more information about the elements that are being "seen" on camera, set the focus on the Processing applet, then press the TAB key multiple times to go through each parameter.

The system is filled by default with many typical MIDI soundfont samples. If you want to use your own samples, replace any of them with other .wav files in the puredata/samples directory and press "pick three other samples" (which will also refresh the sample list) on the Pure Data user panel.

How to play with it


There are a few interesting live uses of this thing.

Temporarily disabling the detection while switching drawings or moving objects can help maintain a clean pattern.


Setting a retrigger length of 1 will get the cursor stuck at one point, where a single object can be moved around and played as an instrument.


When going back from retrigger mode to normal playing mode, the cursor catches up and jumps to the beat that would normally have been playing. That can be used to create nice breaks.



Anything colored can be used as a note... I've tried with drawings, fabric ribbons, candles, electronic components, and human beings.
The main difficulty with big objects is that they are more likely to cast shadows, and a shadow is often interpreted as a note itself.

Possible future improvements (not very likely to happen but who knows...)


Many people expect the pitch to slide when drawing diagonal lines, so I might try to do something about it (maybe with a fixed number of pitch updates for each note).
Using hands on camera is fun, but sometimes I don't want them to trigger notes, so I might add an option to ignore a given color (a hand, or anything else).
Shadows are often a problem; I might want to do something about that as well.
The association between colors and instruments is redefined at each frame since colors are subject to change, but a single color often tends to jump from one index to another with no apparent reason; I'd like the process to be more consistent. <- this is now partially fixed!
It's quite difficult to compose actual song structures with this tool. You can try having several sheets of paper at hand or improvising with the objects, but calling presets, adding voices or recording many separate patterns into a single file could be a good thing.
Saving the current detection parameters to a text file.

Anything to say? ->
cheers