In 2019, Yassine and I had an idea for a workshop that we ended up not doing, but here are some tests related to it.
The idea was to somehow generate custom fanzines during an event based on random collections of images and texts.
What we had in mind was something like this:
The system would not necessarily use the pictures themselves, but would extrapolate from them and pull in other documents, based on a possibly common topic and on everyone's subjectivity. That meant we needed some kind of semi-automatic classification.
So before a visitor could generate their own fanzine, they would have to help a classifier first by hand-picking pictures that they thought were related to each other in some way.
I made a test to see if I could build a very simple dimensionality reducer. Instead of pictures I started with solid colors: I wrote a small test program where you iteratively pick, out of nine colors, three that you perceive as similar. The software then tries to map all of them onto a 2D plane, simply by pulling the similar ones closer to each other and then normalizing the space.
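The pull-closer-then-normalize loop described above could be sketched roughly like this. This is a hypothetical reconstruction, not the original program: the dot structure, the `strength` parameter, and the function names are all my own illustrative choices.

```python
import random

def make_dot():
    # each dot gets a random RGB color and a random position in the unit square
    return {
        "color": tuple(random.randrange(0x100) for _ in range(3)),
        "pos": [random.random(), random.random()],
    }

def pull_together(picked, strength=0.3):
    # move each picked dot a fraction of the way toward the group's centroid
    cx = sum(d["pos"][0] for d in picked) / len(picked)
    cy = sum(d["pos"][1] for d in picked) / len(picked)
    for d in picked:
        d["pos"][0] += strength * (cx - d["pos"][0])
        d["pos"][1] += strength * (cy - d["pos"][1])

def normalize(dots):
    # rescale all positions back into the unit square after each iteration
    for axis in (0, 1):
        vals = [d["pos"][axis] for d in dots]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        for d in dots:
            d["pos"][axis] = (d["pos"][axis] - lo) / span

dots = [make_dot() for _ in range(9)]
# simulate one iteration: pretend the visitor picked the first three dots as similar
pull_together(dots[:3])
normalize(dots)
```

In the real setup the picks come from a human judging perceptual similarity, and new random dots keep being added, so the map keeps evolving over many iterations.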
Several interesting things can be observed here.
Although the game regularly introduces new dots with a random color (c = color(random(0x100), random(0x100), random(0x100));), some colors, such as pink, are usually perceived as very distinctive and are rarely groupable with any of the eight other colors presented.
Also, even though colorspaces have several independent parameters (for instance hue, saturation and brightness) and the map uses two dimensions, the space usually ends up collapsing onto a single diagonal.
The map at the beginning:
The map after a few iterations:
This work is somewhat related to my mostly unconvincing attempts at classifying nodes from my hyperduels game.
If I were more serious about this I would probably use more proven techniques such as t-SNE or UMAP.