AI Sound Residency @Curiosibot, Valencia, Spain
As part of Culture Moves Europe by the Goethe Institute, I spent 4 weeks under the apprenticeship of Alayna Hughes from Curiosibot exploring AI-based sound design tools and workflows.
During this residency, I documented and experimented with various VST plug-ins, cloud-based platforms like Google Colab, and attempted training models using RAVE and Dance Diffusion.
This process expanded my understanding of current AI technology for sound creation, especially in relation to datasets and generative possibilities. It was a learning curve—filled with both experimentation and creative output—leading me to build a rich sample library from synthesizer-based compositions and my own audio explorations.
Tools & Processes
1. Cloud Platforms
- Google Colab: A cloud-based environment for running Python scripts. It’s a useful, albeit slow, tool for experimentation due to hardware limitations. The free version often disconnects, which makes processes like model training challenging. However, it’s still accessible for ambient sound exploration.
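As a rough idea of what this looks like in practice, here is a minimal Colab-cell sketch, assuming the IRCAM acids-rave package and its command-line tools (rave preprocess / rave train); the exact flags can differ between versions, and the Drive paths are placeholders for my own folders.

```python
# Minimal Colab cell sketch for training a RAVE model in the cloud.
# Assumptions: the IRCAM acids-rave package and its CLI; exact flags can
# differ between versions, and the Drive paths are placeholders.
from google.colab import drive
import torch

# Mount Google Drive so the dataset and checkpoints survive a disconnect.
drive.mount('/content/drive')

# Confirm that a GPU was actually allocated to this runtime.
print("GPU available:", torch.cuda.is_available())

# Install RAVE, then preprocess a folder of WAV files and start training.
!pip install acids-rave
!rave preprocess --input_path /content/drive/MyDrive/samples --output_path /content/drive/MyDrive/rave_dataset
!rave train --config v2 --db_path /content/drive/MyDrive/rave_dataset --name my_rave_model
```

Keeping the dataset and checkpoints on Drive is what makes it possible to pick training back up after the free tier drops the runtime.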
2. VST Plug-ins
- RAVE & Neutone: Excellent tools for sound design, particularly for creating noisy, textured layers.
- Magenta, Ableton’s Harmony & Rhythmic Probability: Effective for automating beats and generating melodies from existing audio.
Notably, each tool has its own specificities: some work primarily with audio or MIDI, while others integrate better into Ableton’s Session View or Arrangement View.
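These models can also be run outside the DAW. Below is a minimal sketch that loads an exported RAVE model and resynthesizes a recording through its latent space, assuming a TorchScript export that exposes encode and decode methods, as the shared pretrained models typically do; the file names are placeholders, and the expected sample rate depends on the export.

```python
# Minimal sketch: running an exported RAVE model on a file outside the DAW.
# Assumption: a TorchScript export with encode/decode methods; "percussion.ts"
# and "input.wav" are placeholder names, and the model's expected sample rate
# depends on how it was trained.
import torch
import soundfile as sf

model = torch.jit.load("percussion.ts").eval()

# Load a mono file and shape it as (batch, channels, samples).
audio, sr = sf.read("input.wav", dtype="float32")
x = torch.from_numpy(audio).reshape(1, 1, -1)

with torch.no_grad():
    z = model.encode(x)                 # compress audio into the latent space
    z = z + 0.5 * torch.randn_like(z)   # nudge the latents to get new textures
    y = model.decode(z)                 # resynthesize audio from the latents

sf.write("output.wav", y.reshape(-1).numpy(), sr)
```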
Experiments & Results
- Samples made with Google Colab:
- Samples created using RAVE VST:
- Textures generated with Neutone:
The experimentation resulted in a diverse collection of ambient, percussive, and melodic samples that showcase the possibilities of AI-assisted sound design.
Library
During these weeks I also developed my own sample library, built by taking electroacoustic samples and passing them through Ableton Live with plug-ins such as Neutone, RAVE VST, and Magenta DDSP to create unexpected sounds: NASA astronaut conversations, Latin percussion drums, and other abstract noises that I can use for backgrounds and touches in my day-to-day sound design projects. I have a little sample pack that I can provide (especially if you use Ableton), so feel free to ask me.
Training AI Models
Training models is time-intensive: even with a powerful machine, training a model takes 2–3 days and requires a lot of patience, and the results didn’t seem like something I would actually use. Still, the fact that you can train your own models, and the logic behind how to do so, was really interesting.
- Dance Diffusion is a promising framework but demands patience. Running it through free Google Colab versions often causes connection interruptions, and sometimes the processed datasets yield unexpected or null outputs.
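To give a sense of what “processing a dataset” means here: below is a hypothetical helper that slices long recordings into a folder of short, fixed-length WAV clips, the kind of input the Dance Diffusion training notebooks expect. The paths and chunk length are placeholders, and this is my own illustration rather than part of the framework.

```python
# Hypothetical prep helper: slice long recordings into fixed-length WAV clips,
# the folder-of-short-samples format that Dance Diffusion training notebooks
# work from. Paths and chunk length are placeholders.
from pathlib import Path
import soundfile as sf

SOURCE_DIR = Path("recordings")   # long synth / field recordings
OUT_DIR = Path("dataset")         # folder of short clips for training
CHUNK_SECONDS = 5.0

OUT_DIR.mkdir(exist_ok=True)

for wav in sorted(SOURCE_DIR.glob("*.wav")):
    audio, sr = sf.read(wav)
    chunk_len = int(CHUNK_SECONDS * sr)
    # Walk through the recording in non-overlapping chunks and save each one.
    for i in range(0, len(audio) - chunk_len + 1, chunk_len):
        clip = audio[i:i + chunk_len]
        out_name = OUT_DIR / f"{wav.stem}_{i // chunk_len:04d}.wav"
        sf.write(out_name, clip, sr)
```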
Workshop Highlights
I also participated in a workshop with Ben Cantil, CTO of Datamind Audio, further expanding my understanding of AI for sound creation.
- Combobulator: An AI style-transfer audio plugin that processes input audio through neural networks trained on artist-created datasets, generating new musical textures. A great tool for abstract sound design.
- Audiocipher
- Concatenator: An AI-driven audio mosaicing plugin that enables seamless concatenative synthesis, allowing users to transform collections of recordings and samples into playable instruments. (Here I was mostly playing with my vocals as an instrument.)
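To make the idea behind Concatenator more concrete, here is a toy sketch of concatenative synthesis / audio mosaicing. It is not Datamind’s actual algorithm, just the basic principle: match each frame of a target recording to the closest-sounding frame of a corpus (here, by MFCC distance) and stitch the matched frames together. File names are placeholders.

```python
# Toy illustration of concatenative synthesis / audio mosaicing (not the real
# Concatenator algorithm): for each frame of a target recording, find the
# closest-sounding frame in a corpus by MFCC distance, then stitch the corpus
# frames together. File names are placeholders.
import numpy as np
import librosa
import soundfile as sf

FRAME = 2048
HOP = 2048  # non-overlapping frames keep the stitching trivial

def frames_and_mfcc(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    frames = librosa.util.frame(y, frame_length=FRAME, hop_length=HOP).T
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=FRAME, hop_length=HOP).T
    n = min(len(frames), len(mfcc))
    return frames[:n], mfcc[:n], sr

corpus_frames, corpus_mfcc, sr = frames_and_mfcc("vocals_corpus.wav")
target_frames, target_mfcc, _ = frames_and_mfcc("drum_loop.wav")

# For each target frame, pick the nearest corpus frame in MFCC space.
out = []
for feat in target_mfcc:
    idx = np.argmin(np.linalg.norm(corpus_mfcc - feat, axis=1))
    out.append(corpus_frames[idx])

sf.write("mosaic.wav", np.concatenate(out), sr)
```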
For Fun
I also got to play with other technologies, such as Playtronica with artist Fabiana Cruz, where I explored turning mandarins, which you see everywhere in Valencia, into a MIDI controller.
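For anyone curious about the nuts and bolts: a Playtronica-style board simply shows up as a regular MIDI input, so you can watch (or remap) what the mandarins send with a few lines of Python. A minimal sketch, assuming the mido library; the port name is a placeholder, since it depends on the device.

```python
# Minimal sketch: listening to the MIDI notes a Playtronica-style board sends
# when you touch the fruit. Assumes the mido library; the port name below is a
# placeholder and depends on the device.
import mido

print("Available MIDI inputs:", mido.get_input_names())

# Open the board's port (placeholder name) and print every touch as a note.
with mido.open_input("Playtron MIDI 1") as port:
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            print(f"Mandarin touched: note {msg.note}, velocity {msg.velocity}")
        elif msg.type in ("note_off", "note_on"):
            print(f"Released: note {msg.note}")
```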