Abstract (EN):
This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a corpus, a database of descriptor-analyzed sound snippets, according to criteria other than their original temporal order, producing musically coherent output. Of particular note are the system's machine-learning capabilities and its visualization strategies, which aid decision-making during performance by revealing musical patterns and the temporal organization of the corpus.
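To make the core idea of concatenative synthesis concrete, the following is a minimal sketch in Python (not Pure Data, and not earGram's actual code): snippets are chosen from a descriptor-analyzed corpus by nearest-neighbor matching on their feature vectors, so playback order follows descriptor similarity rather than the snippets' original temporal order. The corpus values, descriptor names, and the select_unit helper are all hypothetical assumptions for illustration.

```python
# Illustrative sketch only: descriptor-based unit selection for
# concatenative synthesis. Not taken from earGram.
import numpy as np

# Hypothetical corpus: one row per snippet, columns are assumed
# descriptors, e.g. (spectral centroid, loudness, pitch in Hz).
corpus = np.array([
    [0.82, 0.40, 220.0],
    [0.35, 0.90, 440.0],
    [0.60, 0.55, 330.0],
])

def select_unit(target, corpus, exclude):
    """Return the index of the snippet whose descriptors are closest
    (Euclidean distance) to the target, skipping excluded indices."""
    distances = np.linalg.norm(corpus - target, axis=1)
    for idx in np.argsort(distances):
        if int(idx) not in exclude:
            return int(idx)
    raise ValueError("corpus exhausted")

# Generative loop: walk the corpus by descriptor similarity instead of
# by original temporal order.
played = set()
target = corpus[0]
for _ in range(len(corpus)):
    idx = select_unit(target, corpus, played)
    played.add(idx)
    target = corpus[idx]          # next target follows the chosen unit
    print(f"play snippet {idx}")  # a real system would trigger audio here
```

A real system would also weight descriptors and add continuity constraints between consecutive units; this sketch only shows the similarity-driven reordering of a corpus that the abstract describes.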
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
20