Professional musicians manipulate sound properties such as timing, energy, pitch, and timbre to add expression to their performances. However, little quantitative information exists about how, and in which contexts, this manipulation occurs. This is particularly true in Jazz music, where expressive playing is learned mostly by intuition. We propose a machine learning approach to investigate expressive music performance in Jazz guitar music. We extract symbolic features from audio performances and apply machine learning techniques to induce computational models of embellishment, timing, and energy transformations. Finally, we apply concatenative synthesis techniques to generate expressive performances of new scores using the learned models.
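As a minimal sketch of the modelling step, the example below trains a toy 1-nearest-neighbour classifier that predicts whether a note would be embellished from symbolic note features. The feature set (duration, pitch interval, metrical strength), the training examples, and the labels are all hypothetical illustrations, not the thesis's actual features or data.

```python
import math

# Hypothetical training set: symbolic note features as
# (duration_in_beats, pitch_interval, metrical_strength),
# paired with an illustrative expressive label.
TRAIN = [
    ((2.0, 0, 1.0), "embellished"),
    ((1.0, 2, 0.5), "plain"),
    ((1.5, -1, 1.0), "embellished"),
    ((0.5, 1, 0.25), "plain"),
]

def predict(features, train=TRAIN):
    """Predict the expressive label of a note by 1-nearest neighbour."""
    # Pick the training example with the smallest Euclidean distance
    # and return its label.
    _, label = min(
        ((math.dist(features, f), lab) for f, lab in train),
        key=lambda pair: pair[0],
    )
    return label

# A long note on a strong beat is closest to an embellished example.
print(predict((1.8, 0, 1.0)))  # → embellished
```

In the thesis pipeline, a learned model of this kind would drive the subsequent timing, energy, and embellishment transformations applied to a new score.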
Master's Thesis by Sergio Giraldo (2012)