Computational Performance Style Analysis from Audio Recordings
The project aims to investigate the fascinating but elusive phenomenon of individual artistic music performance style with quantitative, computational methods. In particular, the goal is to discover and characterise significant patterns and regularities in the way great music performers (classical pianists) shape the music through expressive timing, dynamics, articulation, etc., and thereby express their personal style and artistic intentions.
The starting point is a unique and unprecedented collection of empirical measurement data: recordings of essentially the complete works for solo piano by Frédéric Chopin, made by a world-class pianist (Nikita Magaloff) on the Bösendorfer computer-controlled SE290 grand piano. This huge data set, which comprises hundreds of thousands of played notes, gives detailed information about how each note was played, including its precise onset time, duration, and loudness. State-of-the-art methods of intelligent data analysis and automatic pattern discovery will be applied to these data in order to derive quantitative and predictive models of various aspects of performance, such as expressive timing, dynamic shaping, and articulation. This will give new insights into the performance strategies applied by an accomplished concert pianist over a large corpus of music. Moreover, by automatically matching these precisely measured performances against sound recordings by a large number of famous concert pianists, comparative studies will be performed which, for the first time, will permit truly quantitative statements about individual artistic performance style.
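To illustrate the kind of analysis such note-level data supports, the following is a minimal sketch (with a hypothetical data layout, not the project's actual format) of how measured note onsets can be turned into an expressive-timing profile: the ratio of each played inter-onset interval (IOI) to its nominal score IOI.

```python
def timing_deviations(played_onsets, score_onsets):
    """Relative deviation of each played IOI from the corresponding score IOI.

    A value above 1 means the performer stretched that interval (slowing down);
    below 1 means it was compressed (speeding up).
    """
    devs = []
    for i in range(1, len(played_onsets)):
        played_ioi = played_onsets[i] - played_onsets[i - 1]
        score_ioi = score_onsets[i] - score_onsets[i - 1]
        devs.append(round(played_ioi / score_ioi, 3))
    return devs

# Hypothetical example: a slight ritardando over four notes
played = [0.0, 0.5, 1.05, 1.7]   # measured onset times in seconds
score = [0.0, 0.5, 1.0, 1.5]     # nominal (score) onset times
print(timing_deviations(played, score))  # [1.0, 1.1, 1.3]
```

A timing curve of this kind, computed over hundreds of thousands of notes, is one of the basic representations on which pattern discovery and predictive modelling can operate.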
All this requires extensive research into new methods for intelligent audio analysis (e.g., extraction of expressive parameters from audio and precise alignment of different sound recordings) and intelligent data analysis and modelling (e.g., sequential pattern discovery and hierarchical probabilistic models).
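The alignment problem mentioned above is commonly attacked with dynamic time warping (DTW), which finds an optimal non-linear correspondence between two performances of the same piece played at different tempi. The sketch below shows the core dynamic-programming recurrence on toy scalar sequences; a real audio-alignment system would instead compare frame-wise spectral or chroma features, but the recurrence is the same.

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Return the minimal DTW alignment cost between sequences a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j]: minimal cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] repeated
                                 cost[i][j - 1],      # b[j-1] repeated
                                 cost[i - 1][j - 1])  # matched pair
    return cost[n][m]

# Two renditions of the same contour at different tempi align at zero cost
print(dtw([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))  # 0.0
```

Once such an alignment is computed between the precisely measured reference performance and a commercial recording, note-level timing and dynamics can be transferred to the recording, which is what makes the quantitative cross-pianist comparisons possible.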