I am a final-year PhD student at the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM). The goal of my research is to bring machines closer to human-like audio understanding. To this end, my work focusses on developing multimodal deep learning methods for automatic music understanding, learning richer representations from language and audio.
I am currently on the industry job market. If you are looking for a researcher working on topics such as multimodal modelling, representation learning, or audio-language understanding (especially, but not exclusively, for music), feel free to reach out!