
Music identification

Advanced Big Data algorithms for the identification of originals from cover and live songs

Prof. Dr. Peter Mandl

New research project with research partner IT4IPM

Robust and scalable algorithms for cover and live version recognition based on vocal tracks

Background

Music is played everywhere: as live music at concerts, in many forms on the Internet, and on traditional broadcast channels. Collecting societies such as GEMA need to know exactly how each musical work is used; only then can they attribute the pieces played to their authors and distribute royalties accordingly. For cover and live versions in particular, reporting is still a complicated, sometimes manual process. An automated reporting process would significantly reduce the effort for collecting societies and their members. However, currently available solutions and algorithms are not yet able to identify the original works behind cover or live versions robustly and efficiently.

Goals

The focus of the joint research project is therefore to improve existing approaches for recognizing cover and live versions, with the goal of developing a scalable and more reliable algorithm for this task on the basis of existing ones. Initially, vocal tracks are used to identify the original work. The complexity of the problem calls for advanced deep learning techniques. A first approach is to use vocal tracks instead of melodic input in classical fingerprinting algorithms. Another approach applies state-of-the-art speech-to-text networks to extract song lyrics and match them against a lyrics database in order to identify cover or live versions. Training neural networks requires large amounts of data, so a method that can handle this data volume must be developed. It is also planned to use available meta-information to automatically augment the existing cover datasets. In addition, augmentation methods from other research areas will be applied to extend the cover dataset further.
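The lyrics-based route can be illustrated with a minimal sketch: a speech-to-text network produces a (possibly noisy) transcript, which is then fuzzily matched against a lyrics database. The database contents, work IDs, and the similarity threshold below are all illustrative assumptions, not part of the project; `difflib` stands in for whatever matching backend the project ultimately uses.

```python
import difflib

# Hypothetical lyrics database: work ID -> reference lyrics.
# All entries and IDs are illustrative placeholders.
LYRICS_DB = {
    "work_001": "yesterday all my troubles seemed so far away",
    "work_002": "imagine all the people living life in peace",
}

def match_lyrics(transcript: str, db: dict, threshold: float = 0.6):
    """Return the work ID whose reference lyrics best match the
    speech-to-text transcript, or None if nothing clears the threshold."""
    best_id, best_score = None, 0.0
    for work_id, reference in db.items():
        score = difflib.SequenceMatcher(None, transcript.lower(), reference).ratio()
        if score > best_score:
            best_id, best_score = work_id, score
    return best_id if best_score >= threshold else None

# A noisy transcript from a live recording still resolves to the original work.
print(match_lyrics("yesterdy all my troubles seem so far away", LYRICS_DB))
```

In a real system the brute-force loop over the database would be replaced by an indexed text search, but the matching principle stays the same.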

Outlook

Currently, the project is in an early development phase; by the end of the year, existing methods will be evaluated and the foundation for the approaches described above will be laid. After that, the goal is to develop a cover-detection prototype and improve it iteratively.

Methods, technologies and tools used

Web crawling, CNNs, RNNs, transformer neural networks, deep learning embeddings, fingerprinting, nearest-neighbor search
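The last two items combine naturally: once a network maps each recording to an embedding vector, identifying an original reduces to a nearest-neighbor search in embedding space. The sketch below shows the brute-force variant with cosine similarity; the toy vectors and work IDs are invented for illustration, and a production system would use an approximate-nearest-neighbor index instead.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embedding index: work ID -> embedding of the original recording.
# Vectors and IDs are illustrative, not real project data.
INDEX = {
    "original_A": [0.9, 0.1, 0.2],
    "original_B": [0.1, 0.8, 0.5],
}

def nearest(query, index):
    """Brute-force nearest-neighbor search by cosine similarity."""
    return max(index, key=lambda work_id: cosine(query, index[work_id]))

# A cover version whose embedding lies close to original_A is resolved to it.
print(nearest([0.85, 0.15, 0.25], INDEX))
```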

Funding type

Foundation PhD

Duration of the project

2021 to 2025

Cooperation partner

IT4IPM - IT FOR INTELLECTUAL PROPERTY MANAGEMENT GmbH