Analysing Musical Performance in Videos Using Deep Neural Networks
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. ORCID iD: 0000-0002-6756-0147
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. ORCID iD: 0000-0003-4029-6574
University of Zaragoza, Spain.
Luleå University of Technology, Department of Arts, Communication and Education, Music, Media and Theatre. ORCID iD: 0000-0001-9685-4702
2020 (English). In: Proceedings of the 1st Joint Conference on AI Music Creativity, AIMC, Stockholm, Sweden, 2020. Conference paper, Published paper (Refereed).
Abstract [en]

This paper proposes a method to facilitate the labelling of music performance videos with automatic methods (3D convolutional neural networks) instead of tedious labelling by human experts. In particular, we are interested in detecting the 17 musical performance gestures generated during the performance (guitar playing) of video-recorded musical pieces. In earlier work, these videos were annotated manually by a human expert according to the labels of the musical analysis methodology. Such a labelling method is time-consuming and would not scale to large collections of video recordings. In this paper, we take a 3D-CNN model from activity recognition tasks and adapt it to the music performance dataset following a transfer learning approach: the weights of the first blocks are kept, and only the later layers, together with additional classification layers, are re-trained. The model was evaluated on a set of 17 music performance gestures and achieves an average accuracy of 97.9% (F1: 77.8%) on the training set and 85.7% (F1: 38.6%) on the test set. An additional analysis shows which gestures are particularly difficult and suggests improvements for future work.
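
The transfer-learning recipe summarised above (keep the weights of the early blocks, re-train only the later layers plus a new classification head) can be illustrated with a short sketch. This is not the paper's code: it assumes a PyTorch/torchvision r3d_18 video backbone pretrained on an activity-recognition dataset, and the choice of which blocks to freeze as well as the shape of the classifier head are illustrative.

# Sketch: adapt a pretrained 3D-CNN (activity-recognition backbone) to the
# 17 guitar-performance gestures via transfer learning.
# Assumptions: torchvision's r3d_18 stands in for the paper's 3D-CNN; the
# freezing boundary (layer3/layer4) and the head layout are illustrative.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_GESTURES = 17  # gesture classes in the annotated performance videos

model = r3d_18(pretrained=True)  # weights learned on an activity-recognition task

# Freeze the early blocks so their low-level spatio-temporal features are kept.
for name, param in model.named_parameters():
    if not name.startswith(("layer3", "layer4", "fc")):
        param.requires_grad = False

# Replace the classification head so it predicts the 17 gesture labels.
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, NUM_GESTURES),
)

# Only the unfrozen (later) layers and the new head are optimised.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(clips: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a batch of video clips shaped (N, C, T, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()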

Place, publisher, year, edition, pages
2020.
Keywords [en]
music performance, machine learning, guitar
National Category
Computer Sciences; Music
Research subject
Machine Learning; Musical Performance
Identifiers
URN: urn:nbn:se:ltu:diva-82244
DOI: 10.5281/zenodo.4285388
OAI: oai:DiVA.org:ltu-82244
DiVA id: diva2:1515916
Conference
1st Joint Conference on AI Music Creativity (AIMC), Online, October 19-23, 2020
Available from: 2021-01-11. Created: 2021-01-11. Last updated: 2022-10-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Simistira Liwicki, Foteini; Liwicki, Marcus; Visi, Federico; Östersjö, Stefan
