Publications (10 of 10)
Visi, F., Basso, T., Greinke, B., Wood, E., Gschwendtner, P., Hope, C. & Östersjö, S. (2024). Networking concert halls, musicians, and interactive textiles: Interwoven Sound Spaces. Digital Creativity, 35(1), 52-73
2024 (English) In: Digital Creativity, ISSN 1462-6268, E-ISSN 1744-3806, Vol. 35, no 1, p. 52-73. Article in journal (Refereed). Published
Abstract [en]

Interwoven Sound Spaces is an interdisciplinary project which brought together telematic music performance, interactive textiles, interaction design, and artistic research. A team of researchers collaborated with two professional contemporary music ensembles based in Berlin, Germany, and Piteå, Sweden, and four composers, with the aim of creating a telematic distributed concert taking place simultaneously in two concert halls and online. Central to the project was the development of interactive textiles capable of sensing the musicians’ movements while playing acoustic instruments, and generating data the composers used in their works. Musicians, instruments, textiles, sounds, halls, and data formed a network of entities and agencies that was reconfigured for each piece, showing how networked music practice enables distinctive musicking techniques. We describe each part of the project and report on a research interview conducted with one of the composers for the purpose of analysing the creative approaches she adopted for composing her piece.
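
The project itself is documented in the article rather than in code, but the basic data flow it describes — textile sensors on the musicians' bodies generating movement data that is shared across the network — can be sketched briefly. The sketch below assumes Open Sound Control (OSC) over UDP via the python-osc package; the address pattern, host, sampling rate, and the simulated stretch-sensor reading are hypothetical illustrations, not the project's actual implementation.

```python
# Minimal sketch: streaming (simulated) e-textile sensor data to a remote
# venue as OSC messages. Assumes the python-osc package; the OSC address
# "/textile/stretch" and the 50 Hz rate are hypothetical choices.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

REMOTE_HOST = "192.0.2.10"   # placeholder address of the remote concert hall
REMOTE_PORT = 9000           # placeholder OSC port
RATE_HZ = 50                 # hypothetical sensor sampling rate

client = SimpleUDPClient(REMOTE_HOST, REMOTE_PORT)

def read_stretch_sensor(t: float) -> float:
    """Stand-in for an e-textile stretch sensor; returns a value in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * 0.25 * t)

start = time.time()
for _ in range(10 * RATE_HZ):          # stream for ten seconds in this sketch
    t = time.time() - start
    value = read_stretch_sensor(t)
    # One OSC message per reading; composers could map this stream to synthesis.
    client.send_message("/textile/stretch", value)
    time.sleep(1.0 / RATE_HZ)
```

OSC over UDP is a common choice in telematic performance because it keeps latency low, at the cost of delivery guarantees.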

Place, publisher, year, edition, pages
Taylor & Francis, 2024
Keywords
e-textiles, interactive wearables, internet of musical things, internet of things, Telematic music performance
National Category
Music
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-104600 (URN); 10.1080/14626268.2024.2311906 (DOI); 2-s2.0-85186244852 (Scopus ID)
Note

Validated;2024;Level 2;2024-05-21 (joosat);

Full text license: CC BY;

Funder: Kulturstiftung des Bundes (German Federal Cultural Foundation); Programme for Digital Interactions [grant number DIV.0725]; Einstein Center Digital Future;

Available from: 2024-03-14 Created: 2024-03-14 Last updated: 2024-05-21. Bibliographically approved
Westberg, E., Östersjö, S., Visi, F., Ek, R., Petersson, M., Unander-Scharin, Å. & Unander-Scharin, C. (2023). Monteverdi e più: Erik Westbergs vokalensemble i nytolkningar och nya verk för kör, dansare och interaktiv hyperorgel.
2023 (Swedish). Artistic output (Unrefereed)
Keywords
hyperorgan, interactive music, choreography, choral music, experimental music, interpretation
National Category
Music
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-103104 (URN)
Funder
Swedish Research Council, 422 322
Note

Approved;2023;Level 0;2023-12-20 (joosat);

The concert was produced with support from Statens kulturråd (the Swedish Arts Council), together with funding from Kempestiftelsen (the Kempe Foundation) and Vetenskapsrådet (the Swedish Research Council). The artistic producer for the production was Gunnar Andersson, who also played a central role in the programming.

Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2024-05-07. Bibliographically approved
Östersjö, S., Unander-Scharin, Å., Ek, R., Visi, F. & Petersson, M. (2023). Sestina: av Claudio Monteverdi.
2023 (Swedish). Artistic output (Unrefereed)
Abstract [sv]

Sestina, by the TCP/IP quartet and Åsa Unander-Scharin, is a roughly 40-minute work in which new music, created through a radical reinterpretation of Monteverdi's score, surrounds each movement of the original work. A central factor is how both a dancer and two musicians control the hyperorgan of Studio Acusticum, thereby staging a reading of Monteverdi that at times goes deep into the details of the choral work and elsewhere takes a bird's-eye view, weaving the choral music into a far larger form.

Abstract [sv]

TCP/INDETERMINATE PLACE QUARTET was formed in 2021 and consists of Federico Visi, Robert Ek, Mattias Petersson and Stefan Östersjö. The clarinettist Robert Ek and the composer Mattias Petersson are both doctoral students at the School of Music in Piteå and affiliated with the research cluster GEMM (Gesture, Embodiment and Machines in Music), where Federico Visi carried out a postdoctoral research project. The cluster is led by Stefan Östersjö, professor of Musical Performance and subject chair at the School of Music. The quartet was awarded the "Best Music Award" first at NIME (New Interfaces for Musical Expression) in Shanghai and then at Audio Mostly in Graz. Both prizes were awarded in 2021, with reference to the quartet's pioneering performance with remotely controlled organs, developed within the international Global Hyperorgan project.

National Category
Music
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-103189 (URN)
Note

Approved;2023;Level 0;2023-12-20 (joosat);

The concert was produced with support from Statens kulturråd (the Swedish Arts Council), together with funding from Kempestiftelsen (the Kempe Foundation) and Vetenskapsrådet (the Swedish Research Council). The artistic producer for the production was Gunnar Andersson, who also played a central role in the programming.

Sponsors and collaboration partners: Sparbanken Nord, Kempestiftelserna, and Studio Acusticum, LTU.

Available from: 2023-12-04 Created: 2023-12-04 Last updated: 2024-01-12. Bibliographically approved
Tetley, J. W., Holland, S., Caton, S., Donaldson, G., Georgiou, T., Visi, F. & Stockley, R. C. (2022). Using rhythm for rehabilitation: the acceptability of a novel haptic cueing device in extended stroke rehabilitation. Journal of Enabling Technologies (JET), 16(4), 290-301
2022 (English) In: Journal of Enabling Technologies (JET), ISSN 2398-6263, E-ISSN 2398-6271, Vol. 16, no 4, p. 290-301. Article in journal (Refereed). Published
Abstract [en]

Purpose

Restoration of walking ability is a key goal for both stroke survivors and their therapists. However, the intensity and duration of rehabilitation available after stroke can be limited by service constraints, despite the potential for improvements that could reduce health service demands in the long run. The purpose of this paper is to present qualitative findings from a study that explored the acceptability of a haptic device aimed at improving walking as part of an extended intervention in stroke rehabilitation.

Design/methodology/approach

Pre-trial focus groups and post-trial interviews to assess the acceptability of Haptic Bracelets were undertaken with seven stroke survivors.

Findings

Five themes were identified as affecting the acceptability of the Haptic Bracelet: potential for improving quality of life; relationships with technology; important features; concerns; and response to trial and concentration. Participants were interested in the Haptic Bracelet and hoped it would give them more confidence: feeling safer when walking; being better able to take bigger strides rather than small steps; having a way to counteract mistakes they reported making due to tiredness; and experiencing reduced pain in their knees and hips.

Originality/value

Haptic Bracelets are an innovative development in the field of rhythmic cueing and stroke rehabilitation. They also overcome problems encountered with established audio-based cueing, as their use is not affected by external environmental noise.

Place, publisher, year, edition, pages
Emerald Group Publishing Limited, 2022
Keywords
Acceptability, Quality of life, Stroke rehabilitation, Entrainment, Haptic interaction, Safe walking
National Category
Occupational Therapy; Human Computer Interaction
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-93018 (URN); 10.1108/JET-01-2021-0003 (DOI); 000842347300001 (); 2-s2.0-85136456489 (Scopus ID)
Note

Validated;2022;Level 2;2022-11-28 (sofila);

Funder: Greater Manchester Academic Health Science Network 

Available from: 2022-09-13 Created: 2022-09-13 Last updated: 2022-11-28. Bibliographically approved
Zbyszyński, M., Di Donato, B., Visi, F. G. & Tanaka, A. (2021). Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis. In: Richard Kronland-Martinet; Sølvi Ystad; Mitsuko Aramaki (Ed.), Perception, Representations, Image, Sound, Music: 14th International Symposium CMMR 2019, Marseille, France, October 14–18, 2019, Revised Selected Papers. Paper presented at 14th International Symposium on Computer Music Multidisciplinary Research (CMMR), Marseille, France, October 14-18, 2019 (pp. 600-622). Springer Nature
2021 (English) In: Perception, Representations, Image, Sound, Music: 14th International Symposium CMMR 2019, Marseille, France, October 14–18, 2019, Revised Selected Papers / [ed] Richard Kronland-Martinet; Sølvi Ystad; Mitsuko Aramaki, Springer Nature, 2021, p. 600-622. Conference paper, Published paper (Refereed)
Abstract [en]

This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative synthesis, where small sound units are organized by derived timbral features. We use interactive machine learning in a mapping-by-demonstration paradigm to create regression models that map high-dimensional gestural data to timbral data without dimensionality reduction in three distinct workflows. First, by directly associating individual sound units and static poses (anchor points) in static regression. Second, in whole regression a sound tracing method leverages our intuitive associations between time-varying sound and embodied movement. Third, we extend interactive machine learning through the use of artificial agents and reinforcement learning in an assisted interactive machine learning workflow. We discuss the benefits of organizing the sound corpus using self-organizing maps to address corpus sparseness, and the potential of regression-based mapping at different points in a musical workflow: gesture design, sound design, and mapping design. These systems support expressive performance by creating gesture-timbre spaces that maximize sonic diversity while maintaining coherence, enabling reliable reproduction of target sounds as well as improvisatory exploration of a sonic corpus. They have been made available to the research community, and have been used by the authors in concert performance.
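
As a rough illustration of the mapping-by-demonstration idea summarised above — regressing from high-dimensional gesture features to timbral descriptors without dimensionality reduction — the sketch below trains a multi-output neural-network regressor with scikit-learn on synthetic data. The feature dimensions, network size, and data are assumptions for illustration only; the chapter's own systems work with EMG and motion sensing and corpus-based concatenative synthesis.

```python
# Minimal sketch of regression-based gesture-to-timbre mapping
# (mapping-by-demonstration). Synthetic data stands in for recorded
# demonstrations; dimensions and model size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

N_DEMOS = 200      # demonstration frames (gesture pose paired with a sound unit)
GESTURE_DIM = 16   # e.g. EMG channels plus motion features (assumed)
TIMBRE_DIM = 4     # e.g. centroid, flatness, loudness, pitch (assumed)

# "Demonstrations": gesture frames and the timbral features of the sound units
# the performer associated with them (anchor points in the static workflow).
X_gesture = rng.normal(size=(N_DEMOS, GESTURE_DIM))
y_timbre = rng.normal(size=(N_DEMOS, TIMBRE_DIM))

# Train a multi-output regressor directly on the high-dimensional input.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_gesture, y_timbre)

# At performance time, each incoming gesture frame is mapped to a point in
# timbre space, which a concatenative synthesiser could use to select units.
new_frame = rng.normal(size=(1, GESTURE_DIM))
predicted_timbre = model.predict(new_frame)
print(predicted_timbre)
```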

Place, publisher, year, edition, pages
Springer Nature, 2021
Series
Lecture Notes in Computer Science (LNISA), ISSN 0302-9743, E-ISSN 1611-3349 ; 12631
Keywords
Concatenative synthesis, Gestural interaction, Human-computer interaction, Interactive machine learning, Reinforcement learning, Sonic interaction design
National Category
Computer Sciences; Musicology
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-94864 (URN); 10.1007/978-3-030-70210-6_39 (DOI); 2-s2.0-85103451121 (Scopus ID)
Conference
14th International Symposium on Computer Music Multidisciplinary Research (CMMR), Marseille, France, October 14-18, 2019
Funder
EU, Horizon 2020, 789825
Note

ISBN for host publication: 978-3-030-70209-0; 978-3-030-70210-6

Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2022-12-16. Bibliographically approved
Harlow, R., Petersson, M., Ek, R., Visi, F. & Östersjö, S. (2021). Global Hyperorgan: a platform for telematic musicking and research. In: NIME 2021. Paper presented at International Conference on New Interfaces for Musical Expression (NIME 2021), Shanghai, China and online, June 14-18, 2021. The International Conference on New Interfaces for Musical Expression (NIME)
2021 (English) In: NIME 2021, The International Conference on New Interfaces for Musical Expression (NIME), 2021. Conference paper, Published paper (Refereed)
Abstract [en]

The Global Hyperorgan is an intercontinental, creative space for acoustic musicking. Existing pipe organs around the world are networked for real-time, geographically distant performance, with performers utilizing instruments and other input devices to collaborate musically through the voices of the pipes in each location. A pilot study was carried out in January 2021, connecting two large pipe organs in Piteå, Sweden, and Amsterdam, the Netherlands. A quartet of performers tested the Global Hyperorgan’s capacities for telematic musicking through a series of pieces. The concept of modularity is useful when considering the artistic challenges and possibilities of the Global Hyperorgan. We observe how the modular system utilized in the pilot study afforded multiple experiences of shared instrumentality from which new, synthetic voices emerge. As a long-term technological, artistic and social research project, the Global Hyperorgan offers a platform for exploring technology, agency, voice, and intersubjectivity in hyper-acoustic telematic musicking.
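
The paper describes networked pipe organs played from remote locations; a minimal sketch of one such link, under stated assumptions, is given below. It assumes note events are forwarded as OSC messages over UDP with the python-osc package; the "/organ/note" address, its arguments, and the endpoints are hypothetical and do not describe the actual Global Hyperorgan infrastructure.

```python
# Minimal sketch: sending note events from a local performer to two remote
# organ controllers. The OSC address "/organ/note" and its (pitch, velocity)
# arguments are hypothetical; real hyperorgan endpoints may differ.
import time

from pythonosc.udp_client import SimpleUDPClient

# Placeholder endpoints for two networked venues (e.g. two concert halls).
venues = [
    SimpleUDPClient("192.0.2.20", 9001),
    SimpleUDPClient("192.0.2.21", 9001),
]

# A tiny "score": (MIDI pitch, velocity 0-127, duration in seconds).
phrase = [(60, 100, 0.5), (64, 90, 0.5), (67, 110, 1.0)]

for pitch, velocity, duration in phrase:
    for venue in venues:
        # Note on: the remote controller is assumed to open the matching pipe.
        venue.send_message("/organ/note", [pitch, velocity])
    time.sleep(duration)
    for venue in venues:
        # Note off, signalled here with velocity 0 (an assumed convention).
        venue.send_message("/organ/note", [pitch, 0])
```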

Place, publisher, year, edition, pages
The International Conference on New Interfaces for Musical Expression (NIME), 2021
Keywords
Global Hyperorgan, Hyperinstrument, Network performance, HCI, Live-coding, Assisted Interactive Machine Learning, AIML, Musicking, Telematic, Performance, Instrumentality
National Category
Music
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-86324 (URN); 10.21428/92fbeb44.d4146b2d (DOI); 2-s2.0-85150257797 (Scopus ID)
Conference
International Conference on New Interfaces for Musical Expression (NIME 2021), Shanghai, China and online, June 14-18, 2021
Note

Full text license: CC BY

Available from: 2021-07-09 Created: 2021-07-09 Last updated: 2023-10-24. Bibliographically approved
Visi, F. G. & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In: Miranda, Eduardo Reck (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity (pp. 771-798). Springer Nature
2021 (English) In: Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity / [ed] Miranda, Eduardo Reck, Springer Nature, 2021, p. 771-798. Chapter in book (Refereed)
Place, publisher, year, edition, pages
Springer Nature, 2021
National Category
Music
Research subject
Musical Performance
Identifiers
urn:nbn:se:ltu:diva-98312 (URN); 10.1007/978-3-030-72116-9_27 (DOI); 2-s2.0-85160498769 (Scopus ID)
Note

ISBN for host publication: 978-3-030-72115-2; 978-3-030-72118-3; 978-3-030-72116-9

Available from: 2023-06-14 Created: 2023-06-14 Last updated: 2023-06-14. Bibliographically approved
Simistira Liwicki, F., Liwicki, M., Perise, P. M., Visi, F. & Östersjö, S. (2020). Analysing Musical Performance in Videos Using Deep Neural Networks. In: Proceedings of the 1st Joint Conference on AI Music Creativity, AIMC, Stockholm, Sweden. Paper presented at 1st Joint Conference on AI Music Creativity, AIMC, Online, 2020, October 19-23.
2020 (English) In: Proceedings of the 1st Joint Conference on AI Music Creativity, AIMC, Stockholm, Sweden, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a method to facilitate labelling of music performance videos with automatic methods (3D Convolutional Neural Networks) instead of tedious labelling by human experts. In particular, we are interested in the detection of the 17 musical performance gestures generated during the performance (guitar playing) of musical pieces that have been video-recorded. In earlier work, these videos were annotated manually by a human expert according to the labels in the musical analysis methodology. Such a labelling method is time-consuming and would not scale to large collections of video recordings. In this paper, we use a 3D-CNN model from activity recognition tasks and adapt it to the music performance dataset following a transfer learning approach. In particular, the weights of the first blocks were kept and only the later layers, as well as additional classification layers, were re-trained. The model was evaluated on a set of 17 music performance gestures and reports an average accuracy of 97.9% (F1: 77.8%) on the training set and 85.7% (F1: 38.6%) on the test set. An additional analysis shows which gestures are particularly difficult and suggests improvements for future work.
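
The transfer-learning step summarised above — keeping the weights of the early blocks of an activity-recognition 3D-CNN and re-training only the later layers plus a new classification head for the 17 gesture classes — could look roughly like the sketch below. It uses torchvision's r3d_18 video model as a stand-in architecture; the paper does not state that this particular network or split point was used, so treat both as assumptions.

```python
# Minimal sketch of the transfer-learning setup: freeze early 3D-CNN blocks,
# retrain later blocks and a new 17-class head. Uses torchvision's r3d_18 as
# a stand-in architecture; the original paper's exact model may differ.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_GESTURES = 17  # gesture classes from the manually annotated videos

# Load the 3D-ResNet; pretrained Kinetics weights would normally be used here
# (e.g. weights="KINETICS400_V1"), omitted to keep the sketch self-contained.
model = r3d_18(weights=None)

# Freeze the stem and the first two residual blocks (assumed split point).
for module in (model.stem, model.layer1, model.layer2):
    for param in module.parameters():
        param.requires_grad = False

# Replace the classification head for the 17 performance gestures.
model.fc = nn.Linear(model.fc.in_features, NUM_GESTURES)

# Only the unfrozen parameters are optimised.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy clip batch:
# shape (batch, channels, frames, height, width).
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.randint(0, NUM_GESTURES, (2,))

logits = model(clips)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```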

Keywords
music performance, machine learning, guitar
National Category
Computer Sciences; Music
Research subject
Machine Learning; Musical Performance
Identifiers
urn:nbn:se:ltu:diva-82244 (URN); 10.5281/zenodo.4285388 (DOI)
Conference
1st Joint Conference on AI Music Creativity, AIMC, Online, 2020, October 19-23
Available from: 2021-01-11 Created: 2021-01-11 Last updated: 2022-10-28. Bibliographically approved
Visi, F., Östersjö, S., Ek, R. & Röijezon, U. (2020). Method Development for Multimodal Data Corpus Analysis of Expressive Instrumental Music Performance. Frontiers in Psychology, 11(576751)
2020 (English) In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 11, no 576751. Article in journal (Refereed). Published
Abstract [en]

Musical performance is a multimodal experience, for performers and listeners alike. This paper reports on a pilot study which constitutes the first step toward a comprehensive approach to the experience of music as performed. We aim to bridge the gap between qualitative and quantitative approaches by combining methods for data collection. The purpose is to build a data corpus containing multimodal measures linked to high-level subjective observations. This will allow for a systematic inclusion of the knowledge of music professionals in an analytic framework, which synthesizes methods across established research disciplines. We outline the methods we are currently developing for the creation of a multimodal data corpus dedicated to the analysis and exploration of instrumental music performance from the perspective of embodied music cognition. This will enable the study of the multiple facets of instrumental music performance in great detail, as well as lead to the development of music creation techniques that take advantage of the cross-modal relationships and higher-level qualities emerging from the analysis of this multi-layered, multimodal corpus. The results of the pilot project suggest that qualitative analysis through stimulated recall is an efficient method for generating higher-level understandings of musical performance. Furthermore, the results indicate several directions for further development regarding observational movement analysis and computational analysis of coarticulation, chunking, and movement qualities in musical performance. We argue that the development of methods for combining qualitative and quantitative data is required to fully understand expressive musical performance, especially in a broader scenario in which arts, humanities, and science are increasingly entangled. The future work in the project will therefore entail an increasingly multimodal analysis, aiming to become as holistic as music is in performance.

Place, publisher, year, edition, pages
Lausanne: Frontiers Media S.A., 2020
Keywords
embodied music cognition, movement analysis, chunking, stimulated recall, coarticulation, expressive music performance, multimodal analysis
National Category
Physiotherapy; Music
Research subject
Physiotherapy; Musical Performance
Identifiers
urn:nbn:se:ltu:diva-81687 (URN); 10.3389/fpsyg.2020.576751 (DOI); 000599575500001 (); 33343452 (PubMedID); 2-s2.0-85097849484 (Scopus ID)
Funder
Luleå University of Technology; Norrbotten County Council
Note

Validated;2021;Level 2;2021-01-08 (johcin)

Available from: 2020-11-29 Created: 2020-11-29 Last updated: 2022-02-10. Bibliographically approved
Liwicki, F., Upadhyay, R., Chhipa, P. C., Murphy, K., Visi, F., Östersjö, S. & Liwicki, M. Deep Neural Network approaches for Analysing Videos of Music Performances.
(English) Manuscript (preprint) (Other academic)
National Category
Other Mechanical Engineering; Computer Sciences
Research subject
Machine Learning; Musical Performance
Identifiers
urn:nbn:se:ltu:diva-94846 (URN); 10.48550/arXiv.2205.11232 (DOI)
Available from: 2022-12-14 Created: 2022-12-14 Last updated: 2023-09-05
Identifiers
ORCID iD: orcid.org/0000-0001-9685-4702