Networked music performance

From Wikipedia, the free encyclopedia

A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room.[1] These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes.[2] Participants may be connected by "high fidelity multichannel audio and video links"[3] as well as MIDI data connections[1] and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression.[2] Remote audience members and possibly a conductor may also participate.[3]

History

One of the earliest networked music performance experiments was composer John Cage's 1951 piece "Imaginary Landscape No. 4 for Twelve Radios".[4] The piece "used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other".[4][5]

In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music.[6]

The 1990s saw several important experiments in networked performance. In 1993, the University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet.[3] The Hub, a band formed by original members of The League of Automatic Music Composers, experimented in 1997 with sending MIDI data over Ethernet to distributed locations.[6] However, "it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities".[6] In 1998, a three-way audio-only performance between musicians in Warsaw, Helsinki, and Oslo was dubbed "Mélange à trois".[3][7] These early distributed performances all faced problems such as network delay, difficulty synchronizing signals, echo, and the non-immersive quality of the acquired and rendered audio and video.[3]

The development of high-speed, over-provisioned internet backbones, such as Internet2, made high-quality audio links possible beginning in the early 2000s.[4] One of the first research groups to take advantage of the improved network performance was the SoundWIRE group at Stanford University's CCRMA.[8] That work was soon followed by projects such as the Distributed Immersive Performance experiments,[3] SoundJack,[4] and DIAMOUSES.[2]

Awareness in musical performance

Workspace awareness in a face-to-face situation is gathered through consequential communication, feedthrough, and intentional communication.[9] A traditional music performance setting is an example of very tightly coupled, synergistic collaboration in which participants have a high level of workspace awareness. "Each player must not only be conscious of his or her own part, but also of the parts of other musicians. The other musicians' gestures, facial expressions and bodily movements, as well as the sounds emitted by their instruments [are] clues to meanings and intentions of others".[10] Research has indicated that musicians are also very sensitive to the acoustic response of the environment in which they are performing.[3] Ideally, a networked music performance system would facilitate the high level of awareness that performers experience in a traditional performance setting.

Technical issues in networked music performance

Bandwidth demand, latency sensitivity, and a strict requirement for audio stream synchronization are the factors that make networked music performance a challenging application.[11] These factors are described in more detail below.

Bandwidth

High definition audio streaming, which is used to make a networked music performance as realistic as possible, is considered to be one of the most bandwidth-demanding uses of today's networks.[11]
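
To make the bandwidth demand concrete, the raw bitrate of uncompressed PCM audio is simply sample rate × bit depth × channel count. The sketch below illustrates this arithmetic; the specific figures (48 kHz/24-bit stereo, 96 kHz/24-bit 8-channel) are illustrative assumptions, not values from the article.

```python
# Back-of-the-envelope bitrate for uncompressed audio streaming.
# Figures used below are illustrative, not taken from the article.

def audio_bitrate_mbps(sample_rate_hz, bit_depth, channels):
    """Raw PCM bitrate in megabits per second (ignores packet/container overhead)."""
    return sample_rate_hz * bit_depth * channels / 1e6

stereo = audio_bitrate_mbps(48_000, 24, 2)    # CD-quality-plus stereo: ~2.3 Mbit/s
surround = audio_bitrate_mbps(96_000, 24, 8)  # high-resolution 8-channel: ~18.4 Mbit/s
print(f"stereo: {stereo:.1f} Mbit/s, 8-channel: {surround:.1f} Mbit/s")
```

Even a modest stereo stream thus needs sustained megabit-class throughput in each direction, before any overhead, which is why uncompressed multichannel streaming stresses network capacity.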

Latency

One of the major issues in networked music performance is the latency introduced into the audio as it is processed by each participant's local system and sent across the network. For interaction in a networked music performance to feel natural, latency generally must be kept below 30 milliseconds, the approximate bound of human perception.[12] Too much delay in the system makes performance very difficult, since musicians coordinate by adjusting their playing to the sounds they hear from the other players.[1] However, the characteristics of the piece being played, the musicians, and the types of instruments used ultimately define the tolerance.[3] Synchronization cues may be used in a networked music performance system designed for long-latency situations.[1]
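
A quick calculation shows why this bound limits geographic reach. The 30 ms figure comes from the article; the speed of light in optical fiber (roughly two-thirds of c, about 200,000 km/s) and the processing-overhead figure are illustrative assumptions.

```python
# Rough one-way latency budget for a networked music performance link.
# Assumed: signals travel ~200,000 km/s in fiber (about 2/3 the speed of light).

SPEED_IN_FIBER_KM_PER_MS = 200.0  # 200,000 km/s expressed as km per millisecond

def propagation_delay_ms(distance_km):
    """One-way propagation time through optical fiber, ignoring routing detours."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

def max_distance_km(budget_ms, processing_ms):
    """Reachable distance after capture, buffering, and playback overhead is spent."""
    return (budget_ms - processing_ms) * SPEED_IN_FIBER_KM_PER_MS

print(f"~4000 km coast-to-coast link: {propagation_delay_ms(4000):.0f} ms one way")
print(f"reach with 30 ms budget, 10 ms overhead: {max_distance_km(30, 10):.0f} km")
```

With even 10 ms consumed by sound cards, buffering, and playback, physics alone confines natural-feeling interaction to links of a few thousand kilometers at best.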

Audio stream synchronization

Both end systems and networks must synchronize multiple audio streams from separate locations to form a consistent presentation of the music.[11] This is a challenging problem for today's systems.
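
One common strategy, sketched below, is to timestamp audio blocks against a shared clock at capture time and delay every stream to a common playout deadline at the receiver. This is an illustrative sketch, not the method of any particular system named in the article; the site names and delay figures are assumptions.

```python
# Sketch of deadline-based stream synchronization: every audio block is played
# a fixed interval after its capture time (measured against a shared clock,
# e.g. via NTP), so blocks captured simultaneously at different sites sound
# together, provided the interval exceeds the worst network + processing delay.

def playout_schedule(blocks, playout_delay_ms):
    """Map (site, capture_time_ms) blocks to (site, playout_time_ms)."""
    return [(site, t_capture + playout_delay_ms) for site, t_capture in blocks]

# Blocks captured at t=1000 ms at three sites, arriving with unequal delays:
blocks = [("Warsaw", 1000), ("Helsinki", 1000), ("Oslo", 1000)]
print(playout_schedule(blocks, 60))  # all three scheduled for t=1060 ms
```

The design trade-off is direct: a larger playout delay tolerates more network jitter but consumes the latency budget discussed above.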

Objectives of a networked music performance system

The objectives of a networked music performance can be summarized as:

  • It should allow musicians and possibly audience members and/or a conductor to collaborate from remote locations
  • It should create a realistic immersive virtual space for synchronous, interactive performance
  • It should support workspace awareness that allows participants to be aware of the actions of others in the virtual workspace and facilitate all forms of communication

Current research

SoundWIRE at CCRMA, Stanford University

The SoundWIRE research group explores several research areas in the use of networks for music performance including: multi-channel audio streaming, physical models and virtual acoustics, the sonification of network performance, psychoacoustics, and networked music performance practice.[7] The group has developed a software system, JackTrip, that supports multi-channel, high quality, uncompressed streaming audio for networked music performance over the internet.[7]

The Sonic Arts Research Centre

The Sonic Arts Research Centre (SARC) at Queen's University Belfast has carried out network performances since 2006 and has been active in using networks as both collaborative and performance tools.[13] The network team at SARC, led by Prof Pedro Rebelo and Dr Franziska Schroeder, works with varying set-ups of performers, instruments, and compositional strategies. A group of artists and researchers has emerged around this field at SARC, helping to create a broader knowledge base and focus for activities. As a result, since 2007 SARC has had a dedicated team of staff and students with knowledge and experience of network performance, an area SARC refers to as "distributed creativity".[citation needed]

Regular performances, workshops, and collaborations with institutions such as the SoundWIRE group at CCRMA, Stanford University, and Rensselaer Polytechnic Institute (RPI),[14] led by composer and performer Pauline Oliveros, as well as with the University of São Paulo, have helped strengthen this emerging community of researchers and practitioners. The field is related to research on distributed creativity.[citation needed]

Distributed Immersive Performance (DIP) experiments

The Distributed Immersive Performance project is based at the Integrated Media Systems Center at the University of Southern California.[15] Their experiments explore the challenges of creating a seamless environment for remote, synchronous collaboration.[3] The experiments use 3D audio with correct spatial sound localization as well as HD or DV video projected onto wide screen displays to create an immersive virtual space.[3] There are interaction sites set up at various locations on the University of Southern California campus and at several partner locations such as the New World Symphony in Miami Beach, Florida.[3]

DIAMOUSES

The DIAMOUSES project is coordinated by the Music Informatics Lab at the Technological Educational Institute of Crete in Greece.[16] It supports a wide range of networked music performance scenarios with a customizable platform that handles the broadcasting and synchronization of audio and video signals across a network.[2]

Wireless Music Studio (WeMUST)

The A3Lab team at the Polytechnic University of the Marches conducts research on the use of the wireless medium for uncompressed audio networking in the NMP context.[17] A mix of open-source software, ARM platforms, and dedicated wireless equipment has been documented, especially for outdoor use, where buildings of historical importance or difficult environments (e.g., the sea) can be incorporated into the performance. A premiere of the system was conducted with musicians playing a Stockhausen composition on different boats off the coast of Ancona, Italy. The project also aims at shifting music computing from laptops to embedded devices.[18]

References

  1. ^ a b c d Lazzaro, J.; Wawrzynek, J. (2001). "Proceedings of the 11th international workshop on Network and operating systems support for digital audio and video - NOSSDAV '01". NOSSDAV '01: Proceedings of the 11th international workshop on Network and operating systems support for digital audio and video. ACM Press New York, NY, USA. pp. 157–166. doi:10.1145/378344.378367. ISBN 1581133707.
  2. ^ a b c d Alexandraki, C.; Koutlemanis, P.; Gasteratos, P.; Valsamakis, N.; Akoumianakis, D.; Milolidakis, G.; Vellis, G.; Kotsalis, D. (2008). "Towards the implementation of a generic platform for networked music performance: The DIAMOUSES approach". EProceedings of the ICMC 2008 International Computer Music Conference (ICMC 2008). pp. 251–258.
  3. ^ a b c d e f g h i j k Sawchuk, A.; Chew, E.; Zimmermann, R.; Papadopoulos,C.; Kyriakakis,C. (2003). "Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence - ETP '03". ETP '03: Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence. ACM Press New York, NY, USA. pp. 110–120. doi:10.1145/982484.982506. ISBN 1581137753.
  4. ^ a b c d Alexander, C; Renaud, A.; Rebelo, P. (2007). "Networked music performance: state of the art". AES 30th International Conference. Audio Engineering Society.
  5. ^ Pritchett, J. (1993). The Music Of John Cage. Cambridge University Press, Cambridge, UK.
  6. ^ a b c Bischoff, J.; Brown, C. "Crossfade". Retrieved 2009-11-26.
  7. ^ a b c "SoundWIRE research group at CCRMA, Stanford University". Archived from the original on 2004-02-22. Retrieved 2009-11-23.
  8. ^ Chafe, C.; Wilson, S.; Leistikow, R.; Chisholm, D.; Scavone, G. (2000). "A simplified approach to high quality music and sound over IP". Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-00).
  9. ^ Gutwin, C.; Greenberg, S. (2001). "The Importance of Awareness for Team Cognition in Distributed Collaboration". Report 2001-696-19. Dept Computer Science, University of Calgary, Alberta, Canada. pp. 1–33.
  10. ^ Malhotra, V. (1981). "The Social Accomplishment of Music in a Symphony Orchestra: A Phenomenological Analysis". Qualitative Sociology. 4 (2): 102–125. doi:10.1007/bf00987214. S2CID 145680081.
  11. ^ a b c Gu, X.; Dick, M.; Noyer, U.; Wolf, L. (2004). "IEEE Global Telecommunications Conference Workshops, 2004. Globe Com Workshops 2004". Global Telecommunications Conference Workshops, 2004. GlobeCom Workshops 2004. IEEE. pp. 176–185. doi:10.1109/GLOCOMW.2004.1417570. ISBN 0-7803-8798-8.
  12. ^ Kurtisi, Z; Gu, X.; Wolf, L. (2006). "Enabling network-centric music performance in wide-area networks". Communications of the ACM. 49 (11): 52–54. doi:10.1145/1167838.1167862. S2CID 1245128.
  13. ^ Schroeder, Franziska; Rebelo, Pedro (August 2007). "Addressing the Network: Performative Strategies for Playing Apart". Paper presented at the International Computer Music Conference (ICMC 2007), Denmark: 133–140. Retrieved 31 August 2022.
  14. ^ "About Us – The Center For Deep Listening". Rensselaer Polytechnic Institute. Retrieved 31 August 2022.
  15. ^ "Distributed Immersive Performance". Retrieved 2009-11-23.
  16. ^ "DIAMOUSES". Retrieved 2009-11-22.
  17. ^ "A3Lab - WeMUST Research page". Retrieved 2015-02-24.
  18. ^ Gabrielli, L; Bussolotto, M; Squartini, S (2014). "Reducing the Latency in Live Music Transmission with the BeagleBoard xM Through Resampling". EDERC 2014, Milan, Italy. IEEE.