FONT AND SOUND AS PART OF A UNIFIED MULTIMEDIA EXPERIENCE

Stolbova Polina(1)
(1) Russian State Institute of Performing Arts

Abstract

Generative soundscapes play an essential role in shaping holistic audiovisual experiences, where sound and typography operate together as a unified expressive medium. Their interplay not only reinforces visual imagery but also deepens content perception and strengthens the audience's emotional engagement. The synergy between auditory and typographic elements creates layered compositions that leave a lasting impact on viewers, fostering unique multimodal experiences. Generative techniques, including synthetic audio generators, randomness-based algorithms, and parametric control, make it possible to design adaptive soundscapes that respond dynamically to user interaction or to shifts in visual context. These methods are increasingly applied in interactive media, digital art, web installations, and experimental design. Future development is expected to be driven by neural network–based machine learning, spatial (immersive) sound systems, and sensory interfaces. Such innovations will enable the creation of more expressive and personalized audiovisual environments, where sound and typography do not merely complement one another but function as integral narrative elements, shaping the rhythm, mood, and meaning of the overall composition.
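The combination of randomness-based algorithms and parametric control described above can be sketched minimally in Python using only the standard library. All function names and parameters below are illustrative assumptions, not taken from the article: a sine tone whose pitch drifts via a seeded random walk is blended with low-pass-filtered noise, with `noise_mix` acting as the external parameter a visual context or user interaction might drive.

```python
import math
import random
import struct
import wave

def generate_soundscape(duration_s=2.0, sample_rate=22050,
                        base_freq=220.0, noise_mix=0.3, seed=42):
    """Render a simple parametric soundscape: a slowly drifting sine tone
    blended with filtered random noise. Returns samples in [-1.0, 1.0]."""
    rng = random.Random(seed)            # seeded for reproducibility
    n = int(duration_s * sample_rate)
    samples = []
    drift = 0.0                          # random walk applied to pitch
    noise_state = 0.0                    # one-pole low-pass over white noise
    for i in range(n):
        t = i / sample_rate
        drift += rng.uniform(-0.5, 0.5)  # randomness-based pitch drift
        tone = math.sin(2 * math.pi * (base_freq + drift * 0.01) * t)
        noise_state += 0.02 * (rng.uniform(-1.0, 1.0) - noise_state)
        s = (1 - noise_mix) * tone + noise_mix * noise_state
        samples.append(max(-1.0, min(1.0, s)))  # clamp to valid range
    return samples

def write_wav(path, samples, sample_rate=22050):
    """Write the samples as a mono 16-bit PCM WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

samples = generate_soundscape(duration_s=0.5)
```

In an adaptive setting, a parameter such as `noise_mix` would be re-evaluated per buffer rather than per render, so the texture shifts continuously as the visual context changes.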


References

Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: The MIT Press.

Dunn, D. (1996). Music Language and Environment: Environmental Sound Works 1973–1985. Innova.

Eigenfeldt, A., & Pasquier, P. (2011). Negotiated content: Generative soundscape composition by autonomous musical agents in Coming Together: Freesound. International Conference on Innovative Computing and Cloud Computing.

Feld, S. (1982). Sound and Sentiment: Birds, Weeping, Poetics, and Sound in Kaluli Expression. Philadelphia, PA: University of Pennsylvania Press.

Finney, N., & Janer, J. (2009). Autonomous generation of soundscapes using unstructured sound databases.

Fournel, N. F. (2017). Procedural audio for video games: Are we there yet? [Presentation]. https://www.gdcvault.com/play/1012645/ProceduralAudio-for-Video-Games (Accessed: 28 February 2025).

Misra, A., & Cook, P. R. (2009). Toward synthesized environments: A survey of analysis and synthesis methods for sound designers and composers. International Conference on Mathematics and Computing.

Planet9. (n.d.). https://planet9.ru/ (Accessed: 28 February 2025).

Rogozinsky, G. G., Cherny, E., & Osipenko, I. (2016). Making mainstream synthesizers with Csound. ArXiv, abs/1610.04922.

Schafer, R. M. (1977). The Soundscape: Our Sonic Environment and the Tuning of the World.

Schwarz, D. (2011). State of the art in sound texture synthesis. Proceedings of the 14th International Conference on Digital Audio Effects (DAFx-11), Paris, France, September 19–23.

Serafin, S., & Serafin, G. (2004). Sound design to enhance presence in photorealistic virtual reality. Proceedings of the International Conference on Auditory Display (ICAD 04), Sydney, Australia, July 6–9.

Southworth, M. (1969). The sonic environment of cities. Environment and Behavior, 1(1).

Stevens, R. S., & Raybould, D. R. (2016). Game audio implementation: A practical guide using the Unreal Engine. UK: Focal Press.

Truax, B. (2001). Acoustic communication (2nd ed.). Norwood, NJ: Ablex.

Authors

Stolbova Polina
stolbova2001@mail.ru (Primary Contact)
Author Biography

Stolbova Polina

Lecturer
Faculty of Audiovisual Arts

ORCID: 0009-0007-9138-8336 

FONT AND SOUND AS PART OF A UNIFIED MULTIMEDIA EXPERIENCE. (2025). Foreign Languages in Uzbekistan (JOURNALFLEDU.COM), 63(4), 48–69. https://doi.org/10.36078/1757914163
