Metadata for VR Narratives
Richard Snyder and I submitted our essay, “Metadata for Access: VR and Beyond,” to the forthcoming volume of The Future of Text, edited by Frode Hegland.
As we write in our abstract:
Interacting with virtual reality (VR) environments requires multiple sensory modalities associated with time, sight, sound, touch, haptic feedback, gesture, kinesthetic involvement, motion, proprioception, and interoception––yet metadata schemas used for repositories and databases do not offer controlled vocabularies that describe VR works to visitors. This essay outlines the controlled vocabularies devised for the Electronic Literature Organization’s museum/library The NEXT. Called ELMS (Extended eLectronic Metadata Schema), this framework makes it possible for physically disabled visitors and those with sensory sensitivities to know what kind of experience to expect from a VR work so that they can make informed decisions about how best to engage with it. In this way accessibility has been envisioned so that all visitors are equally enabled to act upon their interest in accessing works collected at The NEXT.
For our proof of concept we focused on Caitlin Fisher’s Everyone at this party is dead / Cardamom of the Dead (2014), applying the controlled vocabulary that we developed along with Erika Fülöp, PhD, U of Toulouse; Jarah Moesch, RPI; and Karl Hebenstreit, Jr., MS, Dept. of Education, during the 2022 Triangle SCI meeting last month. Here is the way the work will be described in its exhibition space at The NEXT.
As you can see, visitors are alerted to the fact that the work contains fleeting text that appears briefly and then disappears. They also learn that text moves across the environment and that reading time is brief. Visitors who are color-blind may not be able to easily differentiate the color of the pins and of other objects, such as the cedar tree, many of which carry important information for navigating the experience. Visitors are also made aware that much of the poetic content is communicated over audio, and that the sound oscillates between soft and loud, which may be challenging for those with sound sensitivities. They know in advance that the work requires the use of a controller and that vibrations occur to signal that they have successfully targeted a green pin. Head movements are also required. Some of the work’s meaning is communicated spatially via the perception of artificial depth. Finally, visitors are alerted that they may experience internal sensations, such as nausea or dizziness, due to the VR experience.
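To make the idea concrete, the modalities described above could be captured in a structured record like the following. This is only an illustrative sketch: the field names, vocabulary terms, and record layout here are my assumptions for demonstration, not the published ELMS schema itself.

```python
# Hypothetical ELMS-style metadata record for Fisher's work.
# All field names and vocabulary terms are illustrative assumptions,
# not the actual ELMS controlled vocabulary.
elms_record = {
    "title": "Everyone at this party is dead / Cardamom of the Dead",
    "creator": "Caitlin Fisher",
    "year": 2014,
    "hardware": ["Oculus Rift headset", "hand controller"],
    "text": ["fleeting text", "moving text", "brief reading time"],
    "color": ["color-coded cues (pins, cedar tree)"],
    "sound": ["poetic content over audio", "volume oscillates soft/loud"],
    "haptics": ["controller vibration when a green pin is targeted"],
    "movement": ["head movement required"],
    "spatial": ["meaning conveyed via artificial depth"],
    "interoception": ["possible nausea or dizziness"],
}

def sensory_alerts(record):
    """List the sensory-modality categories a visitor should review
    before deciding how best to engage with the work."""
    modality_keys = ("text", "color", "sound", "haptics",
                     "movement", "spatial", "interoception")
    return [key for key in modality_keys if record.get(key)]

# A repository front end could surface these categories as alerts
# on the work's exhibition page.
print(sensory_alerts(elms_record))
```

A record like this would let the exhibition page generate the alerts automatically, so every collected work advertises its sensory demands in a consistent way.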
We end our essay by saying that our extended ELMS metadata schema starts from the premise that all visitors to The NEXT need some type of accommodation to access the born-digital works held in its collections, whether that is information about the hardware a hypertext novel needs to function or the sensory modalities it evokes as it is experienced. Visitors who use screen readers, for example, should know in advance that a net art piece requires sight and that they will need this technology to access it; likewise, those who do not have access to an Oculus Rift headset will be informed when a work, like Fisher’s, requires one. In this way all visitors are equally enabled to act upon their interest in accessing works collected at The NEXT.