
VAnnotatoR: A Framework for Generating Multimodal Hypertexts

Note: We are not able to review papers.

PubDate: July 2018

Teams: Goethe University Frankfurt

Writers: Alexander Mehler; Giuseppe Abrami; Christian Spiekermann; Matthias Jostock

PDF: VAnnotatoR: A Framework for Generating Multimodal Hypertexts

Abstract

We present VAnnotatoR, a framework for generating so-called multimodal hypertexts. Based on Virtual Reality (VR) and Augmented Reality (AR), VAnnotatoR enables the annotation and linkage of semiotic aggregates (texts, images and their segments) with walk-on-able animations of places and buildings. In this way, spatial locations can be linked, for example, to temporal locations and Discourse Referents (ranging over temporal locations, agents, objects, instruments, etc. of actions) or to texts and images describing or depicting them, respectively. VAnnotatoR represents segments of texts or images, discourse referents and animations as interactive, manipulable 3D objects which can be networked to generate multimodal hypertexts. The paper introduces the underlying model of hyperlinks and exemplifies VAnnotatoR by means of a project in the area of public history, the so-called Stolperwege project.
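The abstract describes multimodal hypertexts as networks of linked objects: segments of texts and images, discourse referents, and walk-on-able animations. As an illustration only, and not the authors' actual hyperlink model, the following minimal sketch shows how such a graph of nodes and typed links could be represented; all class names, fields, and relation labels are hypothetical.

```python
"""Illustrative sketch of a multimodal hypertext graph.
All names here are hypothetical and not taken from VAnnotatoR itself."""

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """A networked object: a text or image segment, a discourse referent,
    or an animation of a place or building."""
    node_id: str
    kind: str       # e.g. "text_segment", "image_segment", "referent", "animation"
    payload: str    # segment content, referent label, or animation resource URI


@dataclass
class Hyperlink:
    """A typed, directed link between two nodes of the hypertext."""
    source: str     # node_id of the link's origin
    target: str     # node_id of the link's destination
    relation: str   # e.g. "describes", "depicts", "located_at"


@dataclass
class MultimodalHypertext:
    """The hypertext itself: nodes plus the links that network them."""
    nodes: List[Node] = field(default_factory=list)
    links: List[Hyperlink] = field(default_factory=list)

    def link(self, source: Node, target: Node, relation: str) -> None:
        self.links.append(Hyperlink(source.node_id, target.node_id, relation))


# Example: link a text segment to the animated building it describes.
ht = MultimodalHypertext()
text = Node("t1", "text_segment", "A text segment about a historical site.")
building = Node("b1", "animation", "models/example_building.glb")
ht.nodes.extend([text, building])
ht.link(text, building, "describes")
```

In a VR or AR setting, each node in such a graph would correspond to an interactive 3D object, and traversing the links would let a user move between texts, images, referents, and the animated places they refer to.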
