Documenting Life: Videography and Common Sense. Proceedings: IEEE International Conference on Multimedia and Expo (ICME'03), July 6-9, 2003, Vol. 2, pp. 197-200.
This paper introduces a model for producing common sense metadata during video capture and describes how this technique can have a positive impact on content capture, representation, and presentation. Metadata entered into the system at the moment of capture is used to generate suggestions designed to help the videographer decide what to shoot, how to compose a shot and how to index their video material to best support their communication requirements. An approach and first experiments using a common sense database and reasoning techniques to support a partnership between the camera and videographer during video capture are presented.
Programming Narrative. Proceedings: IEEE Symposium on Visual Languages, September 1997, pp. 380-386.
With the introduction of the computer, narrative experiences can be found in new media applications as diverse as MUDs, arcade games and 3D immersive environments -- and new applications are being created all the time. The forms these narrative experiences take are as diverse as their mediums, from the experiential stories of MUDs to the intricate branching plot paths of adventure games. But as with the introduction of television after decades of radio, a new medium calls for a new aesthetic, a new method of writing for that medium. Good functional models are needed to help define this aesthetic, and specialized tools are required to help build the work. The writing tool described in this paper, Agent Stories, is software currently under development for visually designing non-linear cinematic stories for new digital media.
Agent Stories: Authoring Computational Cinematic Stories. MENO Workshop on Narrative and Hypermedia, April 1997.
Writers of stories for both print and screen have a deeply ingrained tendency to construct their stories in ways geared toward experiencing the finished work in a linear fashion. With the exception of some videodisc experiments and a few recent video game applications, stories for the screen are usually written, produced, assembled and viewed in and for the linear form. Although viewing a story must always be linear, as a linear sequence of pictures and sounds conveying some meaning, it should be possible to structure and produce a story in a non-linear way for the purpose of providing many different linear play outs.
Do Story Agents Use Rocking Chairs? The Theory and Implementation of One Model for Computational Narrative. Proceedings: Fourth ACM International Conference on Multimedia (Boston, Massachusetts), November 18-22, 1996, pp. 317-328.
Writers of stories for both print and screen have a deeply ingrained tendency to construct stories for an audience to experience the finished work in a fixed linear fashion. Although there are starting to be some examples of fixed non-linear multimedia works, viewing a cinematic story must always be linear, as a linear sequence of pictures and sounds conveying some meaning. However, it should be possible to structure a story non-sequentially for the purpose of providing many different sequential playouts. Computational processes can assist and affect both production and viewing. With this purpose in mind, this paper examines cinematic story construction through the use of computer based storytelling systems. Questions guiding this research are:
M-Views: A System for Location-Based Storytelling. ACM UbiComp 2003 (Seattle, WA, October 12-15, 2003).
M-Views is a system for creating and participating in context-sensitive, mobile cinematic narratives. A Map Agent detects participant location in 802.11-enabled space and triggers a location-appropriate video message, which is sent from the server to the participant's "in" box.
Very Distributed Media Stories: Presence, Time, Imagination. 4th International Euro-Par Conference on Parallel Processing, Proceedings, June 1998, pp. 47-54.
The action of stories is always grounded and contextualized in a specific place and time. For centuries, artists seeking places worthy of representation have found inspiration in both the natural landscape and in man-made surrounds. This inspiration traditionally dwells on the scenic aspects of place and situation, in styles ranging from photorealistic to impressionistic. Sometimes, as in Australian aboriginal "dreamtime maps," the real, the historical, and the spiritual components of a place are simultaneously depicted with equal weightings. Sometimes, as in road maps and contour maps, super-simplified representations are enhanced with integrated or overlaid technical measurements; constructed artifacts, such as roads and airports, share equal billing with natural landmarks, such as lakes and rivers. The scale, focus, point-of-view, and narrative content of landscapes are chosen and manipulated to suit the artist's (and the audience's) specific purposes: they embody affordances which exert great influence over a work's final use...
Encounters in DreamWorld: A Work In Progress. Proceedings: CaiiA, University of Wales College, Wales, July 1997.
Digital media and networked, two-way communication channels are rapidly transforming our access to knowledge, our inventions, and our artistic messages. In the past, artistic practice focused on the construction of fixed, static "expressive objects" which stood as intermediaries between the artist and her audience. Today, technology allows us to make computer-assisted artworks "with a sense of themselves." These works incorporate behaviors and real-time responses patterned after those of living things: creatures, communities, and ecosystems.
Dexter and the Evolving Documentary. Proceedings: 4th Annual ACM Conference on Multimedia, November 1996, pp. 441-442.
The "Evolving Documentary" encapsulates a story concept and digital presentation methods for a class of media stories. This story form combines an extensible collection of media materials and content annotation. The form is particularly well suited to on-going stories -- wars, political campaigns, urban change -- as well as to biographical stories to which people other than the author might contribute. Dexter is a Java Interface tool which takes as input the name and descriptors of the content elements as well as the graphical elements of the interface, and generates a state sensitive interface map which the viewer can use to coherently navigate bits and pieces of content. The graphic design is based on the idea of relating state and spreading energy to a descriptive mechanism. The method maximizes the impression of continuity between segments.
ConText: towards the Evolving Documentary. Proceedings: 3rd Annual ACM Conference on Multimedia, November 1995, pp. 381-389.
The advent of digital technologies has enabled the emergence of a new type of database or "Media Bank" that supports an evolving collection of media elements. This opportunity suggests the design of ConText, a system by which content, description, and presentation are separated into interconnected pieces, redefining the relationship between the story, the viewer and the author. For the viewer, repetition and revisitation of the story experience is encouraged and no constraints are placed on the duration of a session. For the author, the tasks of content gathering and sequencing take on new dimensions because the content base is extensible, and the author is separated programmatically from the exponentially complex task of explicitly sequencing the material for each viewer visitation. We call this new form the "Evolving Documentary." A new and crucial authorial role becomes defining the core methodology that governs the story presentation and viewer interaction. The foundations of our proposed model have been developed and implemented in conjunction with an evolving story about urban change in Boston. This story features a $7 billion public works project to rebuild the Central Artery (I-95) and the project's impact on surrounding neighborhoods. The project will be ongoing through 2004, which makes it a practical story for an Evolving Documentary investigation.
Weather Stories: New Dimensions in Collaboration. ArtSci2002, New York, December 6-8, 2002.
Digital Storytelling, Multimedia and Digital Arts. Multimedia XXI, Eurographics 2002, Lisbon, Portugal.
1001 Electronic Story Nights: Interactivity and the Language of Storytelling. Australian Film Commission's Language of Interactivity Conference, Summer 1996.
This conference focuses on interactivity. I have worked with interactive cinematic projects since 1980. In this talk, I will discuss some of my current thinking about "the language of interactivity," and show you some of the recent work we have been doing at the MIT Media Lab. I am not concerned that you understand every detail about the inner workings of these pieces -- some of them will be available out in the lobby later for your examination. In discussing these examples, I will emphasize general features and concepts. If any of you have burning questions during this "show and tell," wave your hand around and I'll try to take the occasional question.
Guided navigation of virtual environments. Proceedings: 1995 ACM Symposium on Interactive 3D Graphics, Monterey, CA, USA, pp. 103-104.
This paper presents a new method for navigating virtual environments called "The River Analogy." This analogy provides a new way of thinking about the user's relationship to the virtual environment, guiding the user's continuous and direct input within both space and time to allow a more narrative presentation. The paper then presents the details of how this analogy was applied to a VR experience that is now part of the permanent collection at the Chicago Museum of Science and Industry.
Cutaneous Grooves: Composing for the Sense of Touch. Proceedings: 2002 Conference on New Instruments for Musical Expression (NIME02), Dublin, Ireland, May 24-26, 2002.
This paper presents a novel coupling of haptics technology and music, introducing the notion of tactile composition, or aesthetic composition for the sense of touch. A system that facilitates the composition and perception of intricate, musically structured spatio-temporal patterns of vibration on the surface of the body is described. An initial test of the system in a performance context is discussed. The fundamental building blocks of a compositional language for touch are considered.
Visual Interfaces for Shareable Media. ISEA 2000 International Symposium on Electronic Art, December 2000.
Shareable Media is an effort to provide a coherent structure that will facilitate distributed collaboration and communication among filmmakers, storytellers, artists and audiences. The extensible architecture of the Shareable Media Project has the capability to deploy multiple applications targeted towards a variety of uses and audiences. The visual interfaces that have been developed for current Shareable Media applications illustrate our intention to provide easy-to-use tools and effective content visualizations that operate coherently together to form new and engaging video-based story forms. These applications facilitate users' creation of multiple sequences from a video clip database, and explore the juxtaposition of heterogeneous elements, the integration of video and text, and the description of edit structure. PlusShorts uses punctuation as an iconic system for describing and augmenting edited video, where the punctuation symbols are used to detail the structure of a video sequence and inspire dialogue about the essence of that structure. Individeo features a browser that visualizes the sharing of video content through a dynamic, sociable interface. Individeo also allows editing of video integrated with text, allowing media-based dialogues and collaborative cinematic productions. In combination with multifarious content, these interfaces provide prototypes of new story forms where the edited structure and the shared context are made explicit. We are partnering with several physical storytelling communities from different countries, and we wish to explore emergent storytelling techniques, virtual collaboration, community-oriented self-organization and global communication.
Documenting Digital Dialogues: Engaging Audience in the Construction of a Collective Documentary Across Time and Space. Proceedings: TIDSE '03 (Darmstadt, Germany, March 24-26, 2003), Springer-Verlag, pp. 248-259.
Stories naturally reveal themselves to us through space and over time. Today's digitally networked society provides a fertile environment for the exploration of narrative forms in new and diverse ways. The Digital Dialogues Symposium provided the setting for a series of experimental approaches to the recording and documenting of an event in time. Using custom designed software, participants collaboratively constructed their interpretations and impressions of the conference events, using interfaces that encouraged real-world discussion and supported continuing online dialogue.
Animist Interface: Mapping Character Animation to Computational State. IJCAI Workshop on Animated Interface Agents, Nagoya, Japan, August 1997.
This paper describes a series of experiments mapping the state of a computer system to emotional behaviour in an animated character. Rather than focussing on intelligence in the interface agent, the projects seek ways to use the agent as an avatar for the computer. Facts, recommendations and statistics about the host-system affect behaviour through the subtle cues of gaze-direction, body language, and effects on the environment around the character. The domains of email-filtering, web-browsing and system-performance are the subjects of current explorations, using modified Unix web- and email-servers and a Windows NT client.
An Extensible Architecture for Multiple Applications with Shareable Media. International Workshop on Networked Appliances, November 2000.
Shareable Media is a network-based system that explores how a community of users can share stories and express ideas through a shared database of digital video clips. To adapt this to the rapidly evolving Internet, we need to design and experiment with an extensible architecture for Shareable Media, which has the capability to deploy multiple applications on wired and wireless devices through connections of both broad and narrow bandwidth. The current architecture consists of three modules: Application Manager, Shareable Media Framework, and Storage Manager. Through the Application Manager, application designers register, test, release, and monitor their products on top of the architecture. The Shareable Media Framework provides application programming interfaces that allow user-defined applications to access data in the system. Both modules retrieve and store content through the Storage Manager. Currently, three applications are under development for the extensible architecture: PlusShorts, Individeo, and M-Views.
Tangible Viewpoints: A Physical Approach to Multimedia Stories. Proceedings of ACM Multimedia '02 (Juan-les-Pins, France, December 1-6, 2002), ACM Press, pp. 153-160.
We present a multimedia storytelling system that couples a tangible interface with a multiple viewpoint approach to interactive narratives. Over the centuries, stories have moved from the physical environment (around campfires and on the stage), to the printed page, then to movie, television and computer screens. Today, using wireless and tag sensing technologies, storytellers are able to bring digital stories back into our physical environment. The Tangible Viewpoints system explores how physical objects and augmented surfaces can be used as tangible embodiments of different character perspectives in an interactive tale. These graspable surrogates provide a direct mode of navigation to the story world, a means of bridging the gap between cyberspace and our physical environment as we engage with digital stories. The system supports stories told in a range of media, including audio, video, still image and text.
Tangible Viewpoints: A Physical Interface for Exploring Character-Driven Narratives. Conference Abstracts and Applications of SIGGRAPH '02 (San Antonio, Texas, USA, July 21-26, 2002), ACM Press.
The Tangible Viewpoints project explores how physical objects and augmented surfaces can be used as tangible embodiments of different character perspectives in a multiple point-of-view interactive narrative. These graspable surrogates provide a more direct mode of navigation to the story world, bringing us closer to bridging the gap between the separate realms of bits and atoms within the field of digital storytelling.
Tangible Viewpoints: Physical Navigation through Interactive Stories. Proceedings of the Participatory Design Conference (PDC '02) (Malmo, Sweden, June 23-25, 2002), CPSR, pp. 401-405.
Over the centuries, stories have moved from the physical environment (around campfires and on the stage), to the printed page, then to movie, television and computer screens. Today, using wireless and tag sensing technologies, researchers and storytellers are able to bring digital stories back into our physical environment. The Tangible Viewpoints system explores how physical objects and augmented surfaces can be used as tangible embodiments of different character perspectives in an interactive tale. These graspable surrogates provide a direct mode of navigation to the story world, a means of bridging the gap between cyberspace and our physical environment as we engage with digital stories.
GuideShoes: Navigation based on musical patterns. CHI '99 extended abstracts on Human Factors in Computing Systems, Pittsburgh, PA, May 15-20, 1999, pp. 266-267.
One of the most ubiquitous tasks we have to perform is finding our way to unknown destinations. We are left alone to deal with maps, ask people for directions, and understand their instructions. How can we avoid this frustrating and time-consuming process? How can we help all the people who can't or won't use printed or spoken instructions (little kids, the visually impaired, or users occupied with other urgent tasks)? This paper describes GuideShoes, a wearable system that uses aesthetic forms of expression for direct information delivery. It is a first tool to utilize music as an information medium and musical patterns as a means for navigation in an open space, such as a street. GuideShoes provides musical navigational cues in the background, thus reducing the problem of cognitive information overload. The system consists of a pair of shoes, equipped with a GPS, wireless modem, MIDI synthesizer, and CPU, and a base station that acts as the central unit for data processing.
Aesthetic Forms of Expression as Information Delivery Units. Proceedings: CSNLP-8 Workshop, Dublin City University, Dublin, August 1999.
In this paper we explore the hypothesis that aesthetic forms of expression - such as music, painting, video - can be used for direct information delivery. In contrast to text or verbal narrative techniques, which require a conscious act of transcoding, these aesthetic forms stimulate a more direct, emotional response. If shown to be viable, such a hypothesis could open new channels for the delivery of various types of information, providing us with a background information channel in situations of information overload and leaving our foreground attention concentrated on the more thought-demanding tasks.
The Birth of "Another Alice". COMPUTERS AND FUN 4, King's Manor, University of York, UK, November 29, 2001.
"Another Alice" is an experimental fiction video story, designed solely for a new mobile media platform, M-views. The M-views platform includes an iPaq based PDA, a GPS receiver, an 802.11b wireless card and software agents. Optimized for video, the device facilitates location-aware story making and playback. Compared to traditional media platforms such as TV, cinema, and streaming media, M-views has two unique features: (1) it knows the viewer's location, and (2) it can receive streaming video from an established 802.11b wireless network. M-views provides story creators an opportunity to construct location-aware mobile video stories. In order to trigger the stories, the viewer needs to become more actively involved, either by going to the location of the next clip or by activating an object.
M-Studio: an Authoring Application for Context-Aware Multimedia. ACM Multimedia 2002, Juan-les-Pins, France, December 1-6, 2002, pp. 351-354.
Broadband wireless networks coupled with handheld computers and appropriate sensing technologies provide a channel for the delivery of mobile cinema. Mobile cinema changes the consumer experience of motion picture stories in that discrete cinematic sequences are delivered based on the consumer's location and a story-real-time metric. The M-Studio authoring tool helps mobile story creators design, simulate and adjust mobile narratives. The tool provides the author with a graphical manipulation interface for linking content with a specific geographical space, and a simulator that allows the author to evaluate and iterate the content for continuity of story threads as they may be presented. The tool directly generates the code that is required for the server to deliver the cinematic sequences appropriately. This tool is discussed in the context of the two mobile narratives that have been created.
Wonderland in Pocket. International Symposium on Electronic Art, Nagoya, Japan, October 2002.
M-Views is an experimental video story-making and sharing system designed for distribution to mobile hand-held video capable devices. Video stories are constructed using the M-Views authoring tool, which allows makers to preview how segments will be sequenced based on any possible navigation path of the viewer. Inspired by environmental artworks and multiple perspective films, narratives designed for the M-Views system tend to incorporate the opportunity for the story to merge with the architectural surroundings and, in the future, with the activity of the participant. As we explore the mobile story form of the future, the following questions guide our inquiry: What story structures/gaming strategies most actively engage the audience as a participant in location-based video drama? Given a few prototypes and a network, will a community of makers emerge who want to develop this genre of video art? What special tools does the mobile story-maker/artist need to create engaging location-based cinema? In this paper, we describe the M-Views platform and our experience, progress and results in two experimental story productions.
The Stratification System: A Design Environment for Random Access Video. Lecture Notes in Computer Science, No. 712, November 1992, pp. 250-261.
The content of a movie is produced in two different types of design environments. The first is the design environment of shooting, where a camera is used to capture what is happening at a particular place and time. The second is the design environment of editing, where the rushes are interpreted relative to a movie maker's intent. Annotation of the video stream allows the movie maker to make decisions based on the specific content of video and, in the best case, enables a machine to help in that process.
Wearable Cinema/Wearable City: bridging physical and virtual spaces through wearable computing. IMAGINA 2000, invited presentation, Monte Carlo, January 31 - February 3, 2000.
Wearable computing provides a means to transform the architecture and the space surrounding us into a memory device and a storytelling agent. We assembled a wearable computer specifically aimed at mapping architecture into an experience comparable to that of watching a movie from inside the movie set, or being immersed in an information city whose constructions are made of words, pictures, and living bits. The wearable is outfitted with a private-eye display which shows video and graphics superimposed on the user's real-surround view. It uses real-time computer vision techniques for location finding and object recognition. We describe two applications. Wearable City is the mobile version of a 3D WWW browser we created called "City of News." It grows an urban-like information landscape by fetching information from the web, and facilitates the recollection of information by creating associations of information with geography. Wearable Cinema extends the previous system to generate an interactive audio-visual narration driven by the physical path of the wearer in a museum space.
City of News: cataloguing the World Wide Web through Virtual Architecture. SIGGRAPH 99, Visual Proceedings, Emerging Technologies, Los Angeles, CA, August 8-13, 1999.
"How do we explore the digital box of fragments that pastes together disjunctive arrays of images and sets of data into a seemingly continuous display?"... We "need to develop new modes of perception with which to receive, absorb, criticize, and produce new combinations of information." M. Christine Boyer In a 1995 article that appeared in "Le Monde diplomatique," the French theorist of technology Paul Virilio describes the phenomenon of the loss of orientation experienced by the exponentially increasing crowd which is relentlessly enthralled in cyberspace. Virilio observes that the construction of information superhighways, which are globalized and instantaneously updated, presents us with a threat, a menace to our perception of what reality is, of what it means for us to exist, as individuals, here and now. Induced by the splitting of the sensible world into real and virtual in parallel with the "invention of the perspective of real-time," this threat causes a shock, a "mental concussion," that hooks the happenings of events to a globalized monorail track. We have extended Virilio's concern to the varied world of the Net, and observed that for many, the Web is a wasteland of information, a Babel without dictionary, an encyclopaedia with no table of contents, an unstructured territory without a map...
Media actors: characters in search of an author. IEEE International Conference on Multimedia Computing and Systems, June 1999, vol. 2, pp. 439-446.
Interactive experiences benefit from natural interactions, compelling communication, and ease of implementation. We show how, according to these principles, interactive media architectures can be categorized as scripted, responsive, learning, or behavioral, and give examples of applications in each category. We then propose the perceptive architecture based on media actors. We endow media objects-expressive text, photographs, movie clips, audio, and sound clips-with coordinated perceptual intelligence, behaviors and personality. Such media actors are able to engage the public in an encounter with a virtual character which expresses itself through one or more of these agents. The result is a novel method for interactive media modeling which finds applications in multimedia, electronic art, interactive performance and entertainment.
Digital Circus: a computer-vision based interactive Virtual Studio. IMAGINA, Monte Carlo, Monaco, January 18, 1999.
We have built a low-cost virtual studio which does not need a blue screen to operate. Hence, such a setup can be used at home and allows production to be distributed outside the physical TV studio. Our studio is based on Pfinder, a real-time computer-vision body tracking and gesture recognition system. We also use Pfinder to enable interaction among the composited participants and the objects and creatures in the virtual setting. To demonstrate our system, we have built a digital circus in which multiple individual participants can connect from remote locations, see all their images composited in the virtual circus, and interact with the objects on the virtual set. Other possible applications range from full body real-time teleconferencing to remote collaborative work, networked performance, entertainment, and education.
Responsive Portraits. The Eighth International Symposium on Electronic Art, Chicago, IL, September 1997.
Modern techniques for high resolution, still-image display offer new expressive possibilities for photographic portraiture and exhibition. "Responsive portraits" challenge the notion of static photographic portraiture as the unique, ideal visual representation of its subject. Editors are usually confronted with choosing ONE ideal portrait from a limited set of pictures which represent poses, gestures, and expressions which ALL contribute to defining the character. In our view the entire set of a subject's typical portraits should be kept for interactive exhibits. A responsive portrait consists of a multiplicity of views whose dynamic presentation results from the interaction between the viewer and the image. The viewer's proximity to the image, head movements, and facial expressions elicit dynamic responses from the portrait, driven by the portrait's own set of autonomous behaviors. This type of interaction reproduces an encounter between two people: the viewer and the character portrayed. The experience of an individual viewer with the portrait is unique, because it is based on the dynamics of the encounter rather than on the existence of a unique, ideal portrait of the subject. The sensing technology that we used is a computer vision system which tracks the viewer's head movements and facial expressions as she interacts with the digital portrait; therefore, the whole notion of "who is watching whom" is reversed: the object becomes the subject, the subject is observed...
Improvisational Theater Space. The Sixth Biennial Symposium for Arts and Technology, Connecticut College, New London, CT, February 1997.
The Improvisational Theater Space is an interactive stage where human actors can perform accompanied by virtual actors. Virtual actors are modeled as animated "Media Creatures" that are behavior-based autonomous software agents. The system uses real-time computer vision, speech recognition and speech analysis to sense the performer's actions on stage. We used Artificial Life programming methods and behavior-based design to avoid rigid scripting of user and content interaction. The main result of this work is the construction of animated media creatures endowed with intentionality and autonomous behaviors. Media Creatures allow content to be active and to present itself to the user by dynamically adapting to the context of the interaction. We used Media Creatures to create an engaging Improvisational Theater Space where the user/performer is engaged in an improvisational dialogue with a typographic actor.
Technologies and methods for interactive exhibit design: from wireless object and body tracking to wearable computers. International Conference on Hypertext and Interactive Museums (ICHIM 99), Washington, DC, September 22-26, 1999.
We present three interactive exhibit projects which add technology to the museum space or to the museum visitor. We propose a technological intervention which helps curators and designers achieve a balance between leisure and learning and be more effective in conveying story and meaning. This is made possible by tracking people and objects with wireless, unencumbering, real-time computer-vision techniques, and with wearable computers which can respond to the context and presence of the visitor along the path of the exhibit. By using these techniques, museums can present a larger variety of more richly connected material in an engaging manner within the limited physical space available. They can also enrich and personalize the visit with a wearable computer-driven visual and auditory storyteller which can adapt to and guide the public through the path of the exhibit. All these systems enhance the memory of the visit and help build a constructivist-style learning experience for the public.
Wearable Performance First International Symposium on Wearable Computers, Cambridge, MA, October 13-14, 1997, pg. 181 - 182. 1997-10-00 00:00:00
Wearable computers offer the street performer powerful tools with which to create innovative experiences for the audience. As wearable technology moves computation from the desktop onto the user's body, it provides an invitation to free performance from the indoor stage, bringing a new adaptive richness to the mobile world of street theatre. We find it therefore compelling to explore several application contexts in which wearable technology enhances, extends and creates new examples of street performance. In this paper we offer a taxonomy of different genres of street performance and we describe how the wearable computer transforms, augments and enriches the traditional form in an original and witty manner.
City of News Ars Electronica Festival Catalogue, Linz, Austria, September 8-13, 1997 1997-09-00 00:00:00
"Is there a way for us to define ourselves and the space in which we dwell, when the city is increasingly referenced as a space of disappearance, a space of the future but not of the present, a space of anxiety and loss?" M. Christine Boyer
Augmented Performance in Dance and Theater International Dance and Technology 99 (IDAT99), at Arizona State University,Tempe, AZ. 1999-02-00 00:00:00
This paper describes motivations and techniques to extend the expressive grammar of dance and theatrical performances. We first give an outline of previous work in performance, which has inspired our research, and explain how our technology can contribute along historical directions of exploration. We then present real-time computer vision based body tracking and gesture recognition techniques which are used in conjunction with a Media Actors software architecture to choreograph digital media together with human performers. We show applications to dance and theater which augment the traditional performance stage with images, video, music, and text able to respond to movement and gesture in believable, aesthetic, and expressive ways. Finally, we describe a scenario and work in progress, which allows us to apply our artistic and technological advances to street performance.
HyperPlex: a World of 3D Interactive Digital Movies IJCAI'95 Workshop on Entertainment and AI/Alife, Montreal, Canada. 1995-08-00 00:00:00
We present a new environment for browsing a visual landscape inhabited by digital movies that live, interact and play in a graphical virtual world. The movies are modeled as autonomous agents which have their own sensors and goals and which can interpret the actions of the participant and react to them. Our environment allows one---or more---people to interact with the HyperPlex world through the use of vision techniques. No goggles, gloves or wires are needed: interaction takes place with the use of computer vision techniques that analyze the image of the person. An extension of this system to a multi-user game is currently being considered.
Improvisational Media Space :: Architecture and Strategies for Evolution EvoMusArt, EuroGP2004, Coimbra, Portugal, April 5-7, 2004, pp. 455 - 464. 2004-04-00 00:00:00
This paper presents the current state in an ongoing development of the Emonic Environment (EE): a real-time improvisational system employing evolutionary principles for the mutation of media space. We position the associated problems in the context of user interaction, provide eight principles essential for creating an improvisational media environment, follow with a description of how the EE implements these principles, and conclude with a description of the evolutionary algorithms’ functionality.
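The mutation of media space under evolutionary principles can be illustrated with a toy parameter-mutation step. This is a hedged sketch only, assuming media nodes are dictionaries of normalized parameters; the `mutate` function and its `rate` and `scale` values are illustrative, not the EE's actual algorithm:

```python
import random

def mutate(params, rate=0.3, scale=0.1):
    """Gaussian-perturb a media node's parameters (e.g. pitch,
    tempo, brightness), each with probability `rate`."""
    return {k: (v + random.gauss(0, scale) if random.random() < rate else v)
            for k, v in params.items()}

# Hypothetical media node: each key is a normalized control parameter.
node = {"pitch": 0.5, "tempo": 0.8, "brightness": 0.3}
print(mutate(node))
```

Repeated application of such a step, paired with a selection criterion, is the basic loop an evolutionary media system would iterate.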
Mixer-Subverter :: an Online Improvisational Video System AINA 2004 International Conference on Advanced Information Networking and Applications. Fukuoka, Japan, March 29-31, 2004 2004-03-00 00:00:00
This paper describes the Mixer-Subverter: an online system that allows children to integrate the activities of play (from giving to stealing; from sharing to forcing to receive) and the activities of video editing (creating, juxtaposing, controlling) into a never-ending process of mix and subversion. It invites the storyteller within each one of us to compose and visualize movies, images and sound environments while writing a story. In addition, the Mixer-Subverter encourages playful collaboration in an exchange network of unique media artifacts. The operation of the system is based on improvisational principles: the idea that there is no particular plan or goal to the editing process. Instead, pieces that populate the Mixer-Subverter's media space acquire their meaning through their patterns of use and practices of exchange. This paper is a report on a work in progress. As such, it presents the underlying rationale and provides a description of the first prototype version of the system.
Live Role-Playing Games: Implications for Pervasive Gaming ICEC 2004 : international conference on entertainment computing No3, Eindhoven, The Netherlands (01/09/2004), pp. 127-138. 2004-09-00 00:00:00
Live role-playing (LRP) games stand as powerful metaphorical models for the various digital and ubiquitous forms of entertainment that gather under the term pervasive games. Offering what can be regarded as the holy grail of interactive entertainment – the fully immersive experience – LRP games provide a tangible and distributed interface to a gaming activity that is emergent, improvised, collaboratively and socially created, and have the immediacy of personal experience. Supported by studies of LRP games, specifically aspects of costume, set design and props, we outline the interface culture specific to LRP and in which ways this culture may inform the design of pervasive games.
Live Role-Playing Games: Implications for Ubiquitous Computer Game Design Proceedings: the 3rd International Conference on Entertainment Computing (ICEC). Eindhoven, The Netherlands, September 1-3. 2004-09-00 00:00:00
Live Role-Playing Games: Implications for Ubiquitous Computer Game Interfaces 2004-00-00 00:00:00
Live role-playing (LRP) games stand as powerful models for the design of ubiquitous computer games. Offering what can be regarded as the holy grail of interactive entertainment - the fully immersive experience - LRP games provide a tangible and distributed interface to a gaming activity that is emergent, improvised, collaboratively and socially created, and have the immediacy of personal experience. Supported by studies of LRP games, specifically aspects of costume, set design and props, we suggest principles for the design of interfaces to ubiquitous computer games.
A System to Compose Movies for Cross-Cultural Storytelling: Textable Movie Proceedings: TIDSE 2004 (Darmstadt, Germany), June 24-26, 2004, vol. 3105 pp. 126-131. 2004-06-00 00:00:00
This paper presents Textable Movie, an open-ended interface that allows anyone to become a "video-jockey." In the framework of computational storytelling, Textable Movie promotes the idea of maker controlled media and can be contrasted to automatic presentation systems. Its graphical interface takes text as input and allows users to improvise a movie in real-time based on the content of what they are writing. Media segments are selected according to how the users label their personal audio and video database. As the user types in a story, the media segments appear on the screen, connecting writers to their past experiences and inviting further story-telling. By improvising movie-stories created from their personal video database and by suddenly being projected into someone else's video database during the same story, young adults are challenged in their beliefs about other communities.
An open-ended tool to compose movies for cross-cultural digital storytelling: Textable Movie In Proceedings of ICHIM 04 "Digital Culture & Heritage", Berlin, Germany, August 30-Sept 2004 2004-08-00 00:00:00
This paper presents Textable Movie, an open-ended tool that allows any storyteller to become a "video-jockey" able to improvise a media story in real-time drawing from an available collection of annotated images and videos. In the framework of digital storytelling, Textable Movie promotes the idea of maker controlled media and can be contrasted to automatic presentation systems. Its graphical interface takes text as input and allows users to improvise a movie in real-time based on the content of what they are writing. Media segments are selected according to how the users label their personal audio and video database. As the user types in a story, the media segments appear on the screen, connecting writers to their past experiences and inviting further story-telling. Video Jockeys perform using their own or someone else's video database, presenting different visual perspectives on the same story. By co-creating movie stories that are first improvised from a personal video database and then projected into someone else's video database as two tellers merge their stories, young adults are challenged in their beliefs about other communities. In this paper we will present our ongoing research on a future tangible version of Textable Movie for direct control and visualization of multiple points of view.
Hopstory: an interactive, location-based narrative distributed in space and time Technologies for Interactive Digital Storytelling and Entertainment: Second International Conference, TIDSE 2004, Darmstadt, Germany, June 24-26, 2004. Proceedings 2004-06-24 00:00:00
As computing and communications technologies evolve, there is the potential for new forms of digitally orchestrated interactive narratives to emerge. In this process, balanced attention has to be paid to audience experience, creative constraints, and presence and role of the enabling technology. This paper describes the implementation of HopStory, an interactive, location-based narrative distributed in space and time, which was designed with this balance in mind. In HopStory, cinematic media is housed within wireless sculptures distributed throughout a building. The audience, through physical contact with a sculpture, collects scenes for later viewing. Inspired by the history of the installation space the narrative relates a day in the life of four characters. By binding the story to local time and space and inviting the audience to wander, we amplify the meaning and impact of the HopStory content and introduce an innovative approach to a day-in-the-life story structure.
Redefining Digital Audience :: Models and Actions. Interact2003, Zurich, Switzerland. 2003-09-00 00:00:00
This paper presents a new theoretical model for audience participation in the context of HCI. Such a model is necessary because, while a great number of new interactive solutions are unveiled each year, the assumptions regarding users' roles have remained largely unchanged since the dawn of the computer era. This paper questions these roles and articulates a number of principles that interactive environments must employ to bring about new audiences – active communities based on the principles of non-idiomatic improvisation. This theoretical exercise is supplemented by a brief description of the Emonic Environment, our system for creation, modification, exchange, and performance of audiovisual media in an improvisational fashion. The paper concludes with a description of the system's ongoing expansion into the domain of mobile multi-user collaboration.
Genetic Improvisation Model :: a framework for real-time performance environments. EvoWorkshops 2003, Essex, UK, April 14-16, 2003, pp. 547 - 558. 2003-04-14 00:00:00
This paper presents the current state in an ongoing development of the Genetic Improvisation Model (GIM): a framework for the design of real-time improvisational systems. The aesthetic rationale for the model is presented, followed by a discussion of its general principles. A discussion of the Emonic Environment, a networked system for audiovisual creation built on GIM’s principles, follows.
Hydrogen Wishes Proceedings: SIGGRAPH 2003 Conference on Sketches & Applications (San Diego, CA, July 27 - 31, 2003), pg. 1 2003-07-00 00:00:00
Hydrogen Wishes, presented at MIT's Center for Advanced Visual Studies, explores the themes of wishes and peace. It dramatizes the intimacy and power of transforming one's breath and vocalized wishes into a floating sphere, a bubble charged with hydrogen. The floating bubble represents transitory anticipation as a wish is sent on its trajectory toward fulfillment. Light, heat sensors, microphones, projected imagery, hydrogen and ordinary soap bubbles come together in this exploration of human aspiration. As in our lives, many wishes escape, but many others are catalyzed by the heat of the candle and become ethereal. The fulfilled wishes then become living artifacts within projected photographs of Earth cities as seen from outer space.
Us++: tools and methodologies for personal reflective story construction Third International Conference on the Dialogical Self, Warsaw, Poland 2004-08-26 00:00:00
In telling our personal stories we seek to humanize fragments of experienced time and shape them in collusion with an audience into a communicative narrative unity. The transition from a life lived to a life recounted involves a complex interplay of agency between the author, actor and the audience that ultimately provides us with our sense of selfhood and indeed our narrative identity, both as individuals and as social, cultural beings. Advances in digital technologies and network communications provide opportunities for exploring alternative modes of mediated storytelling that invigorate and re-imagine our represented experiences. This paper introduces Us++, a research initiative that explores methodologies and tools for seamlessly and fluidly documenting our experiences and sharing them with others in a reflective, thoughtful and engaging manner. Using media-rich weblogs and custom-designed software, this process adopts an integrated, conversational and co-constructed approach. Multiple participant inputs and perspectives expand and surprise our story-selves as the roles of creator, interpreter and audience are continuously exchanged. Experiments with a variety of participating communities are discussed, including descriptions of group-specific software iterations, story emergence and development, and evaluation of the approach as a methodology and practice for representing, sharing and understanding our life experiences.
Everyday Cinema ACM MM 2004 Workshop on Story Representation, Mechanism and Process, pp. 59 - 62. 2004-10-15 00:00:00
Stories make our experiences memorable over time. In constructing and sharing our personal and communal memories, we move in a reflective manner back and forth between our life-world and our life-stories. Advances in network communications and the growing abundance of personal media recording devices provide new opportunities for collecting, examining and re-imagining our life experiences. This paper describes the ‘Media Fabrics Experiment’, an online rich-media weblog populated by the day-to-day media messages submitted by a group of participants using camera cellphones. The construction, participant use and activity trends of the weblog are discussed and the impact of this approach for the collection and sharing of everyday story experiences is evaluated.
A Workstation-Based Multi-Media Environment For Broadcast Television USENIX - Summer '91 - Nashville, TN. pp. 455-460. 1991-06-00 00:00:00
This presentation will describe two applications which illustrate expanding workstation power in the real broadcasting world. One is a sophisticated application of virtual reality in which we simulated the environment of the green of the 18th hole in real time during a professional golf tournament for a broadcast TV program. The other is a professional baseball information management system for TV production in which we incorporate distributed computing, mixed network, database managing, and computer graphics.
The WANDerful Alcove: Encouraging constructive social interaction with a socially transforming interface Proceedings, INTERACT 2003 International Conference on Human-Computer Interaction, Zurich, 1 - 5 September 2003, IOS Press. 2003-09-00 00:00:00
In this paper, we introduce the notion of a socially transforming interface: an interface that, when wielded, transforms its user into a more social character in a digital interactive experience than he or she would normally be, making the user more likely to interact and collaborate in an ad-hoc and constructive way with other people as part of the experience. We also describe a work-in-progress story installation called the WANDerful Alcove, an interactive play space in which participants wield magic wands and practice wizardry, as a potential example of this concept. A study is now underway to verify this intuition.
Texting Glances: Ambient Interludes from the Dublin Cityscape Proceedings of Enarrative 5, Hypertext. Narrative. Art. Tech., Boston, MA. 2003-05-00 00:00:00
This paper concerns a system, Texting Glances, that can create ambient interludes: moments of entertaining interaction in public urban spaces at which people gather, such as a bus stop. The system proposes to introduce a personal yet sociable and visual activity into urban "waiting" spaces. Personal, because the input device is a cell phone; sociable and visual, because people can work together to co-construct a visual narrative. As people wait, they text to the system; the system responds to their texting by providing an image; as more people text, the sequence of visuals plus text forms a multi-authored narrative. Texting Glances is an ambient "waiting" game in which transient audience participants use SMS texting to evolve a visual story on a large display installed in a public space such as a bus or train station.
Textable Movie: improvising with a personal movie database SIGGRAPH'03 Conference Abstracts and Applications, San Diego, CA, pg. 1. 2003-07-27 00:00:00
This sketch presents a new approach to improvising movies according to the inter-relationship between personal videos and the story of an experience. Textable Movie is a graphical interface that invites a storyteller of any age to compose and visualize movies, images and sound environments while writing a story; the system self-selects and self-edits movies in real time based on textual input from the teller. Textable Movie aims to exalt the imagination of its authors (writer and film-maker) by immersing them, in real time, in a co-constructed narration.
Art and Other Anxieties: Steerable Stories Sutra: Storytelling in the Digital Age, National Institute of Design, Ahmenabad, India. 2002-12-00 00:00:00
"He looked into the water and saw that it was made up of a thousand thousand thousand and one different currents, each one a different colour, weaving in and out of one another like a liquid tapestry of breathtaking complexity; and Iff explained that these were the Streams of Story, that each colored strand represented and contained a single tale"
As We Progress: Network Innovation and Social Change Contel '99, 5th Intl. Conference on Telecommunications and 2nd Broadband and Multimedia Workshop Proceedings, June 1999, Zagreb, Croatia, p. 3-7 1999-06-00 00:00:00
Technologically-enriched communication environments are already changing many aspects of the human social fabric, including the bonds of friendship, the definition of neighborhood, the give-and-take of education, and the transacting of business. As the virtual network grows in scope, expressiveness, and functionality, it will affect our sense of sociability, our understanding of world affairs, and our engagement in the global economy.
Extending the Documentary Tradition: Oberhausen International Film Festival 1997-04-00 00:00:00
Cinema is constantly being reinvented by young makers who boldly embrace the latest technologies while bearing minimal allegiance to the aesthetic conventions of the past. This process makes the technology of cinema remarkably robust in comparison to the fragility and "datedness" of cinematic images themselves, whose specific content, composition, and physical condition speak to us of times past.
Everyone's Cinema: Towards the Future of Cinematics Oberhausen International Film Festival 1997-04-00 00:00:00
What is cinema becoming? All around us -- in the laboratory, in theme parks and museums, on CD-ROMs and home computers, across the World Wide Web -- stories are being transformed by technological possibility. The proliferation of VCRs, the remote control, and affordable home video cameras have already created a society of audience which expects a certain amount of individual control over their own information destiny. As new and more powerful devices appear and proliferate, storytelling media will transform into something more personalized and conversational; as "stories with a sense of themselves" come into being, narrative will evolve from fixed, monolithic forms into something more personalized and responsive to the wishes (and whims) of their audience...
The Once and Future Story Proceedings of the Medientage Munchen Fall '96 Conference, Munich, Germany, September 14-18, 1996. 1996-09-00 00:00:00
New Orleans in Transition: The Interactive Delivery of a Cinematic Case Study Revised from remarks given at the International Congress for Design Planning and Theory, Park Plaza Hotel, Boston, 1987 1987-08-19 00:00:00
How does a city change? New Orleans in Transition, 1983-1986 is a 3-hour cinematic case study of urban change and design negotiation. The edited film is being released as a videodisc set and can be played linearly or accessed interactively in a workstation environment. In the interactive mode, students and researchers will be able to selectively view movie sequences. At any time during the viewing session they can pause, take notes and review support material such as maps, architectural plans, personal or site dossiers, reports, and other relevant data. The intent of this enhanced cinematic experiment is to deepen the understanding of the subject matter and to provide students and researchers with the ability to explore the case-study material in relation to their particular interests. The first interactive implementation of this movie will take place in an introductory Urban Planning course at MIT this fall.
The Digital Media Story in a Wireless World 2001-00-00 00:00:00
Storytelling is a fundamental means of human communication. Shaped by advances in the technologies of production and distribution, the story form has evolved to communicate knowledge frameworks, behavioral norms and spiritual insights within contemporary culture. The 21st century will be no different.
Touching Tales: Design Issues in Creating Haptic Content ACM, CHI 2003, Fort Lauderdale, Florida, April 5-10, 2003 2003-04-00 00:00:00
Despite continual consumer demand for richer broadcast media, there have been few examinations of senses other than vision and hearing in this domain. This paper considers the role that touch may be able to play in future broadcast systems. We have begun to explore the addition of haptic cues to children's cartoons, and through this process unearthed a number of practical design issues unique to this domain. Some of these are discussed in this paper, including how the psychological distinction between passive and active touch influences broadcast media, and how this in turn affects notions of interactivity. We also discuss focus as it relates to the haptic display of individual aspects of complex scenes. The goal of this paper is to introduce this novel and unexplored topic, and to provide a discussion that motivates further research.
Authoring Flexible Story for the Wild 2003-00-00 00:00:00
This paper addresses the problem of authoring an environmentally-responsive narrative which is embedded throughout a wild space. Through research at Media Lab Europe (MLE) we investigate how mobile digital stories may provide a rich experience of a place. An appropriate narrative framework evolved during our development of a prototype navigation and storytelling system. Our audience wanders across a remote outdoor landscape gathering location and weather-based multimedia scenes that aim to amplify a rich remote setting rather than compete with it. We seek to provide a coherent cumulative story experience that enables character development, narrative climax, and a singular conclusion for everyone regardless of their strategy for navigating the story space. With these goals as a guide we develop a simple, generalizable framework for use by content creators. We detail our challenges in developing a narrative which fulfills these goals, including a description of the final story framework. We discuss the results of user trials and suggest future application possibilities; the evolution of the framework, in addition to the framework itself, may be instructive to developers of distributed, context-aware digital stories.
A Web-based Environment for Assembling Multimedia Learning Stories in Irish Primary Education ICALT 2002, Kazan, Russia, September 9-12, 2002 2002-09-00 00:00:00
The Empowering Minds Learning Network is a web-based environment that supports discussion and reflection on classroom activities. The environment collects and organizes multimedia files and documents from participating students and teachers, allowing them to share their work with each other, with their communities, and with the world. The environment also gathers extensive data on teachers' usage of the service, providing a means for study and personal reflection on each teacher's emergent interests and pedagogical development. The application is currently being used to support Constructionist learning with new digital technologies in Irish primary schools.
An Authoring Tool for Context-aware Mobile Multimedia Creation 2001-12-12 00:00:00
With the rapid availability of 802.11b networks that provide high-bandwidth, mobile Internet connectivity, we are now able to research context-aware mobile cinema. M-Studio is a multimedia authoring tool that helps mobile story creators design location and time sensitive stories in the prototype as well as the production phases. These stories are created to be viewed on handheld computers, such as the Compaq iPAQ. M-Studio includes a Storyboard to allow authors to lay out story scenes, and a Story Simulator to allow authors to play back all possible viewer paths of these multithreaded location-aware movies. M-Studio was evaluated through its use in the realization of a location-aware video fiction story, "Another Alice." Based on the requirements of this mobile entertainment movie prototype, we have modeled an extensible architecture in which the M-Studio tools produce the instruction set for the run-time movie. The tool-set continues to be tested and to evolve based on new production concepts.
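Playing back all possible viewer paths, as the Story Simulator does, amounts to path enumeration over a branching story graph. A minimal sketch, assuming the story is a directed acyclic graph of scenes; the `all_viewer_paths` function and the example scene names are hypothetical, not M-Studio's actual API:

```python
def all_viewer_paths(graph, start, end):
    """Enumerate every path from start to end in a branching
    story graph (a DAG mapping each scene to its successors)."""
    if start == end:
        return [[end]]
    paths = []
    for nxt in graph.get(start, []):
        for tail in all_viewer_paths(graph, nxt, end):
            paths.append([start] + tail)
    return paths

# Hypothetical three-scene story: the intro forks to two locations,
# both of which rejoin at the finale.
story = {"intro": ["cafe", "park"], "cafe": ["finale"], "park": ["finale"]}
print(all_viewer_paths(story, "intro", "finale"))
# → [['intro', 'cafe', 'finale'], ['intro', 'park', 'finale']]
```

An authoring tool can walk each returned path in turn to preview how the movie would unfold for a viewer who took that route.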
The Shape and Look of Video Streams 0000-00-00 00:00:00
Recordings of a succession of pictures over time can be displayed in a variety of different ways to show what they hold. The historical and most absorbing way is to display the images as a rapid succession of full screen frames. However different forms of presentation can be used to emphasize different attributes. The video streamer positions digitized frames of video sequentially in front of each other with a slight offset; visually this appears as an extrusion of the video stream in time which emphasizes differences along the side and top edges of adjacent frames. In this way the video streamer helps us see characteristics between frames and across shots such as transition types and cutting rhythms. While viewing the video stream one can select bounds of interest; this area can be changed using a rubbing motion along the stream. The micro-viewer shows us more precise frame to frame relationships, based on the portion of the video stream we have currently selected. The shot parser uses a frame differencing algorithm to offer a helpful element of machine assisted abstract analysis.
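The shot parser's frame-differencing step can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the original implementation: the `detect_cuts` function, its threshold, and the synthetic grayscale frames are all assumptions made for the example.

```python
import numpy as np

def detect_cuts(frames, threshold=0.25):
    """Flag likely shot boundaries by thresholding the mean absolute
    pixel difference between consecutive grayscale frames."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float)
                              - frames[i - 1].astype(float))) / 255.0
        if diff > threshold:  # large inter-frame change suggests a cut
            cuts.append(i)
    return cuts

# Synthetic stream: two "shots" of uniform gray levels, hard cut at frame 3.
stream = ([np.full((48, 64), 40, dtype=np.uint8)] * 3
          + [np.full((48, 64), 200, dtype=np.uint8)] * 3)
print(detect_cuts(stream))  # → [3]
```

Real footage would need a more robust metric (histogram differences, motion compensation) to distinguish cuts from fast motion, but the thresholded frame difference captures the basic idea of machine-assisted shot parsing.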
Structured Content Modeling for Cinematic Information SIGCHI Bulletin, October 1989, Vol. 21 No. 2, pp. 78-79. 1989-10-00 00:00:00
For cinematic material to become useful as an online information resource, a structured content model must be developed. This model should enable both a viewer and an automated retrieval/presentation system to navigate and manipulate picture and sound information. Here we present the functional requirements for such a model, and then discuss a video editing and viewing environment, currently under development at the MIT Media Laboratory, which utilizes content information.
A Tangible Platform for Documenting Experiences and Sharing Multimedia Stories Proceedings, ACM SIGMM 2003 Workshop on Experiential Telepresence (ETP '03) Berkeley, California, November 7, 2003, ACM Press, pp. 105 - 109. 2003-11-00 00:00:00
Stories are a cultural universal that allow us to reflect on the way we live. Through the act of storytelling, we structure and express our personal experiences and understandings of the world in a form that can be passed on to others -- as words, imagery, sounds, and gestures. In today's increasingly digital and networked society, we can create media platforms that allow our personal stories to become public and shared. In this paper, we present a tangible platform that has been designed to support the sharing of collaboratively constructed multimedia stories in a social setting. We discuss the way this platform was used in two large-scale personalized storytelling workshops, and examine how it served to engage participants in a self-reflective story revealing process.
Making your own Stories: Ad Hoc Networking and Audience Performance Proceedings: 1st International Conference for Digital Technologies and Performance Arts, Doncaster, UK, June 2003. 2003-06-00 00:00:00
Nature Trailer- Physically Navigate Stories in the Wild "Design Methods for Ubiquitous Computing in the Wild" Workshop, Mobile and Ubiquitous Multimedia Conference, Norrkoping, Sweden. 2003-12-00 00:00:00
Nature Trailer is an entertainment and navigation platform that attempts to facilitate recreational exploration. Designed for a hiker wandering through a remote place, Nature Trailer virtually embeds context-aware stories throughout a landscape and provides clues of their locations. In this scenario a user is mobile, on foot, outdoors, and with little access to network infrastructure. The wireless platform we describe in this paper consists of sensors, movie scenes, a context aware media browser and a navigation tool, all supported on an iPAQ handheld computer. We discuss our motivations behind combining a time-based decision-making tool that supports individual, on-foot navigation of place with delivery of contextually aware "just in time" stories that further inform about the remote place. We explain the iterative methodology used by the researchers, including the design, analysis and evaluation of a series of multiple small-scale field tests and the design and utilization of real-time sensing as contextual input. We conclude with a report on the current state of the project and a projection of further uses for the logged user interaction and context data in iterating the Nature Trailer design.
Stories for Remote Place: Structure, Content, Device, Trial ICHIM 04 "Digital Culture & Heritage", Berlin, August 30-Sept 2004 2004-08-00 00:00:00
Multimedia story, when invested with mobile sensing technology, is a powerful medium for enhancing the experience of a place. We introduce a mobile computational story application that mediates and enhances the experience of an audience wandering in an outdoor physical space. The system provides a contextualized, place-based narrative, via media screened on a handheld computer, to an audience who is exploring a remote landscape on foot. We fully prototyped for a 'sample' wild setting, an island off the west Irish coast, in order to understand the issues involved in constructing audience experience, system architecture, and rich content for this scenario. We discuss each of the components of such a system, and the relationship between place, experience, and system design. Our results emerged from this specific prototype setting and scenario. However, we apply them throughout this paper to a generalized discussion of the expressive possibilities and issues of creating a mobile, location-based experience that uses multimedia to tell the story of a culture and place.
Moving Pictures: Looking Out/Looking In SIGGRAPH'05 2005-08-00 00:00:00
In this paper, the authors present Moving Pictures: Looking Out/Looking In, a robust, tangible, multi-user system that invites young users to create, explore, manipulate and share video content with others. Moving Pictures enables a meaningful, spontaneous and collaborative approach to video creation, selection and sequencing. The authors discuss their motivation in relationship to research in the domain of video editing. Their contribution to the domain of tangible interfaces for constructionist learning was developed through participatory design sessions. They discuss workshop studies with 10- to 12-year-old children from Sweden and Ireland playing with the Moving Pictures system.
2000 & Beyond: Intelligent Access Devices for Multimedia in Tomorrow's Telecommunications Networks NCF '90, July 1990. 1990-07-00 00:00:00
A new generation of interactive multimedia applications will emerge as an all-digital video signal becomes accessible via networked telecommunications channels. The issue of how we interact with newly available resources should depend on the nature of the intended experience or structured task. Over the past 10 years, some interactive multimedia projects have focused on the relation between task and input device, while others have extended the language of representation to include graphical cues. These prototype projects have established the desktop and conversation as paradigms for interactivity. This paper uses a thought experiment to explore devices which can extend these paradigms to encompass the complexity of tasks and operations which will define networked interactions for digital media.
Ambient Urban Interludes: Passing Glances CHI 2004, April 24-29, 2004, pg. 1534. 2004-04-00 00:00:00
This paper describes a system in which transient audience participants co-create emergent narratives that are revealed in public space. "Passing Glances" enables users to create these ambient urban interludes through the use of SMS text messages. The Passing Glances system contains a wealth of keyword-associated imagery that is stored 'in the city'. Images are revealed to the transient audiences when SMS message keywords trigger the system. The mobile phone therefore acts as an expressive device revealing hidden layers of the city to construct short-lived stories.
Car as Story-Mate Presentation at C++, Modena, Italy, June 1, 2001. 2001-06-00 00:00:00
As networks become mobile, the car will become a media-receptive device. How will media extend and enhance our future automotive experience?
From Cinematic Journalism to Hypermedia Optical Technologies: New Horizons in Information Processing,Worcester State College, November 28, 1988. 1988-11-00 00:00:00
When I consider what I am producing now and what I might like to produce 10 years from now, I am fairly optimistic. Recent advances in consumer video equipment, the feeling that digital video is "just around the corner," and the speed with which almost anyone can learn to use a Macintosh mark a considerable distance traveled; only 19 years ago, I chose to tote a black-and-white video camera and a 30 pound reel-to-reel video recorder around to make real-life journals, which even when they gained the "most-wanted" stamp of approval could not be broadcast without twisting a video engineer's bottom line...
ComicKit: Acquiring Story Scripts Using Common Sense Feedback IUI'05, January 10-13, 2005, San Diego, CA. 2005-01-00 00:00:00
At the Media Lab we are developing a resource called StoryNet, a very-large database of story scripts that can be used for commonsense reasoning by computers. This paper introduces ComicKit, an interface for acquiring StoryNet scripts from casual internet users. The core element of the interface is its ability to dynamically make common-sense suggestions that guide user story construction. We describe the encouraging results of a preliminary user study, and discuss future directions for ComicKit.
CyberBELT: Multi-Modal Interaction with a Multi-Threaded Documentary CHI'95 Mosaic of Creativity, May 7-11, 1995, pp. 322 - 323. 1995-05-00 00:00:00
CyberBELT allows a viewer to interact with a multi-threaded documentary using a multi-modal interface. The viewer interacts with the documentary by speaking, pointing, and looking around the display. The viewer can select the threads of the story to follow, or let the system navigate through the story. Feedback from the viewer evolves the story to present concepts she is interested in. We discuss the suitability of combining multi-modal interaction and multi-threaded narrative.
Desire versus Destiny: the question of payoff in narrative Position statement: for Caixa Forum MetaNarratives Conference, Barcelona, Spain, January 29, 2005. 2005-01-00 00:00:00
To pit desire against destiny in narrative is like pitting fire against water -- beyond their elemental nature, there is little by way of similarity to bargain about. Desire is an unconstrained flight; destiny is a straight drive along a paved road.
Emonic Environment - Implementation Report MAXIS2003, Essex, UK 2003-04-00 00:00:00
This paper presents a progress report on the implementation of the Emonic Environment (EE) - a Java-based system for improvisational creation, modification, performance, and exchange of audiovisual media. The protagonist users of the EE are non-artists whose creative drive has been impeded by the prevalent interactive interfaces that are largely passive (click-response) and discourage experimentation. We take non-idiomatic improvisation as our inspiration, and seek to present the performers with an environment where the tools for media exploration are "alive". In doing so, we hope to encourage the creativity of people otherwise afraid to experiment. This paper describes the functionality of the EE, focusing on the user interface, multi-user network capabilities, audio (performance & synchronization) and genetic algorithms used to explore a media landscape.
InterElastique: A system for control of an audio-visual experience using novel stretchable sensors CIM'2000, L'Aquila, Italy, September 2-5, 2000. 2000-09-00 00:00:00
Interactive installations today typically limit the interaction by allowing only one active user at a time, and are commonly based on traditional forms of user input (i.e., keyboards, trackballs, touchscreens, etc.). This paper describes InterÉlastique, a system we have created that allows collaborative tangible interactions through a set of innovative stretchy sensors called eRopes.
Q: What makes a good research environment? A: Creativity, Openness, Sociability IST Panel "On Creativity: A research perspective", Helsinki, Finland, November 1999. 1999-11-00 00:00:00
A researcher by definition explores that which is not known. Researchers -- particularly those in academia -- build theoretical ships and sail forth into uncharted waters, optimistically searching for new knowledge which will in some way change civilization. Some bring back recognizable treasure, while others find less obvious booty (sometimes, the crew's education is the only tangible result of the trip); but, every voyage of discovery helps to flesh-out and improve our collective maps of past, present, and future worlds.
The Mindful Camera: Common Sense for Documentary Videography Proceedings of the eleventh ACM international conference on Multimedia, Berkeley, CA, November 2-8, 2003, pp. 648 - 649. 2003-11-00 00:00:00
Cameras with story understanding can help videographers reflect on their process of content capture during documentary construction. This paper describes a set of tools that use common sense knowledge to support documentary videography.
Mobile Context-Aware Stories Proceedings of the IEEE conference on Multimedia and Expo, August 2002, Lausanne, Switzerland, pp.345-348. 2002-08-00 00:00:00
An interactive narrative is a story that is shaped by digital technology and that allows the dynamic presentation of scenes or sequences based on input from the user. In this paper we present a new foundation for interactive story-telling that allows a mobile user to interact with a story. The user is placed at the center of the story and the story comes to the user in transit. The behavior and actions of the user influence the scenes and sequences of the story the user experiences. The framework for this system is an ad-hoc network. Ad-hoc networks allow localized presentation of story elements to users who are in transit and able to receive story elements on a mobile device. Furthermore ad-hoc networks have the capability to be context-aware and respond to the physical and social context of the user both on an individual and group level. This mobile context-aware story form is a powerful format in the fields of education and entertainment. It allows the story to be connected with the surrounding environment and it allows the user to see cause and effect of individual and group behavior. In this paper we present an ad-hoc network story system and examine a case study of its use for a prototype mobile context-aware story.
Next Generation Interface for Multimedia Publications Friend21 Symposium on Next Generation Human Interfaces, 1991. 1991-00-00 00:00:00
Future multimedia publications will consist of content, identified as the stuff which invites interpretation, and interface, identified as all computational elements which orchestrate delivery. This paper focuses on information architectures for electronic multimedia publications which include digital video as a principal carrier of content. These publications will rely on multiplexed secondary databases to augment content. Images will be composited and sequenced at run-time in the delivery environment. User interactions will be conversational and will enable annotation to personal data structures.
Optical Tracking for Music and Dance Performance Fourth Conference on Optical 3-D Measurement Techniques, Zurich, Switzerland (September 29 - October 2, 1997). 1997-09-00 00:00:00
This paper describes three different types of real-time optical tracking systems developed at the MIT Media Laboratory for use as expressive human-computer interfaces in music, dance, and interactive multimedia performances. Two of these, a multimodal conducting baton and a scanning laser rangefinder, are essentially hardware-based, while the third is a computer vision system that can identify and track different segments of the performer's body. We discuss the technical concepts behind these devices and outline their applications in music and dance environments.
Interactive Multimedia on a Single Screen Display Videotechnology Technical Session, Current Applications of Videotechnology in Computer Graphics Systems, National Computer Graphics Association Conference, March 22, 1988. 1988-03-00 00:00:00
Interactive delivery of multimedia material destined for educational courseware or large reference archives imposes complex constraints on both the delivery system and on the application design. "A City in Transition: New Orleans, 1983-86," a cinematic case study of urban change, combines 3 hours of movie sequences; a still frame library of characters, places and maps, mastered on optical videodisc; a wide variety of text documentation; and relevant demographic and economic statistics.
Software Considerations for Multimedia Video Projects X11 Video Extensions Technical Meeting, Bellcore, June 9-10, 1988. 1988-06-00 00:00:00
This paper describes a multi-media application, "A City in Transition: New Orleans, 1983-86." The project is being developed on Project Athena Visual Workstations as a curriculum resource for the study of architecture, urban planning, and political science. The project includes 3 hours of movie sequences and stills mastered on optical videodisc, as well as an extensive set of ASCII files containing support material.
StoryBeads: a wearable for story construction and trade Proceedings of the IEEE International Workshop on Networked Appliances 2000. Newark, NJ, IEEE, 2000. 2000-11-00 00:00:00
Stories take hundreds of different forms and serve many functions. They can be as energetic as an entire life story or as simple as directions to a favorite beach. Technological developments challenge and change storytelling processes. The invention of writing changed the story from an orally recounted form, mediated by the storyteller, to a recorded version which was technologically reproducible. The fleeting experience of a storyteller's woven tale became an immutable object. In cinema stories are told with a sequence of juxtaposed still images moving at a speed fast enough to fool the eye into seeing a continuously changing image instead of one image after another. The invention of the computer with its capacity for storage and manipulation of information let authors design stories and present them to different viewing audiences in different ways. Mobile computing, like the technological developments that came before it, will demand its own storytelling processes and story forms.
The Newspaper of the Future: A Straw-man Proposal in Four Parts Symposium on the Future of Newspapers, MIT Media Laboratory, Cambridge, MA, July 24-25, 1991. 1991-07-00 00:00:00
The newspaper industry has been referred to as the smoke-stack industry of the communications age. On the contrary, it is unsurpassed in its ability to gather and organize vast quantities of time-sensitive information. The weakness of the industry lies in its outmoded distribution and presentation of that information, as well as its inability to be responsive to the needs of both individual readers and advertisers. Mass media no longer need be monolithic, impersonal, synchronous, colloquial or prepackaged. Rather, newspapers can be redefined to be distributed, responsive to personal needs and interests, timely, international and dynamically presented.
Computational Multimedia: Today's Challenge, Tomorrow's Products Multimedia '91 Conference Proceedings, London, June 1991. 1991-06-00 00:00:00
Computational multimedia is defined as a user-directed form of storytelling in which the computer orchestrates the presentation of information by mediating user input/inquiry and representations of media content. The following remarks focus on issues and challenges of the research environment today, particularly as regards applications which incorporate video and sound, 3D and 2D animation, text, and computer programs...
Video Streamer Proceedings of the CHI '94 conference companion on Human Factors in Computing Systems, April 24-28, 1994, pp. 65 - 68. 1994-04-00 00:00:00
Motion images are usually conveyed full-screen, coming to life through a rapid sequence of individual frames. The tools presented here allow a viewer to step back from the full-screen view to gain perspective of time, and then to transfer from sequential image streams to collages of parallel images. The Video Streamer presents motion picture time as a three dimensional block of images flowing away from us in distance and in time. The Streamer's rendering reveals a number of temporal aspects of a video stream. The accompanying shot parser automatically segments any given video stream into separate shots, as the streamer flows. The Collage provides an environment for arranging clips plucked from a sequential stream as associations of parallel elements. This process of arranging motion images is posed as an engaging viewing activity. The focus is on viewing utensils, but these tools provide an alternative perspective to video elements that also has bearing on editing.
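The shot parser mentioned in the abstract can be illustrated with a minimal frame-difference detector; the frame representation (flat lists of pixel intensities), threshold value, and function names below are hypothetical stand-ins, not the Streamer's actual implementation:

```python
def mean_abs_diff(a, b):
    """Average absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def parse_shots(frames, threshold=50.0):
    """Segment a frame sequence into shots; returns (start, end) index pairs.

    A new shot begins wherever consecutive frames differ by more than
    the threshold -- a crude proxy for a cut in the video stream.
    """
    if not frames:
        return []
    boundaries = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            boundaries.append(i)
    boundaries.append(len(frames))
    return [(boundaries[j], boundaries[j + 1] - 1)
            for j in range(len(boundaries) - 1)]
```

For example, three dark frames followed by two bright frames would be parsed as two shots, `[(0, 2), (3, 4)]`; production shot detectors typically compare color histograms rather than raw pixels to be robust to camera motion.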
Digital Multimedia: Yesterday, Today and Tomorrow BISCAP International, the 1990 Digital Multimedia Conference, The Lafayette Hotel, Boston, MA, May 30 - June 1, 1990. 1990-05-00 00:00:00
Today, the expression "Digital Multimedia" triggers a range of overlapping dreams and expectations. Cognitive biases, emerging computational techniques, prototype applications and available products stimulate a constant stream of definitions. From "Sight, sound, motion -- it's as simple as that" to "Hundreds of megabytes of content -- it's a fundamental paradigm shift," promoters herald a revolution in communication. Minimally, digital implies a computer-driven system which manages storage, retrieval and display of information across media types. In concert, the content and platform should be capable of generating conversations between the user, the machine and chunks of information. Looking out past today's limited applications, with their somewhat clunky sensual and cognitive transitions, into the crystal ball of future information demands and technologies, we anticipate the realization of computational television. Responding to a range of stimuli -- including voice, eye motion and gesture -- this incarnation of an electronically networked all-digital media system should be able to generate automatic on-the-fly selection and compositing of information segments from multiple data sources, as well as virtual representations of objects and motions structured to mirror known physical and paraphysical behaviors.
My Storyteller Knows Me: The Challenge of Interactive Narrative Proceedings: IDATE Conference: Investing in the Digital Image Personal and Interactive Television (November 1993), pp. 516-520. 1993-11-25 00:00:00
History suggests that when technological advances become available at the right price, they become ubiquitous and spin out into other inventions. In the process they generate new ways of thinking. Over the last 100 years, technological advance and communication forms have developed a kinship. Telephone, motion picture camera and projector, radio and television, Minitel and the Internet, and cellular phones are some examples of the rapid pace of innovation. As media becomes digital, we are witnessing a confluence of trends which invite the invention of new forms. These forms highlight personalization at all levels of the information and entertainment spectrum.
The Viscous Display: a transient adaptive interface for collective play in public space Proceedings of the 2nd international Conference on Computer Graphics and interactive Techniques in Australasia and South East Asia (Singapore, June 15 - 18, 2004). S. N. Spencer, Ed. GRAPHITE '04. ACM Press, New York, NY, pp. 259-263. 2004-06-00 00:00:00
The Viscous Display explores the exchange of social information through transient public interfaces. Shaped by principles of 'underground public art', the Viscous Display is conceived as a novel mobile communication medium, where messages can be shared in public spaces. Inspired by biological learning systems, the Viscous Display learns sensorial information that forms along traces of a participant's touch and maps this information onto a flexible display. Because it is made up of inexpensive materials, the Viscous Display is also a disposable artifact that may be collected in public spaces. It combines multi-modal sensing, learning algorithms, and a pliable silicone display.
Agent Stories AAAI Spring Symposium Series Interactive Story Systems: Plot and Character (Stanford University), pp. 19-22. 1995-00-00 00:00:00
Writers of stories for both print and screen have a deeply ingrained tendency to construct their stories in ways geared toward experiencing the finished work in a linear fashion. With the exception of some videodisc experiments and a few recent video game applications, stories for the screen are usually written, produced, assembled and viewed in and for the linear form. Although viewing a story must always be linear, as a linear sequence of pictures and sounds conveying some meaning, it should be possible to structure and produce a story in a non-linear way for the purpose of providing many different linear play outs...
The BT/MIT Project on Advanced Image Tools for Telecommunications: An Overview Image'Com '93, 2nd International Conference on Image Communications. March 1993. 1993-03-00 00:00:00
Tilting at a Dreamer's Windmills: Gesture-Based Constructivist Interaction with Character Consciousness Reframed Conference. July 1997 1997-06-07 00:00:00
In the installation Sashay/Sleep Depraved, a participant uses emotionally evocative gestures to interact with a larger-than-life-sized virtual character, the Sleeper. Research during the installation's construction has explored methods of interactive narrative and traditional cinema in three principal ways. First, the participant is positioned, not as a spectator or navigator, but as a role-player interacting with the Sleeper by altering her subconscious environment. Second, the participant's proximity to, and gestures toward, the Sleeper foster a strong sense of immersion and engagement. Lastly, in Sashay's constructivist environment, the participant enjoys the expressive, associative process of constructing an animated, surrealist dream.
Innovative Story Models for Ambient Media EuroPrix Scholars Conference, Tampere, 11-12 November, 2004. 2004-11-00 00:00:00
Recent advances in technology have generated new paradigms and potentials for multimedia story environments. Among these the realization of a wired nomadic consumer heralds the disappearance of the desktop metaphor in favour of one that highlights immersive wearable media and wireless connectivity. As we progress toward the goal of immersive multimedia experience, research into virtual reality is eclipsed by research that focuses on mixed reality at the interface -- so-called ambient multimedia. Using multiple senses of the human, ambient multimedia is able to distribute the interface more naturally into the fabric of daily life. As nomadic experiences offer an increasingly rich media experience, there is a need to introduce denser and more complex multi-sensor networks as well as feedback strategies for evaluation of the experience. The goal of the current research is to elaborate story models that are enabled by ambient multimedia paradigms. A special focus is given to sensible media stories, where the benefits of matching content to sensing technologies and location will be considered. A review of relevant research and its implications for the design and structure of sensible stories will be discussed.
Live Cinema: Designing an Instrument for Cinema Editing as a Live Performance Proceedings: New Interfaces for Musical Expression 2004, Hamamatsu, Japan, June 3-5, 2004, pp. 144-149. 2004-06-00 00:00:00
This paper describes the design of an expressive tangible interface for cinema editing as a live performance. A short survey of live video practices is provided. The Live Cinema instrument is a cross between a musical instrument and a film editing tool, tailored for improvisational control as well as performance presence. Design specifications for the instrument evolved based on several types of observations including: our own performances in which we used a prototype based on available tools; an analysis of performative aspects of contemporary DJ equipment; and an evaluation of organizational aspects of several generations of film editing tools. Our instrument presents the performer with a large canvas where projected images can be grabbed and moved around with both hands simultaneously; the performer also has access to two video drums featuring haptic display to manipulate the shots and cut between streams. The paper ends with a discussion of issues related to the tensions between narrative structure and hands-on control, live and recorded arts and the scoring of improvised films.
Live Cinema: an instrument for cinema editing as a live performance Proceedings: SIGGRAPH 2004, Los Angeles, CA, USA. 2004-08-00 00:00:00
The Live Cinema research project aims at building an instrument for cinema editing as a live performance. Both an advanced visual interface for sample-based media performance and a novel tangible editing tool for motion picture, our prototype is a large touch-sensitive image canvas equipped with haptic turntables. A combination of adaptive cinema scoring and accurate hands-on control render possible feature-length narrative video improvisation.
Office Voodoo : a real-time editing engine for an algorithmic sitcom Proceedings: SIGGRAPH 2003 Sketches and Applications, San Diego, CA, July 27-31, 2003 2003-07-00 00:00:00
Office Voodoo is an interactive film installation using exclusively live action footage and running on a real-time, shot-based editing engine that fluidly assembles the film as it is being watched, while respecting the conventions of continuity editing. Each character in the film is represented by a physical voodoo doll. As viewers manipulate these dolls, they affect the emotions of the people on screen. They can also call the people in the film using their phones.
Design Decisions for Interactive Environments: Evaluating the KidsRoom Intelligent Environments, Papers from the 1998 AAAI Spring Symposium, March 23-25, 1998, Technical Report SS-98-02, AAAI Press 1998-03-00 00:00:00
We believe the KidsRoom is the first multiperson, fully-automated, interactive, narrative environment ever constructed using nonencumbering sensors. The perceptual system that drives the KidsRoom is outlined elsewhere (Bobick et al. 1996). This paper describes our design goals, successes, and failures including several general observations that may be of interest to other designers of perceptually-based interactive environments.
The MATRIX: a novel controller for musical expression Workshop at CHI '01 conference on New interfaces for musical expression, Seattle, WA, April 1-2, 2001, pp. 1-4. 2001-04-00 00:00:00
The MATRIX (Multipurpose Array of Tactile Rods for Interactive eXpression) is a new musical interface for amateurs and professionals alike. It gives users a 3-dimensional tangible interface to control music using their hands, and can be used in conjunction with a traditional musical instrument and a microphone, or as a stand-alone gestural input device. The surface of the MATRIX acts as a real-time interface that can manipulate the parameters of a synthesis engine or effect algorithm in response to a performer's expressive gestures. One example is to have the rods of the MATRIX control the individual grains of a granular synthesizer, thereby "sonically sculpting" the microstructure of a sound. In this way, the MATRIX provides an intuitive method of manipulating sound with a very high level of real-time control.
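The rod-to-grain mapping described in the abstract can be sketched in a few lines; the function names, value ranges, and the Hann-windowed sinusoidal grain below are illustrative assumptions, not the MATRIX's actual synthesis engine:

```python
import math

def rods_to_grain_gains(rod_heights, max_height=1.0):
    """Normalize rod displacements (0..max_height) to per-grain gains (0..1)."""
    return [min(max(h / max_height, 0.0), 1.0) for h in rod_heights]

def render_grain(freq_hz, gain, n_samples, sample_rate=44100):
    """One Hann-windowed sinusoidal grain, amplitude-scaled by a rod's gain."""
    out = []
    for n in range(n_samples):
        # Hann window fades the grain in and out to avoid clicks.
        window = 0.5 - 0.5 * math.cos(2 * math.pi * n / (n_samples - 1))
        out.append(gain * window * math.sin(2 * math.pi * freq_hz * n / sample_rate))
    return out
```

In this sketch each rod contributes one gain value, so pushing a rod down or pulling it up directly "sculpts" the amplitude envelope of its grain in the overall texture.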
Multipurpose Array of Tactile Rods for Interactive eXpression Conference Abstracts and Applications of SIGGRAPH '01 (Los Angeles, California, USA, August 12-17, 2001). 2001-08-00 00:00:00
The MATRIX (Multipurpose Array of Tactile Rods for Interactive eXpression) is a device that offers real-time control of a deformable surface, enabling the manipulation of a wide range of audiovisual effects. The interface functions as a versatile controller that can be adapted to many different tasks in a variety of application domains. It was first used as a new type of musical instrument in the Emonator project. New domains are now being explored in areas such as real-time graphics animation, sculptural design and rendering, still and moving image modification, and the control of physical simulations.
Re-thinking real time video making for the museum exhibition space In Art & Design, SIGGRAPH'05 (Los Angeles, California, USA, 31 July - 4 August 2005) 2005-07-00 00:00:00
This poster presents a new approach to creating video stories at an art, science and technology exhibition. Within the context of an interactive exhibition space, dividing the tasks of recording and editing of digital media between production and post-production can be disruptive to the visitors’ experience. Terraria is a graphical and tangible interface which synthesizes performance and editing into a simultaneous act. Pilot studies suggested that young users find this integrated interface engaging for the performance and visualization of movies in real time.
The MATRIX: A New Musical Instrument for Interactive Performance Proceedings: the International Computer Music Conference '01 (Havana, Cuba, September 17-22, 2001). 2001-09-00 00:00:00
The MATRIX (Multipurpose Array of Tactile Rods for Interactive eXpression) is a new musical instrument for amateurs and professionals alike. It gives musicians a 3-dimensional tangible interface to control music using their hand(s), and can either be used in conjunction with a traditional musical instrument and a microphone, or as a stand-alone gestural input device. The surface of the MATRIX acts as a real-time interface that can manipulate parameters of a synthesis engine or effect algorithm in response to a performer's expressive gestures. One example uses the rods of the MATRIX to control the individual grains of a granular synthesizer, thereby sonically 'sculpting' the microstructure of a sound. In this way, the MATRIX provides an intuitive method of manipulating sound with a very high level of real-time control.
Media Portrait Of The Liberties: Design and Experience of a location based non linear narrative 2004-00-00 00:00:00
In this paper, we present the Media Portrait of the Liberties (MPL), a hands-on investigation of a new, digitally mediated form of narrative that makes extensive use of mobile computing technology. We position our work among other locative media projects by means of differences and similarities and share the preliminary findings resulting from an extensive pilot study. MPL is an evolving collection of historically inspired stories drawn from written accounts of the rich inner city area in Dublin, Ireland known as "the Liberties." The objective of MPL is to provide viewers with a nuanced and evocative sense of place as they walk the streets of this striking neighborhood. The project also functions as a catalyst for members of the Liberties community to contribute new stories, potentially enriching and evolving the portrait.
Exploring and Constructing Video in Improvisational Manner 7th Generative Art Conference (GA2004), Milano 2004-12-16 00:00:00
How can machines help us to manipulate and structure audiovisual media in ways that are always novel and are uniquely ours? How can such construction happen in real time, with no precise planning or guidance given by the user? The Emonic Environment (EE), the system described in this paper, enables improvisational construction and navigation of media space, both by individuals and by groups. Participants either control the system directly (e.g., real-time recording, processing, and performance of audio, video, and text, or exchange with remote users and online databases), or provide only higher-level structural guidance, letting the underlying genetic algorithms control the low-level details. The system's behaviour and content are controlled using keyboard/mouse, as well as microphones, cameras, sensors, MIDI controllers, and cell phones.
System Architecture for Developing Mobile Cinema ACM Multimedia Conference'2003, November 2-8, 2003, Berkeley, CA, USA. 2003-11-00 00:00:00
Mobile Cinema is embodied in temporally and spatially discontinuous narrative segments that can be delivered on wireless PDAs as users navigate physical locations and interact with the environment. Mobile Cinema takes as its starting point the truism that "every story is a journey" and bends this idea into a new form in which the narrative is augmented by physical surroundings, social engagement, and contextual awareness.
The Problem of Time in Personal Media Making Proceedings: 16th Eureopean Conference on Artificial Intelligence (ECAI'2004), Valencia, Spain, August 22-27, 2004, pg. 1121 2004-08-00 00:00:00
Storytelling is an activity of intelligent play generated by humans for the benefit of humans. Qualities of intelligent play reflect the intentions, processes and tools that are available to humans for the purpose of story construction. Personal media collections speak to intention and can serve as a tool to help us better communicate who we are and who we would like to become. Augmented by computer readable meta-data, computers can help us navigate these collections. However, if computers are to become collaborators and provocateurs, they need to better “understand” story mechanisms. In particular, the computer needs a model that allows it to reconfigure the temporality of the narrative. In this paper we focus on mental models used by the observational filmmaker in image capture and editing, and propose an approach to temporal representation of media segments that could serve future interactions with the media fabric.
Story networks: "the medium is the message"; the content, your souvenir Proceedings: the sixth Eurographics workshop on Multimedia 2001 (Manchester, UK, September 08 - 09, 2001), pg. 7 2001-09-00 00:00:00
Storytelling -- a fundamental mode of human communication -- has adapted in form, content, and technique as new expressive technologies have appeared and evolved. The past century has witnessed the growth of storytelling tools, electronic media channels, and the mass media one-to-many "broadcast" model. Today -- as we transition to digital media, ubiquitous networking, audience-sensing devices, and computer-aided content delivery -- new models of media storytelling are emerging. These forms may be designed to find you (as opposed to your finding them); to be tradable (in a peer-to-peer fashion) and modifiable; to be highly distributed in space/time; to interconnect and invite browsable exploration by crowds; and/or to be aggregated over time by one or more participant authors. This talk considers the form, content and technologies associated with customizable, personalizable stories of the future.
A Mediated Portrait of the Dublin Liberties Proceedings of the Spark! Design and Locality Conference (Oslo, Norway, May 2004). pp. 84-91. 2004-05-00 00:00:00
"The Media Portrait of the Liberties" consists of a collection of multiple short historically informed narratives about the Liberties community. These media segments reveal "a sense of place" and were designed with the intention that they be delivered to audience members as they wander through the neighborhood. The conceptual development of the project began with getting to know the neighborhood and its people through observation and ethnographic interviews, and evolved as the director of the project entered into collaboration with Maireen Johnston, a writer whose book "Around the Banks of Pimlico" serves as the basis for the scripted media segments. Drawn from Johnston's personal memories of growing up in the neighborhood, the book describes the life, lore and colour of the Liberties. The Liberties script made use of Johnston's anecdotal accounts of the lives of real individuals and the way in which she interweaves these with historical descriptions of the social conditions of the people living in the area in past times.
Sharing video memory: goals, strategies, and technology 2005-10-00 00:00:00
In this short paper, I explore video as a tool for recording and presenting our perception of reality.
Cati Dance: self-edited, self-synchronized music video Conference Abstracts and Applications of SIGGRAPH '03 (San Diego, July 27-31, 2003), 2003-07-00 00:00:00
This sketch presents a real-time system that aims to bridge a gap between machine listening technology and self-editing, self-synchronizing video: a movie organizes itself by "listening" to music. In our current demonstration, a series of short video clips of "Cati" dancing, originally shot at different tempi, are arbitrarily sequenced, always in sync with the analyzed beat, i.e., if the music slows down, the dance slows down accordingly. When no beat is found, e.g., the music is mellow, then Cati stops dancing and waits, apparently bored.
Touch TV: Adding Feeling to Broadcast Media Proceedings of the 1st European Conference on Interactive Television:from Viewers to Actors, Brighton, UK December 2003. 2003-12-00 00:00:00
In this paper, we discuss the potential role haptic, or touch, feedback might play in supporting a greater sense of immersion in broadcast content and describe some preliminary scenarios we have developed to explore how haptic content might be created and delivered within the context of a broadcast programme. In particular, this work has looked at two potential programme scenarios - the creation of authored haptic effects for children's cartoons and the automatic capture of motion data to be streamed and displayed in the context of a sports broadcast. We believe that the interactive nature of this touch media has the potential to greatly enrich interactive TV by physically engaging the viewer in the programme experience.
CINEMA: A System for Procedural Camera Movements Proceedings of the 1992 Symposium on Interactive 3D Graphics, Special Issue of Computer Graphics, Vol. 26, pp. 67-70. 1992-03-00 00:00:00
This paper presents a general system for camera movement upon which a wide variety of higher-level methods and applications can be built. In addition to the basic commands for camera placement, a key attribute of the CINEMA system is the ability to inquire information directly about the 3D world through which the camera is moving. With this information high-level procedures can be written that closely correspond to more natural camera specifications. Examples of some high-level procedures are presented. In addition, methods for overcoming deficiencies of this procedural approach are proposed.
Narrative Guidance AAAI Spring Symposium on Interactive Story Systems: Plot and Character, pages 52-55, March 1995. 1995-03-00 00:00:00
To date most interactive narratives have put the emphasis on the word "interactive." In other words, asking the question "How can interactivity empower the user to influence his or her experience?" This has meant giving the user control to construct the narrative by providing the freedom to steer and the ability to influence how the narrative space is navigated. However, there is an alternative approach. That is to ask the questions "How can Interactivity be employed by the author to better tell his/her story?" and "How can the narrative be used to guide the interaction of the user?" In this approach the story environment is manipulated to ensure that the user experiences the narrative that the author intends. I call this "Narrative Guidance."
Stories as Dynamic Adaptive Environments EuroPar '98, pp. 293-300. 1998-00-00 00:00:00
Stories are invitations to understand ourselves, our community, and the world around us. During a live conversation or performance, an active feedback loop exists between an audience and a teller of tales. The greatest benefit of the feedback loop is that it allows for personalization and individual learning. In computational modes of storytelling, the designer can promote feedback as a natural extension of the story situation by careful development of the story modules, by attention to the voice of the audience, and by introducing visible content frameworks. In some experimental works, strategies of "narrative guidance" and "society of audience" are juxtaposed in order to insure a more cohesive story experience.
Everyday Storytelling: supporting the mediated expression of online personal testimony 12th International Conference, HCI International 2007, Beijing, China, July 22-27, 2007, Proceedings, Part IV 2007-07-22 00:00:00
Personal stories make our everyday experiences memorable over time. This paper presents 'Everyday Mediated Storytelling', a model of the casual storyteller's process of capturing, creating and sharing personal mediated narratives. Based on this model, an online authoring and publishing application for sharing everyday rich-media narratives named 'Confectionary' was developed. Results from a lengthy study with a group of committed users signify the success of the Confectionary system as an engaging everyday tool for personal storytelling that stimulated self-reflection, broadened the scope of storytelling strategies demonstrated by its users and supported active audience interpretation. The model, methodology, and system presented in this paper provide a basis for understanding how we move fluidly between our direct experiences, our cognitive and emotional reflections and our storied representations and interpretations. This paper also demonstrates how a spatial everyday authoring and publishing application advances the digital storytelling process from one of media collection to one of storied reflection.