For an enriched reading experience, you may visit the online version of this documentation at https://do.meni.co/phd/exhibition (DOI 10.26180/5d4c3457a6875)
The materials documented here were exhibited as partial fulfilment of the requirements for the degree of Doctor of Philosophy
Exhibition of Bridging the Virtual and Physical: from Screens to Costume
by
Domenico Mazza
Bachelor of Design (Visual Communication) (Hons), 2014
Faculty of Information Technology
Monash University, Caulfield, Melbourne
May 2019
DOI 10.26180/5d4c3457a6875
PDF DOI 10.26180/5d5245127202c
© Domenico Mazza (2019)
Under the Australian Copyright Act 1968, this exegesis must be used only under the normal conditions of scholarly fair dealing. In particular no results or conclusions should be extracted from it, nor should it be copied or closely paraphrased in whole or in part without the written consent of the author. Proper written acknowledgement should be made for any assistance obtained from this exegesis.
Any third-party content herein that has been reproduced without permission complies with the fair dealing exemption in the Australian Copyright Act 1968, which permits reproduction of such material for the purposes of criticism and review.
I certify that I have made all reasonable efforts to secure copyright permissions for third-party content included in this exegesis and have not knowingly added copyright content to my work without the owner's permission.
Introduction
This webpage is a digital documentation of the physical exhibition for the PhD Bridging the Virtual and Physical: from Screens to Costume. The exhibition took place 27–31 May 2019 at SensiLab (G119, Monash University Caulfield) as part of the practice-based PhD examination and closing event.
My PhD research enables a wide range of designers to engage in the speculative prototyping and presentation of new digital media, known as Computational Costume, that addresses the divide between the physical and virtual.
This work has emerged from investigations into: supporting people's spatial memory on screen-based devices; identifying ways to support people across a range of digital media by interviewing a variety of experienced designers and communicators; reviewing advancements from screen-based devices to ubiquitous and tangible computing; and developing an accessible speculative design process that uses lo-fi physical materials to imagine wearable virtual identities that ground interactions through digital media.
Domenico Mazza
Exhibition map
The exhibition was divided into two areas: an antechamber and the main exhibition space. The antechamber foregrounded the work presented ahead in the main exhibition space.
Exegesis
Works presented in the exhibition reference specific sections in the exegesis document Bridging the Virtual and Physical: from Screens to Costume. The appropriate section numbers are listed in this guide with targeted links.
The exegesis can be viewed at: https://do.meni.co/phd
Memory Menu study
To improve interactions through digital media, my research has sought to close the gap between practices in the world and practices conducted through digital devices. Through the Memory Menu, I began presenting information that would otherwise be lost on screen. The Memory Menu study evaluated a subtle application of a highlighting (or 'use-wear') effect on a large menu to support audiences' spatial memory. The effect was applied to guide navigation through the menu by providing a visual reference of what had or had not been explored.
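The following sketch is an illustration only, not the code behind the study or the demonstration linked below: it shows one plausible way a subtle use-wear highlight could be applied to a web-based menu, assuming a hypothetical DOM structure whose items carry a "menu-item" class; the tint values and the hover trigger are likewise assumptions.

```ts
// A minimal sketch of a 'use-wear' effect, assuming a DOM menu whose items
// carry the (hypothetical) class "menu-item". Illustrative only; not the
// implementation used in the Memory Menu study.
const visitCounts = new Map<HTMLElement, number>();

function markUseWear(item: HTMLElement): void {
  // Count how often an item has been explored and translate that into a
  // subtle background tint, so explored regions of a large menu stay visible.
  const count = (visitCounts.get(item) ?? 0) + 1;
  visitCounts.set(item, count);
  const intensity = Math.min(count / 10, 1); // cap the effect so it stays subtle
  item.style.backgroundColor = `rgba(255, 220, 120, ${0.08 + 0.12 * intensity})`;
}

document.querySelectorAll<HTMLElement>(".menu-item").forEach((item) => {
  item.addEventListener("mouseenter", () => markUseWear(item));
});
```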
A demonstration of the Memory Menu study can be accessed at: https://do.meni.co/phd/memory-menu/
Exegesis reference: § A.1
Cardboard poster and hand interface
The cardboard poster and hand interface show how interactions through digital media could more closely integrate into audiences' day-to-day practices. Interactions are grounded by the body and surrounding world instead of screen-based devices. The poster is a functional conference poster set in an imagined speculative future where augmented reality technology can superimpose visuals onto any surface. The hand interface shows how the hand and forearm could be used to select and operate modalities for creating, capturing and sharing virtual objects such as the poster. Rotating dials surrounding the fingertips present the lowest-level options, while the wrists are reserved for switching modalities.
The poster and hand interface mark the beginning of my use of lo-fi physical materials to create and display speculative designs for augmented reality. This approach enables a wide range of designers to develop speculative designs and communicate works to audiences in an accessible way.
A 3D model of the cardboard poster can be accessed at: https://do.meni.co/phd/3d-cardboard-poster/
Exegesis reference: § 4.3.2
Computational Costume v0
Computational Costume v0 builds upon the idea of a speculative whole-body interface as imagined through the cardboard poster and hand interface. v0 gives form to particular contexts where a wearable whole-body interface could be useful, illustrating how Computational Costume can serve both work and personal life. Complementary map and timeline tools indicate how areas outside of wearers' visual field can also be accessed. This grounds the viability of the whole-body interface as a replacement for screen-based-device interaction.
Exegesis reference: § 4.3.3
Computational Costume v1
Computational Costume v1 clarifies and builds upon Computational Costume v0 by revealing three distinct costumes for three distinct scenarios. The work engages relatable scenarios and a live-action performance to show the costume in action. The work was originally shown as part of a choreographed performance for a science communication competition. The costumes and objects were designed to be quickly removed and arranged to express how the speculative design would work.
Exegesis reference: § 4.3.4
Computational Costume v2 video
Computational Costume v2 presents speculative interaction design ideas through a video of physical props in action. This helps audiences imagine how Computational Costume would behave. The accuracy of the representation is enhanced through carefully captured perspectives and editing. The use of first-person and third-person perspectives lets audiences imagine they are using the Computational Costume or observing it in action. This approach contrasts with the choreographed live performance of v1, which affords viewers only one perspective.
Exegesis reference: § 4.3.5
Computational Costume v2 props
Props in Computational Costume v2 were used both for filming and to complement the exhibition of the video. In the exhibition, the props are imbued with meaning from the video and allow viewers to observe fine details they may have missed in the video.
Exegesis reference: § 4.3.5
Participatory prototypes
Prototypes preceding Computational Costume v2 indicate modes of participatory engagement for both Computational Costume actors and audiences. These prototypes allow for direct live-action performances and for studying how wearers might use Computational Costume. For live-action performances, modified coveralls could allow specially made objects to be quickly removed from and attached to a costume. For studies, an easily wearable poncho with clips and pockets could allow wearers to use any lightweight mixed media to make and store imagined objects.
Exegesis reference: § C.3.2.2
Exhibition event
The following documents the exhibition space as it was during the examination and at the exhibition closing event.
Cardboard guides
The use of cardboard was extended to exhibition signage and a reusable exhibition guide for the examiners.
Exhibition space
The following shows how the works listed here were presented to audiences at SensiLab, Monash University Caulfield.
Closing event
The closing event was an opportunity to show the work in its entirety to colleagues, staff, close friends and family to thank them all for their contributions to supporting my PhD. It was a wonderful event hosted by the faculty and attended by almost 40 people—far exceeding my expectations! Some photos from the event capture the proceedings.
Credits
Firstly, these are not all of the credits: please refer to my exegesis for all acknowledgements not related to this exhibition documentation.
Thanks go to Jon McCormack for helping me out with his stellar imaging skills. Jon gave his time generously to make sure I documented my work here in the best quality possible. I might have otherwise just taken the photos shown in Exhibition space.
Thanks go to my mum and my good friend Ching2 for taking photos on the night and sharing them with me. I was covertly overwhelmed with the adrenaline/nerves that overcome an artist when they present their collective work to a crowd for the first time.
Thanks also to Jesse for coming to visit early and sneakily taking over the camera while I spoke to some early guests.
Y'all the best. Including you, if you are reading this far. Stop it!
Dom.
Reference list
- Mazza, D. (2018). Computational Costume v1. https://doi.org/10.26180/5d4bc68e3357e. Video available at https://player.vimeo.com/video/338686312
- Mazza, D. (2018). Computational Costume v2. https://doi.org/10.26180/5d4bc13d2caa3. Video available at https://player.vimeo.com/video/274045926
- Mazza, D., & Tilley, J. A. (2019). Bridging the Virtual and Physical: from Screens to Costume exhibition video. https://doi.org/10.26180/5d52480bd700f. Video available at https://player.vimeo.com/video/343803515